\n "]},{"cell_type":"markdown","metadata":{},"source":["There are still some challenges that should be further explored, e.g., making the algorithm more robust to be applied to different practice and promoting the calculation to make it real-time."]},{"cell_type":"markdown","metadata":{},"source":["![title](https://a2c.fyi/q/yAPE26RNSy/f0.png)"]},{"cell_type":"markdown","metadata":{},"source":["Figure 1: GFF  Figure 2: DSIFT  Figure 3: MWGF  Figure 4: The proposed Figure 5: Comparative fused images. (Introduction)"]},{"cell_type":"markdown","metadata":{},"source":["![title](https://a2c.fyi/q/yAPE26RNSy/f1.png)"]},{"cell_type":"markdown","metadata":{},"source":["Figure 6: The scheme of the proposed multi-focus image fusion method. (Introduction)"]},{"cell_type":"markdown","metadata":{},"source":["![title](https://a2c.fyi/q/yAPE26RNSy/f2.png)"]},{"cell_type":"markdown","metadata":{},"source":["Figure 7: The schematic diagram of SURF scale space representation. (Introduction)"]},{"cell_type":"markdown","metadata":{},"source":["$$\n\\begin{equation}\nG_{o}^{k}(x,y)=\\mathrm{det}\\left( \\mathcal{H}_{\\mathrm{approx}}(x,y,w(o,k)) \\right),\n\\end{equation}$$"]},{"cell_type":"code","execution_count":0,"outputs":[],"metadata":{},"source":["G_o(x, y)**k == det(H_approx(x, y, w(o, k)))"]},{"cell_type":"markdown","metadata":{},"source":["$$\n\\begin{equation}\\label{H}\n \\mathcal{H}_{\\mathrm{approx}}(x,y,w)=\\left [\\begin{array}{ccc} D_{xx}(x,y,w) & \\quad \\alpha D_{xy}(x,y,w) \\\\\n\\alpha D_{xy}(x,y,w) & \\quad D_{yy}(x,y,w)\\end{array} \\right ],\n\\end{equation}$$"]},{"cell_type":"code","execution_count":0,"outputs":[],"metadata":{},"source":["H_approx(x, y, w) == ((ccc) D_xx(x, y, w) alpha * D_xy(x, y,\n"," w) \n"," alpha * D_xy(x, y, w) D_yy(x, y, w))"]},{"cell_type":"markdown","metadata":{},"source":["![title](https://a2c.fyi/q/yAPE26RNSy/f4.png)"]},{"cell_type":"markdown","metadata":{},"source":["Figure 11 Figure 12 Figure 13 Figure 14 Figure 15 Figure 16 Figure 
17 Figure 18 Figure 19: Samples from our unregistered multi-focus 4K ultra HD microscopic images dataset. (Weighted Map Construction and Fusion)"]},{"cell_type":"markdown","metadata":{},"source":["![title](https://a2c.fyi/q/yAPE26RNSy/f5.png)"]},{"cell_type":"markdown","metadata":{},"source":["Figure 20: The matching accuracy under varying O and L. (Experiments and Discussion)"]},{"cell_type":"markdown","metadata":{},"source":["$$\n\\begin{equation}\\label{s}\nw(o,k) =(2^{o} \\times k+1)\\times 3.\n\\end{equation}$$"]},{"cell_type":"code","execution_count":0,"outputs":[],"metadata":{},"source":["w(o, k) ==(2**o x k + 1) x 3"]},{"cell_type":"markdown","metadata":{},"source":["![title](https://a2c.fyi/q/yAPE26RNSy/f6.png)"]},{"cell_type":"markdown","metadata":{},"source":["Figure 21: O1L8 Figure 22: O2L8 Figure 23: O3L8 Figure 24: O4L3 Figure 25: O5L3 Figure 26: O6L3 Figure 27: O7L3 Figure 28: O8L3 Figure 29: O5L1 Figure 30: O5L2 Figure 31: O5L7 Figure 32: O5L8 Figure 33: The fused results with different octave and layer numbers. O1L8 means the octave number is 1 and the layer number is 8, similar to (b)-(l). For the images in the third row, the red bounding box in the bottom right corner is the magnification of the corresponding region. (Experiments and Discussion)"]},{"cell_type":"markdown","metadata":{},"source":["![title](https://a2c.fyi/q/yAPE26RNSy/f7.png)"]},{"cell_type":"markdown","metadata":{},"source":["Figure 34: 16D Figure 35: 36D Figure 36: 64D Figure 37: 100D Figure 38: The fused results with different dimensions of the feature points descriptors. The number of octaves and layers is set as 5 and 2 respectively. 
(Experiments and Discussion)"]},{"cell_type":"markdown","metadata":{},"source":["![title](https://a2c.fyi/q/yAPE26RNSy/f8.png)"]},
{"cell_type":"markdown","metadata":{},"source":["Figures 39-51: The unregistered multi-focus source images and fusion results: (a)-(h) source images; (i) DSIFT; (j) GFF; (k) MWGF; (l) ours. The regions in the upper right corner are magnified views for comparison in (k) and (l). (Experiments and Discussion)"]},
{"cell_type":"markdown","metadata":{},"source":["![title](https://a2c.fyi/q/yAPE26RNSy/f9.png)"]},
{"cell_type":"markdown","metadata":{},"source":["Figures 52-60: The unregistered multi-focus source images and fusion results: (a)-(d) source images; (e) DSIFT; (f) GFF; (g) MWGF; (h) ours. The regions in the bottom left corner are magnified views for comparison in (g) and (h). (Experiments and Discussion)"]},
{"cell_type":"markdown","metadata":{},"source":["![title](https://a2c.fyi/q/yAPE26RNSy/f10.png)"]},
{"cell_type":"markdown","metadata":{},"source":["Figures 61-69: The multi-focus source images and fusion results: (a)-(d) source images; (e) DSIFT; (f) GFF; (g) MWGF; (h) our method. The regions in the bottom left corner are magnified views for comparison in (g) and (h). 
(Experiments and Discussion)"]},{"cell_type":"markdown","metadata":{},"source":["$$\n\\begin{equation}\n\\begin{bmatrix}\nx_r\\\\\ny_r\\\\\n1\n\\end{bmatrix}=T\\begin{bmatrix}\nx_s\\\\\ny_s\\\\\n1\n\\end{bmatrix},\n\\end{equation}$$"]},
{"cell_type":"code","execution_count":0,"outputs":[],"metadata":{},"source":["# Map a source point (x_s, y_s) to reference coordinates with transform T\n","p = T @ np.array([x_s, y_s, 1])\n","x_r, y_r = p[:2] / p[2]  # normalize the homogeneous coordinate"]}],"metadata":{"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"}},"nbformat":4,"nbformat_minor":2}