[2001.03734v1] A Light Field Camera Calibration Method Using Sub-Aperture Related Bipartition Projection Model and 4D Corner Detection
Due to the two-part structure of the proposed model, existing calibration methods for traditional cameras can be effectively reused.

Abstract—Accurate calibration of the intrinsic parameters of the light field (LF) camera is a key issue in many applications, especially 3D reconstruction. In this paper, we propose the Sub-Aperture Related Bipartition (SARB) projection model to characterize the LF camera. This projection model is composed of two sets of parameters: one targeting the center-view sub-aperture, and the other the relations between sub-apertures. Moreover, we also propose a corner detection algorithm that fully utilizes the 4D LF information in the raw image. Experimental results demonstrate the accuracy and robustness of the corner detection method. Both the 2D re-projection errors in the lateral direction and the errors in the depth direction are minimized, because the two sets of parameters in the SARB projection model are solved separately.
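The decoupling idea the abstract describes, fitting one parameter set from 2D re-projection error before a second set is handled separately, can be illustrated with a generic first-stage sketch. This is not the paper's SARB model: it uses a plain pinhole model with synthetic checkerboard corners, and all names and values here are illustrative assumptions.

```python
# Generic stage-1 sketch: recover pinhole intrinsics (f, cx, cy) of a
# center view by minimizing 2D re-projection error on synthetic corners.
# Illustrative only -- not the SARB model from the paper.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic 3D checkerboard corners in the camera frame (3 cm squares, 0.5 m away).
grid = np.stack(np.meshgrid(np.arange(6), np.arange(8), indexing="ij"), -1).reshape(-1, 2)
P = np.column_stack([grid * 0.03, np.full(len(grid), 0.5)])

def project(P, f, cx, cy):
    """Pinhole projection of 3D points P (N x 3) to pixel coordinates."""
    return np.column_stack([f * P[:, 0] / P[:, 2] + cx,
                            f * P[:, 1] / P[:, 2] + cy])

true_params = (800.0, 320.0, 240.0)  # ground-truth f, cx, cy (assumed values)
obs = project(P, *true_params) + rng.normal(0, 0.1, (len(P), 2))  # noisy detections

def residual(theta):
    # Stacked 2D re-projection error over all corners.
    return (project(P, *theta) - obs).ravel()

fit = least_squares(residual, x0=[600.0, 300.0, 200.0])
print(fit.x)  # recovered (f, cx, cy), close to the ground truth
```

In the bipartite setting the abstract outlines, a second, separate optimization over the inter-sub-aperture relation parameters would follow, so that lateral and depth errors are not traded off inside a single joint fit.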
Fig. 1: Optical paths of the MLA-based LF camera (2D space).
Fig. 2: Space-angle TPP model.
Fig. 3: The SARB projection model.
Fig. 4: Part of the checkerboard and its image in the center-view sub-aperture.
Fig. 5: 3D slice of the 4D template of a 3D line.
Fig. 6: Results of line detection. (a) Detection result of the method of [22]; (b) re-projection of the LF-line to the raw data by the proposed method.
Fig. 7: Results of corner detection by the proposed method and the method of [22]. Green dots show corner locations projected by the method of [22]; red dots show corner locations projected from LF-points detected by the proposed method. They all come from the corner areas in Fig. ?? and Fig. ??.
Fig. 8: Results of corner detection in the Harris manner: (a) before eliminating false corners; (b) after eliminating false corners.
Fig. 9: Image of a sub-aperture far from the optical axis, extracted as in [18].
Fig. 10: Image of the center-view sub-aperture, extracted as in [18].
Fig. 11: 3D corner points calculated from the raw image using the intrinsic parameters (blue points), and 3D corner points calculated from their actual world coordinates using the extrinsic parameters (red points). (For a clearer illustration, only the 2D projection onto the XZ coordinate plane is shown.) (a)(c)(e) come from dataset D-E, and (b)(d)(f) come from dataset P-B. (a)(b) are the results of [18], (c)(d) of [22], and (e)(f) of the proposed method.