Using Angles for Internal Camera Calibration

Marina Kolesnik

Abstract

Standard camera calibration is based on the relationship between 3-D point coordinates in the object space and their respective 2-D coordinates in the image plane. Precise distance measurements for a set of reference points are therefore the necessary burden of the calibration procedure. Nevertheless, the camera projective mapping is an irreversible process: whereas a set of six reference points uniquely defines the camera parameters, their precise positions cannot be retrieved from a single image. The image does not change as long as the 3-D position of a reference point varies along the optical ray subtended by the respective image point and the camera optical center. It follows that a set of such "reference rays" uniquely defines the camera model. The idea presented in this paper is to perform internal camera calibration using angular information for a set of reference points as they are seen in the image plane. We call this approach angular calibration. We use a special laser facility that generates a reference pattern with known angular characteristics. Angular calibration is based on knowledge of the viewing angles to distinguished 3-D points of the laser pattern observed by the camera. No reference set or metric information is required. The calculation is done in the camera standard coordinate system; only the intrinsic camera parameters are of importance. We show how the angular calibration approach can be used for a camera with changing focal length, and we give experimental results of the camera parameter calculations. Angular calibration is suitable for "off-lab" self-calibration of mobile robots.

Keywords: Pinhole Model, Internal Parameters, Focal Length, Laser Crosshair Projector.

1 Introduction

Recent advances in camera self-calibration [0], [0] have not yet eliminated the standard calibration step from those computer vision applications where metric information is to be derived from the images. The vast majority of standard calibration techniques use precise distance measurements for a set of 3-D reference points. This involves a special measurement procedure (for instance, based on a theodolite) or an accurately drawn calibration chart. As a result, the entries of the camera perspective projection matrix depend on an irrelevant coordinate system in which the actual reference measurements were done. Finally, the camera perspective projection matrix has to be decomposed into internal and external camera parameters [0]. Laboratory calibration is known to be the most precise among the calibration methods based on reference points [0], [0]. More extensive reviews of calibration methods appear in [0], [0].

Since traditional calibration methods use known world coordinates, they are hardly suitable for "off-lab" self-calibration by mobile robots. To tackle the problem, some calibration techniques use geometric objects whose images display invariant properties [0], [0], or special camera motion [0], [0]. Our angular calibration is based on images of a pattern with known angular characteristics. A small light laser equipped with a special optical head generates the calibration pattern in the field of view of the camera. A pinhole camera model with no lens distortion is used. Four internal camera parameters are subject to calibration: two scaling factors and the two coordinates of the principal point. We do not directly recover the focal length, although we suggest an algorithm that updates the scaling factors for a camera with a zoom lens.

The work of Stein [0] is the most closely related to this paper, and it is in order to point out the differences. The calibration method in [0] also exploits angles but uses them in the "reverse" order. A highly accurate rotary platform provides pure rotations of the camera, which requires special mechanical equipment. The angles of rotation are measured and used in a minimization process to make them consistent with the camera internal parameters. The mathematical presentation used for camera modeling differs from the one in our method: [0] relies on a strong coupling between the camera lens distortion model and the internal parameters, whereas we use an "independent" optical distortion correction that adjusts the distortion parameters with arbitrary internal parameters of the camera.

There are at least two advantages of the angular calibration approach. Firstly, it does not require any measurements as input for calibration. Secondly, it directly recovers the internal camera parameters while working naturally in the standard coordinate system. The disadvantage is that the system of equations to be solved for the internal parameters is nonlinear and appears to have no analytical solution.

The paper is organized as follows. Section 2.1 describes the experimental setup for angular calibration. Section 2.2 gives the mathematical model of angular calibration. The accuracy of the angular assumption is investigated in Section 2.3. An on-line calibration method for a camera with changing focal length is given in Section 3. Section 4 presents experimental results of the calibration. Concluding remarks are made in Section 5.

2 Angular Calibration Approach

2.1 Experimental setup

The experimental setup for the calibration includes the camera and a small laser ([0], Figure 1) equipped with an optical head that projects a high quality crosshair from

the laser beam. The fan angle of the crossed lines produced by the laser is known with high accuracy. Both the camera and the laser are fixed together on a tripod to ensure the closest possible location of the camera optical center and the laser beam. The viewing angle of the camera must exceed the fan angle of the laser so that the camera sees the end points of the cross in its field of view. The image of the cross depends on the shape of the objects the laser cross is projected upon. However, the angles subtended by the cross end points at the camera optical center do not depend on the shape of the objects; their values are used for calibration. The cross end points and the cross-center point are clearly seen from large distances and can be detected accurately in the image plane. If the distance between the camera/laser system and the projected 3-D cross points is much larger than the distance between the camera center and the laser, the angle at which the camera sees the end cross points almost coincides with the fan angle of the projected laser cross. This is the core of our calibration. The idea is described mathematically in the next section.

Figure 1. Laser crosshair projector: commercial product of LASIRIS Inc.

2.2 Angular relations

The standard pinhole camera model as in [0] is considered. The camera's perspective projection matrix P in the coordinate system attached to the camera (the standard coordinate system) is modeled by the four internal parameters:

    P = \begin{pmatrix} \alpha_u & 0 & u_0 \\ 0 & \alpha_v & v_0 \\ 0 & 0 & 1 \end{pmatrix}    (1)

where u_0, v_0 are the coordinates of the principal point in the image and \alpha_u, \alpha_v are the image scaling factors for the rows and columns, respectively. It follows that four equations written in the standard coordinate system are enough to solve for the four internal camera parameters.

Let us consider the equation of the 3-D line <C,m> defined by a pixel m and the camera optical center C. In the standard coordinate system this line, called the optical ray defined by m, is given by:

    M = \lambda P^{-1} \tilde{m}    (2)

where \tilde{m} = (m_u, m_v, 1)^T is the homogeneous coordinate vector of the pixel m, and the 3-D vector M gives a point on the optical ray <C,m> as \lambda varies between -\infty and +\infty.

Let us consider the image of the laser cross with the two pairs of opposite end points m, n and s, t, and the center point o. Let <C,m>, <C,n> be the "opposite" optical rays subtended by the opposite cross end points m and n; <C,s>, <C,t> be the "opposite" optical rays subtended by the points s and t; and <C,o> be the "central" optical ray defined by the cross center point in the image. Each optical ray may be expressed as in (2). Now take into account that the distance from the camera/laser system to the end/center laser cross points in the object space is at least ten times larger than the distance between the camera optical center and the laser beam. With that, we assume that the angle between the "opposite" optical rays <C,m> and <C,n> is equal to the angle between the "opposite" optical rays <C,s> and <C,t>, and is equal to the fan angle \alpha of the laser cross. Similarly, the angle between any of the optical rays subtended by the cross end points m, n, s or t and the "central" ray is equal to \alpha/2, i.e. half the fan angle of the laser cross. We express this using four scalar products of the 3-D vectors that define the optical rays of the laser cross:

    M^T N = |M|\,|N| \cos\alpha
    S^T T = |S|\,|T| \cos\alpha
    M^T O = |M|\,|O| \cos(\alpha/2)
    S^T O = |S|\,|O| \cos(\alpha/2)    (3)

Here the 3-D vectors M, N, S, T and O define a point on the respective optical rays <C,m>, <C,n>, <C,s>, <C,t> and <C,o>. The expansion of the first equation in (3) to pixel coordinates using (1) and (2), with m = (a_1, a_2) and n = (b_1, b_2), yields:

    \frac{\dfrac{(u_0-a_1)(u_0-b_1)}{\alpha_u^2} + \dfrac{(v_0-a_2)(v_0-b_2)}{\alpha_v^2} + 1}
         {\sqrt{\dfrac{(u_0-a_1)^2}{\alpha_u^2} + \dfrac{(v_0-a_2)^2}{\alpha_v^2} + 1}\;
          \sqrt{\dfrac{(u_0-b_1)^2}{\alpha_u^2} + \dfrac{(v_0-b_2)^2}{\alpha_v^2} + 1}} = \cos\alpha    (4)

Having the four relations (4) in their pixel representation, we can solve the system for the unknown internal camera parameters. We are interested only in the real positive solutions, as the origin of the standard coordinate system is in front of the camera. There is no general way to simplify the system of equations (3); therefore, only numerical solutions can be obtained. We used the Maple software package to find numerical solutions for our experiments, although other methods are also available.
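For illustration, the four constraints (3) can also be attacked with a generic nonlinear least-squares solver instead of Maple. The sketch below is hypothetical and synthetic: the camera parameters, the cross pixels, and the initial guess are all invented, with the pixels constructed so that each end ray makes exactly half the fan angle with the central ray; the solver then recovers the four internal parameters from the five cross pixels alone.

```python
import numpy as np
from scipy.optimize import least_squares

ALPHA = np.deg2rad(60.0)  # fan angle of the laser cross

def ray(px, py, au, av, u0, v0):
    """Direction of the optical ray <C,m>, i.e. P^{-1} m~ as in (2)."""
    return np.array([(px - u0) / au, (py - v0) / av, 1.0])

def cos_angle(r1, r2):
    return r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))

# Hypothetical ground-truth camera used to synthesize the cross pixels.
au, av, u0, v0 = 350.0, 450.0, 320.0, 240.0
t2 = np.tan(ALPHA / 2)
o = (u0, v0)                                       # cross center
m, n = (u0 + au * t2, v0), (u0 - au * t2, v0)      # horizontal end points
s, t = (u0, v0 + av * t2), (u0, v0 - av * t2)      # vertical end points

def residuals(p):
    """The four scalar-product relations (3) expanded to pixels as in (4)."""
    rm, rn = ray(*m, *p), ray(*n, *p)
    rs, rt = ray(*s, *p), ray(*t, *p)
    ro = ray(*o, *p)
    return [cos_angle(rm, rn) - np.cos(ALPHA),
            cos_angle(rs, rt) - np.cos(ALPHA),
            cos_angle(rm, ro) - np.cos(ALPHA / 2),
            cos_angle(rs, ro) - np.cos(ALPHA / 2)]

guess = [360.0, 440.0, 310.0, 245.0]   # rough initial estimate
sol = least_squares(residuals, guess).x
print(np.round(sol, 2))
```

Starting from a reasonable guess, the solver converges to the real positive solution, recovering (alpha_u, alpha_v, u_0, v_0).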

Recently developed methods in numerical continuation can reliably compute all the solutions to a polynomial system. The solution by numerical continuation is suggested by the idea that small changes in the parameters of the system usually produce small changes in the solutions. Although numerous difficulties can arise for a general nonlinear system, such as divergence or bifurcation of a solution path, for a polynomial system all such difficulties can be avoided and fairly precise solutions can be obtained. A detailed tutorial presentation of the methods of numerical continuation appears in [0].

2.3 Accuracy of the angular values

Let us investigate how accurate our assumption is concerning the angles at which the camera sees the end/center points of the laser cross. A two-dimensional example of the camera/laser system geometry is sketched in Figure 2. We denote by L the laser location and by C the camera optical center. The two 3-D points S and T are the end points of one line of the projected cross, at distances d_s and d_t from the laser; the angle \beta gives the orientation of the baseline LC. The fan angle \alpha of the laser is accurately known. The distance d between the laser and the camera is much smaller than the distance from the laser to the 3-D points S and T, i.e. d/d_s << 1 and d/d_t << 1.

Figure 2. 2-D example of the camera-laser system geometry.

Applying the cosine theorem to the triangles LST and CST, after some algebra we obtain:

    \cos\alpha' = \frac{d^2 - d_s d\cos\beta - d_t d\cos(\alpha-\beta) + d_s d_t \cos\alpha}
                       {\sqrt{d_s^2 + d^2 - 2 d_s d\cos\beta}\;\sqrt{d_t^2 + d^2 - 2 d_t d\cos(\alpha-\beta)}}

Using a first order Taylor expansion in the small variable d, we obtain the following approximation for the angle \alpha':

    \alpha' \approx \alpha + d\left(\frac{\sin(\alpha-\beta)}{d_t} - \frac{\sin\beta}{d_s}\right)    (5)

Assuming for instance that d/d_s ~ d/d_t ~ 0.01 and \beta = 45°, the relative error for the angle \alpha' is \Delta\alpha/\alpha \approx 0.2%. A similar geometrical consideration in the 3-D case suggests that \hat{\alpha} - \alpha grows at most in proportion to d/d_s. Consequently, the relative error for \alpha' in the 3-D case does not exceed 0.6%.

It follows from (5) that there is an optimal laser/camera configuration for which the difference between the angles \alpha' and \alpha is of second order in d:

    \beta = \arctan\left(\frac{\sin\alpha}{d_t/d_s + \cos\alpha}\right)    (6)

Though in practice it is difficult to realize the optimal value of \beta, formula (6) gives a hint for the optimal adjustment of the camera/laser system. Assuming for instance d_s = d_t and \alpha = 45°, the optimal value of the angle \beta is about 80°.

3 Camera with changing focal length

The idea of camera calibration on the basis of angular information is useful for the interactive calibration of a camera with changing focal length. Let us consider a calibrated camera which changes its focus between two consecutive image frames. That means we know the four internal camera parameters of the perspective projection matrix P_1, defined by (1), for the first image frame. We also know that both camera scaling factors \alpha_u and \alpha_v are linear functions of the focal length. Hence we can write:

    \lambda = f'/f, \quad \alpha'_u = \lambda\,\alpha_u, \quad \alpha'_v = \lambda\,\alpha_v, \quad u'_0 = u_0, \quad v'_0 = v_0

where f, f' are the focal lengths of the first and the second image frames, respectively; \alpha_u, \alpha_v, u_0, v_0 are the internal parameters of the first image frame; \alpha'_u, \alpha'_v, u'_0, v'_0 are the internal parameters of the second image frame; and \lambda is the unknown variable.

Figure 3. Geometry of the camera with changing focal length.

Let us now assume there is enough overlap between the two consecutive image frames so that point correspondence can be established. We select two arbitrary points A and B in the scene (Fig. 3) and their images a, a^1 and b, b^1 on the first and the second image frames, respectively. The angle defined by the optical rays <C,a> and <C,b> and the angle defined by the optical rays <C,a^1> and <C,b^1> of the corresponding pixels are physically the same angle, expressed as:

    \frac{(P_1^{-1}\tilde{a})^T (P_1^{-1}\tilde{b})}{|P_1^{-1}\tilde{a}|\,|P_1^{-1}\tilde{b}|} =
    \frac{(P_2^{-1}\tilde{a}^1)^T (P_2^{-1}\tilde{b}^1)}{|P_2^{-1}\tilde{a}^1|\,|P_2^{-1}\tilde{b}^1|}    (7)

Here \tilde{a}, \tilde{a}^1, \tilde{b}, \tilde{b}^1 are the homogeneous vectors of the image pixels a, a^1, b, b^1, and P_1, P_2 are the perspective projection matrices of the consecutive frames as in (1). Let K be the value of the left-hand side of (7), which does not contain the unknown. The expansion of (7) to pixel coordinates using (1) and (2), with a^1 = (a^1_1, a^1_2) and b^1 = (b^1_1, b^1_2), yields a quadratic equation for the unknown \mu_1 = \lambda^2:

    \mu_1^2 (1 - K^2) + \mu_1 \left(2\Sigma - K^2(\Sigma_1 + \Sigma_2)\right) + \Sigma^2 - K^2 \Sigma_1 \Sigma_2 = 0    (8)

where

    \Sigma   = \frac{(u_0-a^1_1)(u_0-b^1_1)}{\alpha_u^2} + \frac{(v_0-a^1_2)(v_0-b^1_2)}{\alpha_v^2}
    \Sigma_1 = \frac{(u_0-a^1_1)^2}{\alpha_u^2} + \frac{(v_0-a^1_2)^2}{\alpha_v^2}
    \Sigma_2 = \frac{(u_0-b^1_1)^2}{\alpha_u^2} + \frac{(v_0-b^1_2)^2}{\alpha_v^2}

By construction the above equation must have a real positive root. Summarizing, we have obtained the following algorithm for the calibration update of a camera with changing focal length:

1. Compute correspondences for a pair of selected points in the two consecutive image frames taken by the camera with changing focal length.
2. Write down equation (8) for the pair of corresponding points.
3. Solve (8) for the real positive root and update the scaling factors \alpha_u, \alpha_v of the camera.

4 Experimental results

The camera calibration presented below is the result of our "first-pass" experiment. No special equipment other than the camera and the laser fixed on a tripod was used for calibration. The described calibration technique is only suitable for images that are not excessively corrupted by lens distortion. Therefore, we used a correction matrix to remove the optical distortions induced by the wide-angle lens before the calibration. The correction method used is independent of the camera internal parameters. The correction matrix is computed with the help of a commercially available software package [0] that adjusts the distortion parameters to be consistent with arbitrary internal camera parameters u_0, v_0, \alpha_u and \alpha_v. The correction matrix is computed using a chart with a regular pattern of small black circles printed with a laser printer.

We carried out two calibration sessions for two different types of lenses and laser crosses:

1. The view angle of the lens is 95°/71° (horizontal/vertical). The fan angle of the laser cross is 60°. An example of an image of the laser cross corrected for optical distortions is given in Figure 4. As a result of the optical distortion removal, the image column/row aspect ratio is set to the known value \alpha_v/\alpha_u = 1.3. There is no a priori information about the camera principal point (u_0, v_0). The internal parameters are computed for a set of laser cross images taken with different laser positions and various aspect distances: d/d_s ≈ d/d_t ≈ 0.007, 0.005, 0.004. The results of the calibration are listed in Table 1.

2. The view angle of the lens is 63°/49° (horizontal/vertical). The fan angle of the laser cross is 45°. As a result of the optical distortion removal, the image column/row aspect ratio is set to \alpha_v/\alpha_u = 1. There is no a priori information about the camera principal point (u_0, v_0). The internal parameters are computed for two different laser positions and aspect distances: d/d_s ≈ d/d_t ≈ 0.007, 0.004. The results of the calibration are listed in Table 2.

As seen from the tables, the calculations are quite stable with respect to the scaling factors \alpha_u and \alpha_v. The coincidence with the known aspect ratio improves by averaging the calibration results over the different calibrations. The localization of the principal point (u_0, v_0) is evidently more unstable. This conclusion is consistent with the theoretically predicted instability of principal point localization in [0].
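The zoom-update algorithm of Section 3 can be sketched end to end. The example below is synthetic and hypothetical: we invent first-frame parameters, simulate a second frame with focal-length ratio \lambda = 1.5, and recover \lambda from a single point pair by solving the quadratic (8).

```python
import numpy as np

# Hypothetical first-frame internal parameters.
au, av, u0, v0 = 400.0, 420.0, 320.0, 240.0
lam_true = 1.5                      # unknown focal-length ratio f'/f

def project(r, scale):
    """Pixel of ray direction r = (x, y, 1), with scaling factors scaled by `scale`."""
    return (u0 + scale * au * r[0], v0 + scale * av * r[1])

# Two scene rays and their pixels in frame 1 (scale 1) and frame 2 (scale lam_true).
ra, rb = (0.2, 0.1, 1.0), (-0.3, 0.25, 1.0)
a,  b  = project(ra, 1.0),      project(rb, 1.0)
a1, b1 = project(ra, lam_true), project(rb, lam_true)

def back(p):
    """Ray P1^{-1} p~ from a pixel, always using the frame-1 parameters."""
    return np.array([(p[0] - u0) / au, (p[1] - v0) / av, 1.0])

# K: left-hand side of (7), computed from frame 1 only.
va, vb = back(a), back(b)
K = va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

# Sigma terms of (8): frame-2 pixels expanded with the frame-1 parameters.
xa, xb = back(a1), back(b1)
S  = xa[0] * xb[0] + xa[1] * xb[1]
S1 = xa[0] ** 2 + xa[1] ** 2
S2 = xb[0] ** 2 + xb[1] ** 2

# Quadratic (8) in mu1 = lambda^2; take the real positive root.
roots = np.roots([1 - K ** 2, 2 * S - K ** 2 * (S1 + S2), S ** 2 - K ** 2 * S1 * S2])
mu1 = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
lam = np.sqrt(mu1)
```

With several point pairs, one would average the recovered \lambda values, or solve the collection of quadratics (8) in a least-squares sense, before updating \alpha_u and \alpha_v.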

Figure 4. Example of the laser cross image used for the calculations of the internal camera parameters.

We compared the calibration results against those obtained with the linear calibration method proposed by Faugeras [0]. A precisely measured 3-D calibration target with a set of bright markers is used (Figure 5). The 3-D coordinates of the markers are known, and their 2-D image coordinates are accurately localized in the image with a center-of-gravity algorithm. The two reference sets of 3-D and 2-D coordinates are the input to Faugeras' linear calibration method. The obtained calibration results are given in the last rows of the respective tables. With that, general convergence between the two calibration results is verified.

5 Conclusions

An internal calibration method based on angular information is presented. The set of optical rays to the 3-D reference pattern generated by the laser is computed. The angles between particular pairs of optical rays are known and used for calibration. The camera is modeled as a pinhole with four internal parameters. The set of four polynomial equations written in the camera standard coordinate system has to be solved numerically. It is shown how the angular approach can be used for self-calibration of a camera with a zoom lens. The results of angular calibration are presented and compared against the linear calibration method proposed by Faugeras. The angular calibration method reliably recovers the scaling factors but displays the familiar instability in localizing the principal point.

The proposed angular calibration does not challenge precise "in-lab" calibration techniques but rather suggests a quick outdoor calibration idea. The method is specifically targeted at directly recovering the internal camera parameters without referring to an irrelevant space coordinate frame. The major drawback is that a numerical solution must be found to the system of polynomial equations. The following advantages of the proposed method can be listed:

1. No measurements are necessary, thus an extra source of possible errors is eliminated;
2. No complicated mechanical design is required except the light and low-power-consuming laser;
3. The reference pattern generated by the laser is easy to work with: no difficulties are encountered while extracting the center/end points of the laser cross in the image.

As a consequence of the above benefits, angular calibration can be turned into a fully automatic procedure. This in turn makes the idea attractive for outdoor use by mobile robots. Moreover, the geometrically ideal laser pattern can be effectively used for robot autonomous orientation [0].

Acknowledgement. The author is indebted to Mr. G. Paar for his professional support in removing optical distortions from the original images.

Figure 5. The 3-D calibration target with the bright markers. The original image is darker, so that the markers are localized by an automatic procedure.

References

[0] S. J. Maybank and O. D. Faugeras. "A Theory of Self-Calibration of a Moving Camera". International Journal of Computer Vision, 8(2), pp. 123-151, 1992.
[0] L. Quan. "Self-Calibration of an Affine Camera from Multiple Views". International Journal of Computer Vision, 19(1), pp. 93-105, 1996.
[0] O. Faugeras. "Three-Dimensional Computer Vision. A Geometric Viewpoint". MIT Press, Cambridge, MA, pp. 55-65, 1993.
[0] R. Y. Tsai. "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses". Journal of Robotics and Automation, 3(4), pp. 323-344, 1987.
[0] J. Weng et al. "Camera Calibration with Distortion Models and Accuracy Evaluation". IEEE Trans. Pattern Anal. Machine Intell., 14, pp. 965-980, 1992.
[0] R. K. Lenz and R. Y. Tsai. "Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3-D Machine Vision Metrology". IEEE Trans. Pattern Anal. Machine Intell., 10, pp. 713-720, 1988.
[0] C. C. Slama, ed. "Manual of Photogrammetry". 4th edition, American Society of Photogrammetry, 1980.
[0] M. A. Penna. "Camera Calibration: A Quick and Easy Way to Determine the Scale Factor". IEEE Trans. Pattern Anal. Machine Intell., 13, pp. 1240-1245, 1991.
[0] B. Caprile and V. Torre. "Using Vanishing Points for Camera Calibration". International Journal of Computer Vision, 4, pp. 127-140, 1990.
[0] F. Du and M. Brady. "Self Calibration of the Intrinsic Parameters of Cameras for Active Vision Systems". In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New York, NY, June 1993, pp. 477-482.
[0] G. P. Stein. "Accurate Internal Camera Calibration using Rotation, with Analysis of Sources of Error". In Proceedings of the International Conference on Computer Vision, 1995.
[0] Laser Crosshair Projector LAS-635-15. Commercial product of LASIRIS Inc.
[0] C. W. Wampler, A. P. Morgan, and A. J. Sommese. "Numerical continuation methods for solving polynomial systems arising in kinematics". Technical Report GMR-6372, General Motors Research Labs, August 1988.
[0] H. Kager, ed. "ORIENT, a universal photogrammetric adjustment system. Reference Manual". Inst. for Photogrammetry and Remote Sensing, Technical University Vienna, 1995.
[0] Z. Zhang, Q.-T. Luong, and O. Faugeras. "Motion of an Uncalibrated Stereo Rig: Self-Calibration and Metric Reconstruction". Research Report 2079, INRIA Sophia-Antipolis, France, October 1993.
[0] O. D. Faugeras and G. Toscani. "The calibration problem for stereo". In Proceedings of CVPR'86, pp. 15-20, 1986.
[0] M. Kolesnik. "View-based method for relative orientation in the pipe". Proceedings of SPIE "Sensor Fusion: Architectures, Algorithms, and Applications", Vol. 3719, 1999.

Calibration #   α_u       α_v       α_v/α_u   u_0       v_0
1               332       422       1.271     290       269
2               328       427       1.302     289       270
3               332       422       1.271     276       261
4               332       422       1.271     295       254
5               325       431       1.326     307       231
6               325       430       1.323     289       254
7               325       430       1.323     323       263
8               332       422       1.271     309       236
9               325       430       1.323     306       238
10              327       429       1.312     292       254
11              331       421       1.272     311       237
12              324       429       1.324     325       263
Average         328       426       1.299     301       252
Faugeras        322.725   415.172   1.286     282.091   234.028

Table 1. Results of calibration for the lens with 95°/71° view angle. Image size is 640x486 pixels. Calibrations are carried out for three laser cross images and various combinations of reference points. Identical calibration results are listed only once.

Calibration #   α_u       α_v       α_v/α_u   u_0       v_0
1               550       563       1.024     321       225
2               549       561       1.023     320       289
3               569       554       0.974     317       230
4               554       563       1.016     302       287
5               555       564       1.016     309       224
Average         555.4     561       1.011     313.8     251
Faugeras        561.86    564.518   1.005     346.102   231.129

Table 2. Results of calibration for the lens with 63°/49° view angle. Image size is 640x486 pixels. Calibrations are carried out for two laser cross images and various combinations of reference points. Identical calibration results are listed only once.
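As a quick sanity check on the averaging argument of Section 4, the "Average" row of Table 2 can be reproduced from the five individual runs; the sketch below simply hardcodes the table values.

```python
# Rows of Table 2: (alpha_u, alpha_v, alpha_v/alpha_u, u0, v0) per calibration run.
runs = [
    (550, 563, 1.024, 321, 225),
    (549, 561, 1.023, 320, 289),
    (569, 554, 0.974, 317, 230),
    (554, 563, 1.016, 302, 287),
    (555, 564, 1.016, 309, 224),
]
cols = list(zip(*runs))
avg = [sum(c) / len(c) for c in cols]
# Matches the "Average" row: 555.4, 561.0, 1.011, 313.8, 251.0
print([round(a, 3) for a in avg])
```

Note that the averaged aspect ratio (1.011) is closer to the known value 1 than most individual runs, which is exactly the improvement by averaging claimed in the text.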
