
Lecture 3 Camera Models 2 & Camera Calibration

Professor Silvio Savarese Computational Vision and Geometry Lab

Silvio Savarese Lecture 3 - 13-Jan-15

In this lecture, we will discuss the topic of camera calibration. We will start with a recap of camera models. Next, we will formulate the camera calibration problem and investigate how to estimate the unknown parameters. We will see how radial distortion affects our calibration process, and how we can model it. Finally, we will end with an example calibration session in MATLAB.

Lecture 3: Camera Models 2 & Camera Calibration
• Recap of camera models
• Camera calibration problem
• Camera calibration with radial distortion
• Example

Reading: [FP] Chapter 1 "Geometric Camera Calibration"; [HZ] Chapter 7 "Computation of the Camera Matrix P"

Some slides in this lecture are courtesy of Profs. J. Ponce and F.-F. Li


Let's revisit our projective camera model. Please refer to the lecture 2 notes for details.

Projective camera: pinhole perspective projection.
• f = focal length
• u_o, v_o = offset of the image center C' = [u_o, v_o] (note: a different convention w.r.t. lecture 2)

Projective camera (units):
• f = focal length, f [m]
• k, l [pixel/m]
• u_o, v_o = offset
• α, β → non-square pixels, α, β [pixel]

Projective camera:
• f = focal length
• u_o, v_o = offset
• α, β → non-square pixels
• θ = skew angle

$$P' = \begin{bmatrix} \alpha & -\alpha\cot\theta & u_o & 0 \\ 0 & \dfrac{\beta}{\sin\theta} & v_o & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

K has 5 degrees of freedom!

Projective camera, with world reference system [O_w, i_w, j_w, k_w]:

$$P = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix}_{4\times4} P_w$$

• f = focal length
• u_o, v_o = offset
• α, β → non-square pixels
• θ = skew angle
• R, T = rotation, translation

Projective camera:

$$P' = M P_w = K[R\ \ T]\,P_w$$

where K contains the internal (intrinsic) parameters and [R T] the external (extrinsic) parameters. This slide summarizes the main equation that describes the projective transformation from a point P_w in the world coordinate system (in homogeneous coordinates) into a point P'_E in image pixels (in Euclidean coordinates). This equation was also presented in lecture 2.

$$P'_{3\times1} = M P_{w\,4\times1} = K_{3\times3}\,[R\ \ T]\,P_w = \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix}_{3\times4} P_w = \begin{bmatrix} m_1 P_w \\ m_2 P_w \\ m_3 P_w \end{bmatrix} \;\rightarrow\; P'_E = \left( \frac{m_1 P_w}{m_3 P_w},\ \frac{m_2 P_w}{m_3 P_w} \right)$$

Theorem (Faugeras, 1993). In 1993 Olivier Faugeras derived a powerful theorem that relates the properties of a generic projective transformation M to the properties of the 3x3 matrix A defined in Eq. 13 of lecture 2:

$$M = K[R\ \ T] = [KR\ \ KT] = [A\ \ b], \qquad A = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \qquad \text{[Eq. 13; lect. 2]}$$

A is defined as KR, and a_1, a_2, and a_3 are the first, second, and third row vectors of A, respectively. These relationships are summarized in the necessary and sufficient conditions illustrated in the slide.

Let’s consider now the following exercise: suppose that the two reference systems coincide (R=I and T=0 ) and the camera model has focal length f, Exercise! 3x3 3x1 zero-skew, no offset, square pixels. Can we write a simplified expression for P’ ? E R,T Notice that the result is exactly what we derived for the pinhole camera in j w lecture 2.

f P kw

Ow

P’ iw

! f 0 0 0 $ # & ! # K! I 0 # M = K R T = " $ = # 0 f 0 0 & " $ # & 0 0 1 0 ! x $ " % # w & # y & m1Pw m2Pw xw yw P = w à P' = ( , ) = ( f , f ) w # & E zw m3Pw m3Pw zw zw # & "# 1 %& So far, we have discussed a projective camera model. Given a point in the world, Projective camera its image coordinates depends on its depth.
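The simplified projection of the exercise can be checked numerically. Below is a minimal numpy sketch (function and variable names are ours, not from the lecture):

```python
import numpy as np

def pinhole_project(P_w, f):
    """Project a homogeneous world point [x, y, z, 1] with the simplified
    pinhole matrix M = K[I 0] (zero skew, no offset, square pixels)."""
    M = np.array([[f, 0.0, 0.0, 0.0],
                  [0.0, f, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    p = M @ P_w               # homogeneous image point (3-vector)
    return p[:2] / p[2]       # Euclidean coordinates (f*x/z, f*y/z)

# For x=2, y=1, z=4 and f=2 this gives (f*x/z, f*y/z) = (1.0, 0.5).
print(pinhole_project(np.array([2.0, 1.0, 4.0, 1.0]), f=2.0))
```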

f’ R p’ q’ r’ Q O

P

We will now describe a simpler model, known as the weak perspective projection model, used when the relative scene depth is small compared to its distance from the camera. In the weak perspective model, points are: 1) first projected to a reference plane using an orthogonal projection, and 2) then projected to the image plane using a projective transformation.

1) As the slide shows, given a reference plane π at a distance z_0 from the center of the camera, the three points P, Q, and R are first projected to the plane π using an orthogonal projection, generating the points P_, Q_, and R_ (this is equivalent to assigning the z-coordinate z_0 to the points P, Q, and R). This is a reasonable approximation when deviations in depth from the plane are small compared to the distance from the camera.

2) Then, the points P_, Q_, and R_ are projected to the image plane using a regular projective transformation, producing the points p', q', r'. Notice, however, that because we have approximated the depth of each point by z_0, the projection has been reduced to a simple (constant) magnification: the focal length f' (a constant) divided by z_0 (also a constant).

$$\begin{cases} x' = \dfrac{f'}{z}\,x \\[4pt] y' = \dfrac{f'}{z}\,y \end{cases} \;\longrightarrow\; \begin{cases} x' = \dfrac{f'}{z_0}\,x \\[4pt] y' = \dfrac{f'}{z_0}\,y \end{cases} \qquad \text{Magnification } m = \frac{f'}{z_0}$$

This also simplifies the projection matrix M. Note that the last row of M is now [0 0 0 1] (written as [0 1] in block notation). We do not prove this result and leave it to students as an exercise. We can observe this simplification algebraically in the next slide.


Projective (perspective) → weak perspective:

$$M = K[R\ \ T] = \begin{bmatrix} A & b \\ v & 1 \end{bmatrix} \;\longrightarrow\; M = \begin{bmatrix} A & b \\ 0 & 1 \end{bmatrix}$$

In the equations above, m_1, m_2, and m_3 denote the rows of the projection matrix. In our full perspective model, m_3 is equal to [v 1], where v is some non-zero 1x3 vector. On the other hand, m_3 = [0 0 0 1] for the weak perspective model. This makes the denominator term m_3 P_w equal to 1. As a result, the non-linearity of the projective transformation disappears and the weak perspective transformation acts as a mere magnifier.

Perspective:

$$M = \begin{bmatrix} A & b \\ v & 1 \end{bmatrix} = \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix}, \qquad P' = M P_w = \begin{bmatrix} m_1 P_w \\ m_2 P_w \\ m_3 P_w \end{bmatrix} \;\rightarrow\; P'_E = \left( \frac{m_1 P_w}{m_3 P_w},\ \frac{m_2 P_w}{m_3 P_w} \right)$$

Weak perspective (magnification):

$$M = \begin{bmatrix} A & b \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix} = \begin{bmatrix} m_1 \\ m_2 \\ 0\ \ 0\ \ 0\ \ 1 \end{bmatrix}, \qquad P' = M P_w = \begin{bmatrix} m_1 P_w \\ m_2 P_w \\ 1 \end{bmatrix} \;\rightarrow\; P'_E = (m_1 P_w,\ m_2 P_w)$$
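The magnifier behavior can be illustrated numerically (a sketch, names ours): under weak perspective every point is scaled by the same constant m = f'/z_0, regardless of its true depth.

```python
import numpy as np

def weak_perspective_project(P, f_prime, z0):
    """Weak perspective: replace the point's depth by the reference depth z0,
    so projection reduces to a constant magnification m = f'/z0."""
    m = f_prime / z0
    return m * np.asarray(P)[:2]

# Two points at slightly different depths get the same magnification f'/z0 = 0.5:
p1 = weak_perspective_project([2.0, 1.0, 4.2], f_prime=2.0, z0=4.0)
p2 = weak_perspective_project([2.0, 1.0, 3.8], f_prime=2.0, z0=4.0)
```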

Further simplification leads to the orthographic (or affine) projection model. In this case, the optical center is located at infinity: the distance from the center of projection to the image plane is infinite. The projection rays are perpendicular to the retinal plane (parallel to the optical axis). As a result, this model ignores depth altogether. It is often used for architecture and industrial design.

$$\begin{cases} x' = \dfrac{f'}{z}\,x \\[4pt] y' = \dfrac{f'}{z}\,y \end{cases} \;\longrightarrow\; \begin{cases} x' = x \\ y' = y \end{cases}$$

Pros and cons of these models:

• Weak perspective: much simpler math. – Accurate when the object is small and distant. – Most useful for recognition.

• Pinhole perspective: much more accurate for general scenes. – Used in structure from motion and SLAM.

Weak perspective projection example. Painted by Wang Hui (1632-1717) and assistants, the work below was executed before Western perspective was introduced into Chinese art. In the video, D. Hockney comments on the properties of the perspective geometry depicted in the painting.

The Kangxi Emperor's Southern Inspection Tour (1691-1698), by Wang Hui

Lecture 3: Camera Calibration

In this section, we will introduce the problem of camera calibration and discuss its importance. Next, we will look at some common methods for solving the calibration problem.


Recall that our camera is modeled using internal, or intrinsic, parameters (K) and external, or extrinsic, parameters ([R T]). Their product gives us the full 3x4 projection matrix, shown below in expanded form. It has 11 degrees of freedom (we discussed this in lecture 2).

$$P' = M P_w = K[R\ \ T]\,P_w, \qquad M:\ 3\times4$$

Internal parameters K; external parameters [R T]:

$$K = \begin{bmatrix} \alpha & -\alpha\cot\theta & u_o \\ 0 & \dfrac{\beta}{\sin\theta} & v_o \\ 0 & 0 & 1 \end{bmatrix}, \qquad R = \begin{bmatrix} r_1^T \\ r_2^T \\ r_3^T \end{bmatrix}, \qquad T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}$$

We now pose the following problem: given one or more images taken by a camera, estimate its intrinsic and extrinsic parameters.

Goal of calibration: estimate the intrinsic and extrinsic parameters from one or multiple images. Notice that from this point on we change notation: P_w is now denoted as P, and P' is now denoted as p.

$$p = M P = K[R\ \ T]\,P \qquad (\text{change of notation: } P = P_w,\ \ p = P')$$

We can describe this problem more precisely using a calibration rig, such as the one shown above. The rig usually consists of a pattern (typically a checkerboard) with known dimensions. The rig also establishes our world reference coordinate frame [O_w, i_w, j_w, k_w], as shown in the figure.

Calibration rig


• P_1 … P_n with known positions in [O_w, i_w, j_w, k_w]

Now, suppose we are given n points on the calibration rig, P_1 … P_n, along with their corresponding coordinates p_1, … p_n in the image. Using these correspondences, our goal is to estimate both the intrinsic and extrinsic parameters.

Calibration rig image


• P_1 … P_n with known positions in [O_w, i_w, j_w, k_w]
• p_1, … p_n with known positions in the image

Goal: compute the intrinsic and extrinsic parameters.

How many such correspondences would we need to compute both intrinsics and extrinsics? Our projection matrix has 11 degrees of freedom (5 for the intrinsics and 6 for the extrinsics), so we need at least 11 equations. Each correspondence gives us two equations (one for u and one for v). Therefore, we would need at least 6 correspondences.


How many correspondences do we need?

• M has 11 unknowns
• We need at least 11 equations
• 6 correspondences would do it

As always, our correspondences are not perfect and are susceptible to noise. Having more than the minimal number of correspondences allows us to be robust to these imprecisions.

Calibration rig image


In practice, using more than 6 correspondences enables more robust results

In the next slides, we will set up a linear system of equations from n such correspondences and propose a procedure for solving it for the unknowns (intrinsic and extrinsic parameters). Let's consider the i-th correspondence pair, defined by a point P_i on the calibration rig (in the world reference system) and its observation p_i in the image. u_i and v_i are the coordinates of p_i (measured in pixels, in Euclidean coordinates); the relationship between u_i, v_i and P_i is expressed by Eq. 1, as also illustrated in the following slide.

$$M = \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix}, \qquad M P_i \;\rightarrow\; p_i = \begin{bmatrix} u_i \\ v_i \end{bmatrix} = \begin{bmatrix} \dfrac{m_1 P_i}{m_3 P_i} \\[6pt] \dfrac{m_2 P_i}{m_3 P_i} \end{bmatrix} \ \text{(in pixels)} \qquad \text{[Eq. 1]}$$

From Eq. 1, we can derive a pair of equations that relate u_i with P_i and v_i with P_i (the system of Eqs. 2).

$$u_i = \frac{m_1 P_i}{m_3 P_i} \;\rightarrow\; u_i\,(m_3 P_i) = m_1 P_i \;\rightarrow\; u_i\,(m_3 P_i) - m_1 P_i = 0$$

$$v_i = \frac{m_2 P_i}{m_3 P_i} \;\rightarrow\; v_i\,(m_3 P_i) = m_2 P_i \;\rightarrow\; v_i\,(m_3 P_i) - m_2 P_i = 0 \qquad \text{[Eqs. 2]}$$

Assuming that we have n such correspondences, we can set up a system of 2n equations [Eqs. 3]:

$$\begin{aligned} u_1(m_3 P_1) - m_1 P_1 &= 0 \\ v_1(m_3 P_1) - m_2 P_1 &= 0 \\ &\ \ \vdots \\ u_i(m_3 P_i) - m_1 P_i &= 0 \\ v_i(m_3 P_i) - m_2 P_i &= 0 \\ &\ \ \vdots \\ u_n(m_3 P_n) - m_1 P_n &= 0 \\ v_n(m_3 P_n) - m_2 P_n &= 0 \end{aligned} \qquad \text{[Eqs. 3]}$$

Block multiplication. Before we move forward, let's briefly discuss the concept of block matrix multiplication (from matrix theory 101). For conciseness and efficiency, large matrices can be partitioned into smaller blocks (submatrices). Given two such block matrices, we can define their product in terms of their submatrices (assuming that the partitioning is such that the products are feasible), as shown below. This way of handling matrices is very compact and helps make the ensuing derivation easier to describe.

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}$$

What is AB ?

$$AB = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{bmatrix}$$
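We can sanity-check this identity numerically on a random 4x4 example split into 2x2 blocks (a sketch, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Partition each matrix into four 2x2 blocks.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

# The block-wise product equals the ordinary matrix product A @ B.
AB_blocks = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                      [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
assert np.allclose(AB_blocks, A @ B)
```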

Returning to the camera calibration problem, the 2n equations obtained from the correspondences constitute a homogeneous linear system [Eqs. 3]. Therefore, we can express it as a matrix equation of the form P m = 0 [Eq. 4], where the known matrix P comprises the coefficients from our correspondences, while the unknown vector m comprises the parameters we wish to estimate.

$$P\,m = 0 \qquad \text{[Eq. 4]}$$

Using block matrix notation, we can concisely write down P and m as shown below. Note that m is a vectorized version of the 3x4 projection matrix, where each 4x1 block corresponds to a row of the original matrix:

$$P = \begin{bmatrix} P_1^T & 0^T & -u_1 P_1^T \\ 0^T & P_1^T & -v_1 P_1^T \\ \vdots & \vdots & \vdots \\ P_n^T & 0^T & -u_n P_n^T \\ 0^T & P_n^T & -v_n P_n^T \end{bmatrix}_{2n\times12}, \qquad m \stackrel{\text{def}}{=} \begin{bmatrix} m_1^T \\ m_2^T \\ m_3^T \end{bmatrix}_{12\times1}$$

Homogeneous M x N linear systems: M = number of equations = 2n, N = number of unknowns = 11. When 2n > 11, our homogeneous linear system is overdetermined (a rectangular system with M > N). Zero is always a (trivial) solution, and if m is a solution, then so is km, where k is an arbitrary scale factor. Therefore, to constrain our optimization, we minimize |P m|^2 subject to the constraint |m|^2 = 1 (see the review session for details). The boxes around P, m, and 0 in the slide give a qualitative illustration of the dimensionality of each variable; the dashed sides illustrate the situation where M = N.
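Assembling P row by row from Eqs. 3 can be sketched as follows (a minimal numpy version; the function and variable names are ours):

```python
import numpy as np

def build_P(points_w, points_img):
    """Stack the 2n x 12 coefficient matrix of P m = 0 [Eq. 4].
    points_w: (n, 4) homogeneous world points P_i.
    points_img: (n, 2) pixel coordinates (u_i, v_i)."""
    rows = []
    for Pi, (u, v) in zip(points_w, points_img):
        zero = np.zeros(4)
        rows.append(np.concatenate([Pi, zero, -u * Pi]))  # u_i equation
        rows.append(np.concatenate([zero, Pi, -v * Pi]))  # v_i equation
    return np.array(rows)
```

With n = 6 points this produces a 12 x 12 system; on noise-free data, a true projection matrix M flattened row-wise into m satisfies P m = 0 exactly.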

Recall that for a homogeneous linear system such as P m = 0, the SVD gives us the least-squares solution, subject to |m| = 1.

P m = 0

• How do we solve this homogeneous linear system? • Via SVD decomposition!

We can obtain the least-squares solution for m by factorizing P as U D V^T using SVD and then taking the last column of V. The derivation of why this is true goes beyond the scope of this lecture; please refer to Sec. A5.3 of HZ (pages 592-593) for details.

Once m is computed, we can repackage it into M and compute all the camera parameters, as we'll see next.

SVD decomposition of P:

$$P = U_{2n\times12}\, D_{12\times12}\, V^T_{12\times12}$$

The last column of V gives m. Why? See page 592 of HZ.
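In numpy this amounts to one call (a sketch, names ours); note that numpy returns V^T, so the last column of V is the last row of Vt:

```python
import numpy as np

def solve_homogeneous(P):
    """Least-squares solution of P m = 0 subject to |m| = 1: the right
    singular vector associated with the smallest singular value."""
    _, _, Vt = np.linalg.svd(P)
    return Vt[-1]

# Toy example: the null space of [[1,0,0],[0,1,0]] is spanned by (0,0,1),
# so the solution is (0, 0, 1) up to sign.
m = solve_homogeneous(np.array([[1.0, 0.0, 0.0],
                                [0.0, 1.0, 0.0]]))
```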

$$m \stackrel{\text{def}}{=} \begin{bmatrix} m_1^T \\ m_2^T \\ m_3^T \end{bmatrix} \;\rightarrow\; M$$

Extracting camera parameters. Once we have estimated the combined parameters in m, the next step is to extract the actual intrinsics and extrinsics from it.

Remember that M is known up to scale. We explicitly identify the scale parameter as ρ. Notice that ρ is not a real unknown: the constraint that allows us to estimate it is that |m| must equal 1 (or, equivalently, that the Frobenius norm of M equals 1).

It is relatively straightforward to derive all the intrinsics and extrinsics from the 3x4 matrix M estimated in the previous slides; we refer to [FP], Sec. 1.3.1, for the details, or leave it as an exercise. To avoid overcomplicating the notation, we rename the estimated matrix as [A b] and provide a solution for the intrinsics and extrinsics as functions of the elements of A and b (as defined in Box 1 in the slide):

$$\rho\,[A\ \ b] = K[R\ \ T], \qquad A = \begin{bmatrix} a_1^T \\ a_2^T \\ a_3^T \end{bmatrix}, \quad b = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} \qquad \text{(Box 1)}$$

The solutions for the scale ρ, the offset, and the skew are:

$$\rho = \frac{\pm 1}{\|a_3\|}, \qquad u_o = \rho^2\,(a_1\cdot a_3), \qquad v_o = \rho^2\,(a_2\cdot a_3), \qquad \cos\theta = -\frac{(a_1\times a_3)\cdot(a_2\times a_3)}{\|a_1\times a_3\|\;\|a_2\times a_3\|}$$

Interestingly, the equation that provides the solution for the skew angle θ supplies an easy way to verify the second claim of the Faugeras theorem that we introduced earlier. Here we have the solutions for α and β and, thus, for the focal length:

$$\alpha = \rho^2\,\|a_1\times a_3\|\,\sin\theta, \qquad \beta = \rho^2\,\|a_2\times a_3\|\,\sin\theta$$


Here we have the solutions for the extrinsics R and T:

$$r_1 = \frac{a_2\times a_3}{\|a_2\times a_3\|}, \qquad r_3 = \rho\,a_3, \qquad r_2 = r_3\times r_1, \qquad T = \rho\,K^{-1} b$$
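The closed-form extraction above can be sketched in numpy as follows (function name is ours; we resolve the sign ambiguity by taking ρ > 0, which assumes the object lies in front of the camera, and use the sign convention of [FP] for cos θ):

```python
import numpy as np

def decompose_M(M):
    """Recover K, R, T from a 3x4 projection matrix known up to scale,
    following the closed-form solution of [FP] Sec. 1.3.1."""
    A, b = M[:, :3], M[:, 3]
    a1, a2, a3 = A                          # rows of A
    rho = 1.0 / np.linalg.norm(a3)          # choose rho > 0
    u0 = rho**2 * (a1 @ a3)
    v0 = rho**2 * (a2 @ a3)
    c13, c23 = np.cross(a1, a3), np.cross(a2, a3)
    cos_t = -(c13 @ c23) / (np.linalg.norm(c13) * np.linalg.norm(c23))
    sin_t = np.sqrt(1.0 - cos_t**2)         # theta in (0, pi)
    alpha = rho**2 * np.linalg.norm(c13) * sin_t
    beta = rho**2 * np.linalg.norm(c23) * sin_t
    K = np.array([[alpha, -alpha * cos_t / sin_t, u0],
                  [0.0, beta / sin_t, v0],
                  [0.0, 0.0, 1.0]])
    r1 = c23 / np.linalg.norm(c23)
    r3 = rho * a3
    r2 = np.cross(r3, r1)
    T = rho * np.linalg.solve(K, b)         # T = rho * K^{-1} b
    return K, np.vstack([r1, r2, r3]), T
```

On a synthetic M = s·K[R T] (any s > 0), this recovers K, R, and T exactly.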

Degenerate cases:
• The P_i's cannot all lie on the same plane!
• The points cannot lie on the intersection curve of two quadric surfaces.

We will now discuss the more complex scenario where real-world lenses introduce radial distortion. We will analyze a few common types of radial distortion and incorporate them into our model.


Radial distortion. So far, we have been working with ideal lenses, which are free from any distortion. However, real lenses can deviate from rectilinear projection:
• Image magnification increases or decreases with distance from the optical axis
• Caused by imperfect lenses
• Deviations are most noticeable for rays that pass through the edge of the lens

The resulting distortions are often radially symmetric, which can be attributed to the physical symmetry of the lens. The image above exhibits barrel distortion, which occurs when the image magnification decreases with distance from the optical axis. Fisheye lenses usually produce this type of distortion.

No distortion

Pin cushion

Barrel

The radial distortion can be modeled using an isotropic transformation S_λ, as shown in the slide. This transformation is regulated by the distortion factor λ. Notice that, unlike a traditional scale transformation of the type [s 0 0; 0 s 0; 0 0 1], where s is constant, λ is a function of the distance d from the center (since the distortion is radially symmetric about the center) and thus a function of the coordinates u_i, v_i. This causes the distortion transformation to introduce a non-linearity into the mapping from P_i to p_i.

$$S_\lambda = \begin{bmatrix} \frac{1}{\lambda} & 0 & 0 \\ 0 & \frac{1}{\lambda} & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad S_\lambda\, M P_i \;\rightarrow\; \begin{bmatrix} u_i \\ v_i \end{bmatrix} = p_i$$

We approximate λ using a polynomial expansion (Eq. 5); the resulting coefficients κ_p are known as the distortion coefficients. In order to model the radial behavior, the distance can be expressed as a quadratic function of the coordinates u, v in the image plane (Eq. 6).

$$\lambda = 1 \pm \sum_{p=1}^{3} \kappa_p\, d^{2p} \quad \text{[Eq. 5]}, \qquad d^2 = a\,u^2 + b\,v^2 + c\,u\,v \quad \text{[Eq. 6]}$$

We can rewrite our projection equations using block matrix notation, defining Q as S_λ M. We follow the same procedure that led to Eq. 1 and Eqs. 2. Unlike the system defined by Eqs. 2, however, the system of equations Eqs. 7 is no longer a linear one.

$$Q = S_\lambda M = \begin{bmatrix} q_1 \\ q_2 \\ q_3 \end{bmatrix}, \qquad Q\,P_i \;\rightarrow\; \begin{bmatrix} u_i \\ v_i \end{bmatrix} = p_i$$
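A sketch of the magnification model of Eq. 5 (assuming a = b = 1, c = 0 in Eq. 6, i.e. d^2 = u^2 + v^2, and with the ± of Eq. 5 absorbed into the signs of the κ_p; names are ours):

```python
import numpy as np

def apply_radial_distortion(u, v, kappas):
    """Scale image coordinates by 1/lambda, where lambda is a polynomial in
    the squared radial distance d^2 = u^2 + v^2 [Eq. 5]."""
    d2 = u**2 + v**2
    lam = 1.0 + sum(k * d2**(p + 1) for p, k in enumerate(kappas))
    return u / lam, v / lam

# Points farther from the center are shrunk by a larger factor:
u1, v1 = apply_radial_distortion(0.1, 0.0, [0.2, 0.05])
u2, v2 = apply_radial_distortion(1.0, 0.0, [0.2, 0.05])
```

Shrinking distant points more than central ones is exactly the barrel behavior described above.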

Is this a linear system of equations?

$$p_i = \begin{bmatrix} u_i \\ v_i \end{bmatrix} = \begin{bmatrix} \dfrac{q_1 P_i}{q_3 P_i} \\[6pt] \dfrac{q_2 P_i}{q_3 P_i} \end{bmatrix} \;\rightarrow\; \begin{cases} u_i\,(q_3 P_i) = q_1 P_i \\ v_i\,(q_3 P_i) = q_2 P_i \end{cases} \qquad \text{[Eqs. 7]}$$

No! Why? Because λ (and hence q_1 and q_2) depends on the image coordinates u_i, v_i through the distance d.

General calibration problem. If n correspondences are available, all these constraints can be packaged into a (non-linear) multi-variable function f that relates all the (unknown) parameters Q to the observations/measurements [Eq. 8]:

$$\begin{bmatrix} u_i \\ v_i \end{bmatrix} = \begin{bmatrix} \dfrac{q_1 P_i}{q_3 P_i} \\[6pt] \dfrac{q_2 P_i}{q_3 P_i} \end{bmatrix} \;\Longleftrightarrow\; X = f(Q), \quad i = 1 \dots n \qquad \text{[Eq. 8]}$$

where X collects the measurements, Q the parameters, and f( ) is the non-linear mapping. A common way to solve this is to resort to non-linear optimization techniques; two common ones are Newton's method and the Levenberg-Marquardt algorithm. The slide lists some of the advantages and trade-offs of each; J and H refer to the Jacobian and Hessian, respectively. For more details about optimization methods, please refer to [FP] Sec. 22.2 (pages 669-672) or any reference textbook.

Newton's method / Levenberg-Marquardt algorithm:
• Iterative, starts from an initial solution
• May be slow if the initial solution is far from the real solution
• The estimated solution may depend on the initial solution
• Newton requires the computation of J and H
• Levenberg-Marquardt doesn't require the computation of H

A simple algorithm that people use in practice: first, solve the system using n correspondences (as we did for Eq. 4) by assuming that there is no distortion; then, use this solution as the initial condition for solving the full problem in Eq. 8 using Newton or Levenberg-Marquardt.

A possible algorithm

1. Solve the linear part of the system to find an approximate solution
2. Use this solution as the initial condition for the full system
3. Solve the full system using Newton or L.M.

We can simplify the calibration problem if we make certain assumptions. Typical assumptions include zero skew and square pixels (which is reasonable for many modern cameras), a known image center, and negligible distortion. Under these assumptions, the dimensionality of the problem is reduced and the optimization problem is simplified.

Typical assumptions:

- zero skew, square pixels

- uo, vo = known center of the image

An alternative and very elegant approach, which doesn't require the use of non-linear optimization, follows next. We note that the ratio between the two coordinates u_i and v_i of the point p_i in the image is not affected by the distortion (remember that the barrel distortion acts radially):

$$p_i = \begin{bmatrix} u_i \\ v_i \end{bmatrix} = \begin{bmatrix} \dfrac{q_1 P_i}{q_3 P_i} \\[6pt] \dfrac{q_2 P_i}{q_3 P_i} \end{bmatrix} = \frac{1}{\lambda} \begin{bmatrix} \dfrac{m_1 P_i}{m_3 P_i} \\[6pt] \dfrac{m_2 P_i}{m_3 P_i} \end{bmatrix}$$

Can we estimate m1 and m2 and ignore the radial distortion?

Hint: u_i / v_i = slope.

Thus, let’s compute this ratio of the each p . This leads us to Eq.9. i By assuming that n correspondences are available, we can set up the system in Radial Distortion Eq. 10. Similarly to the derivation we used to compute Eq. 4, we can obtain the homogenous system in Eq. 11. This can be solved using SVD and allows a solution for m and m . Estimating m1 and m2… 1 2 m P & 1 i # (m1 Pi ) [Eq .9] &ui # 1 $ m P ! u (m P ) m P p = $ 3 i ! i = 3 i = 1 i i = $ ! m P λ $ 2 i ! v (m2 Pi ) m P %vi " i 2 i $ m P ! % 3 i " (m3 Pi ) [Eq .10] v (m P ) − u (m P ) = 0 [Eq .11] 1 1 1 1 2 1 &m # L n = 0 n 1 vi (m1 Pi ) − ui (m2 Pi ) = 0 = $ ! … %m2 "

$$v_n(m_1 P_n) - u_n(m_2 P_n) = 0$$

Get m_1 and m_2 by SVD (Tsai [87]).

Once m_1 and m_2 are estimated, m_3 can be expressed as a non-linear function of m_1, m_2, and λ. This still requires solving a non-linear optimization problem whose complexity, however, is very much reduced.
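The homogeneous system L n = 0 of Eq. 11 can be assembled as follows (a sketch, names ours). Note that the constraint holds regardless of the per-point distortion λ_i, since λ cancels in the ratio:

```python
import numpy as np

def build_L(points_w, points_img):
    """n x 8 matrix of the distortion-free constraints
    v_i (m1 . P_i) - u_i (m2 . P_i) = 0, stacked as L n = 0, n = [m1; m2]."""
    rows = [np.concatenate([v * Pi, -u * Pi])
            for Pi, (u, v) in zip(points_w, points_img)]
    return np.array(rows)
```

As with Eq. 4, the right singular vector of L with the smallest singular value gives the least-squares estimate of n = [m1; m2].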

Once m_1 and m_2 are estimated:

$$p_i = \begin{bmatrix} u_i \\ v_i \end{bmatrix} = \frac{1}{\lambda} \begin{bmatrix} \dfrac{m_1 P_i}{m_3 P_i} \\[6pt] \dfrac{m_2 P_i}{m_3 P_i} \end{bmatrix}$$

m_3 is a non-linear function of m_1, m_2, and λ.

There are some degenerate configurations for which m1 and m2 cannot be computed

We will now go over an example of camera calibration using MATLAB.


Calibration procedure: Camera Calibration Toolbox for MATLAB, J. Bouguet [1998-2000].

http://www.vision.caltech.edu/bouguetj/calib_doc/index.html#examples


Next lecture

• Single view reconstruction