Lecture 3 - Two View Geometry & the Essential Matrix


DD2429
September 10, 2018

Review of the general perspective projection camera matrix.

Note: Most slides are copied/adapted from the Multiple View Geometry tutorial, Part 1, CVPR, June 1999, by Richard Hartley and Andrew Zisserman.

Part I: Single and Two View Geometry
The main points covered in this part are:
• A perspective (central) projection camera is represented by a 3 x 4 matrix.
• The most general perspective transformation between two planes (a world plane and the image plane, or two image planes induced by a world plane) is a plane projective transformation. It can be computed from the correspondence of four (or more) points.
• The epipolar geometry between two views is represented by the fundamental matrix. It can be computed from the correspondence of seven (or more) points.

Imaging geometry - perspective projection
• World and camera axes are aligned, with the image plane at (0, 0, f)^T.
• The perspective projection of a scene point (X, Y, Z)^T is (x, y) = (fX/Z, fY/Z), which can be written

    (x, y, f)^T = λ (X, Y, Z)^T,   where λ = f/Z.
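One of the main points above is that the epipolar geometry of two views is encoded by the fundamental matrix F, which satisfies x̄^T F x = 0 for corresponding image points x and x̄. A minimal numerical check with numpy: the second camera P2 = [A | a] below is an arbitrary made-up example, and for the canonical pair P1 = [I | 0], P2 = [A | a] one valid fundamental matrix is F = [a]_x A.

```python
import numpy as np

def cross_mat(v):
    """Skew-symmetric matrix [v]x such that [v]x w = v x w."""
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

# Canonical first camera and an arbitrary (made-up) second camera P2 = [A | a]
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
A = np.array([[1., 0.1, 0.],
              [0., 1., 0.2],
              [0.1, 0., 1.]])
a = np.array([1., 0., 0.2])
P2 = np.hstack([A, a[:, None]])

# For this camera pair, F = [a]x A is a fundamental matrix
F = cross_mat(a) @ A

# Any scene point projects to a pair of points satisfying the epipolar constraint
X = np.array([0.3, -1.0, 4.0, 1.0])   # arbitrary scene point (homogeneous)
x1, x2 = P1 @ X, P2 @ X
assert abs(float(x2 @ F @ x1)) < 1e-12   # x2^T F x1 = 0
```

The constraint holds exactly in algebra because x2^T [a]_x (A x1) pairs a skew-symmetric form with itself and [a]_x a = 0; numerically it vanishes to machine precision.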
• This is a linear mapping between homogeneous coordinates (the equation holds only up to a scale factor):

    (x, y, f)^T = λ [ 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ] (X, Y, Z, 1)^T

  where the 3 x 4 projection matrix represents a map from 3D to 2D.
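The homogeneous formulation can be checked in a few lines of numpy; the focal length and scene point below are made-up values:

```python
import numpy as np

f = 2.0                          # focal length (made-up value)
X = np.array([4.0, 2.0, 8.0])    # scene point (X, Y, Z), made-up value

# Direct perspective projection: (x, y) = (f X / Z, f Y / Z)
x_direct = f * X[:2] / X[2]

# Homogeneous formulation: (x, y, f)^T ~ [I | 0] (X, Y, Z, 1)^T
P = np.hstack([np.eye(3), np.zeros((3, 1))])   # the 3x4 matrix [I | 0]
X_h = np.append(X, 1.0)
p = P @ X_h                      # = (X, Y, Z), defined only up to scale
p = f * p / p[2]                 # fix the scale so the third coordinate is f

print(p[:2])                     # -> [1.  0.5], identical to x_direct
```

Both routes give the same image point, illustrating that the homogeneous equation is the projection written "up to a scale factor" λ = f/Z.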
Image coordinate system - internal camera parameters
• Image coordinates (x, y) and camera coordinates (x_cam, y_cam) are related by

    k_x x_cam = x - x_0,   k_y y_cam = y - y_0

  where the units of k_x, k_y are pixels/length.
• In homogeneous coordinates,

    x = (x, y, 1)^T = (1/f) [ α_x 0 x_0 ; 0 α_y y_0 ; 0 0 1 ] (x_cam, y_cam, f)^T = (1/f) K (x_cam, y_cam, f)^T

  where α_x = f k_x and α_y = f k_y.

Camera calibration matrix
• K is a 3 x 3 upper-triangular matrix called the camera calibration matrix:

    K = [ α_x 0 x_0 ; 0 α_y y_0 ; 0 0 1 ]

• There are four parameters:
  1. The scaling in the image x and y directions, α_x and α_y.
  2. The principal point (x_0, y_0), the point where the optic axis intersects the image plane.
• The aspect ratio is α_y / α_x.
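The role of K can be sketched numerically; every parameter value below is invented for illustration:

```python
import numpy as np

# Invented internal parameters
f = 0.05                        # focal length [m]
kx, ky = 8000.0, 8000.0         # pixels per unit length
x0, y0 = 320.0, 240.0           # principal point [pixels]
ax, ay = f * kx, f * ky         # alpha_x = f*kx, alpha_y = f*ky

K = np.array([[ax, 0., x0],
              [0., ay, y0],
              [0., 0., 1.]])    # upper-triangular calibration matrix

# A point on the image plane, in camera coordinates (x_cam, y_cam, f)
p_cam = np.array([0.01, -0.005, f])

# Pixel coordinates: x = (1/f) K (x_cam, y_cam, f)^T
x_pix = K @ p_cam / f

# Same result from the explicit equations x = kx*x_cam + x0, y = ky*y_cam + y0
assert np.allclose(x_pix[:2], [kx * 0.01 + x0, ky * (-0.005) + y0])
print(x_pix)                    # -> [400. 200.   1.]
```

With k_x = k_y the aspect ratio α_y/α_x is 1; unequal pixel scalings would make it differ from 1.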
World coordinate system - external camera parameters
• A Euclidean transformation relates world and camera coordinates:

    (X_cam, Y_cam, Z_cam, 1)^T = [ R t ; 0^T 1 ] (X, Y, Z, 1)^T

  where R is a 3 x 3 rotation matrix and t is a 3 x 1 translation vector.

Putting everything together
• Concatenating the three matrices,

    x = (x, y, 1)^T = λ K [ I | 0 ] [ R t ; 0^T 1 ] (X, Y, Z, 1)^T = λ K [ R | t ] (X, Y, Z, 1)^T

• This defines the 3 x 4 projection matrix from Euclidean 3-space to the image:

    x = λ M X,   where M = K [ R | t ] = K R [ I | R^T t ]

• Note the camera centre is at (X, Y, Z)^T = -R^T t. In what follows it is often only the form of M that matters, rather than its decomposition.

A projective camera
• The camera model for perspective projection is a linear map between homogeneous point coordinates,

    x = λ M X

  where the 3 x 4 matrix M projects a scene point X to an image point x.
• The camera centre is the null-vector of M (e.g. if M = [ I | 0 ], the centre is (0, 0, 0, 1)^T).
• M has 11 degrees of freedom (essential parameters).
• M has rank 3.

What does calibration give?
• K provides the transformation between an image point and a ray in Euclidean 3-space.
• Once K is known, the camera is termed calibrated.
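Assembling M = K [R | t] and checking the camera-centre property can be done in a short numpy sketch; the calibration, rotation and translation values are arbitrary choices:

```python
import numpy as np

K = np.array([[400., 0., 320.],
              [0., 400., 240.],
              [0., 0., 1.]])                     # arbitrary calibration matrix

theta = 0.3                                      # arbitrary rotation about Z
R = np.array([[np.cos(theta), -np.sin(theta), 0.],
              [np.sin(theta),  np.cos(theta), 0.],
              [0., 0., 1.]])
t = np.array([0.5, -0.2, 2.0])                   # arbitrary translation

M = K @ np.hstack([R, t[:, None]])               # M = K [R | t], size 3x4

# The camera centre C = -R^T t is the null-vector of M
C = -R.T @ t
assert np.allclose(M @ np.append(C, 1.0), 0.0)

# M has rank 3 (11 degrees of freedom: 12 entries minus overall scale)
assert np.linalg.matrix_rank(M) == 3

# Project a scene point: x ~ M X, then normalize the homogeneous scale
X = np.array([1.0, 2.0, 5.0, 1.0])
x = M @ X
x = x / x[2]                                     # pixel coordinates (x, y, 1)
```

The null-vector check is exactly the statement M (C, 1)^T = K (R C + t) = 0 for C = -R^T t.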
• A calibrated camera is a direction sensor, able to measure the direction of rays - like a 2D protractor.

Camera calibration - problem statement
• Given n correspondences x_i ↔ X_i, where each X_i is a scene point and x_i is its image, compute M = K [ R | t ] such that x_i ≈ λ M X_i.
• The algorithm for camera calibration has two parts:
  1. Compute M from the set of point correspondences.
  2. Decompose M into K, R and t using an RQ decomposition of the left 3 x 3 block of M.

Algorithm step 1: estimate the matrix M
• The projection equation in homogeneous coordinates:

    (x, y, 1)^T ≃ [ m_11 m_12 m_13 m_14 ; m_21 m_22 m_23 m_24 ; m_31 m_32 m_33 m_34 ] (X, Y, Z, 1)^T

• Expressed in Euclidean coordinates:

    x = (m_11 X + m_12 Y + m_13 Z + m_14) / (m_31 X + m_32 Y + m_33 Z + m_34)
    y = (m_21 X + m_22 Y + m_23 Z + m_24) / (m_31 X + m_32 Y + m_33 Z + m_34)

• Multiplying by the denominator and rearranging gives two linear constraints on the entries of M:

    m_11 X + m_12 Y + m_13 Z + m_14 - m_31 xX - m_32 xY - m_33 xZ - m_34 x = 0
    m_21 X + m_22 Y + m_23 Z + m_24 - m_31 yX - m_32 yY - m_33 yZ - m_34 y = 0

• Two constraints per match ⇒ six matches are needed in total (12 equations for the 12 entries of M, which has 11 degrees of freedom up to scale).
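Step 1 is a homogeneous linear least-squares problem, often called the direct linear transform (DLT): stack the two constraints per match into a 2n x 12 system A m = 0 and take the right singular vector of A with smallest singular value. A minimal sketch under the assumption of noiseless synthetic data (no coordinate normalization, which real implementations would add):

```python
import numpy as np

def estimate_M(x, X):
    """DLT estimate of the 3x4 projection matrix.

    x: n x 2 array of image points, X: n x 3 array of scene points, n >= 6.
    """
    rows = []
    for (u, v), P in zip(x, X):
        Xh = np.append(P, 1.0)
        # One row per constraint, unknowns ordered m_11..m_14, m_21.., m_31..
        rows.append(np.hstack([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.hstack([np.zeros(4), Xh, -v * Xh]))
    A = np.array(rows)                        # 2n x 12
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)               # null-space solution, up to scale

# Synthetic check: project points with a known M, then recover it
rng = np.random.default_rng(0)
M_true = rng.standard_normal((3, 4))
X = rng.uniform(1.0, 5.0, size=(6, 3))        # six matches suffice
xh = (M_true @ np.column_stack([X, np.ones(6)]).T).T
x = xh[:, :2] / xh[:, 2:]

M_est = estimate_M(x, X)
# M is only defined up to scale and sign: normalize both before comparing
M_est /= np.linalg.norm(M_est)
M_t = M_true / np.linalg.norm(M_true)
if (M_est * M_t).sum() < 0:
    M_est = -M_est
assert np.allclose(M_est, M_t, atol=1e-6)
```

The normalization at the end reflects that the 12 entries of M carry only 11 degrees of freedom: any nonzero multiple of M gives the same camera.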
Recommended publications
  • Lecture 6: Camera Computation and the Essential Matrix
  • Table of Contents Stereopsis
  • Monitoring Mountain Cryosphere Dynamics by Time Lapse Stereo Photogrammetry
  • Recovering Baseline and Orientation from 'Essential' Matrix
  • Generalized Essential Matrix: Properties of the Singular Value Decomposition
  • Mathematical Methods for Camera Self-Calibration in Photogrammetry and Computer Vision
  • Epipolar Geometry and the Essential Matrix
  • Variational 3D Reconstruction from Stereo Image Pairs and Stereo Sequences
  • Lecture 9: Epipolar Geometry
  • Variational Approaches for Motion and Structure from Monocular Video (Inaugural-Dissertation)
  • Fundamental Matrix / Image Rectification
  • Epipolar Geometry: Essential Matrix