Chapter 2: Gaussian Optics - Paraxial Optics

Mainly we will discuss:
– Analysis based on paraxial optics
– Simple ray tracing
– Aperture and stop
– Homework

Analysis based on paraxial optics
• Paraxial approximation
• Cardinal points
• Paraxial ray tracing
  – YNU ray tracing
  – YUI ray tracing
• Matrix optics
• Paraxial constants
• Image equations
• Specification of conjugate distances
• Lens setup

Apertures and pupils
• Optical system layout and terminology
  – Thin lens
  – Photographic objective
  – Magnifier
  – Telescope
  – Relay system
  – Telecentric lens
• Specifying lens apertures
• Stops and pupils
  – Specifying aperture and field of view
  – Special apertures
• Radiometric concepts
  – Radiance conservation theorem
  – Irradiance by a flat circular source
  – Cos4 law
  – Vignetting
  – Computation of irradiance

Centered optical system
• Optical axis
• Basic law: Snell's law and its paraxial form, n sin θ = n' sin θ'  ⇒  n θ = n' θ'
• Plane of incidence: formed by the incident and the refracted rays
  (Figure: refraction at an interface between media of index n and n'.)
• Meridional ray (子午線): lies in the plane of incidence together with the optical axis
• Skew ray (歪斜線): the optical axis does not lie in its plane of incidence

Approximation: a quick way for ray tracing
• Now let us take a non-flat (curved) surface
  (Figure: tangent plane, index arc, and index line at a surface of radius R.)
• Vector form of the refraction law:
    n' e' = n e + Γ p,   with   Γ = n'(p · e') − n(p · e) = n' cos I' − n cos I
  where e and e' are unit vectors along the incident and refracted rays and p is the unit surface normal.

Cause of aberration
• Aberration: the difference between the real ray and the paraxial ray
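To make that definition concrete, the short C++ sketch below traces a ray parallel to the axis through a single spherical refracting surface twice: once with the exact law n sin I = n' sin I' and once with the paraxial y-nu relations used in the rest of this chapter, and prints where each ray crosses the axis. The radius, indices, and ray height are arbitrary example values, not data from the slides; the difference between the two crossings is the longitudinal spherical aberration contributed by this one surface.

```cpp
// Exact vs. paraxial trace of a ray parallel to the axis through one spherical
// refracting surface. All numbers are arbitrary example values. Sign
// convention: light travels in +x, and R > 0 means the centre of curvature
// lies to the right of the vertex.
#include <cmath>
#include <cstdio>

int main() {
    const double R  = 50.0;   // radius of curvature of the surface
    const double n  = 1.0;    // refractive index before the surface
    const double np = 1.5;    // refractive index after the surface
    const double y  = 10.0;   // height of the incoming parallel ray

    // Exact (real) ray: find the surface intersection, apply Snell's law,
    // then intersect the refracted ray with the axis.
    const double sag = R - std::sqrt(R * R - y * y);    // x where the ray meets the surface
    const double I   = std::asin(y / R);                // angle of incidence (incoming slope U = 0)
    const double Ip  = std::asin(n * std::sin(I) / np); // Snell's law: n sin I = n' sin I'
    const double Up  = I - Ip;                          // refracted ray converges towards the axis
    const double exactCrossing = sag + y / std::tan(Up);

    // Paraxial (y-nu) ray: n'u' = nu - y*phi, then transfer to the axis.
    const double phi = (np - n) / R;                    // surface power, phi = c (n' - n)
    const double up  = (n * 0.0 - y * phi) / np;        // slope u' after refraction (u = 0)
    const double paraxialCrossing = -y / up;

    std::printf("exact axis crossing    : %.4f\n", exactCrossing);
    std::printf("paraxial axis crossing : %.4f\n", paraxialCrossing);
    std::printf("difference (longitudinal spherical aberration): %.4f\n",
                paraxialCrossing - exactCrossing);
    return 0;
}
```

Lowering the ray height y makes the two results converge, which is exactly the paraxial limit.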
Cardinal points (基點)
• Cardinal points are used to characterize the performance of a paraxial optical system
  – Object space
  – Image space
• The cardinal points are the focal points, the principal points, and the nodal points
• Two of these points are enough to determine the position of the image

Paraxial optical system and aplanatic optical system
• First focal point F, second focal point F'
• P, P': principal points (O and O' are the corresponding off-axis points of the principal planes; P and P' lie on the axis)
• In a real system the principal planes change to principal surfaces; OSLO uses aplanatic ray aiming
• For an aplanatic lens, the effective refracting surface for rays coming from infinity is a sphere centered on the second focal point
  – Most real lenses more closely resemble aplanatic systems than paraxial systems
• Note: an aplanatic lens obeys the Abbe sine condition. This condition is built into the common ray-tracing calculation so as to achieve aplanatic ray aiming (instead of paraxial ray aiming).

Nodal points
• The nodal points pass rays without angular deviation: unit positive angular magnification
• Once the locations of the cardinal points of a system are known, it is straightforward to find how the system transforms rays from object space to image space

Paraxial ray tracing
• Transfer from the previous surface:  y = y₋₁ + t u,  i.e.  u = (y − y₋₁)/t
• Angle of incidence:  i = y/r + u = y c + u
• Paraxial refraction:  n i = n' i',  equivalently  n'u' = n u − y φ,  with surface power  φ = c (n' − n)
• Angle after refraction:  i' = y/r + u' = y c + u'
• For real (finite) rays the corresponding relations are  I = U + θ  and  I' = U' + θ

YNU ray tracing
• Refraction:  n'u' = n u − y φ
• Transfer:  y = y₋₁ + (t/n)(n u)
• Idea of "SOLVE":
  – Angle solve:  c = (n'u' − n u) / [y (n − n')]  (choose the curvature that produces a required output angle)
  – Height solve:  t = (y₊₁ − y)/u'  (choose the thickness so the ray reaches a specified height at the next surface)
• Two rays are traced

• Usually in ray tracing we need to check two rays:
  – (1) the "a ray": the axial ray (marginal paraxial ray)
  – (2) the "b ray": the principal ray (chief ray)
  – In the previous viewgraph, the subscripts a and b on y refer to the a ray and the b ray respectively

YUI ray tracing (a ray and b ray; in OSLO use "pxt all" to display this data)
• Transfer:  y = y₋₁ + t u
• Angle of incidence:  i = u + y c
• Refraction:  u' = u + (n/n' − 1) i

Homework
• Using Visual C++ or another computer language, implement the YNU ray trace and the YUI ray trace
• Then calculate the case of the cemented achromat below (a sketch of such a trace follows this table)
• Use OSLO LT and the "pxt all" command to compare with your result

  Radius      Thickness   Diameter   Material
  12.8018     0.434       3.41       BAK-1
  -9.0623     0.321       3.41       SF-8
  -37.6553    19.631      3.41       AIR

• Cemented achromat; F-number f/6; focal length 20 in; units in inches
• From Laikin, Lens Design, p. 45
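As a starting point for that homework, here is a minimal C++ sketch of the YNU trace applied to the Laikin achromat above. It is an illustration only: the object at infinity, the marginal-ray entrance height (the f/6 semi-aperture), and the d-line indices used for BAK-1 and SF-8 are assumptions, not data from the slides; substitute exact catalog glass data before comparing against OSLO's "pxt all" output.

```cpp
// Minimal YNU (y-nu) paraxial ray trace of the cemented achromat from Laikin.
// Assumptions (not from the slides): object at infinity, marginal-ray height
// equal to the f/6 semi-aperture, approximate d-line indices for BAK-1 and SF-8.
#include <cstdio>
#include <vector>

struct Surface {
    double curvature;  // c = 1 / radius (inches^-1)
    double thickness;  // distance to the next surface (inches)
    double nPrime;     // refractive index following the surface
};

int main() {
    const std::vector<Surface> achromat = {
        { 1.0 /  12.8018, 0.434,  1.5725 },  // BAK-1 (assumed n_d)
        { 1.0 /  -9.0623, 0.321,  1.6889 },  // SF-8  (assumed n_d)
        { 1.0 / -37.6553, 19.631, 1.0    },  // air, 19.631 in to the image plane
    };

    const double y1 = 20.0 / 6.0 / 2.0;  // entrance height: semi-aperture for f/6, f = 20 in
    double n  = 1.0;                     // index in front of the first surface
    double y  = y1;                      // current ray height
    double nu = 0.0;                     // reduced angle n*u (0: ray parallel to the axis)

    for (const Surface& s : achromat) {
        const double phi = s.curvature * (s.nPrime - n);  // surface power phi = c (n' - n)
        nu -= y * phi;                                    // refraction: n'u' = nu - y*phi
        std::printf("after surface: y = %9.5f  nu = %9.6f\n", y, nu);
        y  += s.thickness * nu / s.nPrime;                // transfer: y+ = y + (t/n')(n'u')
        n   = s.nPrime;
    }

    // The final medium is air, so u'_k = nu at this point.
    std::printf("ray height at the image plane = %9.5f\n", y);
    std::printf("EFL estimate = -y1/u'_k = %9.4f in\n", -y1 / nu);
    return 0;
}
```

The YUI version is the same loop written with i = u + y c and u' = u + (n/n' − 1) i instead of the reduced-angle form.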
Matrix optics
• Translation matrix T, acting on the ray vector (y, nu) within medium j (a small numerical sketch of this matrix cascade is given at the end of the chapter):
    y_j = y_{j−1} + (t_j/n_j)(n_j u_j),  with n_j u_j unchanged
  i.e.  (y_j, n_j u_j)ᵀ = T (y_{j−1}, n_j u_j)ᵀ,   T = [ 1  t_j/n_j ; 0  1 ]
• Refraction matrix R, at surface j:
    n'_j u'_j = n_j u_j − φ y_j,  with y_j unchanged
  i.e.  (y_j, n'_j u'_j)ᵀ = R (y_j, n_j u_j)ᵀ,   R = [ 1  0 ; −φ  1 ]

Transfer matrix
• Cascading the R and T matrices gives the transfer matrix of the whole system:
    (y_j, n'_j u'_j)ᵀ = [ A  B ; C  D ] (y_1, n_1 u_1)ᵀ,   with determinant AD − BC = 1
• Lagrange invariant:  L_ab = n_1 (y_{b1} u_{a1} − y_{a1} u_{b1})
  (again, the subscripts a and b refer to the a ray and the b ray)
• But you do not have to use the a ray and the b ray to have a Lagrange invariant; any two independent paraxial rays will do

Lagrange's law
• How do we calculate the magnification m = h'/h?
  – Tracing a ray to the image of the object edge is not a good way
• Use the Lagrange invariant instead:
  – In the object plane:  L_ab = n_1 y_{b0} u_{a1} = h n_1 u_{a1}   (the axial ray has zero height in the object plane)
  – In the image plane:   L_ab = n_k y_{bk} u_{ak} = h' n_k u_{ak}  (the axial ray has zero height in the image plane)
  – The Lagrange invariant is conserved:  h n_1 u_{a1} = h' n_k u_{ak}
  – Therefore  m = h'/h = n_1 u_{a1} / (n_k u_{ak})

Paraxial constants
• OSLO computes seven numbers, the paraxial constants:
  – Effective focal length (EFL):  f' = −e h / (u'_b e + u'_a h)
  – Lagrange (paraxial) invariant (PIV):  PIV = L_ab = h n u_a
  – Lateral magnification (TMAG):  TMAG = m = n u_a / (n' u'_a)
  – Gaussian image height (GIH):  GIH = h' = [n u_a / (n' u'_a)] h = m h
  – Numerical aperture (NA):  NA = NAO / TMAG,  where the object-space value is  NAO = n sin U = n u_a / √(1 + u_a²)
  – F-number (FNB)
  – Petzval radius (PTZRAD): the radius of curvature of the surface on which an image of an extended plane object would be formed (not actually a paraxial quantity)

• So, in designing a code for optical system design, you at least:
  – need to know how to implement the YNU and YUI ray traces (this is simple)
  – need to know how to apply the Abbe sine condition to obtain aplanatic ray aiming (this is simple)
  – need to know how to use the Lagrange invariant for fast calculation (this is simple)
  – should implement at least the seven paraxial constants (this is simple)
  – need to know how to implement "solve" and "pickup" constraints to meet design requirements and optimization needs (this is somewhat difficult; you may need to consult Numerical Recipes, but it can be done)

Specification of conjugate distances
• In OSLO you can specify the object and image distances referred to the cardinal points, rather than to the physical lens surfaces

Lens setup
• In OSLO:
  – Solve
    • Axial ray height solve (PY solve): specifies the last thickness (the one before the image surface)
    • Chief ray height solve (PYC solve): the final surface becomes the paraxial exit pupil
    • Angle solve: constrains the f-number
  – Pickup
    • Forces some surfaces to follow a fixed setting taken from a specified surface (lens)
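As promised in the matrix-optics section above, here is a small C++ sketch that cascades the refraction and translation matrices into a system matrix and reads the power off the C element (EFL = −1/C when the image space is air). The single thick lens used here (a BK7-like index and round-number radii) is an invented example, not a system from the slides.

```cpp
// Cascade of the 2x2 refraction and translation matrices acting on the ray
// vector (y, nu). The thick-lens data below are invented example values.
#include <cstdio>

struct Mat2 { double a, b, c, d; };

// Product m2 * m1 (apply m1 first, then m2).
static Mat2 mul(const Mat2& m2, const Mat2& m1) {
    return { m2.a * m1.a + m2.b * m1.c, m2.a * m1.b + m2.b * m1.d,
             m2.c * m1.a + m2.d * m1.c, m2.c * m1.b + m2.d * m1.d };
}

static Mat2 refraction(double c, double n, double np) {  // R = [ 1 0 ; -phi 1 ]
    return { 1.0, 0.0, -c * (np - n), 1.0 };
}

static Mat2 translation(double t, double n) {            // T = [ 1 t/n ; 0 1 ]
    return { 1.0, t / n, 0.0, 1.0 };
}

int main() {
    // Single thick lens in air: R1 = +60, R2 = -60, thickness 5, n = 1.5168.
    const double n = 1.5168;
    Mat2 S = refraction(1.0 / 60.0, 1.0, n);       // first surface
    S = mul(translation(5.0, n), S);               // through the glass
    S = mul(refraction(-1.0 / 60.0, n, 1.0), S);   // second surface

    const double det = S.a * S.d - S.b * S.c;      // must equal 1 for any R/T cascade
    std::printf("system matrix [A B; C D] = [%8.5f %8.5f; %8.5f %8.5f]\n",
                S.a, S.b, S.c, S.d);
    std::printf("determinant = %.6f\n", det);
    std::printf("EFL = -1/C  = %.4f\n", -1.0 / S.c); // image space is air (n' = 1)
    return 0;
}
```

The printed determinant comes out exactly 1; this is the matrix counterpart of the Lagrange-invariant statement on the transfer-matrix slide.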