Apertures, Stops, Pupils, and Windows; Single-Lens Camera


MIT 2.71/2.710, 02/23/09, wk4-a

Overview: composite optical elements

[Figure: a ray arriving from infinity bends at the 2nd principal plane (PP) and passes through the back focal point (BFP); a ray leaving the front focal point (FFP) bends at the 1st PP and continues to infinity. Labeled quantities: FFP, BFP, front focal length (FFL), back focal length (BFL), effective focal length (EFL), 1st PP, 2nd PP, object, image.]

Today
• Apertures & Stops
  – Aperture Stop
  – Entrance Pupil
  – Exit Pupil
  – Field Stop
  – Entrance Window
  – Exit Window

Aperture Stop and Numerical Aperture
The aperture stop is the physical element that limits the angle of acceptance of the imaging system. Let θ be the half-angle subtended by the imaging system from an axial object, in a medium of refractive index n. Then:
  Numerical Aperture: NA = n sin θ
  Speed: the f-number, written f/#; e.g. f/8 means (f/#) = 8.
The aperture stop limits the optical energy that is admitted through the system. Later we will also learn that the NA defines the resolution (or resolving power) of the optical system.

Entrance & Exit Pupils
In a multi-element optical system, the entrance pupil is the image of the aperture stop through the preceding elements, and the exit pupil is the image of the aperture stop through the succeeding elements.

The Chief & Marginal Rays
Chief Ray (C.R.): starts from an off-axis object point and goes through the center of the aperture.
Marginal Ray (M.R.): starts from an off-axis object point and goes through the edge of the aperture.
Together, the C.R. and M.R. define the angular acceptance of the spherical ray bundles originating from an off-axis object.
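The NA and f-number definitions above can be checked numerically. A minimal sketch, assuming a system in air and the common paraxial relation NA ≈ 1/(2·f/#); the function names and sample values are illustrative, not from the slides:

```python
import math

def numerical_aperture(n: float, half_angle_deg: float) -> float:
    """NA = n * sin(theta), with theta the half-angle accepted from an axial object."""
    return n * math.sin(math.radians(half_angle_deg))

def na_from_f_number(f_number: float) -> float:
    """Paraxial approximation for a system in air: NA ~ 1 / (2 * f/#)."""
    return 1.0 / (2.0 * f_number)

# A 30-degree half-angle in air (n = 1) gives NA = 0.5
print(numerical_aperture(1.0, 30.0))
# f/8 corresponds to NA ~ 0.0625 in this approximation
print(na_from_f_number(8.0))
```

Note how a "faster" lens (smaller f/#) admits a wider marginal-ray cone and therefore a larger NA.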
The Chief & Marginal Rays: corollaries
The Chief Ray also goes through the center of the Entrance and Exit Pupils.
The Marginal Ray also goes through the edge of the Entrance and Exit Pupils.
The Chief and Marginal Rays meet again at the image.

The Field Stop
The field stop limits the angular acceptance of Chief Rays; it thereby defines the Field of View (FoV).

Entrance & Exit Windows
The entrance window is the image of the field stop through the preceding elements, and the exit window is the image of the field stop through the succeeding elements.

All together
The Field of View is the angle between the pair of Chief Rays that just make it through the edges of the Field Stop.
The Numerical Aperture is set by the angle between the pair of Marginal Rays that just make it through the edges of the Aperture Stop.
A complete system thus exhibits all six elements at once: aperture stop, entrance pupil, exit pupil, field stop, entrance window, exit window.

Example: single-lens camera
The lens images the object plane onto the image plane, where the film or digital detector array sits; the size of the detector sets the usable image area, and the 1st and 2nd focal points (FP) flank the lens.
• The aperture placed before the lens is the Aperture Stop; since no elements precede it, it is also the Entrance Pupil. Its image through the lens is the Exit Pupil, which in this case is virtual.
• The film/detector is the Field Stop; since no elements follow it, it is also the Exit Window. Its image through the lens is the Entrance Window.
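Since both pupils are just images of the aperture stop, their location and size follow from ordinary imaging. A thin-lens sketch with hypothetical numbers (a 50 mm lens with the stop 25 mm in front of it, echoing the single-lens camera's virtual exit pupil):

```python
def image_through_thin_lens(f: float, s_o: float):
    """Thin-lens equation 1/s_o + 1/s_i = 1/f (distances in mm, opposite sides positive).
    Returns (s_i, m): image distance (negative => virtual image, on the object's side)
    and transverse magnification magnitude m = |s_i| / s_o."""
    s_i = 1.0 / (1.0 / f - 1.0 / s_o)
    return s_i, abs(s_i) / s_o

# Hypothetical layout: f = 50 mm lens, aperture stop 25 mm in front of it.
# The stop itself is the entrance pupil (nothing precedes it); the exit pupil
# is the stop's image through the lens.
s_i, m = image_through_thin_lens(f=50.0, s_o=25.0)
print(s_i)  # negative: the exit pupil is virtual, on the stop's side of the lens
print(m)    # the exit pupil is an enlarged image of the stop
```

Because the stop sits inside the focal length, its image is virtual and magnified, exactly the "(virtual) exit pupil" situation in the single-lens camera example.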
With everything in place, the Chief Ray from an object at the field edge sets the FoV, and the Marginal Ray from the on-axis object sets the NA, alongside the Aperture Stop & Entrance Pupil, the (virtual) Exit Pupil, the Field Stop & Exit Window, and the Entrance Window.

[Figure: single-lens camera with a lens of focal length f (FL f); rays shown are the M.R. from an object at the field edge, the C.R. from an object at the field edge, and the M.R. from the on-axis object; labels: Aperture, FS, ExP & ExW, EnW, EnP.]

MIT OpenCourseWare
http://ocw.mit.edu
2.71 / 2.710 Optics, Spring 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
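The single-lens camera geometry can be put into numbers: the detector (field stop) width and focal length fix the FoV, while the aperture diameter and focal length fix the image-side marginal-ray angle. The sensor width, focal length, and f-number below are illustrative assumptions:

```python
import math

def field_of_view_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Full angle between the chief rays reaching the two edges of the
    field stop (the detector), for an object at infinity."""
    return 2.0 * math.degrees(math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

def image_side_na(aperture_diam_mm: float, focal_length_mm: float) -> float:
    """NA set by the marginal rays through the edges of the aperture stop
    (image side, object at infinity, in air)."""
    return math.sin(math.atan(aperture_diam_mm / (2.0 * focal_length_mm)))

# Hypothetical camera: 36 mm-wide detector behind a 50 mm f/2 lens (25 mm aperture).
print(round(field_of_view_deg(36.0, 50.0), 1))  # full horizontal FoV in degrees
print(round(image_side_na(25.0, 50.0), 3))      # image-side NA
```

At f/2 the exact NA (0.243) already deviates slightly from the small-angle estimate 1/(2·f/#) = 0.25, a reminder that the paraxial relation is only approximate for fast lenses.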