A Basis for Estimating Digital Camera Parameters


Don Light

Abstract

Matching the diffraction-limited optical resolution with the appropriate detector size is a fundamental design requirement for digital imaging systems. A useful Design Function based on the Airy disk is λF/p = 0.82, where λ is the average wavelength, F is the camera F-number, and p is the detector sampling pitch (pixel size). A second metric, attributed to Schade and reported by Holst (1999), produces an often-used Design Function: λF/p = 1. Examples demonstrate the use of the Design Functions to determine basic parameters (F-number, focal length, aperture diameter, and pixel size) for an imaging system. Pixel size is selected from commercially available arrays, and the other parameters are estimated from the required ground sampled distance (GSD) with either of the two Design Functions. One of the primary uses for the Design Functions is to provide optical systems engineers with a simple, fast, and proven means of arriving at first-order estimates for electro-optical camera designs. It follows that estimating costs for building large space camera systems should be less complicated.

Introduction

Assuming adequate signal and scene contrast, the spatial resolution of an optical imaging system with digital detectors such as a charge-coupled device (CCD) array is limited either by the resolution limit of the optics or by the detector sampling size. When ground sampled distance (GSD) is used to denote the spatial resolution required for a remote sensing system, it is often assumed that the detector sampling is the limiting factor in spatial resolution. Airborne digital camera systems such as the Kodak DCS 460 flown by Emerge and the Contax 645 flown by Pictometry International specialize in spatial resolutions of 0.5, 1, 2, or 3 ft (0.15, 0.3, 0.6, or 0.9 m) GSD. Space Imaging's Ikonos satellite produces a 1-m GSD, and DigitalGlobe's latest QuickBird satellite offers a 0.6-m GSD. Orbital Science plans to offer OrbView 3 and 4 imagery in the post-2000 time frame with comparable resolution. These GSDs refer to the projected pixel size on the ground and ignore any effects that the optical system may have on the spatial resolution. According to Fiete (1999), even if the detector sampling (pixel size) is the limiting factor in spatial resolution, the interaction between the detector sampling (CCD) and the performance of the optics plays an important role in determining the final image quality. In this paper the Design Function λF/p = 0.82 is derived using the diameter of the Airy disk, which is commonly used as a measure of the resolving power of optical elements according to the Rayleigh criterion (Schott, 1997, p. 326). The utility of the Design Functions for estimating camera parameters is explained with examples: the Kodak DCS 460 digital camera, the Ikonos sensor system, and QuickBird are evaluated against the Design Functions λF/p = 0.82 and λF/p = 1 in order to demonstrate their usefulness for quick first-order estimates of the principal camera parameters.

Developing λF/p = 0.82 Based on the Rayleigh Criterion for Resolving Power of Diffraction-Limited Optics and the Airy Disk

The wave nature of light and the consequent diffraction at a circular aperture make it impossible to generate an ideal dimensionless image point. The wave energy forming a point image is distributed in a central disk called the Airy disk (see Figure 1). The Airy disk is defined by a radius that subtends an angle θ, given by the Rayleigh criterion (Jensen, 1968, p. 82; Kraus, 1993, p. 69; Schott, 1997, p. 326):

    Radius of Angular Resolution, θ = 1.22 λ/D (radians)    (1)

where λ is the average wavelength being utilized by the detector and D is the diameter of the objective lens, mirror, or aperture. Multiplying both sides of Equation 1 by the camera's focal length (f) yields

    fθ = 1.22 λf/D.    (2)

Recognize that f/D is the F-number and fθ is the linear radius of the optical resolution limit. The most popular measure of optical resolution uses Rayleigh's criterion and defines the diameter of the Airy disk. Because of diffraction, what should be a point source is actually a small disk of light surrounded by a number of light and dark rings, as shown in Figure 1. Approximately 84 percent of the light falls in the central disk and the remainder in the outer rings (Moffitt and Mikhail, 1980, pp. 41–42). Therefore, in the focal plane of the camera, the Airy disk diameter (Kraus, 1993, p. 69; Schott, 1997, p. 326; Holst, 1999) is

    d_Airy = 2.44 λF.    (3)

Equation 3 expresses the diameter (d) of the Airy disk in terms of λ and F, and d establishes the sampling pitch between two detectors (pixels). The F-number is a measure of the light-collecting capability of the optics: for a constant focal length (f), as F increases, the aperture gets smaller. In this derivation, d/2 is taken to be the detector size (pixel size) which the optics must resolve. This paper uses p in the equations to represent detector size, to better correspond to digital camera language, where p stands for one pixel dimension. Matching the optical resolution (Airy disk) to the detectors is the fundamental essence of the design function being derived. In concert with the Nyquist sampling criterion (Holst, 1998, pp. 286, 320–323), two detectors (2p) are placed within the area defined by the Airy disk. This means that the area defined by the Airy disk is adequately sampled (Holst, 1998, pp. 286, 320–323).
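The relations in Equations 1 through 3 are simple enough to check numerically. Below is a minimal Python sketch, assuming illustrative values (a 0.6-μm average visible wavelength and a 28-mm lens with a 2.4-mm aperture, echoing the DCS 460 numbers used later); the variable names are this sketch's own, not the paper's.

```python
# Numeric sketch of Equations 1-3 with assumed example values.

wavelength_m = 0.6e-6      # average wavelength, lambda (m) -- assumed visible-band value
aperture_m = 2.4e-3        # aperture diameter D (m) -- assumed
focal_length_m = 28e-3     # focal length f (m) -- assumed

# Equation 1: angular radius of resolution (radians)
theta = 1.22 * wavelength_m / aperture_m

# Equation 2: f/D is the F-number; f*theta is the linear radius in the focal plane
F_number = focal_length_m / aperture_m
linear_radius_m = focal_length_m * theta

# Equation 3: Airy disk diameter, d = 2.44*lambda*F (twice the linear radius)
d_airy_m = 2.44 * wavelength_m * F_number

print(f"F-number       = {F_number:.2f}")
print(f"theta          = {theta * 1e6:.1f} microradians")
print(f"Airy diameter  = {d_airy_m * 1e6:.2f} micrometers")
```

Note that the Airy diameter from Equation 3 is exactly twice the linear radius from Equation 2, which is the consistency check the derivation relies on.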
Photogrammetric Engineering & Remote Sensing, Vol. 70, No. 3, March 2004, pp. 297–300. 0099-1112/04/7003–0297/$3.00/0. © 2004 American Society for Photogrammetry and Remote Sensing. Author address: 6 Tawney Point, Rochester, NY 14626 ([email protected]).

The Airy disk diameter thus spans the size of two detectors in the focal plane. Given an adequate signal-to-noise ratio and MTF from the optics, coupled with sufficient scene contrast, this combination will produce quality imagery. Figure 1 illustrates two pixels inside the Airy disk.

    Figure 1. Airy disk containing two pixels.

Now Equation 3, representing Figure 1, can be rewritten as

    2.44 λF = 2p.    (4)

Dividing both sides of Equation 4 by 2p yields

    1.22 λF/p = 1    (5)

and

    λF/p = 1/1.22 = 0.82.    (6)

So, the design function is given by Equation 7:

    λF/p = 0.82 (The Design Function).    (7)

If λF/p is less than 0.82, the aperture is larger than optimal; if λF/p is greater than 0.82, the aperture is smaller than optimal (Jones, unpublished notes, 1999). Generally speaking, for a final design it may be necessary to slightly over-aperture the system to provide a safety factor that accommodates non-optimum lighting, atmospheric conditions, and the lens aberrations which occur in the real world. The Design Function (Equation 7) does not take into account such factors as the signal-to-noise ratio (S/N), MTF, and image motion.

Example 1: Consider the Kodak DCS 460 Digital Camera with a Frame Array (100% Fill Factor) of 3000 by 2000 Pixels and a Lens Focal Length f = 28 mm

Let the average λ = 0.6 μm for the visible spectrum and the pixel size p = 9 μm. Solving Equation 7 for the F-number yields

    F = 0.82 p/λ    (8)

or F = 0.82(9 μm)/0.6 μm = 12.3.

Conclusion: The DCS 460 using an F-number ≤ 12 will have the capability to produce good image quality, assuming adequate lighting and a proper shutter speed. Because F = f/D, the aperture diameter is

    D = f/F.    (9)

D, the diameter of the aperture, is a major cost driver for large satellite cameras, but not for the DCS 460. For the DCS 460, D = 28 mm/12 ≈ 2.4 mm. Obviously, a 2.4-mm aperture at F/12 seems very small. Actually, the DCS 460 with the Nikon lens has a range from F/2.8 to F/22. The F/2.8 setting has D = 10 mm, which can accommodate low light levels with an appropriate shutter speed. Assuming the image motion is kept to one half pixel or less, the Kodak DCS 460 to 760 Series cameras are providing the airborne remote sensing community with excellent (red, green, blue) color imagery. The DCS 460 can take color-infrared imagery when a blue-blocking filter is used.

Computing the GSD for the DCS 460 camera,

    GSD = (H/f) p    (10)

where H is the flying height above mean ground level; then

    GSD = (3110 ft)(0.009 mm)/28 mm = 1 ft = 0.3 m.

Example 2: Consider an Ikonos-like Satellite Camera (Fritz, 1996; Givens, 1998)

The wavelength λ = 0.675 μm is chosen halfway between 0.45 and 0.90 μm so that Ikonos can sense into the near-infrared (NIR) part of the spectrum. Let λ = 0.675 μm, H = 680 km, p = 12 μm, and f = 10 m. Recall from Equation 8 that F = 0.82 p/λ.
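Taken together, Equations 8 through 10 give the whole first-order estimation recipe. The Python sketch below reproduces Example 1 and carries Example 2 forward from the quoted inputs; the helper names are illustrative (not from the paper), and the Ikonos-like results are computed here rather than quoted, since this excerpt cuts off before the paper states them.

```python
# Sketch of the paper's first-order estimation workflow (Equations 8-10).
# Function names are this sketch's own; inputs are the values quoted in
# Examples 1 and 2.

def f_number_from_design_function(pixel_m, wavelength_m, design_constant=0.82):
    """Equation 8: F = 0.82 * p / lambda (Rayleigh-based Design Function)."""
    return design_constant * pixel_m / wavelength_m

def aperture_diameter(focal_length_m, f_number):
    """Equation 9: D = f / F."""
    return focal_length_m / f_number

def ground_sampled_distance(height_m, focal_length_m, pixel_m):
    """Equation 10: GSD = (H / f) * p."""
    return (height_m / focal_length_m) * pixel_m

# Example 1: Kodak DCS 460 (lambda = 0.6 um, p = 9 um, f = 28 mm, H = 3110 ft)
F_dcs = f_number_from_design_function(9e-6, 0.6e-6)            # -> 12.3
D_dcs = aperture_diameter(28e-3, 12)                           # 28 mm / 12 (paper rounds to ~2.4 mm)
gsd_dcs = ground_sampled_distance(3110 * 0.3048, 28e-3, 9e-6)  # -> ~0.3 m (1 ft)

# Example 2: Ikonos-like (lambda = 0.675 um, p = 12 um, f = 10 m, H = 680 km);
# results computed from the quoted inputs, not taken from the paper.
F_ik = f_number_from_design_function(12e-6, 0.675e-6)
D_ik = aperture_diameter(10.0, F_ik)
gsd_ik = ground_sampled_distance(680e3, 10.0, 12e-6)           # -> ~0.82 m

print(f"DCS 460:     F = {F_dcs:.1f}, D = {D_dcs * 1e3:.1f} mm, GSD = {gsd_dcs:.2f} m")
print(f"Ikonos-like: F = {F_ik:.1f}, D = {D_ik:.2f} m, GSD = {gsd_ik:.2f} m")
```

The DCS 460 line reproduces the paper's numbers; the Ikonos-like line shows how the same three functions turn a pixel pitch, wavelength, focal length, and orbit height into a first-order camera estimate.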