Diffraction Optics
Recommended publications
Diffraction Interference Induced Superfocusing in Nonlinear Talbot Effect
Diffraction Interference Induced Superfocusing in Nonlinear Talbot Effect (OPEN; subject areas: Nonlinear Optics, Sub-wavelength Optics)
Dongmei Liu1, Yong Zhang1, Jianming Wen2, Zhenhua Chen1, Dunzhao Wei1, Xiaopeng Hu1, Gang Zhao1, S. N. Zhu1 & Min Xiao1,3
1 National Laboratory of Solid State Microstructures, College of Engineering and Applied Sciences, School of Physics, Nanjing University, Nanjing 210093, China; 2 Department of Applied Physics, Yale University, New Haven, Connecticut 06511, USA; 3 Department of Physics, University of Arkansas, Fayetteville, Arkansas 72701, USA.
Received 12 May 2014; accepted 24 July 2014; published 20 August 2014. Correspondence and requests for materials should be addressed to Y.Z. (zhangyong@nju.edu.cn), J.W. (jianming.wen@yale.edu) or M.X.
We report a simple, novel subdiffraction method, i.e. diffraction interference induced superfocusing in the second-harmonic (SH) Talbot effect, to achieve a focusing size of less than λ_SH/4 (or λ_pump/8) without involving evanescent waves or subwavelength apertures. By tailoring point spread functions with Fresnel diffraction interference, we observe periodic SH subdiffracted spots over a hundred micrometers away from the sample. Our demonstration is the first experimental realization of Toraldo di Francia's proposal, pioneered 62 years ago, for superresolution imaging.
Focusing of a light beam into an extremely small spot with a high energy density plays an important role in key technologies for miniaturized structures, such as lithography, optical data storage, laser material nanoprocessing and nanophotonics in confocal microscopy and superresolution imaging. Because of the wave nature of light, however, Abbe discovered at the end of the 19th century that diffraction prohibits the visualization of features smaller than half of the wavelength of light (also known as the Rayleigh diffraction limit) with optical …
Introduction to Optics Part I
Introduction to Optics part I: Overview Lecture, Space Systems Engineering. Presented by Prof. David Miller; prepared by Olivier de Weck; revised and augmented by Soon-Jo Chung. 16.684 Space Systems Product Development, MIT Space Systems Laboratory, February 13, 2001.
Outline. Goal: give the necessary optics background to tackle a space mission which includes an optical payload: light; interaction of light with the environment; optical design fundamentals; optical performance considerations; telescope types and CCD design; interferometer types; sparse aperture arrays; beam combining and control.
Examples and motivation: spaceborne astronomy, e.g. the planetary nebula NGC 6543 imaged by the Hubble Space Telescope, September 18, 1994.
Properties of light: wave-particle duality. Wave nature: the wave equation ∇²E - (1/c²) ∂²E/∂t² = 0 has the solution E = A exp[i(k·r - ωt + φ)], where E is the electric field vector and H the magnetic field vector; the Poynting vector is S ∝ E × H. Particle nature: photons are "packets of energy", each depositing energy Q = hν in a detector. Useful relations: wavelength λ = c/ν, angular frequency ω = 2πν, wave number k = 2π/λ. Spectral bands: ultraviolet (UV) 300 Å - 300 nm; visible light 400 nm - 700 nm; near IR (NIR) 700 nm - 2.5 μm.
Reflection and mirrors: mirrors (reflective devices) and lenses (refractive devices) are both "apertures" and are similar to each other. Law of reflection: θi = θo, and the reflected wave lies in the plane of incidence (specular reflection). Mirror geometry is given as a conic section of rotation about the surface axis, z(ρ) = [r - sqrt(r² - (k+1)ρ²)]/(k+1), with conic constant k: circle k = 0; ellipse -1 < k < 0; parabola k = -1; hyperbola k < -1. Detectors resolve images produced by (solar) energy reflected from a target scene in the visible and NIR.
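To make the relations above concrete, here is a small Python sketch (my own illustration, not part of the lecture) that evaluates ν = c/λ, the photon energy Q = hν, and the wave number k = 2π/λ for one representative wavelength in each of the listed spectral bands; the sample wavelengths are assumptions.

```python
# Illustrative evaluation of nu = c/lambda, Q = h*nu and k = 2*pi/lambda
# for one assumed wavelength per spectral band listed in the lecture outline.
import math

h = 6.626e-34   # Planck constant [J s]
c = 2.998e8     # speed of light [m/s]

bands = {
    "UV (300 nm)": 300e-9,
    "Visible (550 nm)": 550e-9,
    "Near IR (2.5 um)": 2.5e-6,
}

for name, wavelength in bands.items():
    nu = c / wavelength              # frequency [Hz]
    q = h * nu                       # photon energy Q = h*nu [J]
    k = 2 * math.pi / wavelength     # wave number [rad/m]
    print(f"{name}: nu = {nu:.3e} Hz, Q = {q:.3e} J, k = {k:.3e} rad/m")
```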
Sub-Airy Disk Angular Resolution with High Dynamic Range in the Near-Infrared
EPJ Web of Conferences 16, 03002 (2011). DOI: 10.1051/epjconf/20111603002. © Owned by the authors, published by EDP Sciences, 2011.
Sub-Airy disk angular resolution with high dynamic range in the near-infrared. A. Richichi, European Southern Observatory, Karl-Schwarzschildstr. 2, 85748 Garching, Germany.
Abstract. Lunar occultations (LO) are a simple and effective high angular resolution method, with minimum requirements in instrumentation and telescope time. They rely on the analysis of the diffraction fringes created by the lunar limb. The diffraction phenomenon occurs in space, and as a result LO are highly insensitive to most of the degrading effects that limit the performance of traditional single-telescope and long-baseline interferometric techniques used for direct detection of faint, close companions to bright stars. We present very recent results obtained with the technique of lunar occultations in the near-IR, showing the detection of companions with very high dynamic range as close as a few milliarcseconds to the primary star. We discuss the potential improvements that could be made to increase further the current performance. Of course, LO are fixed-time events applicable only to sources which happen to lie on the Moon's apparent orbit. However, with the continuously increasing numbers of potential exoplanets and brown dwarfs being discovered, the frequency of such events is not negligible. I will list some of the most favorable potential LO in the near future, to be observed from major observatories.
1. THE METHOD
The geometry of a lunar occultation (LO) event is sketched in Figure 1. The lunar limb acts as a straight diffracting edge, moving across the source with an angular speed that is the product of the lunar motion vector VM and the cosine of the contact angle CA.
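The fringes analysed in an LO light curve are those of monochromatic Fresnel diffraction at a straight edge. The sketch below is my own illustration, not code from the paper: it evaluates the standard knife-edge intensity I/I0 = 0.5[(C(v) + 0.5)² + (S(v) + 0.5)²] with SciPy's Fresnel integrals, and the observing wavelength and Earth-Moon distance used to scale the Fresnel variable are assumed values.

```python
# Hedged sketch of the fringe pattern behind a straight diffracting edge (the
# lunar limb).  Uses the standard monochromatic knife-edge result
#   I/I0 = 0.5 * [(C(v) + 0.5)**2 + (S(v) + 0.5)**2].
# The wavelength and Earth-Moon distance below are illustrative assumptions.
import numpy as np
from scipy.special import fresnel

wavelength = 2.2e-6   # K band [m] (assumed)
z_moon = 3.84e8       # Earth-Moon distance [m] (assumed)

x = np.linspace(-20.0, 60.0, 2000)            # position at the telescope [m]
v = x * np.sqrt(2.0 / (wavelength * z_moon))  # dimensionless Fresnel variable
S, C = fresnel(v)                             # note: SciPy returns (S, C)
intensity = 0.5 * ((C + 0.5)**2 + (S + 0.5)**2)  # normalized to the unobstructed flux

# The characteristic fringe scale is ~sqrt(lambda * z), i.e. tens of metres,
# which the lunar motion sweeps across the telescope in milliseconds.
print("fringe scale ~ %.1f m" % np.sqrt(wavelength * z_moon))
```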
The Most Important Equation in Astronomy!
There are many equations that astronomers use to describe the physical world, but none is more important and fundamental to the research that we conduct than this one: R = 1.22 L/D. You cannot design a telescope, or a satellite sensor, without paying attention to the relationship that it describes. In optics, the best focused spot of light that a perfect lens with a circular aperture can make is limited by the diffraction of light. The diffraction pattern has a bright region in the center called the Airy Disk. The diameter of the Airy Disk is related to the wavelength of the illuminating light, L, and the size of the circular aperture (mirror, lens), given by D. When L and D are expressed in the same units (e.g. centimeters, meters), R will be in units of angular measure called radians (1 radian = 57.3 degrees). You cannot see details with your eye, with a camera, or with a telescope, that are smaller than the Airy Disk size for your particular optical system. The formula also says that larger telescopes (making D bigger) allow you to see much finer details. For example, compare the top image of the Apollo-15 landing area taken by the Japanese Kaguya satellite (10 meters/pixel at 100 km orbit elevation; aperture = about 15 cm) with the lower image taken by the LRO satellite (0.5 meters/pixel at a 50 km orbit elevation; aperture = ). The Apollo-15 Lunar Module (LM) can be seen by its 'horizontal shadow' near the center of the image.
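A quick numerical check of R = 1.22 L/D is shown below; the wavelength, aperture and range are illustrative values of my own choosing, not the quoted satellite specifications (which give pixel scale rather than the diffraction limit).

```python
# Worked example of R = 1.22*L/D with assumed, illustrative numbers.
import math

L = 550e-9        # wavelength [m], visible light (assumed)
D = 0.15          # aperture diameter [m], e.g. a ~15 cm optic (assumed)
distance = 100e3  # range to the target [m], e.g. a 100 km orbit (assumed)

R = 1.22 * L / D                        # diffraction-limited angular scale [radians]
R_arcsec = R * (180.0 / math.pi) * 3600 # radians -> arcseconds
spot = R * distance                     # smallest resolvable detail at that range [m]

print(f"R = {R:.2e} rad = {R_arcsec:.2f} arcsec")
print(f"Smallest resolvable detail at {distance/1e3:.0f} km: {spot:.2f} m")
```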
Simple and Robust Method for Determination of Laser Fluence
Open Research Europe 2021, 1:7. Last updated: 30 JUN 2021. METHOD ARTICLE
Simple and robust method for determination of laser fluence thresholds for material modifications: an extension of Liu's approach to imperfect beams [version 1; peer review: 1 approved, 2 approved with reservations]
Mario Garcia-Lechuga 1,2, David Grojo 1
1 Aix Marseille Université, CNRS, LP3, UMR7341, Marseille, 13288, France
2 Departamento de Física Aplicada, Universidad Autónoma de Madrid, Madrid, 28049, Spain
v1 First published: 24 Mar 2021, 1:7, https://doi.org/10.12688/openreseurope.13073.1. Latest published: 25 Jun 2021, 1:7, https://doi.org/10.12688/openreseurope.13073.2.
Abstract
The so-called D-squared or Liu's method is an extensively applied approach to determine the irradiation fluence thresholds for laser-induced damage or modification of materials. However, one of the assumptions behind the method is the use of an ideal Gaussian profile that can lead in practice to significant errors depending on beam imperfections. In this work, we rigorously calculate the bias corrections required when applying the same method to Airy-disk like profiles. Those profiles are readily produced from any beam by insertion of an aperture in the optical path. Thus, the correction method gives a robust solution for exact threshold determination without any added technical complications as for instance advanced control or metrology of the beam. Illustrated by two case-studies, the approach holds potential to solve the strong discrepancies existing between the laser-induced damage thresholds reported in the …
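For context, the classical Liu analysis that the article extends assumes an ideal Gaussian beam, for which the modified-spot diameter obeys D² = 2w₀² ln(F₀/F_th); a linear fit of D² against ln(F₀) therefore yields both the beam radius w₀ and the threshold fluence F_th. The sketch below runs that fit on synthetic data; it does not reproduce the article's bias corrections for Airy-disk-like beams.

```python
# Classical Liu (D-squared) fit on synthetic data:
#   D^2 = 2*w0^2 * ln(F_peak / F_th)
# so a straight-line fit of D^2 vs ln(F_peak) gives w0 (slope) and F_th (intercept).
import numpy as np

w0_true, Fth_true = 15.0, 2.0                         # assumed beam radius [um] and threshold [J/cm^2]
F_peak = np.array([3.0, 4.0, 6.0, 8.0, 12.0, 16.0])   # synthetic peak fluences [J/cm^2]
D2 = 2 * w0_true**2 * np.log(F_peak / Fth_true)       # ideal squared diameters [um^2]
D2 = D2 + np.random.default_rng(0).normal(0.0, 20.0, D2.size)  # add measurement noise

slope, intercept = np.polyfit(np.log(F_peak), D2, 1)  # D^2 = slope*ln(F) + intercept
w0_fit = np.sqrt(slope / 2.0)                         # 1/e^2 beam radius [um]
Fth_fit = np.exp(-intercept / slope)                  # threshold fluence [J/cm^2]

print(f"fitted w0 = {w0_fit:.1f} um, fitted threshold = {Fth_fit:.2f} J/cm^2")
```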
Single‑Molecule‑Based Super‑Resolution Imaging
Histochem Cell Biol (2014) 141:577–585. DOI 10.1007/s00418-014-1186-1. REVIEW
The changing point-spread function: single-molecule-based super-resolution imaging
Mathew H. Horrocks · Matthieu Palayret · David Klenerman · Steven F. Lee
Accepted: 20 January 2014 / Published online: 11 February 2014. © Springer-Verlag Berlin Heidelberg 2014
Abstract: Over the past decade, many techniques for imaging systems at a resolution greater than the diffraction limit have been developed. These methods have allowed systems previously inaccessible to fluorescence microscopy to be studied and biological problems to be solved in the condensed phase. This brief review explains the basic principles of super-resolution imaging in both two and three dimensions, summarizes recent developments, and gives examples of how these techniques have been used to study complex biological systems.
Keywords: Single-molecule microscopy · Super-resolution imaging · PALM/(d)STORM imaging · Localization microscopy
Fluorescence microscopy allows users to dynamically …
… limit, gaining over two orders of magnitude in precision (Szymborska et al. 2013), allowing direct observation of processes at spatial scales much more compatible with the regime that biomolecular interactions take place on. Radically different approaches have so far been proposed, including limiting the illumination of the sample to regions smaller than the diffraction limit (targeted switching and readout) or stochastically separating single fluorophores in time to gain resolution in space (stochastic switching and readout). The latter, also described as single-molecule active control microscopy (SMACM) or single-molecule localization microscopy (SMLM), allows imaging of single molecules which can not only be precisely localized, but also followed through time and quantified. This brief review will focus on this "pointillism-based" SR imaging and its application to biological imaging in both two and three dimensions.
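The core operation in the "pointillism-based" approach is localizing each isolated emitter far more precisely than the width of its PSF. The sketch below is my own minimal illustration, not code from the review: it simulates one diffraction-limited spot with shot noise and recovers its centre by fitting a 2D Gaussian; the PSF width, photon count and grid size are assumptions.

```python
# Minimal single-molecule localization sketch: simulate one diffraction-limited
# spot with Poisson noise, then fit a 2D Gaussian to recover a sub-pixel centre.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset).ravel()

rng = np.random.default_rng(1)
size, sigma_psf = 15, 1.3                 # pixels; ~diffraction-limited spot width (assumed)
true_x, true_y = 7.3, 6.8                 # true emitter position [pixels]
y, x = np.mgrid[0:size, 0:size]
model = 200 * np.exp(-((x - true_x)**2 + (y - true_y)**2) / (2 * sigma_psf**2)) + 10
image = rng.poisson(model).astype(float)  # shot-noise-limited camera frame

p0 = (image.max(), size / 2, size / 2, 1.5, image.min())     # rough starting guess
popt, _ = curve_fit(gauss2d, (x, y), image.ravel(), p0=p0)
print(f"fitted centre ({popt[1]:.2f}, {popt[2]:.2f}) vs true ({true_x}, {true_y})")
```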
Telescope Optics Discussion
DFM Engineering, Inc., 1035 Delaware Avenue, Unit D, Longmont, Colorado 80501. Phone: 303-678-8143. Fax: 303-772-9411. Web: www.dfmengineering.com
TELESCOPE OPTICS DISCUSSION:
We recently responded to a Request For Proposal (RFP) for a 24-inch (610-mm) aperture telescope. The optical specifications specified an optical quality encircled energy (EE80) value of 80% within 0.6 to 0.8 arc seconds over the entire Field Of View (FOV) of 90-mm (1.2-degrees). From the excellent book "Astronomical Optics," page 185, by Daniel J. Schroeder, we find the definition of "Encircled Energy" as "The fraction of the total energy E enclosed within a circle of radius r centered on the Point Spread Function peak". I want to emphasize the "radius r" as we will see this come up again later in another expression.
The first problem with this specification is no wavelength has been specified. Perfect optics will produce an Airy disk whose "radius r" is a function of the wavelength and the aperture of the optic. The aperture in this case is 24-inches (610-mm). A typical Airy disk is shown below. This Airy disk was obtained in the DFM Engineering optical shop.
[Figure: actual Airy disk of an unobstructed aperture, with an intensity scan through the center in blue.]
Perfect optics produce an Airy disk composed of a central spot with alternating dark and bright rings due to diffraction. The Airy disk above only shows the central spot and the first bright ring; the next ring is too faint to be recorded. The first bright ring (seen above) is 63 times fainter than the peak intensity.
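Since the encircled-energy requirement depends directly on wavelength and aperture, it is useful to see what a perfect 610 mm aperture would deliver. The sketch below uses the standard analytic result for an unobstructed circular pupil, EE(x) = 1 - J0(x)² - J1(x)² with x = πDθ/λ; the 550 nm wavelength is my assumption, precisely because the RFP quoted above leaves it unspecified.

```python
# Encircled energy of a perfect (aberration-free, unobstructed) Airy pattern:
#   EE(x) = 1 - J0(x)**2 - J1(x)**2,  with x = pi*D*theta/lambda.
# Aperture matches the 24-inch telescope discussed above; wavelength is assumed.
import numpy as np
from scipy.special import j0, j1
from scipy.optimize import brentq

D = 0.610            # aperture diameter [m]
wavelength = 550e-9  # [m]; the RFP specifies no wavelength, so this is an assumption

def encircled_energy(x):
    return 1.0 - j0(x)**2 - j1(x)**2

# The first dark ring (x = 3.8317) encloses about 84% of the total energy.
print("EE at first dark ring: %.3f" % encircled_energy(3.8317))

# Solve EE(x) = 0.80 and convert the corresponding radius to arcseconds.
x80 = brentq(lambda x: encircled_energy(x) - 0.80, 0.1, 3.8317)
theta80 = x80 * wavelength / (np.pi * D)          # angular radius [radians]
print("EE80 radius for a perfect 610 mm aperture: %.3f arcsec"
      % (np.degrees(theta80) * 3600.0))
```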
Mapping the PSF Across Adaptive Optics Images
Mapping the PSF across Adaptive Optics images
Laura Schreiber, Osservatorio Astronomico di Bologna. Email: [email protected]
Abstract
• Adaptive Optics (AO) has become a key technology for all the main existing telescopes (VLT, Keck, Gemini, Subaru, LBT, ...) and is considered a kind of enabling technology for future giant telescopes (E-ELT, TMT, GMT).
• AO increases the energy concentration of the Point Spread Function (PSF), almost reaching the resolution imposed by the diffraction limit, but the PSF itself is characterized by a complex shape, no longer easily representable with an analytical model, and by sometimes significant spatial variation across the image, depending on the AO flavour and configuration.
• The aim of this lesson is to describe the AO PSF characteristics and variation in order to provide (together with some AO tips) basic elements that could be useful for AO image data reduction.
What is a PSF? "The Point Spread Function (PSF) describes the response of an imaging system to a point source." For a circular aperture of diameter D at a wavelength λ (no aberrations), the PSF is the Airy diffraction disk, I(θ) = I₀ [2 J₁(x)/x]², where J₁(x) is the Bessel function of order 1, x = (πD/λ) sin θ, and θ is the angular radius from the aperture center. The pattern first goes to zero when θ ≈ 1.22 λ/D.
Imaging of a point source through a general aperture: consider a plane wave propagating in the z direction and illuminating an aperture. The element ds = du dv becomes the source of a secondary spherical wave.
(Erice School 2015: Science and Technology with E-ELT)
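The sketch below evaluates that Airy profile numerically and confirms the first-zero location; the telescope diameter and wavelength are assumed example values, not parameters from the lesson.

```python
# Evaluate the diffraction-limited PSF I(theta) = I0*[2*J1(x)/x]**2 with
# x = (pi*D/lambda)*sin(theta), and locate its first zero (~1.22*lambda/D).
# Telescope diameter and wavelength are assumed example values.
import numpy as np
from scipy.special import j1

D = 8.2               # telescope diameter [m] (assumed, VLT-class)
wavelength = 1.65e-6  # H band [m] (assumed)

theta = np.linspace(1e-9, 3 * wavelength / D, 4000)  # angular radius [rad]
x = (np.pi * D / wavelength) * np.sin(theta)
profile = (2.0 * j1(x) / x) ** 2                     # normalized so I(0) ~ 1

half = len(theta) // 2                               # first zero lies in the first half of the range
first_zero = theta[np.argmin(profile[:half])]
print("first minimum at %.3e rad; 1.22*lambda/D = %.3e rad"
      % (first_zero, 1.22 * wavelength / D))
```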
Topic 3: Operation of Simple Lens
Modern Optics (The University of Edinburgh), Topic 3: Operation of Simple Lens
Aim: covers imaging with a simple lens using Fresnel diffraction, resolution limits and the basics of aberration theory.
Contents:
1. Phase and pupil functions of a lens
2. Image of an axial point
3. Example of a round lens
4. Diffraction limit of a lens
5. Defocus
6. The Strehl limit
7. Other aberrations
Ray model: simple ray optics gives the imaging property 1/u + 1/v = 1/f, where u and v are the object and image distances and f is the focal length. The focal length is given by 1/f = (n - 1)(1/R1 + 1/R2). For an infinite (distant) object, ray optics gives a delta function at the focus f.
Phase shift: the lens introduces a path-length difference, or phase shift. Phase function of a lens: with no lens, the phase shift between points P0 and P1 is Φ = kΔ, where k = 2π/λ. With the lens in place, a ray at distance h from the optical axis accumulates Φ = k(δ1 + δ2) + k n (Δ - δ1 - δ2), where δ1 + δ2 is the path in air and Δ - δ1 - δ2 the path in glass. This can be rearranged to give Φ = k n Δ - k(n - 1)(δ1 + δ2), where δ1 and δ2 depend on h, the ray height.
(Properties of a Lens, Autumn Term)
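The relations above are easy to sanity-check numerically. The following sketch is an illustration with assumed glass index, radii, thickness and wavelength (not values from the notes); it evaluates the lensmaker and imaging equations and the phase the lens adds at ray height h.

```python
# Thin-lens sanity check: lensmaker equation, imaging equation, and the
# lens phase function Phi = k*n*Delta - k*(n-1)*(d1 + d2) at ray height h.
# Index, radii, thickness and wavelength are assumed example values.
import math

n = 1.5              # refractive index of the glass
R1, R2 = 0.10, 0.10  # surface radii [m], in the convention 1/f = (n-1)*(1/R1 + 1/R2)
f = 1.0 / ((n - 1) * (1.0 / R1 + 1.0 / R2))
print(f"focal length f = {f*100:.1f} cm")

u = 0.3              # object distance [m]
v = 1.0 / (1.0 / f - 1.0 / u)   # image distance from 1/u + 1/v = 1/f
print(f"object at {u} m -> image at {v:.3f} m")

wavelength = 633e-9
k = 2 * math.pi / wavelength
Delta = 5e-3         # axial thickness of the lens region [m]
h = 5e-3             # ray height [m]
d1, d2 = h**2 / (2 * R1), h**2 / (2 * R2)   # paraxial sag of each surface at height h
phi = k * n * Delta - k * (n - 1) * (d1 + d2)
print(f"phase at h = {h*1e3:.0f} mm: {phi:.3e} rad")
```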
What to Do with Sub-Diffraction Limit (SDL) Pixels
What To Do With Sub-Diffraction-Limit (SDL) Pixels? A Proposal for a Gigapixel Digital Film Sensor (DFS)
Eric R. Fossum
Abstract: This paper proposes the use of sub-diffraction-limit (SDL) pixels in a new solid-state imaging paradigm: the emulation of the silver halide emulsion film process. The SDL pixels can be used in a binary mode to create a gigapixel digital film sensor (DFS). It is hoped that this paper may inspire revealing research into the potential of this paradigm shift.
The relentless drive to reduce feature size in microelectronics has continued now for several decades, and feature sizes have shrunk in accordance with the prediction of Gordon Moore in 1975. While the "repeal" of Moore's Law has also been anticipated for some time, we can still confidently predict that feature sizes will continue to shrink in the near future. Using the shrinking-feature-size argument, it became clear in the early 1990s that the opportunity to put more than 3 or 4 CCD electrodes in a single pixel would soon be upon us, and from that the CMOS "active pixel" sensor concept was born. Personally, I was expecting the emergence of the "hyperactive pixel" with in-pixel signal conditioning and ADC. In fact, the effective transistor count in most CMOS image sensors has hovered in the 3-4 transistor range, and, if anything, is being reduced as pixel size is shrunk by shared readout techniques. The underlying reason for pixel shrinkage is to keep sensor and optics costs as low as possible as pixel resolution grows.
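The binary-mode idea can be illustrated with simple photon statistics: if each SDL pixel merely reports whether it captured at least one photoelectron, the fraction of "hit" pixels in a neighborhood follows 1 - exp(-mean exposure), so intensity can be recovered by pooling many binary pixels, much as film density builds up from developed grains. The simulation below is my own illustration of that statistic, not a description of the sensor proposed in the paper.

```python
# Illustrative statistics of binary (1-bit) sub-diffraction-limit pixels:
# Poisson photon arrivals, threshold at >= 1 photoelectron, then invert the
# hit rate (1 - exp(-mean)) to recover the exposure.  Not the paper's design.
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 1.5       # mean photoelectrons per SDL pixel this exposure (assumed)
n_pixels = 1_000_000     # number of binary pixels pooled into one output value

photons = rng.poisson(mean_photons, n_pixels)
binary = photons > 0                      # 1-bit readout per pixel

hit_rate = binary.mean()
recovered = -np.log(1.0 - hit_rate)       # invert P(hit) = 1 - exp(-mean)
print(f"hit rate = {hit_rate:.4f}, recovered mean = {recovered:.3f} (true {mean_photons})")
```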
Improved Single-Molecule Localization Precision in Astigmatism-Based 3D Superresolution
bioRxiv preprint, doi: https://doi.org/10.1101/304816; this version posted April 19, 2018; made available under a CC-BY 4.0 International license.
Improved single-molecule localization precision in astigmatism-based 3D superresolution imaging using weighted likelihood estimation
Christopher H. Bohrer1,Ϯ, Xinxing Yang1,Ϯ, Zhixin Lyu1, Shih-Chin Wang1, Jie Xiao1,*
1 Department of Biophysics and Biophysical Chemistry, Johns Hopkins School of Medicine, Baltimore, Maryland, 21205, USA. Ϯ These authors contributed equally to this work. * Correspondence and requests for materials should be addressed to J.X. (email: [email protected]).
Abstract: Astigmatism-based superresolution microscopy is widely used to determine the position of individual fluorescent emitters in three dimensions (3D) with subdiffraction-limited resolution. This point spread function (PSF) engineering technique utilizes a cylindrical lens to modify the shape of the PSF and break its symmetry above and below the focal plane. The resulting asymmetric PSFs at different z-positions for single emitters are fit with an elliptical 2D Gaussian function to extract the widths along the two principal x- and y-axes, which are then compared with a pre-measured calibration function to determine the z-position. While conceptually simple and easy to implement, in practice distorted PSFs due to an imperfect optical system often compromise the localization precision, and it is laborious to optimize a multi-purpose optical system. Here we present a methodology that is independent of obtaining a perfect PSF and enhances the localization precision along the z-axis.
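The conventional width-lookup step that the preprint improves upon can be sketched as follows: measure (wx, wy) from the elliptical Gaussian fit and pick the z at which they best match the pre-measured calibration curves. In the sketch the calibration curves come from an assumed defocus model rather than a measurement, and the square-root distance metric is one common choice; none of this reproduces the weighted-likelihood method of the paper.

```python
# Conventional astigmatism z-lookup (not the paper's weighted-likelihood method):
# compare measured Gaussian widths (wx, wy) against calibration curves wx(z), wy(z).
# The defocus-curve model and its parameters are assumptions standing in for a
# real calibration measurement.
import numpy as np

def width(z_nm, w0=1.3, z_focus=0.0, depth=400.0):
    """Assumed defocus curve: Gaussian width (pixels) versus z (nm)."""
    return w0 * np.sqrt(1.0 + ((z_nm - z_focus) / depth) ** 2)

z_grid = np.linspace(-600.0, 600.0, 2401)        # candidate z positions [nm]
wx_cal = width(z_grid, z_focus=-200.0)           # x focus shifted one way by the cylindrical lens
wy_cal = width(z_grid, z_focus=+200.0)           # y focus shifted the other way

def estimate_z(wx_meas, wy_meas):
    # Least-squares match of sqrt(widths) to the calibration curves.
    cost = (np.sqrt(wx_cal) - np.sqrt(wx_meas))**2 + (np.sqrt(wy_cal) - np.sqrt(wy_meas))**2
    return z_grid[np.argmin(cost)]

print("estimated z = %.0f nm" % estimate_z(wx_meas=1.45, wy_meas=1.90))
```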
Registration and 3D Deconvolution of STEM Data
List of abbreviations: STEM, scanning transmission electron microscopy; PSF, point spread function; HAADF, high-angle annular dark-field; Z-contrast, atomic number contrast; ICM, iterative constrained methods; RL, Richardson-Lucy; BD, blind deconvolution; ML, maximum likelihood; MBD, multichannel blind deconvolution; SA, simulated annealing; SEM, scanning electron microscopy; TEM, transmission electron microscopy; MI, mutual information; PRF, point response function.
Registration and 3D Deconvolution of STEM Data
M. Mercimek*, A. F. Koschan*, A. Y. Borisevich‡, A. R. Lupini‡, M. A. Abidi* & S. J. Pennycook‡
* IRIS Laboratory, Electrical Engineering and Computer Science, The University of Tennessee, 1508 Middle Dr, Knoxville, TN 37996
‡ Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831
Key words: 3D deconvolution, depth sectioning, electron microscope, 3D reconstruction.
Summary: A common characteristic of many practical imaging modalities is that the imaging process distorts the signals or scene of interest; the principal data is veiled by noise and less important signals. We split the 3D enhancement process into two separate stages: deconvolution, which in general refers to the improvement of fidelity in electronic signals by reversing the convolution process, and registration, which aims to overcome significant misalignment of the same structures in lateral planes along the focal axis. The dataset studied was acquired with a VG HB603U STEM operated at 300 kV, producing a beam with a diameter of 0.6 Å, equipped with a Nion aberration corrector. The initial experiments described how aberration-corrected Z-contrast imaging in STEM can be combined with the enhancement process towards better visualization with a depth resolution of a few nanometers.
I. INTRODUCTION
3D characterization of both biological samples and non-biological materials provides nano-scale resolution in three dimensions, permitting the study of complex relationships between structure and existing functions.
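Of the iterative methods abbreviated above, Richardson-Lucy (RL) is the most commonly used starting point; a minimal, generic 2D implementation is sketched below. It is not the authors' registration-plus-deconvolution pipeline, and the Gaussian PSF is an assumed stand-in for a real STEM depth-sectioning PSF.

```python
# Generic 2D Richardson-Lucy deconvolution sketch (assumed Gaussian PSF);
# not the authors' STEM processing pipeline.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Iteratively estimate the unblurred image from blurred data and a known PSF."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)          # guard against division by zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: blur two point-like features with the PSF, then deconvolve.
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

truth = np.zeros((64, 64))
truth[32, 32], truth[20, 40] = 1.0, 0.5
observed = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
print("peak value: blurred %.4f -> restored %.4f" % (observed.max(), restored.max()))
```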