Spatial Resolution and Position Accuracy


Chapter 4: Spatial resolution and position accuracy

Position accuracy refers to the precision with which an object can be localized in space. Spatial resolution, on the other hand, is a measure of the ability to distinguish two separated point-like objects from a single object. The diffraction limit implies that optical resolution is ultimately limited by the wavelength of light. Before the advent of near-field optics it was believed that the diffraction limit imposes a hard boundary and that physical laws strictly prohibit resolution significantly better than λ/2. It was found that this limit is not as strict as assumed and that various tricks allow us to access the evanescent modes of the spatial spectrum. In this chapter we will analyze the diffraction limit and discuss the principles of different imaging modes with resolutions near or beyond the diffraction limit.

4.1 The point spread function

The point spread function is a measure of the resolving power of an optical system: the narrower the point spread function, the better the resolution. As the name implies, the point spread function defines the spread of a point source. If we have a radiating point source, the image of that source will appear to have a finite size. This broadening is a direct consequence of spatial filtering. A point in space is characterized by a delta function, which has an infinite spectrum of spatial frequencies kx, ky. On the way of propagation from the source to the image, high-frequency components are filtered out. Usually, the entire spectrum (kx² + ky²) > k² associated with the evanescent waves is lost. Furthermore, not all plane-wave components can be collected, which leads to a further reduction in bandwidth. The reduced spectrum is not able to accurately reconstruct the original point source, and the image of the point will have a finite size. The standard derivation of the point spread function is
based on scalar theory and the paraxial approximation. This theory is insufficient for many high-resolution optical systems. With the 'angular spectrum' framework established so far, we are in a position to rigorously investigate image formation in an optical system.

Consider the situation in Fig. 4.1, which has been analyzed by Sheppard and Wilson [1] and more recently by Enderlein [2]. An ideal electromagnetic point source is located in the focus of a high-NA aplanatic objective lens with focal length f. This lens collimates the rays emanating from the point source, and a second lens with focal length f′ focuses the fields on the image plane at z = 0. The situation is similar to the problem in Fig. 3.18. The only difference is that the source is a point source instead of the reflected field at an interface.

The smallest radiating electromagnetic unit is a dipole. In the optical regime most subwavelength-sized particles scatter as electric dipoles. On the other hand, small apertures radiate as magnetic dipoles. In the microwave regime, paramagnetic materials exhibit magnetic transitions, and in the infrared, small metal particles show magnetic dipole absorption caused by eddy currents of free carriers produced by the magnetic field. Nevertheless, we can restrict our analysis to an electric dipole, since the field of a magnetic dipole is identical to the field of an electric dipole if we interchange the electric and magnetic fields, i.e. E → H and H → −E.

In its most general form, the electric field at a point r of an arbitrarily oriented electric dipole located at r₀ with dipole moment μ is defined by the dyadic Green's function G(r, r₀) as

\[
\mathbf{E}(\mathbf{r})=\frac{\omega^2}{\varepsilon_0 c^2}\,\overleftrightarrow{G}(\mathbf{r},\mathbf{r}_0)\cdot\boldsymbol{\mu}\,. \tag{4.1}
\]

Figure 4.1: Configuration used for the calculation of the point spread function. The source is an arbitrarily oriented electric dipole with moment μ.
The dipole radiation is collected with a high-NA aplanatic objective lens and focused by a second lens on the image plane at z = 0.

We assume that the distance between dipole and objective lens is much larger than the wavelength of the emitted light. In this case, we do not need to consider the evanescent components of the dipole field. Furthermore, we choose the dipole to be located at the origin r₀ = 0 and surrounded by a homogeneous medium with index n. In this case, we can use the free-space far-field form of G which, expressed in spherical coordinates (r, θ, φ), reads as (see Appendix ??)

\[
\overleftrightarrow{G}_{\infty}(\mathbf{r},0)=\frac{\exp(ikr)}{4\pi r}
\begin{bmatrix}
1-\cos^2\!\varphi\,\sin^2\!\theta & -\sin\varphi\cos\varphi\,\sin^2\!\theta & -\cos\varphi\sin\theta\cos\theta\\
-\sin\varphi\cos\varphi\,\sin^2\!\theta & 1-\sin^2\!\varphi\,\sin^2\!\theta & -\sin\varphi\sin\theta\cos\theta\\
-\cos\varphi\sin\theta\cos\theta & -\sin\varphi\sin\theta\cos\theta & \sin^2\!\theta
\end{bmatrix}. \tag{4.2}
\]

This is simply a 3 × 3 matrix which has to be multiplied with the dipole moment vector μ = (μx, μy, μz) to obtain the electric field.* To describe refraction at the reference sphere f we have to project the electric field vector along the vectors nθ and nφ, as already done in Section 3.5. After being refracted, the field propagates as a collimated beam to the second lens f′, where it is refracted once again. For a dipole aligned with the x-axis (μ = μx nx) the field just after the second lens becomes

\[
\mathbf{E}^{(x)}_{\infty}(\theta,\varphi)=\frac{\omega^2\mu_x}{\varepsilon_0 c^2}\,\frac{\exp(ikf)}{8\pi f}
\begin{bmatrix}
(1+\cos\theta\cos\theta')-(1-\cos\theta\cos\theta')\cos 2\varphi\\
-(1-\cos\theta\cos\theta')\sin 2\varphi\\
-2\cos\theta\,\sin\theta'\cos\varphi
\end{bmatrix}
\sqrt{\frac{n\cos\theta'}{n'\cos\theta}}\,, \tag{4.3}
\]

where

\[
\sin\theta'=\frac{f}{f'}\sin\theta\,,\qquad
\cos\theta'=g(\theta)=\sqrt{1-(f/f')^{2}\sin^2\!\theta}\,. \tag{4.4}
\]

The term (cos θ′/cos θ)^{1/2} is a consequence of energy conservation, as discussed in Section 3.5. In the limit f ≪ f′ the contribution of cos θ′ can be ignored, but cos θ cannot, since we deal with a high-NA objective lens.† The fields for a dipole μy and a dipole μz can be derived in a similar way.
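As a consistency check, the angular part of the matrix in Eq. (4.2) is just the transverse projector I − r̂r̂ᵀ, so applying it to μ must reproduce the familiar far-field dipole pattern −r̂ × (r̂ × μ) quoted in the footnote. A minimal numerical sketch (the observation angles are arbitrary illustrative values):

```python
import numpy as np

# Sketch: the 3x3 matrix of Eq. (4.2) equals I - r_hat r_hat^T, the projector
# onto the plane transverse to the observation direction r_hat. Applying it to
# the dipole moment mu therefore gives -r_hat x (r_hat x mu).
def greens_matrix(theta, phi):
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    return np.array([
        [1 - cp**2 * st**2, -sp * cp * st**2,  -cp * st * ct],
        [-sp * cp * st**2,  1 - sp**2 * st**2, -sp * st * ct],
        [-cp * st * ct,     -sp * st * ct,      st**2       ],
    ])

theta, phi = 0.7, 1.2                        # arbitrary observation direction
mu = np.array([1.0, 0.0, 0.0])               # x-oriented dipole moment
r_hat = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])

lhs = greens_matrix(theta, phi) @ mu
rhs = -np.cross(r_hat, np.cross(r_hat, mu))  # = mu - r_hat (r_hat . mu)
print(np.allclose(lhs, rhs))                 # True: same transverse field pattern
```

The projector form also makes the radiation pattern obvious: the field vanishes along the dipole axis (r̂ ∥ μ) and is maximal in the plane perpendicular to it.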
For an arbitrarily oriented dipole μ = (μx, μy, μz) the field is simply obtained by the superposition

\[
\mathbf{E}_{\infty}(\theta,\varphi)=\mathbf{E}^{(x)}_{\infty}+\mathbf{E}^{(y)}_{\infty}+\mathbf{E}^{(z)}_{\infty}\,. \tag{4.5}
\]

To obtain the fields E near the focus of the second lens we insert the field E∞ into Eq. (3.47). We assume that f ≪ f′, which allows us to use the approximations in Eq. (3.106). The integration with respect to φ can be carried out analytically and the result can be written as

\[
\mathbf{E}(\rho,\varphi,z)=\frac{\omega^2}{\varepsilon_0 c^2}\,\overleftrightarrow{G}_{PSF}(\rho,\varphi,z)\cdot\boldsymbol{\mu}\,, \tag{4.6}
\]

where the dyadic point spread function is given by

\[
\overleftrightarrow{G}_{PSF}=\frac{k'f}{8\pi i f'}\,e^{\,i(kf-k'f')}
\begin{bmatrix}
\tilde I_{00}+\tilde I_{02}\cos 2\varphi & \tilde I_{02}\sin 2\varphi & -2i\tilde I_{01}\cos\varphi\\
\tilde I_{02}\sin 2\varphi & \tilde I_{00}-\tilde I_{02}\cos 2\varphi & -2i\tilde I_{01}\sin\varphi\\
0 & 0 & 0
\end{bmatrix}
\sqrt{\frac{n}{n'}}\,, \tag{4.7}
\]

and the integrals Ĩ00 – Ĩ02 are defined as

\[
\tilde I_{00}(\rho,z)=\int_0^{\theta_{\max}}(\cos\theta)^{1/2}\sin\theta\,(1+\cos\theta)\,
J_0\!\big(k'\rho\sin\theta\,f/f'\big)\,
\exp\!\big\{ik'z\,[1-\tfrac{1}{2}(f/f')^{2}\sin^2\!\theta]\big\}\,d\theta\,, \tag{4.8}
\]

\[
\tilde I_{01}(\rho,z)=\int_0^{\theta_{\max}}(\cos\theta)^{1/2}\sin^2\!\theta\,
J_1\!\big(k'\rho\sin\theta\,f/f'\big)\,
\exp\!\big\{ik'z\,[1-\tfrac{1}{2}(f/f')^{2}\sin^2\!\theta]\big\}\,d\theta\,, \tag{4.9}
\]

\[
\tilde I_{02}(\rho,z)=\int_0^{\theta_{\max}}(\cos\theta)^{1/2}\sin\theta\,(1-\cos\theta)\,
J_2\!\big(k'\rho\sin\theta\,f/f'\big)\,
\exp\!\big\{ik'z\,[1-\tfrac{1}{2}(f/f')^{2}\sin^2\!\theta]\big\}\,d\theta\,. \tag{4.10}
\]

The first column of G_PSF denotes the field of a dipole μx, the second column the field of a dipole μy, and the third column the field of a dipole μz. The integrals Ĩ00 – Ĩ02 are similar to the integrals I00 – I02 encountered in conjunction with the focusing of a Gaussian beam [cf. Eqs. (3.58 – 3.60)]. The main differences are the arguments of the Bessel functions and the exponential functions. Furthermore, the longitudinal field Ez is zero in the present case because we required f ≪ f′. Eqs. (4.6 – 4.10) describe the mapping of an arbitrarily oriented electric dipole from its source to its image.

* The far field at r of a dipole located at r₀ = 0 can also be expressed as E = −ω²μ₀ [r × r × μ] exp(ikr)/(4πr³).
† The factor (cos θ)^{−1/2} has been omitted in Ref. [1].
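The integrals Ĩ00 – Ĩ02 have no closed form but are easy to evaluate numerically. The sketch below uses assumed values for NA, f/f′ and the image-space wavelength (none taken from the text); it evaluates the integrals in the image plane z = 0, where the exponential factor is unity and the integrands are real, and assembles |E|² for an x-oriented dipole from the first column of Eq. (4.7):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jv

# Assumed illustrative parameters (not from the text):
NA, n_medium = 0.9, 1.0
f_ratio = 0.01                      # f/f', small as required by the derivation
k_img = 2 * np.pi / 500e-9          # k' for an assumed 500 nm image-space wavelength
theta_max = np.arcsin(NA / n_medium)

weights = {0: lambda t: np.sin(t) * (1 + np.cos(t)),   # integrand of Eq. (4.8)
           1: lambda t: np.sin(t) ** 2,                # Eq. (4.9)
           2: lambda t: np.sin(t) * (1 - np.cos(t))}   # Eq. (4.10)
bessels = {0: j0, 1: j1, 2: lambda x: jv(2, x)}

def I_tilde(rho, m):
    # Diffraction integral I~_{0m}(rho, z=0); real because exp(...) = 1 at z = 0
    f = lambda t: (np.sqrt(np.cos(t)) * weights[m](t)
                   * bessels[m](k_img * rho * np.sin(t) * f_ratio))
    return quad(f, 0, theta_max)[0]

def psf_x_dipole(rho, phi):
    # |E|^2 for mu = mu_x: first column of G_PSF, up to constant prefactors
    i00, i02 = I_tilde(rho, 0), I_tilde(rho, 2)
    return (i00 + i02 * np.cos(2 * phi)) ** 2 + (i02 * np.sin(2 * phi)) ** 2

peak = psf_x_dipole(0.0, 0.0)
print(peak > psf_x_dipole(20e-6, 0.0))   # True: intensity falls off away from the axis
```

On axis (ρ = 0) only Ĩ00 survives, since J₁(0) = J₂(0) = 0; the Ĩ02 term introduces the weak cos 2φ asymmetry of the image spot at high NA.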
The result depends on the numerical aperture NA of the primary objective lens,

\[
\mathrm{NA}=n\sin\theta_{\max}\,, \tag{4.11}
\]

and the (transverse) magnification M of the optical system, defined as

\[
M=\frac{n\,f'}{n'f}\,. \tag{4.12}
\]

In the following, we will use the quantity |E|² to denote the point spread function, since it is the quantity relevant to optical detectors. We first consider the situation of a dipole with its axis perpendicular to the optical axis. Without loss of generality, we can define the x-axis to be parallel to the dipole axis, i.e. μ = μx nx. For a low-NA objective lens, θmax is sufficiently small to allow the approximations cos θ ≈ 1 and sin θ ≈ θ.
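With these approximations the integral Ĩ00 at z = 0 reduces to ∫ 2θ J₀(k′ρθ f/f′) dθ, which by the identity ∫₀ᵃ t J₀(bt) dt = (a/b) J₁(ab) gives the classical Airy form 2J₁(x)/x with x = k′ρ (f/f′) θmax. A numerical sketch of this paraxial limit, with illustrative assumed values for θmax, ρ and the shorthand b = k′f/f′:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

# Sketch: in the paraxial limit cos(theta) ~ 1, sin(theta) ~ theta, the
# normalized field I00(rho)/I00(0) approaches the Airy pattern 2 J1(x)/x.
theta_max = 0.05                    # small aperture angle (low NA), assumed
b = 1000.0                          # shorthand for k' * (f/f'), assumed value

def I00(rho):
    # Eq. (4.8) at z = 0 for this toy parameter set
    f = lambda t: (np.sqrt(np.cos(t)) * np.sin(t) * (1 + np.cos(t))
                   * j0(b * rho * np.sin(t)))
    return quad(f, 0, theta_max)[0]

rho = 0.02                          # radial position in the image plane
x = b * rho * theta_max             # Airy argument
exact = I00(rho) / I00(1e-9)        # normalized field from the full integral
airy = 2 * j1(x) / x                # paraxial (Airy) prediction
print(f"exact = {exact:.4f}, airy = {airy:.4f}")
```

The two numbers agree to within the O(θmax²) corrections neglected by the paraxial approximation, which is the sense in which the low-NA point spread function is the Airy pattern.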