RANGE COMPRESSED HOLOGRAPHIC APERTURE LADAR: Dissertation Submitted to the School of Engineering of the University of Dayton

Total Pages: 16

File Type: PDF, Size: 1020 KB

RANGE COMPRESSED HOLOGRAPHIC APERTURE LADAR

Dissertation Submitted to The School of Engineering of the UNIVERSITY OF DAYTON in Partial Fulfillment of the Requirements for The Degree of Doctor of Philosophy in Electro-Optics. By Jason Wade Stafford. UNIVERSITY OF DAYTON, Dayton, Ohio, December 2016.

APPROVED BY:
Bradley D. Duncan, Ph.D.: Advisory Committee Chairman; Executive Director, Graduate Academic Affairs; Professor, Electro-Optics and Photonics
Joseph W. Haus, Ph.D.: Committee Member; Professor, Electro-Optics and Photonics
David J. Rabb, Ph.D.: Committee Member; Technical Advisor, Laser Radar Branch, Air Force Research Laboratory
Matthew P. Dierking, Ph.D.: Committee Member; Graduate Faculty, Electro-Optics and Photonics
Robert J. Wilkens, Ph.D., P.E.: Associate Dean for Research and Innovation; Professor, School of Engineering
Eddy M. Rojas, Ph.D., M.A., P.E.: Dean, School of Engineering

ABSTRACT

Name: Stafford, Jason Wade, University of Dayton
Advisor: Dr. Bradley D. Duncan

3-D holographic ladar uses digital holography with frequency diversity to resolve targets in range. A key challenge is that, since individual frequency samples are not recorded simultaneously, differential phase aberrations may exist between them, making it difficult to achieve range compression. Specific steps for this modality are described so that phase gradient algorithms (PGA) can be applied to 3-D holographic ladar data for phase corrections across multiple temporal frequency samples. Substantial improvement of range compression is demonstrated in a laboratory experiment where our modified PGA technique is applied.
Additionally, the PGA estimator is demonstrated to be efficient for this application, and the maximum entropy saturation behavior of the estimator is analytically described. Simultaneous range compression and aperture synthesis is experimentally demonstrated with a stepped linear frequency modulated waveform and holographic aperture ladar. The resultant 3D data has high resolution in the aperture synthesis dimension and is recorded using a conventional low bandwidth focal plane array. Individual cross-range field segments are coherently combined using data driven registration, while range compression is performed without the benefit of a coherent waveform. Furthermore, a synergistically enhanced ability to discriminate image objects due to the coaction of range compression and aperture synthesis is demonstrated. Two objects are then precisely located in 3D space, despite being unresolved in two directions, due to resolution gains in both the range and azimuth cross-range dimensions.

ACKNOWLEDGEMENTS

I would like to thank Dr. Bradley Duncan for guiding me through this research project and for patiently editing my writing. Thank you to Dr. Dave Rabb and Dr. Matthew Dierking for technical assistance, encouragement and guidance, especially in the experimental portion of this project. I also wish to thank Dr. Joseph Haus for serving on my committee, as well as Bob Feldman and Brian Ewert for giving me a job and letting me keep it while I conducted this research. Thank you to Andy Stokes, who has been my homework and lab partner in every graduate level course I've taken, and to Doug Jameson, whose digital holography expertise I tapped when I began this research so long ago. Thank you to my wife and family for supporting me unfailingly throughout this process and giving me something to look forward to every day. I wouldn't have made it without you. This effort was supported in part by the United States Air Force Research Laboratory.
The views expressed in this dissertation are those of the author and do not reflect the official policy of the Air Force, Department of Defense, or the United States government.

Jason Stafford, November 2016, Dayton, Ohio

TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENTS
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1 INTRODUCTION
  1.1 Historical Background
  1.2 Dissertation Overview
CHAPTER 2 BACKGROUND
  2.1 Circular and Inverse-Circular HAL
  2.2 Range Compression
  2.3 Single Aperture, Multi-λ Imaging
  2.4 Simultaneous Range Compression and HAL
CHAPTER 3 THE PHASE GRADIENT ALGORITHM METHOD FOR 3D HOLOGRAPHIC LADAR IMAGING
  3.1 Introduction
  3.2 3D Holographic Ladar: Range Resolution and Ambiguity
  3.3 Application of Phase Gradient Autofocus Algorithms
    3.3.1 Assemble complex image plane data cube
    3.3.2 Apply a cross-range target support mask
    3.3.3 Range compress (1D IDFT)
    3.3.4 Centershift brightest range values
    3.3.5 Window in the range dimension
    3.3.6 Decompress in range (1D DFT)
    3.3.7 Compute the phase gradient estimate
    3.3.8 Integrate the phase gradient to recover the phase aberration estimate
    3.3.9 Detrend
    3.3.10 Apply phase correction to complex image plane data
    3.3.11 Corrected?
  3.4 Experiment and Results
  3.5 PGA Performance for 3D Holographic Data
  3.6 Conclusion
CHAPTER 4 RANGE COMPRESSION FOR HAL
  4.1 Background
  4.2 Low Contrast Range Compressed HAL
  4.3 Point Targets with Range Compressed HAL
    4.3.1 Experimental design
    4.3.2 Experimental results
CHAPTER 5 CONCLUSION
  5.1 3D Holographic Ladar
  5.2 Original Contributions
  5.3 Publications and Presentations
REFERENCES
APPENDIX A Digital Holography
APPENDIX B Experimental Equipment
APPENDIX C The Cramer-Rao Lower Bound
APPENDIX D Some Details of the Eigenvector Method for 3D Holography

LIST OF FIGURES

Figure 2.1. Circular HAL mode. The sensor platform follows a circular path around the target, deviating from the (xa, ya) plane (red dashed line).
Figure 2.2. The inverse circular HAL (IC-HAL) geometry. Notice that the rotation angle θ is positive in the clockwise (CW) direction. This is in keeping with the effective rotation direction that would be observed if the target was non-rotating and the TX instead translated in the positive xa direction.
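The eleven PGA steps enumerated under Section 3.3 can be sketched as a loop over the frequency dimension of an image-plane data cube. The following is an illustrative reconstruction, not the dissertation's code: array shapes, names, and the window length are assumptions, and the cross-range support mask of step 3.3.2 is omitted for brevity.

```python
import numpy as np

def pga_3d(cube, n_iter=5, window=16):
    """Sketch of the Section 3.3 PGA loop for 3D holographic ladar.

    cube : complex array, shape (ny, nx, nf) -- image-plane samples at
           nf temporal frequencies (shapes and names are assumptions).
    """
    cube = cube.copy()
    nf = cube.shape[-1]
    for _ in range(n_iter):
        # 3.3.3  Range compress (1D IDFT along the frequency axis).
        rng = np.fft.ifft(cube, axis=-1)
        # 3.3.4  Centershift the brightest range bin of each pixel.
        bright = np.argmax(np.abs(rng), axis=-1)
        shift = nf // 2 - bright
        rows, cols = np.indices(bright.shape)
        idx = (np.arange(nf)[None, None, :] - shift[..., None]) % nf
        rng = rng[rows[..., None], cols[..., None], idx]
        # 3.3.5  Window in the range dimension.
        w = np.zeros(nf)
        w[nf // 2 - window // 2: nf // 2 + window // 2] = 1.0
        rng *= w
        # 3.3.6  Decompress in range (1D DFT).
        dec = np.fft.fft(rng, axis=-1)
        # 3.3.7  Phase gradient estimate: sum conjugate products of
        #        adjacent frequency samples over all image pixels.
        grad = np.angle(np.sum(dec[..., 1:] * np.conj(dec[..., :-1]),
                               axis=(0, 1)))
        # 3.3.8  Integrate the gradient to recover the aberration estimate.
        phase = np.concatenate(([0.0], np.cumsum(grad)))
        # 3.3.9  Detrend: remove the best-fit line (a linear phase is
        #        only a bulk range shift, not an aberration).
        k = np.arange(nf)
        phase -= np.polyval(np.polyfit(k, phase, 1), k)
        # 3.3.10 Apply the phase correction to the image-plane data.
        cube *= np.exp(-1j * phase)
    return cube
```

With a synthetic quadratic phase error across the frequency samples, the range-compressed peak sharpens after correction, which is the behavior the abstract reports for the laboratory data.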
Recommended publications
  • Comparison Between Digital Fresnel Holography and Digital Image-Plane Holography: The Role of the Imaging Aperture
    Comparison between Digital Fresnel Holography and Digital Image-Plane Holography: The Role of the Imaging Aperture. M. Karray, Pierre Slangen, Pascal Picart. Experimental Mechanics, Society for Experimental Mechanics, 2012, 52 (9), pp. 1275-1286. doi:10.1007/s11340-012-9604-6. HAL Id: hal-02012133, https://hal.archives-ouvertes.fr/hal-02012133, submitted on 26 Feb 2020.
    Abstract: Optical techniques are now broadly used in the field of experimental mechanics. The main advantages are that they are non-intrusive and contact-free. Moreover, optical techniques lead to full spatial resolution displacement maps, enabling the computation of mechanical values at high spatial resolution. Particularly, a theoretical analysis of the influence of the aperture and lens in the case of image-plane holography is proposed. Optimal filtering and image recovering conditions are thus established. Experimental results show the appropriateness of the theoretical analysis.
  • Digital Holography Using a Laser Pointer and Consumer Digital Camera, Report date: June 22nd, 2004
    Digital Holography using a Laser Pointer and Consumer Digital Camera. Report date: June 22nd, 2004. By David Garber, M.E. undergraduate, Johns Hopkins University, [email protected]. Available online at http://pegasus.me.jhu.edu/~lefd/shc/LPholo/lphindex.htm
    Acknowledgements: Faculty sponsor: Prof. Joseph Katz, [email protected]. Project idea and direction: Dr. Edwin Malkiel, [email protected]. Digital holography reconstruction: Jian Sheng, [email protected].
    Abstract: The point of this project was to examine the feasibility of low-cost holography. Can viable holograms be recorded using an ordinary diode laser pointer and a consumer digital camera? How much does it cost? What are the major difficulties? I set up an in-line holographic system and recorded both still images and movies using a $600 Fujifilm Finepix S602Z digital camera and a $10 laser pointer by Lazerpro. Reconstruction of the stills shows clearly that successful holograms can be created with a low-cost optical setup. The movies did not reconstruct, due to compression and very low image resolution.
    Theoretical Background: What is a hologram? The Merriam-Webster dictionary defines a hologram as "a three-dimensional image reproduced from a pattern of interference produced by a split coherent beam of radiation (as a laser)." Holograms can produce a three-dimensional image, but it is more helpful for our purposes to think of a hologram as a photograph that can be refocused at any depth. So while a photograph taken of two people standing far apart would have one in focus and one blurry, a hologram taken of the same scene could be reconstructed to bring either person into focus.
  • Depth of Field for Digital Images (PDF only)
    Depth of Field for Digital Images. Robin D. Myers, Better Light, Inc. In the days before digital images, before the advent of roll film, photography was accomplished with photosensitive emulsions spread on glass plates. After processing and drying the glass negative, it was contact printed onto photosensitive paper to produce the final print. The size of the final print was the same size as the negative. During this period some of the foundational work into the science of photography was performed. One of the concepts developed was the circle of confusion. Contact prints are usually small enough that they are normally viewed at a distance of approximately 250 millimeters (about 10 inches). At this distance the human eye can resolve a detail that occupies an angle of about 1 arc minute. The eye cannot see a difference between a blurred circle and a sharp-edged circle that just fills this small angle at this viewing distance. The diameter of this circle is called the circle of confusion. Converting the diameter of this circle into a size measurement, we get about 0.1 millimeters. If we assume a standard print size of 8 by 10 inches (about 200 mm by 250 mm) and divide this by the circle of confusion, then an 8x10 print would represent about 2000x2500 smallest discernible points. If these points are equated to digital pixels, then the resolution of an 8x10 print would be about 2000x2500 pixels, or about 250 pixels per inch (100 pixels per centimeter). The circle of confusion used for 4x5 film has traditionally been that of a contact print viewed at the standard 250 mm viewing distance.
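The arithmetic in the passage above can be checked directly; the values below are the ones stated in the excerpt.

```python
import math

# Circle-of-confusion arithmetic from the passage above.
viewing_distance_mm = 250.0                    # standard print-viewing distance
arc_minute_rad = math.radians(1.0 / 60.0)      # ~1 arc minute of visual acuity
coc_mm = viewing_distance_mm * arc_minute_rad  # ~0.073 mm; the text rounds this to ~0.1 mm

# An 8x10 inch print (~200 mm x 250 mm) divided by a 0.1 mm circle of confusion:
points = (round(200 / 0.1), round(250 / 0.1))  # (2000, 2500) discernible points
ppi = 2000 / 8                                 # 250 pixels per inch
print(coc_mm, points, ppi)
```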
  • Depth of Focus (DOF)
    Erect Image: An image in which the orientations of left, right, top, bottom and moving directions are the same as those of a workpiece on the workstage.
    Depth of Focus (DOF), unit: mm: Also known as "depth of field", this is the distance (measured in the direction of the optical axis) between the two planes which define the limits of acceptable image sharpness when the microscope is focused on an object. As the numerical aperture (NA) increases, the depth of focus becomes shallower, as shown by the expression below (λ = 0.55 µm is often used as the reference wavelength):
        DOF = λ / (2·(NA)²)
    Example: For an M Plan Apo 100X lens (NA = 0.7), the depth of focus of this objective is 0.55 µm / (2 × 0.7²) = 0.6 µm.
    Field number (FN), real field of view, and monitor display magnification, unit: mm: The observation range of the sample surface is determined by the diameter of the eyepiece's field stop. The value of this diameter in millimeters is called the field number (FN). In contrast, the real field of view is the range on the workpiece surface when actually magnified and observed with the objective lens. The real field of view (the range of the workpiece that can be observed with the microscope, as a diameter) can be calculated with the following formula: Real field of view = FN of eyepiece / objective lens magnification.
    Bright-field Illumination and Dark-field Illumination: In brightfield illumination a full cone of light is focused by the objective on the specimen surface. This is the normal mode of viewing with an optical microscope. With darkfield illumination, the inner area of the light cone is blocked so that the surface is only illuminated by light from an oblique angle.
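The depth-of-focus expression quoted in the glossary entry above is easy to verify numerically. This is a minimal sketch; the function name is ours, and the example values are the ones from the text.

```python
def depth_of_focus_um(na, wavelength_um=0.55):
    # DOF = lambda / (2 * NA^2), with lambda = 0.55 um as the reference wavelength.
    return wavelength_um / (2.0 * na ** 2)

# The M Plan Apo 100X example from the text (NA = 0.7):
dof = depth_of_focus_um(0.7)   # 0.55 / (2 * 0.49) ~= 0.56 um, quoted as ~0.6 um
print(round(dof, 2))
```

Note how quickly the depth of focus shrinks as NA grows: halving NA from 0.7 to 0.35 quadruples the DOF, matching the 1/NA² dependence.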
  • Scanning Camera for Continuous-Wave Acoustic Holography
    Scanning Camera for Continuous-Wave Acoustic Holography. Hillary W. Gao, Kimberly I. Mishra, Annemarie Winters, Sidney Wolin, and David G. Grier. Department of Physics and Center for Soft Matter Research, New York University, New York, NY 10003, USA. (Dated: 7 August 2018.) Contact: [email protected]
    Abstract: We present a system for measuring the amplitude and phase profiles of the pressure field of a harmonic acoustic wave with the goal of reconstructing the volumetric sound field. Unlike optical holograms that cannot be reconstructed exactly because of the inverse problem, acoustic holograms are completely specified in the recording plane. We demonstrate volumetric reconstructions of simple arrangements of objects using the Rayleigh-Sommerfeld diffraction integral, and introduce a technique to analyze the dynamic properties of insonated objects.
    Most technologies for acoustic imaging use the temporal and spectral characteristics of acoustic pulses to map interfaces between distinct phases. This is the basis for sonar [1], and medical and industrial ultrasonography [2]. Imaging continuous-wave sound fields is useful for industrial and environmental noise analysis, particularly for source localization [3]. Substantially less attention has been paid to visualizing the amplitude and phase profiles of sound fields for their own sakes, with most effort being focused on visualizing the near-field acoustic radiation emitted by localized sources, a technique known as near-field acoustic holography (NAH) [4-6]. The advent of acoustic manipulation in holographically structured sound fields [7-11] creates a need for effective sound-field visualization.
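The Rayleigh-Sommerfeld reconstruction mentioned in the abstract above is commonly discretized with the angular-spectrum method. The sketch below is an illustrative implementation of that standard discretization, not the authors' code, and all parameter values are assumptions.

```python
import numpy as np

def rayleigh_sommerfeld_propagate(field, wavelength, dx, z):
    """Propagate a recorded complex field a distance z using the
    angular-spectrum form of the Rayleigh-Sommerfeld integral.
    field      : 2D complex array sampled on a grid with pitch dx
    wavelength : wavelength in the medium (same units as dx and z)
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)           # spatial frequencies, cycles/unit
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function is unitary on the propagating band, stepping a field forward by z and then back by -z recovers the recorded plane, which is the sense in which the abstract says the hologram is "completely specified in the recording plane."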
  • Digital Refocusing with Incoherent Holography
    Digital Refocusing with Incoherent Holography. Oliver Cossairt, Nathan Matsuda (Northwestern University, Evanston, IL; [email protected], [email protected]); Mohit Gupta (Columbia University, New York, NY; [email protected]).
    Abstract: Light field cameras allow us to digitally refocus a photograph after the time of capture. However, recording a light field requires either a significant loss in spatial resolution [10, 20, 9] or a large number of images to be captured [11]. In this paper, we propose incoherent holography for digital refocusing without loss of spatial resolution from only 3 captured images. The main idea is to capture 2D coherent holograms of the scene instead of the 4D light fields. The key properties of coherent light propagation are that the coherent spread function (hologram of a single point source) encodes scene depths and has a broadband spatial frequency response.
    …'adding' and 'subtracting' blur to the point spread functions (PSF) of different scene points, depending on their depth. In a conventional 2D image, while it is possible to add blur to a scene point's PSF, it is not possible to subtract or remove blur without introducing artifacts in the image. This is because 2D imaging is an inherently lossy process; the angular information in the 4D light field is lost in a 2D image. Thus, typically, if digital refocusing is desired, 4D light fields are captured. Unfortunately, light field cameras sacrifice spatial resolution in order to capture the angular information. The loss in resolution can be significant, up to 1-2 orders of magnitude. While there have been attempts to improve resolution by using compressive sensing techniques [14], …
  • A Guide to Smartphone Astrophotography National Aeronautics and Space Administration
    National Aeronautics and Space Administration. A Guide to Smartphone Astrophotography. Dr. Sten Odenwald, NASA Space Science Education Consortium, Goddard Space Flight Center, Greenbelt, Maryland. Cover designs and editing by Abbey Interrante. Cover illustrations, front: Aurora (Elizabeth Macdonald), moon (Spencer Collins), star trails (Donald Noor), Orion nebula (Christian Harris), solar eclipse (Christopher Jones), Milky Way (Shun-Chia Yang), satellite streaks (Stanislav Kaniansky), sunspot (Michael Seeboerger-Weichselbaum), sun dogs (Billy Heather); back: Milky Way (Gabriel Clark). Two front cover designs are provided with this book. To conserve toner, begin document printing with the second cover. This product is supported by NASA under cooperative agreement number NNH15ZDA004C.
    Table of Contents: Introduction; How to use this book; 1.0 Light Pollution; 2.0 Cameras; …
  • Samyang T-S 24mm f/3.5 ED AS UMC
    GEAR ON TEST: Samyang T-S 24mm f/3.5 ED AS UMC. As the appeal of tilt-shift lenses continues to broaden, Samyang has unveiled a 24mm perspective-control optic in a range of popular fittings, at a price that's considerably lower than rivals from Nikon and Canon. Words: Terry Hope.
    Traditionally, tilt-shift lenses have been seen as specialist products, aimed at architectural photographers wanting to correct converging verticals and product photographers seeking to maximise depth-of-field. The high price of such lenses reflects the low numbers sold, as well as the precision nature of their design and construction. But despite this, the tilt-shift lens seems to be undergoing something of a resurgence in popularity. This increasing appeal isn't because photographers are shooting more architecture or box shots. It's more down to the popularity of miniaturisation effects in landscapes, where such a tiny part of the frame is in focus that it appears as though you are looking at a scale model, rather than the real thing. It's an effect that many hobbyist DSLRs, CSCs and compacts can generate digitally, and it's straightforward to create in Photoshop too, but these digital recreations are just approximations. To get the real thing you'll need to shoot with a tilt-shift lens. Up until now that would cost you over £1500, and when you consider the number of assignments you might shoot in a year using a tilt-shift lens, that's not great value for money. Things are set to change, though, because there's a new kid on the block in the shape of the Samyang T-S 24mm f/3.5 ED AS UMC, which has a street price of less…
    "The optical quality of the lens is very good, and is testament to the reputation that Samyang is rapidly gaining in the optics market."
  • Backlighting
    Challenge #1 - Capture Light Bokeh. Have you seen those beautiful little circles of colour and light in the background of photos? It's called bokeh, and it comes from the Japanese word "boke", meaning blur. It's so pretty, and not only does it make a gorgeous backdrop for portraits, but it can be the subject in its own right too! It's also one of the most fun aspects of learning photography, being able to capture your own lovely bokeh! And that's what our first challenge is all about: capturing light bokeh! And remember, just as I said in this video, the purpose of these challenges is NOT to take perfect photos the first time you try something new… it's about looking for light, trying new techniques, and exploring your creativity to help you build your skills and motivate and inspire you. So let's have fun, and I can't wait to see your photos! Copyright 2017 - www.clicklovegrow.com
    How to Achieve Light Bokeh: Light bokeh is created when light reflects off, or through, a background, and is then captured with a wide open aperture of your lens. As light reflects differently off flat surfaces, you're not likely to see this effect when using a plain wall, for example. Instead, you'll see it when your background has a little texture, such as light reflecting off leaves in the foliage of a garden, or wet grass, or when light is broken up while streaming through trees.
    Looking for Light: One of the key take-aways for this challenge is to start paying attention to light and how you can capture it in photos, both to add interest and feeling to your portraits, and as a creative subject in its own right.
  • State-Of-The-Art in Holography and Auto-Stereoscopic Displays
    State-of-the-art in holography and auto-stereoscopic displays. Daniel Jönsson, 2019-05-13.
    Contents: Introduction; Auto-stereoscopic displays (Two-View Autostereoscopic Displays; Multi-view Autostereoscopic Displays; Light Field Displays); Market (Display panels; AR); Application Fields; Companies; …
  • Full-Color See-Through Three-Dimensional Display Method Based on Volume Holography
    Sensors (Article): Full-Color See-Through Three-Dimensional Display Method Based on Volume Holography. Taihui Wu (1,2), Jianshe Ma (2), Chengchen Wang (1,2), Haibei Wang (2) and Ping Su (2,*). (1) Department of Precision Instrument, Tsinghua University, Beijing 100084, China; (2) Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China. * Correspondence: [email protected]
    Abstract: We propose a full-color see-through three-dimensional (3D) display method based on volume holography. This method is based on real object interference, avoiding the device limitation of the spatial light modulator (SLM). The volume hologram has a slim and compact structure, which realizes 3D display through one single layer of photopolymer. We analyzed the recording mechanism of volume holographic gratings, diffraction characteristics, and influencing factors of refractive index modulation through Kogelnik's coupled-wave theory and the monomer diffusion model of photopolymer. We built a multiplexing full-color reflective volume holographic recording optical system and conducted a simultaneous exposure experiment. Under the illumination of white light, a full-color 3D image can be reconstructed. Experimental results show that the average diffraction efficiency is about 53%, and the grating fringe pitch is less than 0.3 µm. The reconstructed image of the volume hologram has high diffraction efficiency, high resolution, strong stereo perception, and a large observing angle, which provides a technical reference for augmented reality.
    Keywords: holographic display; volume holography; photopolymer; augmented reality
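For a lossless reflection volume grating of the kind the abstract above analyzes, Kogelnik's coupled-wave theory gives a closed-form peak diffraction efficiency. The sketch below states that standard formula; the numerical parameter values are illustrative assumptions, not the paper's measured values.

```python
import math

def kogelnik_reflection_efficiency(delta_n, d_um, wavelength_um, cos_theta=1.0):
    """Peak (Bragg-matched) diffraction efficiency of a lossless
    reflection volume grating, from Kogelnik's coupled-wave theory:
        eta = tanh^2(pi * delta_n * d / (lambda * cos(theta)))
    delta_n : refractive index modulation
    d_um    : grating (photopolymer layer) thickness in micrometers
    """
    nu = math.pi * delta_n * d_um / (wavelength_um * cos_theta)
    return math.tanh(nu) ** 2

# Assumed example: delta_n = 0.01, 16-um photopolymer layer, 532 nm light.
eta = kogelnik_reflection_efficiency(0.01, 16.0, 0.532)   # ~0.54, i.e. ~54%
print(round(eta, 2))
```

The tanh² dependence means efficiency saturates toward 100% as the index modulation or layer thickness grows, so a measured average near 53% is consistent with a moderate delta_n·d product.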
  • 3D-Integrated Metasurfaces for Full-Colour Holography Yueqiang Hu 1, Xuhao Luo1, Yiqin Chen1, Qing Liu1,Xinli1,Yasiwang1,Naliu2 and Huigao Duan1
    Hu et al., Light: Science & Applications (2019) 8:86, https://doi.org/10.1038/s41377-019-0198-y, www.nature.com/lsa. Official journal of the CIOMP. Article, Open Access.
    3D-Integrated metasurfaces for full-colour holography. Yueqiang Hu (1), Xuhao Luo (1), Yiqin Chen (1), Qing Liu (1), Xin Li (1), Yasi Wang (1), Na Liu (2) and Huigao Duan (1).
    Abstract: Metasurfaces enable the design of optical elements by engineering the wavefront of light at the subwavelength scale. Due to their ultrathin and compact characteristics, metasurfaces possess great potential to integrate multiple functions in optoelectronic systems for optical device miniaturisation. However, current research based on multiplexing in the 2D plane has not fully utilised the capabilities of metasurfaces for multi-tasking applications. Here, we demonstrate a 3D-integrated metasurface device by stacking a hologram metasurface on a monolithic Fabry-Pérot cavity-based colour filter microarray to simultaneously achieve low-crosstalk, polarisation-independent, high-efficiency, full-colour holography, and microprint. The dual functions of the device outline a novel scheme for data recording, security encryption, colour displays, and information processing. Our 3D integration concept can be extended to achieve multi-tasking flat optical systems by including a variety of functional metasurface layers, such as polarizers, metalenses, and others.
    Introduction: Metasurfaces open a new paradigm to design optical elements by shaping the wavefront of electromagnetic waves by tailoring the size, shape, and arrangement of subwavelength structures [1-3]. …three-dimensional (3D) integration of discrete metasurfaces. For example, anisotropic structures or interleaved subarrays have been specifically arranged in a 2D plane to obtain multiplexed devices [10, 16-18]. To reduce crosstalk, …