
UNIVERSITY OF CALIFORNIA, SAN DIEGO

Concentric Multi-Reflection Lenses for Ultra-Compact Imaging Systems

A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy

in

Electrical Engineering (Photonics)

by

Eric Julian Tremblay

Committee in charge:

Professor Joseph E. Ford, Chair
Professor Michael W. Berns
Professor Thomas R. Bewley
Professor Sadik C. Esener
Professor George C. Papen

2008

Copyright

Eric Julian Tremblay, 2008

All rights reserved.

The dissertation of Eric Julian Tremblay is approved, and it is acceptable in quality and form for publication on microfilm and electronically:

Chair

University of California, San Diego

2008


DEDICATION

I would like to thank my research advisor, Professor Joseph Ford, for his expertise, patience and guidance throughout the course of my time at UCSD. In addition, many thanks go out to the members of the UCSD Photonics Systems Integration Lab with whom I have worked and shared ideas for the past four years.

Special thanks also go to Ron Stack and Rick Morrison at Distant Focus Corporation whose skills and expertise in packaging, coding, electronics and prototyping made the demonstration cameras a reality.

This dissertation is dedicated to my parents for their love and support, and to my wife, Camille.


TABLE OF CONTENTS

SIGNATURE PAGE ...... iii
DEDICATION ...... iv
TABLE OF CONTENTS ...... v
LIST OF ABBREVIATIONS ...... vii
LIST OF FIGURES ...... ix
LIST OF TABLES ...... xiii
ACKNOWLEDGEMENTS ...... xiv
VITA ...... xvi
ABSTRACT OF THE DISSERTATION ...... xvii
CHAPTER I INTRODUCTION ...... 1

I.A ULTRA-COMPACT IMAGING SYSTEMS ...... 1
I.A.1 Aspheric Optics for Compact Cameras ...... 2
I.A.2 Impact of the Digital Format ...... 3
I.B DESIGNING FOR SPECIFIC SYSTEM CONSTRAINTS ...... 4
I.B.1 Application Examples Requiring Design Innovation for Compactness ...... 5
I.B.2 The Montage Program ...... 7
I.C DISSERTATION OUTLINE ...... 8
CHAPTER II FUNDAMENTALS OF CONCENTRIC MULTI-REFLECTION LENSES ...... 11

II.A CONCENTRIC MULTI-REFLECTION CONCEPT ...... 11
II.B CONCENTRIC MULTI-REFLECTION OPTICS ...... 12
II.B.1 Geometrical Optics ...... 12
II.B.2 Physical Optics ...... 15
II.C RESOLUTION & LIGHT COLLECTION ...... 18
II.D VOLUME & THICKNESS ...... 19
II.E ARC-SECTIONING ...... 22
II.E.1 Increased Depth of Focus & Depth of Field ...... 23
II.E.2 Physical Optics of Arc-Sectioning ...... 26
II.E.3 Aperture and Volume ...... 28
II.F SPECIFIC DESIGN CONSIDERATIONS ...... 29
II.F.1 Design Space ...... 29
II.F.2 Implementation Challenges ...... 31
II.F.3 General Design Procedure ...... 32
II.F.4 Air-spaced ...... 34
CHAPTER III CONCENTRIC MULTI-REFLECTION CAMERA PROTOTYPES ...... 36

III.A EIGHT-REFLECTION CAMERA PROTOTYPE ...... 36
III.A.1 Eight-Reflection Lens Design ...... 36
III.A.2 Eight-Reflection Camera Demonstration ...... 42
III.B ARC-SECTIONED EIGHT-REFLECTION CAMERA PROTOTYPE ...... 50
III.B.1 Arc-Sectioned Eight-Reflection Design ...... 50
III.B.2 Arc-Sectioned Eight-Reflection Camera Demonstration ...... 52
III.C FOUR-REFLECTION CAMERA PROTOTYPE ...... 58
III.C.1 Four-Reflection Lens Design ...... 58
III.C.2 Four-Reflection Camera Demonstration ...... 64


CHAPTER IV PUPIL-PHASE ENCODING AND POST-PROCESSING ...... 73

IV.A APPLICATION OF PPE TO CONCENTRIC MULTI-REFLECTION CAMERAS ...... 74
IV.B PPE EIGHT-REFLECTION CAMERA PROTOTYPE ...... 75
IV.B.1 PPE Design and Nominal Performance ...... 76
IV.B.2 Experimental Results ...... 80
IV.C PPE FOUR-REFLECTION CAMERA PROTOTYPE ...... 84
IV.C.1 PPE Design and Nominal Performance ...... 85
IV.C.2 Experimental Results ...... 89
CHAPTER V ONGOING WORK ...... 94

V.A FOUR-REFLECTION 7-ELEMENT CAMERA ARRAY: 13 MEGAPIXEL VISIBLE-LIGHT CAMERA ...... 94
V.A.1 Four-Reflection 7-Element Array Design ...... 95
V.A.2 Implementation & Demonstration of a 7-Element Camera Prototype ...... 96
V.B INFRARED CONCENTRIC MULTI-REFLECTION CAMERA DESIGN ...... 100
V.B.1 Comparison to a Conventional IR Lens ...... 100
V.B.2 Infrared Four-Reflection Air-Spaced Design ...... 102
V.C COMPRESSIVE IMAGING WITH A CONCENTRIC MULTI-REFLECTION LENS ...... 104
V.C.1 Compressive Imaging System Hardware and Set-up ...... 106
V.C.2 Preliminary Experimental Results ...... 109
CHAPTER VI CONCLUSION ...... 115
APPENDIX A CMR LENS PRESCRIPTION DATA ...... 119
APPENDIX B CMR LENS EXAMPLE WITH AIR-SPACED DIELECTRICS & DIFFRACTIVE CHROMATIC CORRECTION ...... 122
APPENDIX C PPE USER DEFINED SURFACE ...... 124
APPENDIX D ABERRATION MAPPING IN HIGHLY OBSCURED ANNULAR SYSTEMS ...... 132
BIBLIOGRAPHY ...... 135


LIST OF ABBREVIATIONS

λ Wavelength of Light (unit)
AWGN Additive White Gaussian Noise
CaF2 Calcium Fluoride
CCD Charge Coupled Device
CI Compressive Imaging
CMR Concentric Multi-Reflection
CS Compressed Sensing
CTF Contrast Transfer Function
CMOS Complementary Metal-Oxide Semiconductor
DARPA Defense Advanced Research Projects Agency
DDR Double Data Rate
DFC Distant Focus Corporation
DLL Dynamic Link Library
DLP® Digital Light Processing (Trademark of Texas Instruments)
DMD Digital Micromirror Device
DOF Depth of Field
DVI Digital Visual Interface
EFL Effective Focal Length
F Focal Length
F/# F-number
fps Frames Per Second (unit)
FPGA Field Programmable Gate Array
FOV Field of View
IR Infrared
LCD Liquid Crystal Display
LED Light Emitting Diode
LMMSE Linear Minimum Mean Square Error
lp/mm Line Pairs Per Millimeter (unit)
LUT Look Up Table
LWIR Long-Wave Infrared
MDO Multi-Domain Optimization
MB Megabytes (unit)
Mpixels Megapixels (unit)
MTF Modulation Transfer Function
MWIR Mid-Wave Infrared
NA Numerical Aperture
NIR Near Infrared
OTF Optical Transfer Function
PC Personal Computer
PCB Printed Circuit Board
PPE Pupil-Phase Encoded
PSF Point Spread Function
RGB Red-Green-Blue
RMS Root Mean Square
RMSE Root Mean Square Error
SDRAM Synchronous Dynamic Random Access Memory


SCSI Small Computer System Interface
SNR Signal-to-Noise Ratio
SPDT Single Point Diamond Turning
SRAM Static Random Access Memory
TIR Total Internal Reflection
UAV Unmanned Aerial Vehicle
UDS User Defined Surface
µm Microns (unit)
USAF United States Air Force
USB Universal Serial Bus
UV Ultraviolet
YUV Luminance-Bandwidth-Chrominance
ZnSe Zinc Selenide


LIST OF FIGURES

Figure I-1: Motivation for the MONTAGE program...... 7

Figure II-1: (a) Conventional compound refractive lens. (b) Concentric multi-reflection lens concept...... 12

Figure II-2: Simplified paraxial geometry and comparison. (a) Thin (paraxial) lens of focal length F. (b) Four-reflection obscured version using a single thin (paraxial) reflector of focal length F...... 13

Figure II-3: Example curves of annular aperture width (w) versus FOV for various diameters using paraxial geometrical calculations...... 14

Figure II-4: Diffraction limited incoherent (a) PSF and (b) MTF of a 60 mm circular aperture with different levels of central obscuration...... 16

Figure II-5: Relative energy collection versus spatial frequency for various circular apertures...... 17

Figure II-6: Outer diameter versus obscuration to maintain constant collection aperture area (effective diameter) compared to an unobscured lens...... 19

Figure II-7: Relative energy collection versus spatial frequency for eight-reflection and four-reflection lens designs matched to a 35 mm F/1.4 conventional lens up to 156 lp/mm...... 21

Figure II-8: Arc-sectioned geometry with parameters for depth of focus and through-focus image shift calculations. (a) xy view, (b) x cross-section, and (c) y cross-section...... 24

Figure II-9: Diffraction from an arc-sectioned aperture...... 27

Figure II-10: Field of view versus equivalent aperture diameter for several CMR designs...... 29

Figure II-11: Diffraction limited relative illumination versus spatial frequency comparing a conventional miniature lens (F/1.8, EFL = 5mm), a four-reflection lens (F/1.13, F = 19 mm), and an eight-reflection lens (F/1.4, F = 38 mm); all of 5 mm total thickness...... 30

Figure III-1: (a) Eight-reflection lens in CaF2 schematic, (b) calculated monochromatic MTF, (c) simulated monochromatic (588 nm) geometric spot diagram, and (d) simulated broad-spectrum (486, 588, 656 nm) geometric spot diagram...... 37

Figure III-2: Monochromatic MTF curves showing the refocus possible by repositioning the image plane...... 38

Figure III-3: Through-focus monochromatic spot distributions for ±4 μm range...... 39

Figure III-4: Analysis showing transmission through the eight-reflection lens versus output ray angle ...... 41

Figure III-5: Eight-reflection camera prototype fabrication and integration...... 43

Figure III-6: 3-D surface measurement of a diamond turned asphere in CaF2 using a large-magnification white-light interferometer...... 44

Figure III-7: Images taken with the first three eight-reflection camera prototypes...... 46

Figure III-8: Conventional and eight-reflection camera comparison...... 47


Figure III-9: Measured in-focus incoherent system MTF (lens + sensor) comparison of the eight-reflection camera and the conventional comparison camera using identical image sensors...... 48

Figure III-10: Eight-reflection camera prototype thermal testing...... 49

Figure III-11: Arc-sectioned eight-reflection lens drawing...... 51

Figure III-12: Arc-sectioned eight-reflection design’s simulated performance...... 51

Figure III-13: Geometrical best-focus and 4% object defocus spot diagrams at 2.5 m for (a) F/1.9, F = 40 mm conventional reference lens, (b) full-aperture eight-reflection lens, and (c) arc-sectioned eight-reflection lens...... 52

Figure III-14: (a) The Forza/Sunplus CMOS image sensor (left) and the Omnivision 3620 CMOS image sensor (right). (b) Measured sensitivity versus incidence angles for the Forza/Sunplus and Omnivision image sensors showing the Forza/Sunplus sensor’s reduced pixel vignetting...... 53

Figure III-15: Arc-sectioned eight-reflection camera fabrication and assembly...... 54

Figure III-16: Qualitative camera comparison. (a) F/1.9 43 mm conventional lens, (b) conventional web mini-cam (F ≈ 3.9 mm), (c) full-aperture eight-reflection camera, and (d) arc-sectioned eight-reflection camera...... 55

Figure III-17: Through focus images of a 1951 USAF resolution chart. (a) F/1.9 40 mm Tokina conventional lens, (b) full aperture eight-reflection camera, and (c) arc-sectioned eight-reflection camera. 57

Figure III-18: Through focus resolution for the conventional reference camera (F/1.9, 40 mm), the full-aperture eight-reflection camera, and the 50° arc-sectioned eight-reflection camera in the (a) horizontal and (b) vertical directions using a 1951 USAF resolution chart...... 58

Figure III-19: (a) Layout drawing (3/4 section) of the four-reflection camera, and (b) monochromatic MTF at the nominal design object distance of 10m...... 59

Figure III-20: (a) Cross-section and (b) perspective drawings of the mechanical lens package assembly. ... 61

Figure III-21: Simulated refocus performance of the four-reflection lens design. Monochromatic incoherent MTF curves at object distances of (a) 4m and (b) 1km...... 63

Figure III-22: Four-reflection camera assembly- (a) diamond turned and coated optical parts, and (b) image of the assembled four-reflection camera with USB interface PCB...... 65

Figure III-23: Performance at 3.9 m: (a) full image, (b) enlarged and cropped 1951 USAF resolution chart, (c) measured CTF (lens + sensor) of the four-reflection camera compared to a conventional F/1.4 lens of the same focal length and sensor, and (d) image space resolution versus object distance for the four-reflection camera and conventional F/1.4 comparison camera...... 67

Figure III-24: Refocused outdoor images captured with the four-reflection camera...... 69

Figure III-25: Stray Light in the four-reflection camera...... 71

Figure IV-1: PPE eight-reflection lens. (a) Cross-section illustrating ray path. (b) Back (aspheric) side perspective view...... 76


Figure IV-2: Through-focus incoherent MTF plot for an on-axis field point at 156 cycles/mm in image space...... 78

Figure IV-3: Through-focus simulated digital PSFs for the nominal PPE and unmodified eight-reflection cameras...... 78

Figure IV-4: Digital filter used to process PPE images...... 79

Figure IV-5: Best-focus PSF measured using the PPE eight-reflection imaging system...... 81

Figure IV-6: Filter designed using measured PSF...... 82

Figure IV-7: USAF targets imaged through eight-reflection cameras at best-focus. (a) Unmodified. (b) PPE...... 83

Figure IV-8: USAF targets imaged through eight-reflection cameras at 10 microns away from best-focus. (a) Unmodified. (b) PPE...... 84

Figure IV-9: Simulated PSFs at best focus and ±10 µm defocus (±3.5 waves defocus at 550 nm). Unmodified PSFs, PPE PSFs before filtering, and inverse-filtered PPE PSFs...... 88

Figure IV-10: Measured PSFs at 3.9 m (best focus) and ±0.3 m (~ ±3.5 waves defocus at 550 nm). Unmodified PSFs, PPE PSFs before filtering, and inverse-filtered PPE PSFs...... 89

Figure IV-11: Experimental comparison images at 3.9 m (best focus). (a) Unmodified camera, and (b) processed PPE camera...... 91

Figure IV-12: Experimental comparison images at 3.6m (~3.5 waves defocus at 550 nm). (a) Unmodified camera, and (b) processed PPE camera...... 92

Figure V-1: (a) 7-element array camera diagram, (b) Seven 1.93 Mpixel image sensors capture and stitch the seven 17° fields for a total FOV of >30°...... 95

Figure V-2: (a) Issues and results encountered during the development of the four-reflection assembly process as displayed by the on-axis PSF. (b) Measured PSFs of the cameras used in the arrays...... 97

Figure V-3: (a) 13.4 Mpixel 7-element array camera with >30° FOV in a conformal 5.5 mm thick optical track. (b) 7-element array camera with FPGA processor and LCD display. (c) Indoor image at 4 m during stitching alignment. (d) A stitched outdoor image of sailboats in the San Diego Bay...... 99

Figure V-4: Diffraction limited relative energy collection versus spatial frequency comparing a conventional F/1.4 IR lens to an F/1.34 four-reflection air-spaced CMR IR lens of the same focal length (125 mm)...... 101

Figure V-5: (a) A schematic drawing and raytrace of the four-reflection MWIR-LWIR camera scaled to 125 mm focal length and evaluated incoherent MTF for (b) 1 µm light and (c) 10 µm light...... 102

Figure V-6: Preliminary optomechanical design for the first prototype NIR/LWIR CMR camera incorporating the baseline four-reflection camera...... 104

Figure V-7: Two-path compressive imaging system setup using an eight-reflection lens...... 107

Figure V-8: Compressive Imaging Setup Hardware...... 108


Figure V-9: (a) Simulated on-axis spot diagram on the intermediate image plane (DMD). (b) Experimental image of a bright point source on the DMD plane showing a PSF spread of ~ 5 DMD pixels along the diagonal...... 109

Figure V-10: Two Detector Hadamard reconstruction...... 111

Figure V-11: Relative RMSE (experimental) versus number of features for 1 and 2 detectors...... 112

Figure V-12: (a) Linear and (b) non-linear reconstruction of a 64x64 binary object using 200 Hadamard features...... 113

Figure V-13: Reconstruction of a 64x64 binary object using 1000 random masks...... 114

Figure B-1: Four-reflection CMR design example using two air-spaced dielectrics and diffractive chromatic aberration correction...... 123

Figure D-1: Ray trace comparison of aberrations in a 90% obscured imaging system (dark lines) versus those in a conventional imaging system...... 132


LIST OF TABLES

Table II-1: Size comparisons to match light collection aperture area to an F/1.4 35 mm conventional lens. Values are relative to the conventional lens...... 20

Table II-2: Size comparisons to match diffraction limited relative energy collection to an F/1.4 35 mm conventional lens up to 156 lp/mm. Values are relative to the conventional lens...... 22

Table IV-1: Calculated fabrication tolerances for the unmodified eight-reflection lens and the PPE eight-reflection lens...... 80


ACKNOWLEDGEMENTS

The text of Chapter II in part is a reprint of the material as it appears in:

• E. J. Tremblay, R. A. Stack, R. L. Morrison, and J. E. Ford, "Ultrathin cameras using annular folded optics," Appl. Opt. 46, 463-471 (2007).

• E. J. Tremblay, R. A. Stack, R. L. Morrison, J. H. Karp and J. E. Ford, "Ultrathin four-reflection imager," Appl. Opt. doc. ID 101823 (posted 4 November 2008, in press).

The dissertation author was the primary researcher and author.

The text of Chapter III in part is a reprint of the material as it appears in:

• E. J. Tremblay, R. A. Stack, R. L. Morrison, and J. E. Ford, "Ultrathin cameras using annular folded optics," Appl. Opt. 46, 463-471 (2007).

• E. J. Tremblay, R. A. Stack, R. L. Morrison, and J. E. Ford, "Arc-section annular folded optic imager," Proc. SPIE 6668, 666807 (2007).

• E. J. Tremblay, R. A. Stack, R. L. Morrison, J. H. Karp and J. E. Ford, "Ultrathin four-reflection imager," Appl. Opt. doc. ID 101823 (posted 4 November 2008, in press).

The dissertation author was the primary researcher and author.

The text of Chapter IV in part is a reprint of the material as it appears in:

• E. J. Tremblay, J. Rutkowski, I. Tamayo, P. E. X. Silveira, R. A. Stack, R. L. Morrison, M. A. Neifeld, Y. Fainman, and J. E. Ford, "Relaxing the alignment and fabrication tolerances of thin annular folded imaging systems using wavefront coding," Appl. Opt. 46, 6751-6758 (2007).

• E. J. Tremblay, R. A. Stack, R. L. Morrison, J. H. Karp and J. E. Ford, "Ultrathin four-reflection imager," Appl. Opt. doc. ID 101823 (posted 4 November 2008, in press).


The dissertation author was the primary researcher and author.

The text of Appendix D in part is a reprint of the material as it appears in:

• E. J. Tremblay, J. Rutkowski, I. Tamayo, P. E. X. Silveira, R. A. Stack, R. L. Morrison, M. A. Neifeld, Y. Fainman, and J. E. Ford, "Relaxing the alignment and fabrication tolerances of thin annular folded imaging systems using wavefront coding," Appl. Opt. 46, 6751-6758 (2007).

This material was contributed to the above publication by Joel Rutkowski and is therefore included as an appendix.

In addition to the coauthors listed above, I would like to acknowledge Jun Ke, Peter Ilinykh, Pavel Shekhtmeyster, Pawan Baheti and Professor Mark Neifeld for contributions to the compressive imaging project (Section V.C). I would also like to acknowledge Michael Stenner for help utilizing the Multi-Domain Optimization framework, and James Sutter for help with the ZEMAX User Defined Surface code.

Research contained in this dissertation was supported by the Defense Advanced Research Projects Agency (DARPA) via the MONTAGE program, grant HR0011-04-I-0045; and by the Natural Sciences and Engineering Research Council of Canada (NSERC) through a graduate student scholarship.


VITA

Bachelor of Science in Engineering Physics, University of Alberta, 2001

Master of Engineering in Electrical Engineering (Photonics), McGill University, 2004

Doctor of Philosophy in Electrical Engineering (Photonics), University of California, San Diego, 2008

PUBLICATIONS

E. J. Tremblay, R. A. Stack, R. L. Morrison, J. H. Karp and J. E. Ford, "Ultrathin four-reflection imager" Appl. Opt. doc. ID 101823 (posted 4 November 2008, in press).

P. Garcia, M. J. Mines, K. S. Bower, J. Hill, J. Menon, E. J. Tremblay, and B. Smith, “Robotic laser tissue welding of ocular tissue using chitosan films” Lasers Surg. Med. doc ID LSM-08-0003.R1 (posted 23 October 2008, in press).

E. J. Tremblay, J. Rutkowski, I. Tamayo, P. E. Silveira, R. A. Stack, R. L. Morrison, M. A. Neifeld, Y. Fainman, and J. E. Ford, "Relaxing the alignment and fabrication tolerances of thin annular folded imaging systems using wavefront coding" Appl. Opt. 46, 6751-6758 (2007).

E. J. Tremblay, R. A. Stack, R. L. Morrison and J. E. Ford, "Ultrathin cameras using annular folded optics" Appl. Opt. 46, 463-471 (2007).

Y. Zuo, B. Bahamin, E. J. Tremblay, C. Pulikkaseril, E. Shoukry, M. Mony, P. Langlois, V. Aimez and D. V. Plant, “1x2 and 1x4 electrooptic switches” IEEE Photon. Tech. Lett. 17, 2080-2082 (2005).

E. J. Tremblay, "Electro-optic beam scanning in domain inverted Lithium Tantalate for fast optical switching" Master's Thesis, McGill University (2003).

PATENT

J. E. Ford, E. Tremblay and Y. Fainman, “Multiple Reflective Lenses and Lens Systems”, filed


ABSTRACT OF THE DISSERTATION

Concentric Multi-Reflection Lenses for Ultra-Compact Imaging Systems

by

Eric Julian Tremblay

Doctor of Philosophy in Electrical Engineering (Photonics)

University of California, San Diego, 2008

Professor Joseph E. Ford, Chair

With smaller, slimmer and lighter cameras in high demand for consumer, medical and military applications, new efforts are required to improve the performance of size-constrained cameras. A major challenge for the size reduction of imaging systems is the scalability of the optics: as conventional imaging systems are scaled down, the focal length (i.e. magnification) scales down with the allowed optical thickness. Limited additionally by the size of the smallest available image sensors, these cameras are usually restricted to short focal length, small aperture lenses. Although miniature lenses work well for moderate field of view (FOV) applications, other applications require larger magnification and better light collection with reduced thickness, bulk and/or weight.


Most camera lenses refract light, leading to the familiar cylindrical package geometry. In some cases, where extended focal length or reduced track length is required, concentric mirrors can be used to effectively reduce barrel length. Recent advances in diamond machining and image processing make it possible to take this approach to a new extreme. With up to eight reflections, large ray angles, and a lens shaped more like a lens cap than a tube, concentric multi-reflection (CMR) lenses (sometimes referred to as Origami Optics) allow us to squeeze long focal lengths into a thin package and still collect enough light for fast, sharp exposures. Applications may range from compact imagers for micro-unmanned aerial vehicle (UAV) surveillance craft to a miniature telephoto lens for future cell phones.

In this dissertation I will present work on a new class of imagers developed using the CMR design approach. The resulting prototypes have yielded images comparable with those of much larger commercial ‘compact’ cameras. I will discuss the physical design, fabrication and testing of these cameras, including hybrid optical-digital optimization of the optics and post-processing to extend the depth of field (DOF) and relax fabrication and alignment tolerances. The dissertation will conclude with a discussion of future research and development directions.


Chapter I

Introduction

I.A Ultra-Compact Imaging Systems

The need for compact visible-light cameras has led to extensive use of short focal length, small aperture lenses which meet the strict space and weight constraints of many commercial and military applications. While these miniature refractive lenses often perform well in good lighting and within a small range of optical magnification, resolution and light collection are limited by the focal length and physical track length of the lens. Several new approaches have recently been proposed to improve the performance, versatility and function of these compact cameras. Liquid lens technology has been developed to reduce the complexity and bulk required for zoom and autofocus functions in miniature cameras [1][2][3]. Another approach utilizes spatial multiplexing of an array of thin subimagers in combination with post-detection processing to reconstruct a high-resolution image from low-resolution samples [4][5]. In addition, advanced optical fabrication techniques have led to novel ultra-compact catadioptric (reflective and refractive) lens designs using aspheric and free-form optics [6][7][8][9]. These recent advances illustrate the use of new fabrication techniques, design tools and digital processing degrees of freedom to realize novel compact imaging systems.


I.A.1 Aspheric Optics for Compact Cameras

In recent years the design tools, fabrication technologies, coatings, diffractive elements and optical materials available have all advanced considerably, making lenses that are far more compact, powerful, efficient and highly specified than ever. A very significant role in this advancement has been played by improvements to aspherical lens technology. Aspheric lenses are effective at correcting several nonchromatic geometrical aberrations and make it possible to reduce the number of lens elements in an optical design, or to reduce the aberrations incurred by high-powered elements [10]. Because aspheric elements allow optical designs to be more powerful with fewer elements, they play a critical role in compact lens design. Aspheric elements found in many commercial digital still cameras and other products are commonly fabricated using glass or plastic molding technology [11]. Single point diamond turning (SPDT) technology, which until recently was used primarily for fabricating high-quality infrared optics, can now also be used to produce visible-light aspheric optics with good surface figure [12][13].
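An aspheric surface of the kind discussed above is conventionally described by a base conic sag plus even-order polynomial deformation terms, the form used by optical design tools such as ZEMAX. A minimal sketch of that standard sag equation (the function and parameter names are illustrative):

```python
import math

def asphere_sag(r, radius, conic=0.0, coeffs=()):
    """Sag z(r) of a standard even asphere: a base conic section plus
    even-order polynomial terms A4*r^4 + A6*r^6 + ...
    radius: vertex radius of curvature; conic: conic constant k;
    coeffs: polynomial coefficients (A4, A6, ...)."""
    c = 1.0 / radius                      # vertex curvature
    z = c * r**2 / (1.0 + math.sqrt(1.0 - (1.0 + conic) * c**2 * r**2))
    for i, a in enumerate(coeffs, start=2):   # i=2 -> r^4 term, i=3 -> r^6, ...
        z += a * r**(2 * i)
    return z
```

With conic=0 and no polynomial terms this reduces to the exact sphere sag R − sqrt(R² − r²); the extra degrees of freedom are what let a single diamond-turned surface correct aberrations that would otherwise require additional elements.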

Future lens designs may also benefit from more general free-form optical designs [6][7][8][9]. Free-form optics provide a significant reduction in lens thickness and the necessary degrees of freedom for optimized performance, but still require improvements to the precision and flexibility of the fabrication and assembly techniques involved, and further development of the necessary design tools. This free-form catadioptric approach is similar in nature to the CMR lens approach discussed in this dissertation. Both approaches utilize multiple reflections to achieve compactness and aberration correction with extended focal lengths. The major differences between the two approaches are the concentricity of the annular reflectors (a single optic axis), the surface types, and the fabrication and assembly techniques used [14][15][16]. The simplicity of single-axis CMR optics fabrication compared to that of multi-axis free-form optics is its primary advantage.

I.A.2 Impact of the Digital Format

The field of imaging systems was revolutionized in the early 1970s by the concurrent development of solid-state image sensors and the computer microprocessor. Photo-electric (digital) imaging methods have several differences and advantages over the older photo-chemical imaging methods, and have opened up a wide range of new imaging applications in recent years. Of particular importance is the ease with which images can be stored, transmitted and processed in electronic imaging systems, advantages that are indispensable for many scientific and technical applications. In addition, the quality of these digital systems and the pictures they take have been improving rapidly. This has led to a massive consumer market in digital imaging, with the digital format replacing film for the majority of consumer applications.

With the added flexibility of the digital format, image processing has become an integral part of the function of imaging systems. As opposed to traditional imaging system design, where image formation and detection were considered independently, the optics, detection and processing of modern cameras can now be considered and designed at the systems level to improve or add new imaging capabilities [17]. Joint optimization of an imaging system’s optics and post-processing has proven useful for tasks such as extending the DOF, relaxing tolerances and correcting aberrations [18]. Digital imaging systems are also well suited to exploring new computational approaches to image acquisition and reconstruction. These computational techniques make it possible for new image measurement paradigms to be realized. One example is the new field of compressive imaging, where high-fidelity imaging can be accomplished using a small number of non-adaptive linear measurements in conjunction with computational reconstruction algorithms [19][20][21]. These new computational techniques are also being investigated to more efficiently perform specific tasks or measurements, such as object identification or recognition, as opposed to the visually pleasing object/scene representations that are traditionally desired [22][23].

I.B Designing for Specific System Constraints

Traditionally, the design of imaging systems has focused on the choice of an appropriate lens for the intended application. To this end, the optical engineer’s job has primarily been to identify a lens type, often from a lens catalog, and modify it according to the application’s specific needs. While this method is simple and often effective, the unconventional and sometimes severe requirements of new imaging applications, in combination with the new design tools and degrees of freedom available to the optical engineer, call for a systems-level optimization. A systems-level approach can leverage new hybrid optical-digital optimization to explore new and more extreme capabilities for a wide variety of new applications. Some typical requirements and considerations the optical engineer must take into account are:

Optical Performance Specifications

• Resolution
• Field of view
• Sensitivity
• Depth of field
• Wavelength band
• SNR
• Working distance
• Zoom function
• Non-imaging measurements
• Specific tolerances
• Defocus invariance
• Color fidelity
• High-frequency contrast
• Low-frequency contrast
• Special aberration corrections (distortion, chromatic etc.)
• Etc.

Physical/Practical Constraints

• Thickness
• Volume
• Weight
• Available materials
• Image sensor type and size
• Fabrication cost
• Operation cost
• Manufacturability
• Mechanical durability
• Electrical durability
• Temperature
• Vibration
• Functionality in special environments (space, underwater etc.)
• Power dissipation
• Processing power
• Etc.

Future optical systems will require that optics, detection and processing be considered simultaneously rather than independently, so that the system as a whole can be designed more efficiently.

I.B.1 Application Examples Requiring Design Innovation for Compactness

Surveillance Cameras for Micro-UAV

The goal for this application is to fit a camera into the hull or wing of a micro-UAV aircraft for long-range surveillance. In particular, weight and thickness are tightly constrained, and a large-format lens is unacceptable due to its bulk and cantilevered mass. Miniature lenses are common in applications where minimal size and weight are important, but are not suitable for long-range surveillance due to their low resolution and generally poor light collection. This low resolution is due to the short focal length in combination with the limited pixel resolution and image sensor size, which together create a relatively wide field of view, low-resolution image. This application therefore requires an imaging system that provides high resolution and good light collection at long range without the additional depth and bulk of a conventional large-lens solution.

Miniature Cameras for Cell Phones & Portable Devices

While miniature cameras have found widespread use and acceptance in the commercial cell phone market, their limited resolution, sensitivity, and function (zoom, autofocus, flash etc.) have created a large demand for improved performance and zoom without a significant increase in size or cost. In this application, cost, thickness and volume are all tightly constrained, and solutions must be capable of imaging at both close range (1-2 m) and long range with large DOF.

Light-Weight Infrared Imaging Systems

Imaging systems in the infrared (IR) windows of low atmospheric absorption (3-5 µm and 8-12 µm) require large format imaging optics to fit the image sensors currently available at those wavelengths. For refractive imaging systems in the IR, the choice of suitable refractive materials is smaller by more than an order of magnitude compared to the choices available in the visible spectrum [24][25]. The limited choice of IR materials and the general size of the format cause most IR lenses to be heavy, bulky and very expensive. Catadioptric and catoptric (all- reflective) lenses are available as well; however, they are mostly limited to very small FOV, telescopic applications. Efforts to reduce the bulk and weight of these systems undoubtedly rely on both advancement of imaging optics as well as the advancement of IR sensing technology.

Current IR sensors can be roughly divided into two generic categories: 1) cooled IR image sensors which have superior image quality but are fragile, expensive and bulky; and 2) uncooled

IR image sensors which can be smaller, less expensive and easier to use, but suffer from inferior image quality (resolution and signal-to-noise ratio) [26]. In general, as quality is improved while weight and bulk are reduced in IR image sensors, more emphasis will be placed on reducing the


bulk and weight of high-quality IR imaging lenses. This push to make small, inexpensive and

easy to use IR imaging systems will expand the use and possible applications of such devices.

I.B.2 The Montage Program

A large extent of the work covered in this dissertation was carried out with funding from the DARPA Montage program. Working with program collaborators at the University of Arizona, Distant Focus Corporation (DFC), Massachusetts Institute of Technology (MIT) and CDM Optics, the Montage program aimed to design and build ultra-high-performance thin cameras for micro-UAV surveillance. As described in Section I.B.1 and shown in Figure I-1, conventional refractive lenses are ill-suited to this application because both resolution and thickness are tightly constrained.

Figure I-1: Motivation for the MONTAGE program.

A large focus of the program was the pursuit of an integrated approach to optical system design to exploit the simultaneous optimization of both optical and post-processing degrees of freedom. This multi-domain optimization (MDO) methodology provided a framework to pursue


improvements in both imaging performance and camera form-factor through departures from the

paradigm of isomorphic imaging.

My role in the project was the optical design, assembly and test of the CMR cameras

designed for the demo track of the Montage program. Phase 1 of the program aimed to achieve

the resolution and light collection of a 35 mm camera lens in a thickness of just 5 mm. The final

live demonstration at DARPA, in November 2005, met the Phase 1 performance specifications including

greater than 7x reduction in overall lens thickness compared to a conventional refractive camera.

The camera prototypes that were demonstrated as part of the Phase 1 demo can be found in

Sections III.A and IV.A. The first part of the Phase 2 program was to show extended DOF

without compromising resolution. This was accomplished using the “arc-sectioned” eight-reflection lens described in Section III.B. In addition to the intended surveillance application, the increased DOF and reduced size of the arc-sectioned camera provide an attractive approach to large-magnification ultra-compact cameras for portable device applications. The final part of the

Phase 2 program aimed to build a high-resolution, 5 mm thick conformal array camera with a

FOV of 30 degrees. To achieve this, the four-reflection lens of Section III.C was designed as a

camera element for the 7-element array described in Section V.A. These camera prototypes were

successfully demonstrated at the final Montage program review in San Diego, CA on January 17, 2008.

I.C Dissertation Outline

This dissertation explores the CMR optical design approach and hybrid optical-digital optimization applied to ultra-thin and ultra-compact cameras. To these ends, this dissertation is organized as follows:

• In Chapter II I discuss the fundamental considerations of the CMR lens approach. This

includes geometrical and physical optics effects caused by the thin reflective geometry


and highly obscured aperture respectively. A comparison of the volume and thickness

reduction possible with the full aperture CMR approach is also given. For ultra-compact

applications, arc-sectioning, a modification to the full-aperture symmetric CMR design,

is discussed in terms of its effects on DOF, volume and light collection. Finally, some

additional discussion is given to the CMR lens design space, implementation challenges,

a practical design approach, and broadband chromatic performance possible with air-spaced reflectors.

• In Chapter III I discuss several CMR prototype visible-light cameras that we designed,

fabricated and demonstrated experimentally. These prototypes include an eight-reflection

camera for long-range ultra-thin surveillance; an arc-sectioned eight-reflection camera

with enlarged DOF and reduced volume for ultra-compact applications; and a four-

reflection camera for enlarged FOV ultra-thin imaging applications.

• In Chapter IV I discuss the application of pupil-phase encoding (PPE) and post-processing, an optical-digital hybrid optimization technique, for increasing the DOF and

relaxing the fabrication and alignment tolerances of CMR camera designs. Simulated and

experimental results are given for application of PPE and post-processing applied to the

full-aperture eight-reflection and four-reflection camera designs of Chapter III.

• In Chapter V I discuss three additional applications of the CMR design approach with

preliminary results. The first section discusses an array which orients seven four-reflection camera elements in a hexagonal pattern to extend the FOV to >30 degrees

using image stitching and processing at video frame rates. The second section discusses

preliminary design results for a near-infrared to long-wave-infrared broadband four-reflection camera. The third and final section of this chapter discusses the experimental


setup and preliminary demonstration of a visible-light compressive imaging system

using an arc-sectioned eight-reflection lens, a digital micromirror device (DMD) and two

large-area silicon photodetectors to measure linear projections of a test scene.

Preliminary results using linear and nonlinear reconstruction algorithms are given.

• Finally, in Chapter VI I summarize the major contributions of this work and give

suggestions for future applications.

Chapter II

Fundamentals of Concentric Multi-Reflection Lenses

II.A Concentric Multi-Reflection Lens Concept

One method of significantly increasing focal length and magnification without a corresponding increase in track length consists of reflecting the optical path multiple times with concentric reflectors, thus constraining the optical propagation to occur within a thin optical element. This concept is based on an extension of traditional reflective telescope design [27] with additional reflectors to minimize track length and an enlarged diameter to maintain light collection [28][29][14]. Figure II-1 shows the design concept: light enters the element through an outer annular aperture and is focused by a series of concentric zone reflectors to the image plane in the central area of the lens.


Figure II-1: (a) Conventional compound refractive lens. (b) Concentric multi-reflection lens concept.

This thin concentric multi-reflector approach leads to an optical lens design with large surface powers, a large obscuration ratio and tight fabrication tolerances. These challenges can be met by 1) the use of multiple aspheric reflectors optimized to correct the aberrations caused by the geometry, 2) an enlarged diameter to allow for multiple concentric reflections, improve light collection, and offset the diffraction effects caused by the annular aperture, and 3) self-alignment of multiple surfaces through the use of single-point diamond turning (SPDT) to fabricate concentric annular aspheric reflectors with minimal re-chucking.

In this chapter we examine the CMR lens design approach, covering its fundamental optical performance, size and geometry benefits, and general design considerations.

These fundamental considerations form a basis for design of the particular imaging systems described in later chapters of this dissertation.

II.B Concentric Multi-Reflection Optics

II.B.1 Geometrical Optics

Figure II-2 depicts a simplified design geometry for a thin CMR lens comparing a thin paraxial lens of focal length F to an obscured CMR version using a single thin (paraxial) powered mirror at the first reflection. With this simplified thin reflector geometry of an arbitrary even


number of reflections (2, 4, 6 etc.), we can examine and relate the important geometric parameters: FOV, aperture, focal length, diameter, thickness and number of reflections. This approach is similar to one described in reference [30]; however, the analysis given here describes results for a powered reflector rather than a refractive lens.

Figure II-2: Simplified paraxial geometry and comparison. (a) Thin (paraxial) lens of focal length F. (b) Four-reflection obscured version using a single thin (paraxial) reflector of focal length F.

The focal length of the CMR lens shown in Figure II-2b will be F = N T / n_s, where F is the focal length, N is the number of reflections, T is the thickness of the lens and n_s is the refractive index of the medium. For small FOV, we find that to avoid vignetting losses the width of the annular aperture is limited by the outermost oblique ray position at the second reflector. This condition allows us to express the size of the aperture as a function of the other geometric parameters as

w = \frac{D}{2N} - \tan\!\left(\frac{\mathrm{FOV}}{2}\right)\cdot F \cdot \frac{2N - 1}{N^{2}} \qquad (2.1)

where w is the width of the annular aperture and D is the diameter. For FOV larger than


\mathrm{FOV} = 2\tan^{-1}\!\left(\frac{D N}{2 F\left(N^{2} - 1\right)}\right) \qquad (2.2)

and for four or more reflections, we find that the width of the aperture is no longer limited at the

second reflector, but at the second-to-last reflector, assuming the image plane is designed to be in

close contact with the back of the CMR lens. For these large values of FOV, the width of the

annulus is given by

w = \frac{D}{2} - \tan\!\left(\frac{\mathrm{FOV}}{2}\right)\cdot F\cdot\left(N - 1 + \frac{1}{N}\right) \qquad (2.3)

A value of w = 0 in Equation (2.3) determines the maximum FOV possible where the aperture

size has gone to zero. Figure II-3 shows a plot of w versus FOV using Equations (2.1) and (2.3)

where a sharp drop in aperture size to zero can be seen once the FOV of Equation (2.2) has been

met.

Figure II-3: Example curves of annular aperture width (w) versus FOV for various diameters using paraxial geometrical calculations (N = 4, F = 20 mm, n_s = 1 in this example). As aperture size is increased, FOV decreases. As diameter is increased while maintaining fixed thickness, number of reflections and focal length, both aperture and FOV increase.

With N = 2, the common two-reflector scheme, the aperture will be reduced to zero at the FOV of Equation (2.2) (the maximum FOV), and Equation (2.1) alone is sufficient to describe the relationship between parameters.
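As an illustration only (not code from the dissertation), the two vignetting limits of Equations (2.1) and (2.3), as reconstructed here, can be evaluated numerically; the usable annulus width is the smaller of the two limits, floored at zero:

```python
import math

def annulus_width(fov_deg, D, F, N):
    """Paraxial annular aperture width w: the minimum of the limit set by
    the second reflector (Eq. 2.1) and by the second-to-last reflector
    (Eq. 2.3), floored at zero. Lengths in mm, FOV in degrees, n_s = 1."""
    t = math.tan(math.radians(fov_deg) / 2)
    w_second = D / (2 * N) - t * F * (2 * N - 1) / N**2   # Eq. (2.1)
    w_penult = D / 2 - t * F * (N - 1 + 1 / N)            # Eq. (2.3)
    return max(0.0, min(w_second, w_penult))

def crossover_fov(D, F, N):
    """FOV at which the limiting reflector changes (Eq. 2.2), in degrees."""
    return 2 * math.degrees(math.atan(D * N / (2 * F * (N**2 - 1))))
```

With the Figure II-3 example values (N = 4, F = 20 mm, D = 60 mm) the two limits agree exactly at the crossover FOV, beyond which the steeper limit of Equation (2.3) drives the aperture rapidly to zero.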


These relationships do not include diffraction effects or aberrations but are useful in providing general geometrical relationships to define potentially useful design areas. As in all catoptric systems, FOV is limited compared to a comparable refractive lens. Equations (2.1) through (2.3) indicate that as the number of reflections (N), focal length (F), or thickness (T) are

increased, maximum FOV decreases. In addition, as FOV is increased, the annular aperture size,

w, must decrease to prevent overlap of the annular reflectors. Finally, as the diameter, D, is increased, both the FOV and w increase. This last point is important since it states that we need an

enlarged diameter to fit a number of reflections into a thin design with reasonable FOV and

aperture. With an enlarged diameter and several reflections we can expect very large surface

power, Φ, and surface “work”, Φ·y, where y is the ray height off the optical axis on the first

reflector. This large surface work will provide significant aberrations which must be corrected

for. To do this, each subsequent reflection of a CMR lens can be used for aberration correction

and/or to extend the focal length of the system. Our present designs have required these surfaces

to be aspheric due to required surface powers and the severe system constraints. Aspheric

surfaces create additional difficulties with tolerances, but can be easily fabricated with SPDT.

II.B.2 Physical Optics

Diffraction effects from the highly obscured annular aperture are important since we expect obscuration ratios (defined as the diameter of the obscuration divided by the outer diameter) to be in excess of 0.7 for CMR lens designs with four or more reflections. For a given diameter and focal length, an annular aperture moves optical power in the incoherent point spread function (PSF) from the central peak into the sidelobes, reducing the mid-spatial-frequency incoherent modulation transfer function (MTF). The incoherent PSF for an annular aperture can be expressed analytically by Equation (2.4) and is shown for various levels of obscuration in Figure II-4a [31].


I(r) = \left(\frac{1}{\lambda z}\right)^{2}\left[\pi a_2^{2}\,\frac{2 J_1(k a_2 r/z)}{k a_2 r/z} - \pi a_1^{2}\,\frac{2 J_1(k a_1 r/z)}{k a_1 r/z}\right]^{2} \qquad (2.4)

where a_2 and a_1 are the outer and obscuration radii of the annular aperture, k = 2π/λ, z is the image distance, and r is the radial coordinate in the image plane.

The associated incoherent MTFs are shown in Figure II-4b. The maximum resolvable spatial frequency remains constant regardless of obscuration, but large obscuration ratios significantly reduce mid-spatial frequency values of MTF. For this reason large obscurations are not usually acceptable in reflective telescopes due to their effect on image contrast [32][33][34]. In our CMR designs we can accept the form of the highly obscured MTF since our diameter is scaled up enough to correct for the otherwise low contrast of the mid-spatial frequencies. We will show in

Chapter III that the Nyquist frequency of a typical small pixel image sensor (2-3 micron pitch) is much smaller than the cutoff frequency of the reflective lens’s diffraction limited MTF. For these spatial frequencies of interest the diffraction limited MTF will be sufficient for good contrast in our designs. In addition, post-detection processing can restore contrast in the final image provided there is sufficient dynamic range in the detector and no zeros exist in the MTF [35].

Figure II-4: Diffraction limited incoherent (a) PSF and (b) MTF of a 60 mm circular aperture with different levels of central obscuration (% outer diameter).
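Equation (2.4) is the difference of two scaled Airy patterns. The following numerical sketch (an illustration only, not code from the dissertation) evaluates it using a simple quadrature for the Bessel function; all lengths are in meters:

```python
import numpy as np

def bessel_j1(x):
    """First-order Bessel function via its integral representation
    J1(x) = (1/pi) * integral_0^pi cos(t - x sin t) dt (trapezoid rule)."""
    t = np.linspace(0.0, np.pi, 4001)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    f = np.cos(t[None, :] - x[:, None] * np.sin(t)[None, :])
    dt = t[1] - t[0]
    return (f.sum(axis=1) - 0.5 * (f[:, 0] + f[:, -1])) * dt / np.pi

def annular_psf(r, a1, a2, wavelength, z):
    """Incoherent PSF of an annular aperture, Eq. (2.4): difference of two
    Airy patterns for outer radius a2 and obscuration radius a1."""
    k = 2 * np.pi / wavelength
    def jinc(a):
        x = k * a * np.atleast_1d(np.asarray(r, dtype=float)) / z
        out = np.ones_like(x)          # 2*J1(x)/x -> 1 as x -> 0
        nz = x != 0
        out[nz] = 2 * bessel_j1(x[nz]) / x[nz]
        return out
    field = np.pi * a2**2 * jinc(a2) - np.pi * a1**2 * jinc(a1)
    return (field / (wavelength * z))**2
```

At r = 0 the PSF reduces to (π(a2² − a1²)/(λz))², showing directly how obscuration removes power from the central peak.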

Given an image sensor with limited SNR, the reduction in incoherent MTF reduces the unprocessed image contrast compared to an unobscured lens. To match the contrast without


processing, the diameter of the CMR lens must be increased to raise the total signal power to at least the value of the unobscured lens at all spatial frequencies of interest. To directly and more clearly compare obscured and unobscured lenses, it is useful to examine the incoherent MTF as a representation of relative energy collection as a function of spatial frequency. Consider the definition of incoherent MTF given by Equation (2.5) below [31].

\mathrm{MTF}(f_x, f_y) = \frac{\displaystyle\iint P\!\left(x + \tfrac{\lambda z f_x}{2},\; y + \tfrac{\lambda z f_y}{2}\right) P\!\left(x - \tfrac{\lambda z f_x}{2},\; y - \tfrac{\lambda z f_y}{2}\right) dx\,dy}{\displaystyle\iint P(x, y)\, dx\, dy} \qquad (2.5)

Here the definition of MTF is expressed in its usual form as the autocorrelation of the pupil

normalized by the area of the pupil. Removing the normalization factor, or normalizing to a

comparison camera, allows for apertures of different sizes and shapes to be directly compared in

terms of the apertures diffraction limited relative energy collection vs. spatial frequency.

Normalizing to a clear 60 mm aperture, Figure II-5 illustrates how circular obscured apertures of

different diameters and obscurations can be directly compared in terms of energy collection for a

fixed focal length.

Figure II-5: Relative energy collection versus spatial frequency for various circular apertures. (a) Comparing different obscurations with fixed outer diameter (60 mm). (b) Comparing different diameters with fixed obscuration (50%). Plots are normalized to the unobscured 60 mm lens. All simulations have an EFL = 35 mm.
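The unnormalized-autocorrelation comparison can be sketched numerically (an illustration only, not code from the dissertation): the pupil autocorrelation of Equation (2.5) is computed by FFT and divided by the area of a reference pupil rather than the pupil's own area:

```python
import numpy as np

def circ_pupil(n, radius_px, obscuration=0.0):
    """Binary pupil on an n x n grid: a disc of the given pixel radius with
    a central obscuration specified as a fraction of the outer radius."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    r = np.hypot(x, y)
    return ((r <= radius_px) & (r >= obscuration * radius_px)).astype(float)

def relative_energy(p, p_ref):
    """Pupil autocorrelation (numerator of Eq. 2.5) via the Wiener-Khinchin
    relation, normalized by the area of a reference pupil so apertures of
    different size and shape can be compared directly."""
    ac = np.real(np.fft.ifft2(np.abs(np.fft.fft2(p)) ** 2))
    return np.fft.fftshift(ac) / p_ref.sum()
```

At zero spatial frequency the result is the ratio of collection areas, so a 50%-obscured aperture of the same outer diameter starts near 0.75 relative to the clear aperture, consistent with the comparisons in Figure II-5a.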


II.C Resolution & Light Collection

Resolution is typically poor in miniature conventional lenses since the focal length is reduced without a similar decrease in pixel and array size. Currently, the minimum pixel pitch found to be commercially available in CMOS color image sensors is on the order of 1.4 - 1.7 μm square [36][37]. For miniature cameras, when the physical size constraints limit the focal length of the camera optics, these minimum pixel sizes limit the achievable resolution. The CMR approach enables a longer effective focal length (EFL) without increasing the optical track (the physical length from first surface to image sensor). This allows for greater magnification and increased angular resolution subtended by the pixel sampling pitch.

In addition to extending the focal length, the thin CMR approach also enlarges the diameter of the camera increasing collection aperture area. This enlarged aperture area allows for small F/#s and high relative illumination even with large obscuration. To compare the aperture of a CMR lens to that of a conventional unobscured circular lens, we can define an effective aperture diameter for the CMR lens as

D_{eff} = D\sqrt{1 - o^{2}} \qquad (2.6)

where D is the outer diameter of the obscured lens, o is the obscuration ratio and D_eff is the

diameter of an unobscured circular aperture of the same aperture area as the CMR lens. Figure

II-6 shows this relationship between outer diameter and obscuration to maintain constant aperture

area (light collection for a fixed focal length) for several effective aperture diameters. For

example, a CMR lens with an obscuration ratio of 0.9 (inner diameter 0.9x outer diameter) will

have an effective diameter 2.29x smaller than its actual outer diameter. Stated another way, this

CMR lens’s diameter will have to be scaled in size by 2.29x to match the collection area of a

conventional unobscured lens. CMR lenses will typically have aspect ratios (diameter/thickness)


larger than 2, and often a large total aperture area compared to conventional fast lenses of the same track length. CMR lenses can therefore achieve low F/#s, even with large obscuration.
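Equation (2.6) and the 2.29x example above can be checked directly; a minimal sketch (illustration only):

```python
import math

def effective_diameter(D, o):
    """Diameter of an unobscured circular aperture with the same collection
    area as an annular aperture of outer diameter D and obscuration ratio o
    (Eq. 2.6)."""
    return D * math.sqrt(1 - o**2)
```

For o = 0.9, matching a conventional lens's collection area requires scaling the CMR outer diameter by 1/sqrt(1 − 0.9²) ≈ 2.29x, as stated above.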

Figure II-6: Outer diameter versus obscuration to maintain constant collection aperture area (effective diameter) compared to an unobscured lens.

II.D Volume & Thickness

Since the motivation of CMR lens design is size reduction, it is useful to compare the volume, track and performance of a CMR lens compared to a conventional lens of the same EFL.

Given the basic form of a CMR lens (number of reflections and obscuration ratio) we can estimate the total track of the CMR lens; and the diameter needed to match either the total light collection area or relative energy as a function of spatial frequency. From these values of track and diameter we can also estimate the volume of the CMR lens for comparison to a conventional lens of the same focal length.

To make this comparison we consider a conventional high resolution camera lens with typical attributes: an F/1.4 lens with 35 mm focal length. We can model this lens volume as a cylinder with a diameter (open aperture) of 25 mm and a physical track length of 35 mm. This reference lens will be compared to a simplified CMR lens with the same focal length and a form


determined by the number of reflections and the diameter required to achieve the same optical energy collection. For simplicity we will again assume all of the optical power at the first reflection of the CMR lens. In this case the total physical thickness T is

T = \frac{F \cdot n_s}{N} \qquad (2.7)

where n_s is the refractive index of the internal volume of the CMR lens. No telephoto reduction

has been assumed in Equation (2.7) or the conventional comparison lens. Results using Equation

(2.7) for eight-reflection and four-reflection lens designs with calcium fluoride (CaF2) and air-gap substrates can be found in Table II-1. Here we find a 2.5x-8x reduction in total optical

thickness depending on the number of reflections and the lens material.

Using these calculated track lengths and assuming an obscuration ratio, we can also

estimate the total volume of the CMR lens designs to match the collection aperture area of the

conventional lens. Obscuration ratios for the eight-reflection and four-reflection lenses must be

assumed so we will use values of 89% and 78% respectively (values based on functional designs

discussed in Chapter III of this dissertation). The results for the diameter and volume of the CMR

lenses to match collection aperture area are summarized in Table II-1 below.

Table II-1: Size comparisons to match light collection aperture area to a F/1.4 35 mm conventional lens. Values relative to the conventional lens.

                               Track   Diameter   Volume
Conventional lens (reference)   1.00     1.00      1.00
4-Refl., CaF2 substrate         0.36     1.60      0.91
4-Refl., Air-gap                0.25     1.60      0.64
8-Refl., CaF2 substrate         0.18     2.19      0.86
8-Refl., Air-gap                0.12     2.19      0.60

To match total collection aperture area, these example CMR lenses each display an increase in diameter but a reduction in both optical track and volume compared to an F/1.4 conventional comparison lens.
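The Table II-1 entries follow from Equations (2.6) and (2.7) together with the cylinder volume model; the following sketch (an illustration only, assuming n ≈ 1.43 for CaF2 in the visible) reproduces the relative values:

```python
import math

N_CAF2 = 1.43  # assumed approximate visible-band refractive index of CaF2

def cmr_size_ratios(N, o, n_s=1.0):
    """Track, diameter, and volume of a simplified CMR lens relative to the
    F/1.4, 35 mm conventional reference (25 mm aperture, 35 mm track), with
    the CMR diameter chosen to match collection aperture area."""
    track = n_s / N                      # Eq. (2.7): T = F*n_s/N, relative to track F
    diameter = 1 / math.sqrt(1 - o**2)   # invert Eq. (2.6) to match aperture area
    volume = diameter**2 * track         # cylinder model: V ~ D^2 * T
    return round(track, 2), round(diameter, 2), round(volume, 2)
```

These reproduce the Track, Diameter and Volume columns of Table II-1; Table II-2 differs only in that its diameter scale factors (1.86x and 2.84x) come from matching the frequency-domain energy collection instead of the simple aperture area.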


In addition to comparing simple collection aperture area it is useful to match performance of CMR lenses in terms of the frequency domain energy collection. This is a much more stringent comparison with highly obscured apertures since incoherent MTF falls off steeply as a function of spatial frequency compared to the more gently sloped incoherent MTF of the unobscured conventional lens. The aperture of the CMR lens must be scaled up considerably to match or exceed the energy collection of the conventional lens up to a desired cut-off frequency. Choosing a maximum spatial frequency of 156 lp/mm (cut-off frequency for 3.2µm pixels), we find that the diameters of the eight-reflection and four-reflection lenses must be scaled up 2.84x and 1.86x respectively to match frequency domain energy collection with respect to the conventional reference lens up to sensor cut-off. This is shown in Figure II-7.

Figure II-7: Relative energy collection versus spatial frequency for eight-reflection and four-reflection lens designs matched to a 35 mm F/1.4 conventional lens up to 156 lp/mm.

Table II-2 shows the comparison results for matching the frequency domain energy collection.

Here the diameters of the CMR lenses must be scaled up significantly to match frequency domain energy collection up to cutoff, which can result in greater volume than the comparison lens. This comparison may be overly strict since the matched CMR lenses have much larger total light collection than the comparison lens.


Table II-2: Size comparisons to match diffraction limited relative energy collection to a F/1.4 35 mm conventional lens up to 156 lp/mm. Values relative to the conventional lens.

                               Track   Diameter   Volume
Conventional lens (reference)   1.00     1.00      1.00
4-Refl., CaF2 substrate         0.36     1.86      1.25
4-Refl., Air-gap                0.25     1.86      0.86
8-Refl., CaF2 substrate         0.18     2.84      1.45
8-Refl., Air-gap                0.12     2.84      0.97

As described in Section II.B.1 of this chapter, the obtainable FOV in a CMR lens design is constrained by the concentric geometry, and is in general less than a well-corrected conventional lens with the same aperture diameter. Reference [30] is an analysis of compact multi-aperture and CMR lens systems in which the author concludes that the CMR lens approach decreases system track but requires multiple cameras to match the full FOV of a conventional

camera. Under his assumptions, a 4x reduction in thickness would require roughly a 2x increase

in total system volume, and greater length reductions would require much greater volume. This estimate is substantially verified by our specific CMR designs. In some applications, the reduction in track length is critical (i.e., in reducing cantilevered mass in tracking mechanics).

Otherwise, CMR lens systems are best suited to applications requiring high resolution but only moderate FOV.

II.E Arc-Sectioning

Conventional refractive lenses often employ adjustable irises to stop down the lens aperture to increase the DOF or to reduce the amount of light collected by the imaging system.

Although a highly obscured CMR lens cannot be stopped down in the same way, a similar effect can be achieved with an off-axis aperture mask. Off-axis aperture masks are sometimes employed with reflective telescopes to avoid the effects of obscuration [28]. With our CMR designs, a


convenient off-axis aperture mask is achieved by retaining an asymmetric section of the aperture as shown in Figure II-8a. This reduction of aperture can significantly increase the DOF of a CMR lens with a trade-off in sensitivity due to the reduced aperture area. The arc-sectioned aperture also provides a convenient method of further reducing the volume of the CMR lens since a significant portion of the lens volume can be removed. By arc-sectioning a CMR lens design in this way, we can make an ultra-compact, large-magnification, large-DOF camera for a variety of applications including compact portable device cameras. The following subsections examine in general terms the DOF extension, physical optics, light collection and volume of arc-sectioned

CMR lenses.

II.E.1 Increased Depth of Focus & Depth of Field

The depth of focus and DOF of an arc-sectioned aperture can be examined using the parameters and geometry shown in Figure II-8. This analysis is based on a common method given in the literature [38], assuming an acceptable angular blur, β, or an acceptable blur spot size, B. For small angular blur, these two values are related as

\beta = \frac{B}{F} \qquad (2.8)

where F is the focal length of the lens.


Figure II-8: Arc-sectioned geometry with parameters for depth of focus and through-focus image shift calculations. (a) xy view, (b) x cross-section, and (c) y cross-section.

From Figure II-8a, the x-direction width of the arc-sectioned aperture, D_x, is related to the

diameter of the full-aperture symmetric aperture, D, and the arc-section angle, θ, as

D_x = D \sin\!\left(\frac{\theta}{2}\right) \qquad (2.9)

It can be seen from Figure II-8b that the geometry of an arc-sectioned lens in the xz plane (as drawn) is similar to that of a conventional unobscured lens of arbitrary diameter and focal length.

From similar triangles,

\frac{\delta_x}{\beta\,(F \pm \delta_x)} = \frac{F}{D_x} \qquad (2.10)

where δ is the departure from focus (either away from the lens, δ_out, or toward the lens, δ_in) and F is the lens focal length (with the lens focused at infinity). In terms of β and B the allowable

departure from focus is

\delta_x = \frac{F^{2}\beta}{D_x \pm F\beta} = \frac{F B}{D_x \pm B} \qquad (2.11)

Assuming \delta_x \ll F, Equation (2.11) reduces to

\delta_x \approx \frac{F^{2}\beta}{D\sin(\theta/2)} = \frac{F B}{D\sin(\theta/2)} \qquad (2.12)

The depth of focus, Δ, will be twice this value


\Delta_x \approx \frac{2 B F}{D \sin(\theta/2)} \qquad (2.13)

With highly obscured CMR lens designs, the x-dimension (as drawn) will tend to have larger

aperture than the smaller y-direction (direction of the obscuration) and will therefore dictate the

improvement in depth of focus and DOF. The depth of focus is improved over the full-aperture

symmetric design by a factor of 1/sin(θ/2). The DOF of the arc-sectioned design will be related to the depth of focus through the longitudinal magnification, m̄, as

\mathrm{DOF} = \bar{m} \cdot \Delta_x \qquad (2.14)

The longitudinal magnification approaches the lateral magnification, m, squared for small values: \bar{m} \approx m^{2}.
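Equations (2.9), (2.13) and (2.14) combine into a short calculation; a sketch (illustration only, with the longitudinal magnification approximated as m²):

```python
import math

def arc_section_dof(B, F, D, theta_deg, m):
    """Depth of focus (Eq. 2.13) and DOF (Eq. 2.14) of an arc-sectioned
    aperture; B = acceptable blur spot size, m = lateral magnification."""
    Dx = D * math.sin(math.radians(theta_deg) / 2)   # Eq. (2.9)
    depth_of_focus = 2 * B * F / Dx                  # Eq. (2.13)
    return depth_of_focus, m**2 * depth_of_focus     # Eq. (2.14), m_bar ~ m^2
```

The 1/sin(θ/2) improvement appears directly: a 60-degree arc section doubles the depth of focus relative to the full x-aperture width (θ = 180°, where D_x = D).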

As shown in Figure II-8c, the y-direction depth of focus includes spreading from the

width of the annulus as well as through-focus image shift from the decentered aperture. The total

width of the aperture in the y-direction will be

D_y = w + \left(\frac{D}{2} - w\right)\left(1 - \cos\frac{\theta}{2}\right) \qquad (2.15)

which approaches the width of the annulus, w, for small values of the arc-section angle, θ.

Neglecting the through-focus image shift for determination of the depth of focus, the departure

from focus is

\delta_y = \frac{B F}{D_y} = \frac{B F}{w + \left(\frac{D}{2} - w\right)\left(1 - \cos\frac{\theta}{2}\right)} \qquad (2.16)

As previously mentioned, the obscuration ratio present in most designs will be fixed from the

full-aperture parent design. The arc-section angle will typically be chosen to increase the DOF to

a desirable level with as large an aperture as possible. In the majority of these situations, the

depth of focus and DOF will be limited by the arc-section angle (the x-direction as drawn), and


not the obscuration (the y-direction). Therefore, in most circumstances the depth of focus and

DOF will be determined by Equations (2.13) and (2.14) respectively.

The through-focus image shift can be described in terms of the departure from focus, δ_y, in the y-direction. Examination of Figure II-8c leads to

S = \frac{\delta_y\, w}{2F} \qquad (2.17)

where S is the image shift and δ_y is the departure from focus. Here, δ_y is the variable determining

the amount of image shift for a specified defocus.

II.E.2 Physical Optics of Arc-Sectioning

Diffraction effects from the arc-sectioned aperture can be considered using the same Fourier optics approach used in Section II.B.2. These effects are important since we expect the reduction in aperture to more significantly diffraction limit the arc-sectioned CMR lens’s performance. In addition, with large obscuration ratios the arc-sectioned aperture required for the desired DOF extension will most often be asymmetric, leading to an asymmetric PSF and MTF, and potential resolution loss.

The arc-sectioned incoherent PSF can be calculated from the square magnitude of the

Fourier transform of the arc-sectioned pupil function. Similarly, the incoherent arc-sectioned

MTF can be calculated as in Equation (2.5) from the normalized autocorrelation of the arc-sectioned pupil function. Figure II-9a displays incoherent monochromatic PSFs for an arc-sectioned aperture paraxial lens for a variety of obscuration ratios and arc-section angles. As shown, diffraction from the smaller arc-sectioned aperture does increase the diffraction limited spot size. Also shown is the dramatic effect of the obscuration: the upper end of the possible range of obscuration ratios (bottom row) causes significant diffraction spreading in the vertical

direction. Since the largest obscuration ratios are found in the designs with the most reflections


(and longest focal length), the diffractive spreading effect will be at its most extreme as the designs themselves become more extreme in terms of the number of reflections required.

Figure II-9: Diffraction from an arc-sectioned aperture. (a) Monochromatic incoherent PSF spreading due to the arc-section angle, θ (columns) and obscuration ratio, o (rows). (b) Vertical monochromatic incoherent MTF showing the effect of varying obscuration ratio with a fixed arc-section angle. (c) Horizontal monochromatic incoherent MTF showing the effect of varying arc-section angle with a fixed obscuration. All simulations have a diameter of 60 mm and a focal length of 35 mm.

The diffractive spreading effect is also shown in the incoherent MTF curves of Figure

II-9b and Figure II-9c. Figure II-9b shows the incoherent monochromatic MTF in the vertical direction with a fixed arc-sectioned angle and varying obscuration ratio. Figure II-9c shows the incoherent monochromatic MTF in the horizontal direction with a fixed obscuration ratio and varying arc-section angle. Comparing Figure II-9b and Figure II-9c shows once again the relative importance of obscuration ratio to the amount of diffractive spreading and 1D resolution loss.


Small changes in obscuration ratio of the CMR lens design can make large differences in the resolution of an arc-sectioned version of that CMR lens.

II.E.3 Aperture and Volume

The effective aperture diameter of an arc-sectioned CMR design can be described as

D_{eff} = D\sqrt{\frac{\theta}{360}\left(1 - o^{2}\right)} \qquad (2.18)

This value can be used to calculate the effective F/# of the arc-sectioned lens defined as

F/\#_{eff} = \frac{F}{D_{eff}} \qquad (2.19)

The increase in effective F/# compared to the full-aperture symmetric CMR design is equal to \sqrt{360/\theta}.

Taking into account the additional area required for the image plane and oblique ray paths through the arc-sectioned CMR lens, the required volume is found to be approximately

V_{arc\text{-}section} = \left[\frac{\theta}{360}\cdot\frac{\pi D^{2}}{4} + \frac{1}{2}\,D_I\left(D - D_I\right)\sin\!\left(90^\circ - \frac{\theta}{2}\right) + \frac{\pi D_I^{2}}{8}\right] T \qquad (2.20)

where D is the diameter of the full-aperture symmetric lens, D_I is the diameter of the image plane and T is the thickness of the lens. Using Equation (2.20) and typical values found for a range of

CMR designs, a reduction in volume of 3x to 11x can be found by removing the unused volume of an arc-sectioned CMR lens. This method is therefore attractive for applications that require not only the thin form of a CMR lens, but also minimal volume and maximum compactness along with extended DOF.
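Equations (2.18) and (2.19) can be combined into a one-line calculation; a minimal sketch (illustration only, with arbitrary example values):

```python
import math

def arc_effective_fnum(F, D, o, theta_deg):
    """Effective F/# of an arc-sectioned aperture with outer diameter D,
    obscuration ratio o, and arc-section angle theta in degrees
    (Eqs. 2.18 and 2.19)."""
    d_eff = D * math.sqrt((theta_deg / 360.0) * (1 - o**2))  # Eq. (2.18)
    return F / d_eff                                         # Eq. (2.19)
```

Relative to the full-aperture design (θ = 360°), the effective F/# grows by sqrt(360/θ); a 40-degree arc section, for example, triples it.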


II.F Specific Design Considerations

II.F.1 Design Space

The number of reflections required in a CMR lens depends on the desired focal length of the lens and the desired total thickness. Exploring the design space, we find that increasing the number of reflections increases both the size of the effective aperture of the lens and the obscuration ratio. Figure II-10 shows a plot of FOV versus equivalent aperture for several different folded designs, all 5 mm thick and illuminating a ¼" image sensor.

Figure II-10: Field of view versus equivalent aperture diameter for several CMR designs.

With the exception of the eight-reflection lens, all of the designs have aspheric surfaces on both front and back, a more difficult fabrication project than the single-sided eight-reflection lens. Once optimized, all of the designs are shown to lie on a rough line through the chart, where the available FOV is reduced as the size of the aperture, the number of reflections, and the corresponding EFL are increased. Designs with a specific number of reflections are found to have a limited region where they can be successfully optimized for a given thickness and image diagonal.

The unnormalized incoherent MTF approach described in Section II.B.2 can also be used to compare lens designs with different focal lengths, aperture shapes and sizes. To do so, however, the relative illumination, proportional to the inverse square of the F/#, must be used. To include the effects of both aperture size and total image size, the autocorrelation of the pupil function is divided by the square of the focal length before normalizing to a common F/#. In this way, the value at zero spatial frequency corresponds to the inverse square of the F/#, and the drop-off in relative illumination as a function of F/# is determined by the obscuration and shape of the aperture. As an example, Figure II-11 shows the diffraction-limited relative illumination versus spatial frequency for a miniature conventional lens and two CMR lens designs, all of ~5 mm total thickness. The conventional lens shown is an F/1.8 circular aperture with an EFL of 5 mm. The eight-reflection lens shown has an EFL of 38 mm with an effective aperture diameter of 27 mm (60 mm OD, 90% obscured); the four-reflection lens has an EFL of 19 mm with an effective aperture diameter of 16.8 mm (28 mm OD, 80% obscured).
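The zero-frequency value of this unnormalized MTF is simply the pupil area divided by f², so the scaling can be checked numerically. The sketch below (Python with NumPy; the grid size and the 4/π scaling are chosen so that an aperture's equal-area F/# comes out directly) evaluates the relative illumination at zero spatial frequency for the two aperture types:

```python
import numpy as np

def relative_illumination_dc(outer_d, inner_d, efl, grid=1024):
    """Zero-frequency value of the unnormalized incoherent MTF.

    Scaled by 4/pi so the result equals 1/(F/#)^2, with F/# defined from the
    equal-area (equivalent) aperture diameter. The full curve would be the
    pupil autocorrelation under the same scaling; only the DC value is needed
    to check the relative-illumination scaling.
    """
    half = 0.55 * outer_d                 # grid half-extent, just larger than the pupil
    x = np.linspace(-half, half, grid)
    dx = x[1] - x[0]
    xx, yy = np.meshgrid(x, x)
    r = np.hypot(xx, yy)
    pupil = (r <= outer_d / 2) & (r >= inner_d / 2)
    area = pupil.sum() * dx * dx          # pupil area in mm^2
    return (4.0 / np.pi) * area / efl**2

# Conventional F/1.8, EFL 5 mm lens: expect ~1/1.8^2 = 0.309
conv = relative_illumination_dc(5.0 / 1.8, 0.0, 5.0)
# Eight-reflection lens (60 mm OD, 90% obscured, EFL 38 mm)
cmr = relative_illumination_dc(60.0, 54.0, 38.0)
```

The annular lens comes out near 0.47 despite its 90% obscuration, illustrating why the large folded aperture still collects more light per unit image area than the small conventional lens.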

Figure II-11: Diffraction limited relative illumination versus spatial frequency comparing a conventional miniature lens (F/1.8, EFL = 5mm), a four-reflection lens (F/1.13, F = 19 mm), and an eight-reflection lens (F/1.4, F = 38 mm); all of 5 mm total thickness.

These values of EFL and aperture size for the CMR lenses were taken from working designs for comparison. With this approach, we can directly compare the relative illumination and image contrast on the image sensor, as a function of image-space spatial frequency, for significantly different lens designs.

II.F.2 Implementation Challenges

The most significant challenges associated with CMR lenses are fabrication tolerances, DOF/depth of focus, stray light suppression, and optical efficiency. Fabrication tolerances and DOF/depth of focus are for the most part defocus-related problems that in some instances can be significantly improved with the use of arc-sectioning (Section II.E) or PPE and post-processing (Chapter IV) [18][39]. If the CMR lens is fabricated from one piece of material, thickness error in fabrication introduces concatenated thickness error in the optical path. Refocus of the image plane compensates for the error, but this remains the most severe tolerance. On the other hand, centration tolerances can be effectively eliminated by making the lens 'plano-aspheric', as in the eight-reflection lens of Section III.A, where all of the powered surfaces reside on one side of the lens and the CMR lens can be diamond turned without re-chucking the substrate.

The narrow DOF associated with CMR lenses is due to the high numerical aperture of the lens design. However, this property is not unique to CMR lenses, since all high quality cameras suffer from the same trade-off between resolution (NA) and DOF/depth of focus.

Stray light arises in CMR lenses when light travels an unintended path through the lens and reaches the sensor as noise. Stray light is commonly reduced with baffles in astronomical telescopes, and the same approach works with thin CMR lenses. With CMR lenses, the regions between concentric reflectors can be cut into baffles and made absorbing to help reduce stray light. External thin honeycomb baffles may also be used to control the range of field angles that enter the lens. Stray light suppression is one of the most important challenges with CMR cameras, and necessary for their use in situations where bright stray light sources, such as the sun, are present. In such situations, more advanced baffle geometries and angle-selective dielectric coatings will be useful in the future to help control wash-out caused by stray light.

Lastly, significant optical attenuation can be caused by multiple reflection losses and by vignetting or shading due to the specific sensor geometry. Pixel vignetting is a term used to describe losses in CMOS image sensors due to shadowing from the metal interconnect layers that surround the light-sensitive area. Microlenses are typically used to help focus light onto the sensor, but large-angle rays can be blocked [40], making the sensors incompatible with the low F/# of many injection molded aspheric lenses. This effect is increasingly problematic as the pixel pitch decreases. For 3 micron pitch, the interconnect layer height is greater than the width of the active area. With CMR lenses, the numerical aperture is large and all of the light is incident at large angles. The microlenses can be omitted or index-matched to reduce their effect [41], but even so approximately half of the incident light can be blocked by the interconnect layer. A near-term solution is to use CMOS processing with copper rather than aluminum interconnects, which increases conductivity and allows for thinner and narrower traces. There are also development efforts to improve the angle performance of CMOS sensors by wafer thinning and backside illumination [42][43].
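The geometry of pixel vignetting can be approximated with a simple "tunnel" model: rays incident at angles steeper than arctan(width/height) of the interconnect opening are geometrically clipped. A minimal sketch, where the 2.0 µm active width is a hypothetical illustration and 3.6 µm is a typical interconnect stack height:

```python
import math

def tunnel_acceptance_deg(active_width_um, tunnel_height_um):
    """Half-angle (degrees) beyond which a ray is geometrically clipped
    by the interconnect 'tunnel' above the photodiode (simple model)."""
    return math.degrees(math.atan(active_width_um / tunnel_height_um))

# Hypothetical numbers: 2.0 um active width under a 3.6 um interconnect stack
accept = tunnel_acceptance_deg(2.0, 3.6)   # ~29 degrees
# Marginal-ray angle in air for an image-plane NA of 0.7, for comparison
marginal = math.degrees(math.asin(0.7))    # ~44 degrees
```

Since much of the NA = 0.7 cone lies beyond this acceptance angle, a large fraction of the incident light is clipped, consistent with the roughly 50% loss noted above.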

II.F.3 General Design Procedure

All of the CMR lens designs described in this dissertation were designed and optimized in ZEMAX EE, a commercial optical design program [44]. Similar results could also be achieved with any of the other popular optical design programs (CODE V, OSLO etc.). In this section I will describe the general procedure in ZEMAX with references to its specific operands and features [45].

The first step in the design of a CMR lens is the specification of the lens constraints and goals. With these values in mind, and using the basic analysis given in Section II.B, the basic structure of the thin CMR lens can be determined in terms of the number of reflections, desired focal length, estimated outer diameter and achievable FOV. Using these values, a first-pass CMR lens can be set up in ZEMAX, generally starting with two powered spherical surfaces in a telephoto configuration: a large positive reflector on the rear, outermost zone of the element, followed by a negative reflector at a following, usually intermediate, reflector position. In the later stages of the design, optimization will be used to determine the distribution of optical power among the elements following the front positive reflector. The Petzval sum can also be calculated and balanced at this point to help guide the distribution of optical power as the optical design progresses.
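The starting telephoto geometry can be sanity-checked with the standard two-element combination formulas. A minimal sketch in Python, using illustrative focal lengths rather than the actual design values:

```python
def telephoto_combo(f1, f2, d):
    """EFL and back focal length of two thin powered surfaces separated by d.

    Standard thin-element relations: phi = phi1 + phi2 - d*phi1*phi2,
    and BFL = EFL*(1 - d/f1).
    """
    phi = 1.0 / f1 + 1.0 / f2 - d / (f1 * f2)
    efl = 1.0 / phi
    bfl = efl * (1.0 - d / f1)
    return efl, bfl

# Illustrative positive-front / negative-rear pair (units: mm)
efl, bfl = telephoto_combo(30.0, -10.0, 22.5)   # EFL = 120 mm, BFL = 30 mm
telephoto_ratio = (22.5 + bfl) / efl            # total track / EFL ~ 0.44
```

A telephoto ratio well below one is exactly the property the folded CMR geometry exploits: the total track is much shorter than the effective focal length.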

With the basic geometry set up, the surface powers can be adjusted (using the well-known telephoto lens calculations [46], by hand, or by optimization) such that the annular reflectors don't overlap for the maximum field positions (no mechanical vignetting should be allowed at this early stage). The merit function can now be set up to aid in controlling the geometry and to speed up the design of the basic form. For this, the RMS spot size default merit function can be used with a large rectangular array and vignetted rays deleted1. In the merit function editor, the REAR (real ray radial coordinate) operand can be used to control three key ray-surface positions that constrain the design. These rays are 1) the maximum pupil position (P = 1) and maximum field position (H = 1) ray at the second reflector, 2) the maximum pupil position (P = 1) and maximum field position (H = 1) ray at the third reflector, and 3) the minimum pupil position (P = o) and minimum field position (H = −1) ray at the second-to-last reflector. These constraints control 1) ray overlap over the width of the open annulus, 2) the amount of negative power applied at the second reflector, which must often be controlled, and 3) ray overlap of the small clear aperture (field stop) in close contact with the image sensor. Since we are only concerned when these ray positions exceed a limiting value, the OPLT (operand less than) operand can be used in conjunction with the zero-contribution REAR operands to achieve the desired control over the geometry.

1 Gaussian quadrature is not effective with obscured pupils

With the merit function set up, the geometry of the CMR lens can be experimented with for initial adjustments to the distribution of optical power among the reflectors, the minimum value of the annular aperture (obscuration), EFL, FOV, Petzval sum and outer diameter. Optical performance will almost certainly be poor at this point with spherical surfaces, but it is important to obtain a good design starting point before adding aspheric surfaces to the CMR lens. Once a basic geometry is achieved, even-order aspheric terms can be added to the reflective surfaces and optimized. Useful metrics for the lens performance are monochromatic incoherent MTF and broadband spot diagrams. In general, these metrics have proven more useful than ray fans, which can be difficult to interpret for highly obscured designs. In addition, since the chief ray is vignetted, the location of the chief ray as well as many of the first-order properties calculated by the software can often be misleading and should be checked with individual calculations. When the lens performance begins to approach the desired specifications, the merit function can be alternated between RMS spot size and RMS wavefront error to help explore the solution space. Global optimization can also be experimented with; however, I have had little success finding alternate, improved solutions in my attempts.

II.F.4 Air-spaced Designs

We conclude this chapter with mention of two advantages unique to air-spaced CMR lens designs, where two reflective surfaces surround a hollow cavity. When a CMR lens is cut from a substrate such as glass or plastic, some residual chromatic aberration will be present due to the refraction of light as it enters the lens element, even if the first surface is flat. Air-spaced versions of the CMR lens eliminate this refraction, yielding an all-reflective camera with zero chromatic aberration. A thin cover sheet used to protect the optical surfaces does not introduce angular dispersion and does not cause significant chromatic aberration. Hollow CMR lenses will be useful for inexpensive, light-weight IR cameras and may lead to convenient designs for hyperspectral imagers.

The second interesting property of a CMR air-gap lens is “squeeze” focus. In the same way that substrate thickness errors become concatenated by the reflections in the tolerance analysis, a small adjustment of the distance between the front and back reflectors allows for large changes in the focus of the lens. This sensitivity can be advantageous when compared to the limited amount of refocus possible with simple back focal length adjustment. CMR lenses with refractive interiors can also be squeeze-focused by fabricating the front and back reflectors on separate substrates, and using antireflection-coated or index-matched flat surfaces between the substrates to allow for a variable gap.
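The leverage of squeeze focus can be expressed as a rough scaling argument, assuming (as the tolerance discussion for the eight-reflection design suggests) that a gap change is re-traversed once per reflection; the exact factor depends on the ray geometry:

```python
def squeeze_focus_opd_um(gap_change_um, n_reflections=8):
    # Each reflection re-traverses the adjusted gap, so the optical-path
    # change is roughly n_reflections times the mechanical adjustment.
    # This is a scaling sketch only; exact leverage depends on ray angles.
    return n_reflections * gap_change_um

# A 5 um squeeze yields ~40 um of optical-path change in an eight-reflection
# lens, versus 5 um for a simple back-focal-length adjustment.
```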

Chapter II, in part, is a reprint of the material as it appears in 1) E. J. Tremblay, R. A. Stack, R. L. Morrison, and J. E. Ford, "Ultrathin cameras using annular folded optics," Appl. Opt. 46, 463-471 (2007), and 2) E. J. Tremblay, R. A. Stack, R. L. Morrison, J. H. Karp and J. E. Ford, "Ultrathin four-reflection imager," Appl. Opt. doc. ID 101823 (posted 4 November 2008, in press). The dissertation author was the primary researcher and author.

Chapter III

Concentric Multi-Reflection Camera Prototypes

III.A Eight-Reflection Camera Prototype

In this section I describe an eight-reflection camera prototype intended for use as an ultrathin visible-light surveillance camera. The design achieves a FOV of 6.7° over 1.23 Megapixels of a color CMOS image sensor with 3.2 µm pixels in just 5 mm of total track.

III.A.1 Eight-Reflection Lens Design

The goal of this design was a visible light camera with a total thickness of 5 mm, a 0.1 radian FOV, 0.1 mrad resolution and >25 mm effective aperture to illuminate at least 1000x700 pixels of an Omnivision 3620 CMOS color image sensor.

To achieve these specifications, the eight-reflection lens shown in Figure III-1a was optimized using ZEMAX EE. CaF2 was chosen as the reflector interior substrate material for its rigidity, low dispersion and especially for its compatibility with SPDT technology. The lens has a 60 mm outer diameter, 53 mm inner obscuration, 5 mm thickness and was designed for a 2.5 m object distance to facilitate lab testing. Light is focused onto the image sensor by four concentric aspheric reflectors on the back side of the lens while the front side remains planar to simplify fabrication. Index matching gel (Nye OC431A-LVP) is used between the final transmissive surface of the camera and the sensor to index-match the microlenses on the sensor. The ZEMAX prescription for this CMR lens is included in Appendix A of this dissertation.

Figure III-1: (a) Eight-reflection lens in CaF2 schematic, (b) calculated monochromatic incoherent MTF, (c) simulated monochromatic (588 nm) geometric spot diagrams, and (d) simulated broad-spectrum (486, 588, 656 nm) geometric spot diagrams showing ~8 µm lateral color (±1 pixel at field stop).

The eight-reflection lens achieves an effective aperture diameter of 27 mm, a numerical aperture of 0.7, a FOV of 0.12 radians (6.7 degrees), and a 38 mm EFL. The effective F/# (defined as the focal length divided by the equivalent diameter of an unobscured aperture of the same aperture area) is 1.4. It is designed to have a resolution of 0.07 mrad, resolving 1280x960 pixels of the Omnivision image sensor. The simulated monochromatic incoherent MTF for this eight-reflection lens is shown in Figure III-1b. This incoherent MTF figure shows diffraction-limited monochromatic performance up to and beyond the Nyquist frequency of the sensor sampling. The geometric spot diagrams for the monochromatic and broad-spectrum cases are shown in Figure III-1c and Figure III-1d respectively. As shown in the broad-spectrum figure, the lens displays approximately 8 µm of lateral color (±1 pixel at full field) aberration due to dispersion at the entrance pupil. This color aberration can be corrected with post-detection processing (remapping of the RGB planes).
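The quoted effective F/# follows directly from the equal-area definition. A quick check in Python, using the nominal 60 mm OD and 90% obscuration; the small difference from the quoted 27 mm effective aperture reflects the exact clear-aperture geometry of the design:

```python
import math

def equal_area_diameter(outer_d, inner_d):
    """Diameter of an unobscured aperture with the same area as the annulus."""
    return math.sqrt(outer_d**2 - inner_d**2)

d_eq = equal_area_diameter(60.0, 54.0)   # ~26.2 mm (quoted: 27 mm)
f_num = 38.0 / d_eq                      # ~1.45 (quoted: F/1.4)
```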

Although this lens was designed for fixed focus, a small range of refocus can be obtained by adjusting the distance between the rear transmissive aperture and the image sensor. As shown in Figure III-2, the lens designed for a 2.5 m conjugate can be refocused out to 2.835 m by pressing the image sensor into contact with the lens, and in to approximately 2 m before performance begins to significantly degrade. If the eight-reflection lens is reoptimized for its normal mode of operation at larger object distance, the amount of refocus would scale in a similar way to DOF, yielding a larger available refocus range.

Figure III-2: Monochromatic incoherent MTF curves showing refocus possible by reposition of the image plane. (a) Refocus to 2.835 m, (b) refocus to 2.165 m, and (c) refocus to 2 m.
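The sensor motion required for this refocus can be estimated with thin-lens conjugates; this ignores the folded path, so it is only an order-of-magnitude sketch of the adjustment:

```python
def image_distance(u_mm, f_mm):
    """Thin-lens conjugate: image distance for an object at distance u_mm."""
    return u_mm * f_mm / (u_mm - f_mm)

v_design = image_distance(2500.0, 38.0)               # ~38.59 mm at the design conjugate
shift_far = image_distance(2835.0, 38.0) - v_design   # ~ -0.070 mm (sensor moves toward lens)
shift_near = image_distance(2000.0, 38.0) - v_design  # ~ +0.149 mm (sensor moves away)
```

Tens of microns of sensor travel thus map to hundreds of millimeters of object-side refocus, in line with the narrow image-side depth of focus discussed below.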


Depth of Focus & Depth of Field

The thin annular reflector structure used in a CMR lens allows for a dramatic reduction in overall thickness, but also requires large marginal ray angles to allow for spatial separation of the reflective surfaces. These large ray angles (image-plane numerical aperture of 0.7) allow the resolution of the CMR design to be sensor limited at the Nyquist frequency of the Omnivision CMOS image sensor (156 cycles/mm), but also cause shallow depth of focus and DOF [47]. Figure III-3 shows a through-focus monochromatic geometrical spot diagram for the eight-reflection lens, designed for a 2.5 m object distance. The geometrical effect of the annular aperture can be seen in Figure III-3, where out-of-focus spots appear as annular shadows of the aperture.

Figure III-3: Through-focus monochromatic spot distributions (on-axis, half-field, and full-field) for a ±4 μm range. Shown on a 3.18 μm pixel grid for reference.

Comparing the geometrical through-focus spot diagram and through-focus incoherent MTF performance, the defocus tolerance for the eight-reflection lens was estimated to be ±5 µm (1.76 waves of defocus) for human viewing. This defocus tolerance maintains an incoherent MTF of 10% at the sensor cut-off frequency across the full field and corresponds to a DOF of only 24 mm at the designed object distance of 2.5 m. The DOF improves considerably for larger object distances, but is a limitation for short object-range imaging applications. PPE and post-processing can be used to improve the DOF in high numerical aperture folded designs such as the one described here. The application of PPE and post-processing to the eight-reflection lens design will be discussed in Chapter IV, Section B.

Stray Light

Using non-sequential ray tracing in ZEMAX, stray light paths can be analyzed to find the problematic light paths through the eight-reflection lens. Of particular interest are ray paths through the lens that skip reflectors or include additional reflections yet still propagate to the image plane. Figure III-4 shows ray transmission versus output ray angle for the eight-reflection lens. Three distinct peaks occur, corresponding to the intended signal and wide-angle deviation noise. The separation of the peaks suggests that a dielectric angle filter on the final surface of the lens may be feasible to block stray light in future designs. Our current eight-reflection lens design includes roughened and blackened areas between the concentric annular reflectors to aid in the suppression of stray light. These black diffuse areas help to absorb errant light before it can reach the image sensor as noise.


Figure III-4: Stray Light analysis showing transmission through the eight-reflection lens versus output ray angle. The central area in blue represents the desired ray paths through the lens, while the two outer peaks represent stray light paths.

Fabrication Tolerances

Since the eight-reflection lens is fabricated from a single substrate with little assembly compensation possible, fabrication tolerances are inherently tight for the eight-reflection lens design. Once the optical element is fabricated, the only compensation available comes from the variable gap between the final transmissive surface and the sensor (focus adjustment).

The nominal fabrication tolerances for the eight-reflection lens design were determined from a sensitivity analysis in ZEMAX. The tolerances on the shifts and tilts of the aspheric surfaces have minimum values of ±10 µm for surface shift (along the optic axis) and ±0.01 degrees for surface tilt. The allowed departure from the aspheric surface equation is 0.25λ (at 546 nm) within each annular reflector zone, and the required flatness of the planar side is 0.5λ (at 546 nm). The most problematic tolerance for the eight-reflection lens design is the nominal part thickness, since any error will concatenate 8x through the reflective lens. The nominal part thickness tolerance is ±5 µm, a challenging specification for Fresnel Technologies, our SPDT vendor for this prototype. Using focus position as the compensator, the tolerances given maintain an incoherent MTF greater than 10% at 156 cycles/mm across the full field. The range of focus compensation required is ±28 µm. The full list of tolerances for the eight-reflection lens can be seen in Table IV-1 of Chapter IV, where they are compared to the tolerances of a PPE-modified eight-reflection lens.

III.A.2 Eight-Reflection Camera Demonstration

Fabrication, Metrology and Integration

The eight-reflection lens element was chosen to be cut from a single CaF2 substrate to simplify fabrication and to increase the achievable FOV of the eight-reflection design. Compared to a hollow air-filled CMR lens, this plano-aspheric optical element can be diamond turned without re-chucking, which reduces centration errors in fabrication and alignment errors in assembly. We had the eight-reflection lens design diamond turned by Fresnel Technologies in CaF2, shown in Figure III-5a. In between the concentric aspheric surfaces of the back side, black baffles were painted onto the eight-reflection lens to provide a simple method of stray light blocking in the optical path. Once machined, IST Optics completed the fabrication by coating the substrate with patterned silver reflectors, shown in Figure III-5b & Figure III-5c.


Figure III-5: Eight-reflection camera prototype fabrication and integration. (a) Diamond turned lens before coating, (b) silver coated front surface with annular aperture, (c) coated back surface, (d) active alignment of the image sensor, (e) fully functional fixed-focus camera, and (f) fixed focus camera and electronics packaged in plastic enclosure.

Surface metrology of large-area, multiple-zone aspheric optical elements is difficult due to their non-conventional shapes and rapidly varying slopes. Conventional interferometric metrology relies on the comparison of the test surface with a reference surface (usually a sphere or a plane), and the phase difference is measured either in transmission or reflection. The eight-reflection lens element poses the challenges of 1) not possessing readily available reference nulls for each surface, making it difficult to measure the annular surface figures, and 2) presenting multiple high-powered annular surfaces, which are difficult to measure with a white-light profilometer (the oblique incidence angles require individual mechanical tilts and prevent accurate image stitching). Even if one could individually measure the figure of each surface, the challenge would lie in referencing the position of each surface vertex with respect to the others, and that (rather than surface figure) is likely to be the main fabrication error and the strongest contributor to overall wavefront error.


Without reference nulls to test the surface figure of the annular aspheric surfaces, we used a less accurate examination of the aspheric side using a 50x, 0.42 NA microscope objective along with computer-controlled positioning to compare relative surface coordinates to the specified optical design. This measurement had ~5 µm absolute positioning accuracy, allowing us to verify gross element thickness and zonal surface shift, but not the surface figures of the aspheric reflectors. The annular surface shifts and total element thickness were verified to be within the specified 5 µm tolerances. The surface figures of the annular aspheric reflectors were verified using a large-magnification white-light interferometer (ADE Phase-shift MicroXAM), as shown in Figure III-6. The diamond-turned aspheric surfaces showed an average polished roughness of 5 nm RMS with peaks to 50 nm (toolmarks) on the aspheric side of the lens.

Figure III-6: 3-D surface measurement of a diamond turned asphere in CaF2 using a large-magnification white-light interferometer.

The surface of the nominally planar front reflector was measurable using a Michelson interferometer, which revealed an approximately spherical curvature across the surface with 500 nm of sag at the center of the bowed front surface. This value of sag exceeded our initial tolerance specification of 0.5λ at 546 nm, but ray tracing indicated that refocus of the back focal distance by 10 µm reduced the effect of the aberration to tolerable levels.


We aligned and integrated the eight-reflection lens and sensor with index matching gel between them using a 5-axis stage and the mounting assembly shown in Figure III-5d. Once optically aligned to an object at 2.5 m, the sensor and flex circuit were attached to the back of the eight-reflection lens (Figure III-5e) with an area of rigid UV-cured epoxy (Norland NOA 63) later surrounded by a flexible UV-cured epoxy (Norland NOA 68) for strain relief. The lens was mounted into the black plastic case with the camera and USB driver circuitry (Figure III-5f) as a fixed focus prototype camera.

Image Quality- Comparisons and Performance

The first demonstration eight-reflection lens was attempted using PMMA (acrylic) instead of CaF2 to test the fabrication method. Figure III-7a shows the image of a resolution chart taken with the fixed-focus eight-reflection camera prototype at 2.5 m. This experimental image can be compared to the ZEMAX geometrical image simulation shown in Figure III-7d. Although we were encouraged by the basic function of the lens, it was clear that the lens was not fabricated within the required fabrication tolerances. At the time we believed this was due to the relative softness of the plastic lens material, and decided to proceed to CaF2, which we expected to maintain fabrication tolerances better. An image taken from the first CaF2 eight-reflection lens is shown in Figure III-7b. This lens showed an improvement in image quality; however, tolerances were still not being maintained. Fortunately, we discovered an error in the surface figure code that the diamond turning vendor was using. Once this error was corrected, results improved dramatically. Figure III-7c shows the image of a resolution chart taken with the second CaF2 eight-reflection lens at 2.5 m. Visually, resolution and uniformity appear nearly identical to the simulation, indicating that fabrication tolerances were well met with this lens. Vignetting can be seen in the corners of both images where regions of the object extend outside of a 1280x960 pixel FOV.


Figure III-7: Images taken with the first three eight-reflection camera prototypes. (a) First attempt in PMMA, (b) first attempt in CaF2, and (c) second attempt in CaF2. (d) ZEMAX predicted image.

To make a comparison test of the eight-reflection camera to a conventional camera, we set up a refractive Tokina zoom lens with the same magnification, aperture size, Omnivision 3620 image sensor and software. Placed side by side (Figure III-8a), the two cameras image a series of resolution charts (Figure III-8b) spaced 7 cm apart to show DOF effects. Images from both cameras can be seen in Figure III-8c and Figure III-8d corresponding to the conventional camera and eight-reflection camera respectively. At best focus the resolution, color and image quality are very similar between the two cameras. Out of focus, the eight-reflection camera is easily identified with a more pronounced blur due to the narrow DOF.


Figure III-8: Conventional and eight-reflection camera comparison. The dual camera setup (a) is used to image the staggered resolution charts (b). (c) and (d) show the images taken with the conventional camera and eight-reflection camera respectively.

To quantitatively compare the performance of the two cameras, we measured the incoherent system MTF (lens + image sensor) using a sinusoidal modulation chart of varying spatial frequency. The system MTF values were calculated using a custom Matlab analysis script which sequentially measured contrast over the various frequencies of the sinusoidal chart. Figure III-9 shows the two measured system MTFs for the CMR and conventional cameras, confirming that the resolution and image quality are nearly identical between the two cameras at best focus.


Figure III-9: Measured in-focus incoherent system MTF (lens + sensor) comparison of the eight-reflection camera and the conventional comparison camera using identical image sensors.

Quality between the two cameras becomes more disparate at lower light levels. With approximately 10x less light, noise in the eight-reflection camera’s image is revealed. To obtain visually comparable images, it was found that the eight-reflection camera required a 600 msec exposure to match the pixel signal levels of a ~4 msec exposure with the conventional camera.

This difference in efficiency comes from preventable losses in the reflective coating of the eight-reflection lens and from unsuitable image sensor geometry. Reflective losses can be improved by using a highly efficient dielectric reflector rather than the currently used silver reflector. Over eight reflections, the silver coating passes only 12% of the light, compared to approximately 80% achievable with a properly designed dielectric reflector. Our first measurement of a camera with a dielectric mirror showed approximately 43% total efficiency including surface transmission and eight mirror reflections.
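The compounding of per-bounce reflectivity over eight reflections explains these numbers; a quick check, where the 97% dielectric value is an assumed round figure consistent with the ~80% quoted above:

```python
def throughput(per_bounce_reflectivity, n_reflections=8):
    # Total transmission after n mirror reflections, ignoring surface losses.
    return per_bounce_reflectivity ** n_reflections

silver_per_bounce = 0.12 ** (1.0 / 8.0)   # ~0.77 implied by the measured 12%
dielectric_total = throughput(0.97)       # ~0.78, consistent with ~80% achievable
```

The ~77% implied per-bounce value is far below the reflectivity of good silver coatings, suggesting the measured loss also includes coating and surface defects rather than bulk silver reflectivity alone.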

Pixel vignetting from the image sensor's interconnect layer can be reduced by switching to a more suitable sensor structure. Although the exact tunnel height specification for the Omnivision image sensor is not available, we can estimate a large amount of loss (~10 dB) in our CMR camera for a typical 3.6 µm interconnect layer and our range of incidence angles. Switching to a different sensor format with reduced interconnect layer thickness, or to rear-illuminated back-thinned sensors, will significantly improve signal to noise in CMR cameras such as the prototype fabricated here.

Thermal Testing

Finally, we tested the thermal operation range of the eight-reflection camera prototype by imaging the DOF scene of Figure III-8 with the camera in an oven looking out through a window. Figure III-10 shows comparison images taken at 23º C, after 90 minutes at 60º C, and after cool-down to 23º C. The effect of heating on the eight-reflection lens was a ~1% change in focal position due to thermal expansion in the CaF2 element. Heating also caused strong color variation and increased noise in the image sensor. At high temperatures the raw images from the sensor were blue-shifted with reduced image contrast due to elevated noise. Finally, and of least consequence, the elevated temperature introduced a mechanical tilt in the hinge of the plastic package, giving a slight tilt to the captured image. No apparent damage to the camera was caused by the test. Resolution, color balance and focus were all restored after the camera was cooled down.


Figure III-10: Eight-reflection camera prototype thermal testing. (a) before heating, 25º C, (b) after 90 minutes at 60° C, (c) after cooldown, 23º C. Images are color balanced.


III.B Arc-Sectioned Eight-Reflection Camera Prototype

The eight-reflection camera design described in the previous section was intended for use as a surveillance camera for magnification of distant objects. At moderate object distances (across a room, for example) the large NA and relatively large EFL create a narrow depth of focus/field that is especially apparent and problematic for some applications. One simple and effective method for increasing DOF in highly obscured CMR lenses is to reduce the aperture to a rotationally asymmetric section, as described previously in Section II.E. In this section, I describe arc-sectioning the eight-reflection lens design of Section III.A to demonstrate how a large-magnification CMR lens can be modified for use as an extremely compact general-purpose visible-light camera with broadened DOF.

III.B.1 Arc-Sectioned Eight-Reflection Design

To demonstrate the effects of arc-sectioning a CMR lens, we sectioned a 50° arc from the eight-reflection design as shown in Figure III-11. The arc-sectioned eight-reflection lens is approximately 5x smaller in volume than its symmetric counterpart, with an increase in DOF of more than 3x. The trade-offs for these improvements are reduced light collection and asymmetric resolution. The arc-sectioned aperture area has been reduced to 80 mm², a reduction of 7.2x compared to the full symmetric camera. With this aperture reduction, the effective F/# of the arc-sectioned eight-reflection lens is now F/3.76.
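The quoted effective F-number can be checked from the equivalent circular aperture of the remaining area. A minimal sketch, where the 38 mm EFL is an assumption chosen for consistency with the F-numbers quoted here, not a value stated in this section:

```python
import math

def eff_fnum(efl_mm, area_mm2):
    # Effective F/# from the diameter of a circle with the same clear area.
    d_eff = math.sqrt(4.0 * area_mm2 / math.pi)
    return efl_mm / d_eff

EFL = 38.0                  # mm, assumed EFL of the eight-reflection lens
arc_area = 80.0             # mm^2, arc-sectioned aperture area quoted above
full_area = 7.2 * arc_area  # the text quotes a 7.2x area reduction

print(eff_fnum(EFL, arc_area))   # ~3.77, consistent with the quoted F/3.76
print(eff_fnum(EFL, full_area))  # ~1.40 for the full symmetric aperture
```

The sqrt(7.2) ≈ 2.7x ratio between the two F-numbers depends only on the quoted area reduction, not on the assumed EFL.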


Figure III-11: Arc-sectioned eight-reflection lens drawing.

Compared to the incoherent on-axis PSF produced by the full-aperture eight-reflection lens (Figure III-12a), diffraction from the off-axis asymmetric aperture causes the PSF of the arc-sectioned eight-reflection lens to be asymmetrical (Figure III-12b). This asymmetry can also be seen in the incoherent MTF, shown in Figure III-12c. When sectioned, the sagittal (horizontal as drawn in Figure III-11) MTF of the camera is retained and in fact improved over the full-aperture sagittal MTF. This effect is due to the unobscured, off-axis nature of the lens pupil. The tangential (vertical as drawn) MTF, however, is degraded by diffraction due to the narrow aperture in that dimension. This loss of resolution is significant for the arc-sectioned eight-reflection lens due to its large obscuration ratio (0.9). Arc-sectioning of CMR lenses with smaller obscuration is more effective, with significantly less reduction in incoherent MTF and no loss of resolution.

Figure III-12: Arc-sectioned eight-reflection design’s simulated performance. (a) full-aperture on-axis Huygens incoherent monochromatic PSF, (b) arc-sectioned on-axis Huygens incoherent monochromatic PSF and (c) incoherent monochromatic MTF across 6.7° FOV.


Figure III-13 compares the geometrical spot diagrams at best focus and 4% defocus at 2.5 m for three cameras: 1) an F/1.9 40 mm conventional lens (for reference), 2) the full-aperture eight-reflection lens, and 3) a 50° arc-section of the eight-reflection lens. Although diffraction is not considered, the effect of sectioning is apparent. Geometrical DOF is significantly improved in the arc-sectioned camera compared to the full-aperture eight-reflection lens. The asymmetric geometry of the arc-sectioned lens also causes image shift for out-of-focus objects, as shown in Figure III-13c.

[Figure III-13 panel data: (a) conventional reference, (b) full-aperture eight-reflection, (c) 50° arc-section. Best focus RRMS: (a) 1.5 µm, (b) 0.5 µm, (c) 0.5 µm. 4% defocus @ 2.5 m RRMS: (a) 7.5 µm, (b) 20.7 µm, (c) 5.2 µm.]

Figure III-13: Geometrical best focus and 4% object defocus spot diagrams at 2.5 m for (a) F/1.9 F = 40 mm conventional reference lens, (b) full-aperture eight-reflection lens, and (c) arc-sectioned eight-reflection lens. A 5 µm pitch grid is used for scale.

III.B.2 Arc-Sectioned Eight-Reflection Camera Demonstration

For the arc-sectioned eight-reflection camera prototype we chose to move from the Omnivision 3620 image sensor used previously with the full-aperture eight-reflection camera to a Forza/Sunplus 1.93 megapixel (1600 x 1200) CMOS color image sensor, shown in Figure III-14. This Forza/Sunplus ¼" sensor was made using the IBM copper-CMOS process, which produced an image sensor structure with thinner interconnects than other common CMOS image sensors with aluminum interconnects. With the range of ray angles present in a CMR lens design, this reduction in interconnect height dramatically improves throughput due to a reduction in pixel vignetting [42]. For the range of incidence angles present in the eight-reflection lens design, we estimate an average throughput increase of 70% with the Forza/Sunplus sensor compared to the aluminum-CMOS Omnivision sensor previously used. This estimate was made by measuring the sensitivities of the two image sensors using a collimated green laser and a rotation stage to vary the incidence angle of the laser beam. This measurement of average pixel signal versus incidence angle is shown in Figure III-14b.
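The throughput estimate above reduces to averaging the ratio of the two sensors' signal-vs-angle curves over the lens's range of ray angles. A minimal sketch; the sensitivity values below are illustrative placeholders, not the measured data of Figure III-14b:

```python
# Average the ratio of two sensors' angular sensitivity samples over the
# range of incidence angles present in the lens design.
def throughput_gain(signal_a, signal_b):
    ratios = [a / b for a, b in zip(signal_a, signal_b)]
    return sum(ratios) / len(ratios)

# Placeholder curves, normalized at normal incidence (0, 10, 20, 30, 40 deg):
forza = [1.00, 0.97, 0.90, 0.80, 0.65]  # copper-CMOS: thin interconnect stack
omni  = [1.00, 0.90, 0.72, 0.50, 0.30]  # aluminum-CMOS: stronger vignetting

print(throughput_gain(forza, omni))  # ~1.42 for these placeholder curves
```

In practice each sample would come from the mean raw pixel signal at one rotation-stage angle, and the angles would be weighted by the lens's actual ray-angle distribution.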

Figure III-14: (a) The Forza/Sunplus CMOS image sensor (left) and the Omnivision 3620 CMOS image sensor (right). (b) Measured sensitivity versus incidence angle for the Forza/Sunplus and Omnivision image sensors, showing the Forza/Sunplus sensor's reduced pixel vignetting.

We fabricated the arc-sectioned eight-reflection lens by dicing a 50° section from a full-aperture eight-reflection lens using a diamond saw, as shown in Figure III-15a. New coating specifications were also developed, both to enhance mirror performance in the visible spectrum and to reduce the reflection of IR light for improved color performance. We chose ISP Optics to fabricate this version of the eight-reflection lens since they could provide the CaF2 blanks, perform the diamond turning and metrology, and apply the dielectric mirror coating to the lenses all in house. In contrast, the first eight-reflection lenses described in Section III.A were diamond turned and coated by two different companies while a third supplied the blanks.

Figure III-15b and Figure III-15c show schematics of the arc-sectioned lens assembly and packaging. This camera package consists of a small enclosure that houses both the sectioned eight-reflection lens and all the electronics in a complete, self-contained, low-power USB 2.0 camera. This experimental camera prototype is compatible with our existing software libraries and measures just 64 mm x 26 mm x 9 mm thick. Further, this camera features three adjustment screws allowing the eight-reflection lens element to be aligned after final assembly, eliminating the need to permanently bond the image sensor to the lens. The final assembled and packaged arc-sectioned eight-reflection lens is shown in Figure III-15d.


Figure III-15: Arc-sectioned eight-reflection camera fabrication and assembly. (a) Sectioned eight-reflection lens cut with a diamond wafering saw, (b) packaging assembly, (c) package drawing, and (d) final assembled prototype.


In Figure III-16 we show a qualitative comparison of a common scene with the full-aperture and arc-sectioned eight-reflection cameras. For comparison purposes, images from a full size conventional camera (Pentax f = 43 mm, F/1.9 using an Omnivision 3620 sensor) and a Logitech mini-cam (short focal-length refractive camera) are included in Figure III-16a and Figure III-16b respectively.

Figure III-16: Qualitative camera comparison. (a) F/1.9 43 mm conventional lens, (b) conventional web mini-cam (f ≈ 3.9 mm), (c) full-aperture eight-reflection camera, and (d) arc-sectioned eight-reflection camera.


We used a diorama of plastic models spread across a depth of ~30 cm at a distance of 2.6 m for the object. Several qualitative observations can be made. At best focus, the full-aperture eight-reflection camera is fairly comparable in resolution to the large conventional lens, and much improved over the mini-cam, which produces wide-angle images that have poor resolution when zoomed electronically (Figure III-16c). At best focus, resolution of the arc-sectioned eight-reflection camera is also quite comparable to both the full-aperture eight-reflection camera and the Pentax conventional lens. Under close examination the resolution can be seen to be slightly asymmetrical, as previously discussed. In the rear of the diorama (~20 cm behind best focus), the narrow DOF of the full-aperture eight-reflection camera is apparent, while it is significantly improved in the arc-sectioned image (Figure III-16d).

For a more quantitative comparison, we measured the resolutions of a conventional (F/1.9 40 mm) reference camera, the full-aperture eight-reflection camera and the arc-sectioned eight-reflection camera using a 1951 USAF resolution chart. Images from the three cameras at ±15% DOF are shown in Figure III-17. At best focus distances of 2.72 m for the full symmetric camera and 2.6 m for the arc-sectioned camera, the full-aperture eight-reflection camera and arc-sectioned eight-reflection camera resolve 1.8 lp/mm and 2.0 lp/mm respectively. In the horizontal direction, the full symmetric camera resolves 1.8 lp/mm while the arc-sectioned camera resolves 1.0 lp/mm at their respective best focus object distances. Over an extended range the advantage of the arc-sectioned camera is apparent, demonstrating DOF >3x larger than the full symmetric camera and similar to our F/1.9, 40 mm reference lens.


Figure III-17: Through focus images of a 1951 USAF resolution chart. (a) F/1.9 40 mm Tokina conventional lens, (b) full aperture eight-reflection camera, and (c) arc-sectioned eight-reflection camera. Best focus distances for the full symmetric and arc-sectioned cameras are at 2.72 m and 2.6 m respectively.

Figure III-18 shows object space resolution measurements in the horizontal direction (Figure III-18a) and the vertical direction (Figure III-18b) for the three cameras as a function of object distance. The DOF improvement and asymmetrical resolution of the arc-sectioned camera described above can be seen by comparing Figure III-18a and Figure III-18b.


Figure III-18: Through focus resolution for the conventional reference camera (F/1.9, 40 mm), the full- aperture eight-reflection camera, and the 50º arc-sectioned eight-reflection camera in the (a) horizontal and (b) vertical directions using a 1951 USAF resolution chart.

III.C Four-Reflection Camera Prototype

In this section I describe a four-reflection camera prototype intended for use as an ultrathin visible-light camera for surveillance and security purposes. This CMR camera design has a wider FOV, better sensitivity, smaller volume and broader DOF than the previously described eight-reflection camera prototype due to its less extreme focal length extension and smaller obscuration. The four-reflection design achieves a FOV of 17° over 1.92 Mpixels of a color image sensor with 3 µm pixels in just 5.5 mm of total track. This design also incorporates a fine refocus method that exploits the sensitivity of focus to the concentric aspheric mirror separations.

III.C.1 Four-Reflection Lens Design

Our objective here was to create an ultra-thin CMR lens design capable of a 17° FOV illuminating 1600 x 1200 3 µm pixels of a color image sensor in a physical track length close to 5 mm. To meet these specifications, a four-reflection lens design in CaF2 was chosen and optimized in ZEMAX EE. Our four-reflection lens design, shown in Figure III-19a, is a thin annular reflective lens utilizing four concentric aspheric reflectors to focus incoherent light to an image plane in near alignment with the back side of the lens.

Figure III-19: (a) Layout drawing (3/4 section) of the four-reflection camera, and (b) monochromatic incoherent MTF at the nominal design object distance of 10 m.

A solid CaF2 geometry (solid dielectric as opposed to air) was again chosen to increase the achievable FOV, which is limited by the reflective geometry. In contrast to the eight-reflection lens in Section III.A, the front and back sides of the four-reflection lens are fabricated separately as two plano-aspheric elements which are subsequently aligned and joined with index-matching gel between the elements. Focusing is accomplished by varying the nominal 300 µm part spacing between these two elements. Very large changes in refocus are available with a gap spacing adjustment on the order of microns due to the concatenated change in reflector spacing. Fabricating the two lens elements separately also avoids a critical alignment in re-chucking a single element during fabrication in favor of adjustable alignment in assembly. This arrangement does, however, require the mechanical package to support the focus adjustment while maintaining the critical alignment between the two lens elements.

The four-reflection lens design shown in Figure III-19a has an EFL of 18.6 mm, an NA of 0.7 and an effective F/# of 1.15. The design is 28 mm in diameter and 5.5 mm thick with an obscuration ratio of 0.81. Its 17° FOV fills a 1600 x 1200 pixel color image sensor with 3 µm pixels. At the extremes of the field, near the corners of the image sensor, relative illumination is 87% and pincushion distortion is limited to 7%. The prescription for this lens is included in Appendix A of this dissertation.

The monochromatic incoherent MTF is shown in Figure III-19b. The design performs well with incoherent monochromatic light across the full FOV up to the Nyquist frequency, 167 line pairs per millimeter (lp/mm), of the image sensor. Across the full visible spectrum (486 to 656 nm), up to 10 µm of lateral color aberration (at the edge of the field) is present from dispersion as light enters the CaF2 lens material. In addition, larger axial color aberration than expected was found experimentally due to dispersion in the commercially available Cargille 0608 index-matching gel. This dispersion has been accounted for in the current simulations and results in a chromatic focal shift of 7 µm.

Chromatic aberration is not intrinsic to this design, and we have modeled (although not tested) two ways to eliminate it. An air-spaced catoptric lens with first-surface reflectors would have no chromatic dependence. If packaging constraints (or FOV) make a dielectric-filled catadioptric lens desirable, the index gel can be omitted in favor of a small adjustable air-gap for focus. In this arrangement, planar low-power surface-relief diffractive elements fabricated with SPDT can be added to correct the chromatic aberrations incurred at the air-dielectric interface (see Appendix B). To address the chromatic focal shift present in our current four-reflection camera design, we will show in Section IV.C simulated and experimental results for the application of PPE and post-processing to correct the axial color aberration described above [48].

Camera Packaging and Image Sensor Selection

A mechanical package and support structure was designed to meet the requirements of the four-reflection camera. The package was designed to hold the front and back lens elements in precise alignment in order to maintain optical performance while allowing the gap spacing between them to be adjusted for refocus, as shown in the cutaway view of Figure III-20.

Figure III-20: (a) Cross-section and (b) perspective drawings of the mechanical lens package assembly.

The material used for the package was 303 stainless steel, which is similar in thermal expansion characteristics to the CaF2 lens material. This reduces unwanted strain on the lens elements that could degrade optical performance. The front lens element is designed to be glued into the main enclosure body while the back lens element is designed to be glued into a back lens carrier. The back lens carrier also serves as a mounting point for the image sensor circuit board and provides a moderate amount of tip/tilt adjustment for the circuit board as well. A cover snaps onto the back of the main enclosure body to protect the internals. The back lens carrier slides into a precision machined bore in the main enclosure body to maintain the 5 µm alignment needed between the two elements. To set the position of the back lens carrier, and thus the gap spacing between the two lens elements, a 168-tooth, 120-pitch gear adjustment ring is threaded onto precision threads cut into the back lens carrier. The adjustment ring rides on a machined surface located in the main enclosure body. A small 12-tooth pinion allows rotation of the adjustment ring using a small screwdriver, where one rotation of the pinion screw corresponds to an adjustment in the gap spacing of 28 µm. Four springs clamp the back lens carrier into the main enclosure body and provide the clamping force needed to expel the index matching gel when adjusting the lens elements closer together.
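The focus resolution of this mechanism follows from the gear ratio and the thread lead on the back lens carrier; the lead computed below is inferred from the quoted 28 µm figure rather than stated in the text:

```python
ring_teeth, pinion_teeth = 168, 12   # gear adjustment ring and pinion
gap_per_pinion_turn_um = 28.0        # quoted gap change per pinion rotation

# One pinion turn rotates the ring by pinion_teeth/ring_teeth of a revolution
# (a 14:1 reduction), so the implied thread lead on the back lens carrier is:
lead_um = gap_per_pinion_turn_um * (ring_teeth / pinion_teeth)
print(lead_um)  # 392 µm of gap travel per full revolution of the ring
```

The 14:1 reduction is what makes a hand-turned screwdriver adjustment fine enough for micron-scale gap control.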

As with the previous CMR camera prototypes, the microlenses in our prototype four-reflection camera were covered with a thin layer of index-matching gel (Cargille 0608) to prevent pixel crosstalk due to the steep ray angles. For this camera we have again chosen to use the 1.93 megapixel (1600 x 1200) Forza/Sunplus color CMOS image sensor with 3 µm pixels.

Refocus

The adjustable gap between the two lens elements of the four-reflection camera allows focus to be adjusted for different object ranges via small adjustments around the nominal 300 µm lens element gap spacing. Monochromatic incoherent MTF curves for the camera focused at 4 m and 1 km are shown in Figure III-21a and Figure III-21b respectively.


Figure III-21: Simulated refocus performance of the four-reflection lens design. Monochromatic incoherent MTF curves at object distances of (a) 4 m and (b) 1 km. A 14 µm gap adjustment is required to focus from position (a) to position (b).

Some degradation of the tangential component at the most extreme field angles can be seen in Figure III-21a for the close object distance; however, results are generally excellent across a very large range of refocus. The total adjustment for this refocus is 14 µm. The small travel of this focus adjustment method may be especially suitable for use with a precision actuator such as a piezoelectric transducer. For our experimental purposes and to simplify the design, we have chosen a mechanical adjustment using a pinion-gear reduction assembly to adjust the spacing between the two lens elements as described in the previous section. The main drawback of our chosen method is the tight 5 µm (0.0002") bore tolerance between the main enclosure body and the back lens carrier that needs to be maintained. This turned out to be very difficult for the machine shop to consistently achieve, even with matched sets of the parts. In order to ensure the back lens carrier slides smoothly in its machined bore during a refocus, a certain amount of clearance was required, and in practice this clearance was larger than 5 µm. This ultimately led to problems with misalignment when a refocus was performed due to the rear lens carrier shifting in the bore. Future designs will most likely use a flexure-style focusing mechanism actuated with a piezoelectric transducer.
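The leverage of the gap adjustment can be estimated with a thin-lens model. A sketch, assuming the 18.6 mm EFL quoted earlier; the ~6x ratio at the end is an inferred illustration of the concatenated change in reflector spacing, not a figure from the text:

```python
def image_dist_mm(f_mm, obj_mm):
    # Thin-lens image distance v = f*d / (d - f) for an object at distance d.
    return f_mm * obj_mm / (obj_mm - f_mm)

f = 18.6  # mm, EFL of the four-reflection design

# Focal-plane travel needed to refocus from 1 km ("infinity") to 4 m:
travel_um = (image_dist_mm(f, 4_000.0) - image_dist_mm(f, 1_000_000.0)) * 1000.0
print(travel_um)         # ~87 µm of focal-plane travel between 4 m and 1 km
print(travel_um / 14.0)  # ~6 µm of focus shift per µm of gap adjustment
```

That an 87 µm refocus is achieved with only 14 µm of mechanical travel reflects the folded path crossing the gap multiple times.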


Fabrication and Assembly Tolerances

The four-reflection lens was intended to be fabricated as two plano-aspheric lens elements to allow for focus adjustment by varying the gap spacing between the front and back lens elements. While this geometry relieves the critical alignment involved in re-chucking a single double-sided lens during SPDT, as well as the critical part thickness tolerance present in the previously reported eight-reflection lens of Section III.A, it presents additional difficulties with the assembly of the two lens elements once they have been fabricated. The tolerances on the shifts and tilts of the aspheric surfaces are similar to those of the eight-reflection lens, with minimum values of ±10 µm for surface shift (along the optic axis) and ±0.01 degrees surface tilt. The overall lens element thickness tolerance is ±50 µm and the tolerable departure from the aspheric surface equation is ±0.25λ (at 546 nm) within each annular reflector zone. Once fabricated, the two lens elements must be carefully aligned in the mechanical package for centration and tilt. The assembled centration and tilt tolerances are 5 µm (radial) and ±0.02 degrees respectively. These tolerances were calculated using ZEMAX's sensitivity analysis to maintain an incoherent MTF greater than 10% at 167 cycles/mm across the full field at the nominal object distance of 10 m.

III.C.2 Four-Reflection Camera Demonstration

Fabrication and Assembly

ISP Optics [50] diamond turned the front and back plano-aspheric CaF2 lens elements and coated them with patterned dielectric reflectors as shown in Figure III-22a. The specified fabrication tolerances for these elements were achieved on the first pass. Once fabricated, we aligned and glued the two reflective lens elements with UV epoxy into the mechanical package shown in Figure III-22b.


Figure III-22: Four-reflection camera assembly: (a) diamond turned and coated optical parts, and (b) image of the assembled four-reflection camera with USB interface PCB.

This process was carried out on a measuring microscope to carefully align the elements for centration and tilt. The front lens element was first centered and epoxied into the main enclosure body. The completed front lens assembly was then mounted to the microscope with blocking glue and the rear lens element was placed directly on top of the front lens assembly. The direct contact resulted in zero gap spacing between the lens elements' planar sides to ensure minimal tilt. The back lens carrier was then placed into position with the adjustment ring set appropriately. We then aligned the centers of the front and back lens elements to within the centration tolerance of 5 µm with the measuring microscope and affixed the rear lens element to the back lens carrier with epoxy. The assemblies were then disassembled from the alignment station and reassembled with index matching gel filling the nominal 300 µm gap spacing between the two lens elements.

Finally, we aligned and mounted the image sensor circuit board onto the back lens carrier assembly. The spacing between the transmissive aperture of the rear lens element and the image sensor surface was set to 0.6 mm. This spacing was constrained primarily by the necessary chip package clearance. Index-matching gel was used to fill the gap between the rear lens element and the image sensor to remove the effect of the microlenses. The camera assembly connects to a computer through a ribbon cable which is connected to the USB interface as shown in Figure III-22b. Image sensor settings, image capture and processing are controlled by MDOSim, a custom interactive camera programming environment written in Python and developed by Distant Focus Corporation.

Measured Performance

A full-field 1600 x 1200 image at an object distance of 3.9 meters is shown in Figure III-23a. The exposure time for the image is 35 msec under typical fluorescent laboratory illumination. Examining the USAF resolution chart, which can be seen more closely in Figure III-23b, we find the limit of resolution to be group (-1,4) or 0.707 lp/mm in object space at this range. This corresponds to a measured angular resolution of 0.363 mrad.
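The angular resolution figure is simply one resolved line-pair period subtended at the 3.9 m object distance:

```python
lp_per_mm = 0.707            # USAF group (-1,4) limit in object space
period_mm = 1.0 / lp_per_mm  # one line-pair period, ~1.414 mm
range_m = 3.9

# Small-angle approximation: mm of period / m of range = mrad.
angle_mrad = period_mm / range_m
print(angle_mrad)  # ~0.363 mrad, matching the quoted value
```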

Figure III-23c shows the measured contrast transfer function (CTF) (lens + sensor) for the four-reflection camera at 3.9 m. The CTF of a three-bar pattern is defined as

CTF(v) = [(Imax − Imin)/(Imax + Imin)] / [(IW − IB)/(IW + IB)]    (3.1)

where v is the spatial frequency of the pattern of interest in line pairs per mm, IB is the average luminance for black areas at low spatial frequencies, IW is the average luminance for white areas at low spatial frequencies, Imax is the maximum value of luminance near the pattern of spatial frequency v, and Imin is the minimum value of luminance near the pattern of spatial frequency v. CTF measurements were made by first carefully white balancing the image of a resolution chart and then measuring the various square-wave contrast values referenced to the white and black levels at low spatial frequency. The image sensor's raw Bayer image data was used to avoid the additional loss of contrast or resolution from the default color interpolation process.
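Eq. (3.1) applies directly to the measured luminance values; the numbers below are illustrative placeholders, not measured data:

```python
def ctf(i_max, i_min, i_white, i_black):
    # Square-wave contrast at frequency v, normalized to the low-frequency
    # white/black contrast, per Eq. (3.1).
    pattern = (i_max - i_min) / (i_max + i_min)
    reference = (i_white - i_black) / (i_white + i_black)
    return pattern / reference

# Illustrative raw-Bayer luminance readings near one three-bar group:
print(round(ctf(i_max=180.0, i_min=60.0, i_white=220.0, i_black=20.0), 3))  # 0.6
```

Normalizing by the white/black reference removes the sensor's black level and overall gain from the contrast measurement.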


Figure III-23: Performance at 3.9 m: (a) full image, (b) enlarged and cropped 1951 USAF resolution chart, (c) measured CTF (lens + sensor) of the four-reflection camera compared to a conventional F/1.4 lens of the same focal length and sensor, and (d) image space resolution versus object distance for the four-reflection camera and conventional F/1.4 comparison camera.

The CTF of the four-reflection camera shows a resolution cutoff of approximately 150 lp/mm in image space. This corresponds to 90% of the image sensor's Nyquist frequency. For comparison, the CTF of a conventional refractive Sanyo SVCL-CS550VM F/1.4 zoom lens mounted to the same image sensor is shown as well in Figure III-23c. The zoom lens focal length has been set to 19 mm to match the focal length of the four-reflection camera lens. The Sanyo lens resolution is image sensor limited, with a measured (aliased) resolution of ~175 lp/mm. This measured cut-off frequency is larger than the Nyquist frequency of the image sensor due to additional bandwidth around the fundamental frequency of the three-bar image [51]. The measured CTF results of the four-reflection camera lens are close to those predicted by optical simulation when the dispersion of the index matching gel and the measured decentration of the final assembled camera are included. The measured 5 µm decentration between the two assembled lens elements corresponds to the tolerance limit where detrimental effects on image quality at best focus start to become significant.

With the same image sensor settings and averaged pixel gains as the four-reflection camera, the conventional comparison lens and image sensor required an exposure of 17 msec to image the resolution targets at similar signal levels. The larger exposure required by the four-reflection camera can be attributed to losses at the dielectric reflectors and residual pixel vignetting at the image sensor. Transmission through the four-reflection camera lens was measured with a collimated green laser and large area silicon detector to be 38%, which is significantly lower than the designed transmission of > 90%. Assuming a second fabrication run achieves the design performance in mirror reflectivity, the sensitivity of the four-reflection camera would be identical to that achieved with the long conventional refractive lens.
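With four reflections, the measured end-to-end transmission bounds an effective per-mirror reflectivity. A rough sketch that neglects Fresnel, gel and absorption losses (an assumption, so these are effective values with the total loss split evenly across the mirrors):

```python
def per_mirror_reflectivity(transmission, n_reflections=4):
    # Effective reflectivity if total transmission = R ** n_reflections.
    return transmission ** (1.0 / n_reflections)

print(per_mirror_reflectivity(0.38))  # ~0.79: measured first-run coatings
print(per_mirror_reflectivity(0.90))  # ~0.97: needed for the >90% design goal
```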

Figure III-23d shows the measured DOF of the four-reflection camera lens and the conventional comparison lens. With both lenses focused at 3.9 m, images of the USAF resolution charts were taken at various distances to gauge the fall off in resolution as a function of distance.

As measured, the DOF of the four-reflection camera lens is roughly 3x smaller than that of the conventional lens. This difference is smaller than the approximate DOF difference predicted by 1/NA², due to the large obscuration and in part to the nature of the measurement. The large obscuration causes an annular blur which, for small values of defocus, does not affect resolution as significantly as an unobscured aperture of the same NA. However, for large values of defocus, the annular blur causes a rapid and sudden falloff in resolution and perceived image quality due to the annular bokeh [52].


Refocused Outdoor Images

Figure III-24 shows two outdoor images taken with the four-reflection camera refocused for large object distances. Due to the high brightness of the scene, the four-reflection lens's aperture was arc-sectioned down to a 50° wedge with an external aperture block. To suppress stray light from the Sun and its reflections, we used a thin commercial honeycomb baffle made by Tenebraex as an angle-selective baffle.


Figure III-24: Refocused outdoor images captured with the four-reflection camera. (a) The UCSD library at 100 m, and (b) the engineering building at 40 m.

Stray Light

Reflective imaging systems are extremely susceptible to stray light since rays may encounter reflective surfaces non-sequentially and appear as noise at the image sensor. Reflective telescopes typically use large hoods and internal field stops as primary and secondary baffles to limit the angular extent of the light reaching the image sensor [53][54]. These structures are often long compared to the physical size of the lens and are not desirable when creating an ultra-thin camera. A complete analysis of the origins of stray light in the four-reflection camera is necessary to develop appropriate baffling techniques.

We performed systematic stray light calculations using ZEMAX non-sequential analysis to identify critical surfaces which can be seen directly by the image sensor. Tracing rays in reverse from the image sensor to the input aperture is helpful to recognize specific paths where light can directly reach the image sensor [55]. The multiple reflections occurring within thin reflective lens designs lead to extreme ray angles which may enter the system and strike incorrect facets or skip surfaces altogether. Skew rays tend to migrate around the exit pupil and do not contribute significant levels of stray light.

We confirmed the stray light simulations experimentally by rotating the four-reflection camera with respect to a fixed light source. The four-reflection camera has a designed 8.5° half-angle FOV. Significant amounts of light at incidence angles beyond 12° off-axis reach the image sensor. These errant paths appear as bright bands at specific locations within the field and can lead to complete image loss. An example of bright oblique stray light imaged by the image sensor is shown in Figure III-25a. Additionally, an annular gap exists between the two surfaces cut into the front lens element and provides a direct ray path to the detector without encountering any of the appropriate surfaces. This effect is shown in Figure III-25b. The reflector of the front lens element must be covered to prevent light leaking through this front surface onto the image sensor.

Figure III-25c shows simulated and measured normalized intensity versus incidence angle for the four-reflection camera indicating the most problematic stray light paths to the image sensor.


Figure III-25: Stray Light in the four-reflection camera. (a) Large angle oblique rays skip reflectors and arrive at the sensor as stray light, (b) axial light may enter through gaps in the front reflectors if a central block isn’t used, (c) simulated and measured normalized intensity versus incidence angle.

A simple hood placed on the front of the four-reflection camera would have to extend 190 mm to limit incident rays to the design FOV. This excessive length can be shortened by dividing the full 28 mm aperture into small subapertures, each limiting the FOV. The thin Tenebraex baffle described in the last section was measured to have a 12° angular cutoff and reduces the overall length to 6.3 mm. Placing the honeycomb in front of the aperture causes a 40% drop in on-axis transmission since the vanes now block portions of the entrance pupil. Off-axis illumination is linearly attenuated down to the cutoff angle. A central obscuration is placed within the honeycomb to block the direct stray light path through the annular gap.
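Both baffle lengths follow from the same geometry: a tube of aperture D blocks rays beyond roughly atan(D/L). A sketch; the ~1.3 mm honeycomb cell size is inferred from the quoted 12° cutoff and 6.3 mm length, not stated in the text:

```python
import math

def baffle_length_mm(aperture_mm, cutoff_deg):
    # Tube length so rays beyond the cutoff angle cannot cross the aperture.
    return aperture_mm / math.tan(math.radians(cutoff_deg))

print(baffle_length_mm(28.0, 8.5))   # ~187 mm: single hood over the full aperture
print(baffle_length_mm(1.34, 12.0))  # ~6.3 mm: per honeycomb cell
```

Subdividing the aperture shrinks the required length in direct proportion to the cell size, which is why the honeycomb achieves in millimeters what a hood needs tens of centimeters to do.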

Other aspects of image degradation associated with intense illumination are seen within the four-reflection camera. Chromatic flare from the dielectric mirror coating can be seen at some stray light angles, especially for rays entering the annular gap. Scatter from the honeycomb vanes can also reduce the contrast of the image. Very bright spots formed on the image sensor produce surface reflections which reenter the folded track. These rays may reflect from the last aspheric reflector back to the image sensor, leading to a narcissus-like effect, as in thermal cameras, where the camera sees itself.

The honeycomb baffle works well for stray light suppression; however, despite its reduced thickness, it is still thicker than the four-reflection camera lens. More extreme versions of the sub-aperture baffling concept are being explored using capillary arrays and even high resolution holographic film to expose the desired baffling structure. The resulting devices can be only microns thick, although care will be necessary to minimize scatter and loss.

Chapter III, in part, is a reprint of the material as it appears in 1) E. J. Tremblay, R. A. Stack, R. L. Morrison, and J. E. Ford, "Ultrathin cameras using annular folded optics," Appl. Opt. 46, 463-471 (2007), 2) E. J. Tremblay, R. A. Stack, R. L. Morrison, and J. E. Ford, "Arc-section annular folded optic imager," Proc. SPIE 6668, 666807 (2007), and 3) E. J. Tremblay, R. A. Stack, R. L. Morrison, J. H. Karp, and J. E. Ford, "Ultrathin four-reflection imager," Appl. Opt. doc. ID 101823 (posted 4 November 2008, in press). The dissertation author was the primary researcher and author.

Chapter IV

Pupil-Phase Encoding and Post-Processing

An ultrathin CMR lens design requires the use of a large obscuration ratio, which in turn requires a large entrance pupil diameter to achieve the same light-gathering capacity as an equivalent unobscured and unfolded optical design. Moreover, the multiple reflections required by the CMR design increase the manufacturing sensitivity of the imaging system to certain fabrication tolerances, especially the total thickness of the substrate. Finally, the thin reflective geometry and large entrance pupil diameter result in steep marginal ray angles at the focal plane.

This creates an extremely shallow depth of focus (DOF) [47].

Several hybrid optical-digital approaches have been investigated for increasing the DOF of imaging systems [18][39][56][57][58][59][60]. These approaches optimize the amplitude and/or phase of the imaging system’s pupil function to create optical systems that are invariant with respect to defocus over a given range. Digital image restoration techniques are then used to restore image contrast lost by the pupil modification. Wavefront Coding, the hybrid optical-digital approach of Dowski and Cathey [39], utilized a cubic-phase pupil function for extended

DOF. This approach, which we will refer to in general as “pupil-phase encoding” (PPE), is


attractive since it is compatible with incoherent illumination and its phase-only modification of the pupil function maintains light throughput better than pupil function apodization approaches.

In recent years, more general phase-only masks and several design approaches utilizing different metrics in the frequency and spatial domain have been introduced for defocus invariance

[58][59][60].

This chapter discusses the use of PPE to alleviate these effects by increasing the fabrication tolerances while simultaneously extending the DOF of a CMR imaging system. PPE reduces these problems by providing the system with the ability to trade off best-focus performance for more tolerance to optical aberrations. This increased tolerance can then be budgeted to relax the alignment or fabrication tolerances, or to extend the system’s tolerance to optical aberrations such as defocus. PPE uses a combination of specialized aspheric optical elements with post-detection signal processing to create a digital imaging system that is capable of producing acceptable image quality over a wider range of operating conditions than would be otherwise possible [18][39]. The specialized optical elements are designed to maximize the transfer of information in the presence of fabrication tolerances and aberrations

(for example, defocus) rather than producing diffraction-limited images at best focus with a tight range of tolerances [56]. The images captured by these specialized elements are then digitally processed to produce the final image.

IV.A Application of PPE to Concentric Multi-Reflection Cameras

The thin annulus present in a highly obscured CMR lens design poses a difficulty for applying PPE compared to unobscured lens systems. The thin annulus provides little opportunity

for varying the exit pupil phase along the radial coordinate and, consequently, the selection of


potential surface forms is limited. While this is a limitation for the application of PPE to highly obscured CMR designs, the large obscuration ratio present in the CMR design does create some interesting transformations of some typical third-order aberrations. Namely, spherical aberration and coma more closely resemble defocus and tilt, aberrations that can be directly corrected with PPE and post-processing. Appendix D contains a closer examination of how these aberrations compare in highly obscured lenses.

With these considerations taken into account, a well-suited PPE surface for the CMR design is the so-called cosine-form, which has a small radially dependent component with a substantial cosinusoidally varying angular component [57]. The cosine-form is also readily suitable for diamond turning using a fast tool servo because the periodic shape of the surface is well mapped to the motion of the diamond tool. The general form of the cosine-form is described mathematically by

\[ \mathrm{Sag}_{\mathrm{PPE}}(r,\theta) = \sum_{i=1}^{m} a_i\, r^{b_i} \cos(w_i \theta + \phi_i) \tag{4.1} \]
where r, θ and Sag_PPE specify the surface position in cylindrical coordinates. The weight on each term is given by a_i, the radial exponent by b_i, and the radian frequency and phase are given by w_i and φ_i respectively.

Typically, the PPE portion of the surface sag is added to the base curvature, conic, and aspheric portions of a surface at or near the aperture stop.
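As a concrete illustration of Equation (4.1), evaluating the cosine-form sag is a short sum over terms. The coefficient values below are illustrative placeholders, not the optimized coefficients of the actual design.

```python
import math

def sag_ppe(r, theta, terms):
    """Cosine-form PPE sag of Eq. (4.1):
    sum over i of a_i * r**b_i * cos(w_i * theta + phi_i).

    terms: list of (a_i, b_i, w_i, phi_i) tuples; the values used here
    are placeholders, not the actual design coefficients.
    """
    return sum(a * r ** b * math.cos(w * theta + phi)
               for a, b, w, phi in terms)

# one term with w = 3 gives the three-fold pupil symmetry noted later
terms = [(1e-3, 2.0, 3.0, 0.0)]
s0 = sag_ppe(1.0, 0.0, terms)
s120 = sag_ppe(1.0, 2 * math.pi / 3, terms)
print(s0, s120)  # equal, by the three-fold symmetry of cos(3*theta)
```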

IV.B PPE Eight-Reflection Camera Prototype

PPE was applied to the eight-reflection design of Chapter III, Section A by reshaping the first reflective surface of the design, as shown in Figure IV-1. This section contains a description of the PPE surface optimized for use with the eight-reflection camera followed by fabrication and experimental demonstration of a prototype PPE eight-reflection camera. Depth of focus,


resolution and tolerances are compared between the unmodified and PPE eight-reflection camera prototypes.

Figure IV-1: PPE eight-reflection lens. Zonal aspheric reflectors on the back side of the optical element focus light to an image sensor at the center of the element. (a) Cross-section illustrating ray path. (b) Back (aspheric) side perspective view.

IV.B.1 PPE Design and Nominal Performance

Our primary goal was to apply PPE to extend the depth of focus and thereby relax the alignment tolerance of the focal plane in tip, tilt, and axial position. Specifically, our goal was to maintain acceptable imaging performance over a depth of focus of ±10 µm. In our PPE eight-reflection lens design, the specific PPE surface figure to be optimized is described by

\[ \mathrm{Sag}_{\mathrm{WFC}}(r,\theta) = \sum_{i=1}^{m} a_i (10r - 9)^i \cos(3\theta) \quad \text{for } 0.9 \le r \le 1.0 \tag{4.2} \]
where the range of r values represents a scaled radius from the limiting radius of obscuration through the outer aperture radius. This PPE surface profile was optimized with 8 terms (m = 8) using CDM Optics’ proprietary Wavefront Coding design software in conjunction with a


commercially available lens design software (ZEMAX). This software includes custom merit function definitions and filter design routines to optimize both the PPE surface as well as the deconvolution filter used to produce the final filtered PPE images. The PPE surface was first added to the eight-reflection lens design presented in Section III.A. The previously described eight-reflection lens design was originally optimized using traditional lens design methods to have incoherent MTFs that were substantially close to each other as a function of field. The small variation of incoherent MTFs as a function of field is usually a requirement of successful PPE designs, since one expects to be able to use a single convolution kernel to deconvolve the whole image. A custom merit function was defined in order to optimize the filter and the PPE surface.

The main goals of the merit function were 1) to reduce the variation between incoherent MTFs as a function of field; 2) to reduce the amount of overshoot and undershoot of the MTFs after filtering; 3) to reduce the size of the filtered PSFs; and 4) to reduce the total amount of noise gain produced by the digital filter.

A plot showing the unmodified and PPE thru-focus incoherent MTFs is shown in Figure

IV-2. In the PPE design, performance at the best-focus condition is sacrificed to maintain an acceptable performance over a larger depth of focus. In a PPE system, excess SNR at one operating point is traded to achieve acceptable performance throughout the entire operating range.

The plot in Figure IV-2 shows that our design has achieved a usable modulation of 12% within our design goal of +/- 10 µm (typically, a modulation greater than 10% is sufficient for good image restoration during the post-processing step of a PPE system).


Figure IV-2: Thru-focus incoherent MTF plot for an on-axis field point at 156 cycles/mm in image space, showing the expected extension of the depth of focus in the PPE design before filtering.

The effect of the signal processing is best illustrated by examining the pre-processed and post-processed PSFs. Thru-focus simulated digital PSFs for the PPE and traditional designs are presented in Figure IV-3.

Figure IV-3: Thru-focus simulated digital PSFs for the nominal PPE and unmodified eight-reflection cameras.


The bottom row depicts PSFs for the case of unmodified (non-PPE) imaging. The middle row depicts the unprocessed PPE PSFs, and the top row depicts the PPE PSFs after convolution with the digital filter shown in Figure IV-4 below.

Figure IV-4: Digital filter used to process PPE images. The noise gain is 1.35, meaning that a small noise penalty is expected.

From Figure IV-3 it can be seen that the PPE PSFs have a three-fold symmetry, resulting from the three-fold symmetry of the pupil function. They also present a sharp, single-pixel central peak and symmetric legs that are about 4-pixels in length. After processing, the defocused PPE

PSFs present a central peak that is sharper than the defocused traditional ones, with some noise surrounding the central peak. This means that we expect the PPE imaging system to yield sharper defocused images after processing, with some added noise as a trade-off [61][62]. The noise penalty can be quantified by the noise gain of the digital filter used to reconstruct the PPE PSFs.

This filter was synthesized using an adaptive matrix inversion algorithm, having a diffraction- limited reconstructed PSF as its optimization target. The synthesized 21x21 pixel filter shown in

Figure IV-4 has a noise gain of 1.35 for a 3-dB bandwidth of 90% of the Nyquist frequency, meaning that we expect to recover almost the full resolution of the imaging system for a relatively


small noise penalty. However, as will be shown in Section IV.B.2, the experimental results did not fully satisfy these expectations.
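The noise gain figure quoted above can be computed directly from the filter kernel. A common definition (an assumption here, since the text does not state which one was used) is the RMS sum of the coefficients normalized to unit DC gain:

```python
import math

def noise_gain(kernel):
    """RMS amplification of white noise by a convolution filter,
    using the common definition sqrt(sum h^2) / |sum h|
    (i.e. the filter is effectively normalized to unit DC gain)."""
    flat = [v for row in kernel for v in row]
    return math.sqrt(sum(v * v for v in flat)) / abs(sum(flat))

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(noise_gain(identity))  # 1.0: no amplification

# a mild sharpening kernel (illustrative values) amplifies noise
sharpen = [[0, -0.25, 0], [-0.25, 2.0, -0.25], [0, -0.25, 0]]
print(noise_gain(sharpen))   # > 1
```

By this measure, a filter that does nothing has noise gain 1.0, and any contrast-boosting (sharpening) filter pays a noise penalty greater than 1, as with the 1.35 quoted for the 21x21 filter.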

Fabrication Tolerances

In addition to relaxing the image sensor alignment tolerances, the application of PPE and post-processing has the effect of relaxing the fabrication tolerances of the eight-reflection lens.

Table IV-1 compares the individual fabrication tolerances of the unmodified eight-reflection lens design to those of the PPE eight-reflection camera design calculated using sensitivity analysis in

ZEMAX.

Table IV-1: Calculated fabrication tolerances for the unmodified eight-reflection lens and the PPE eight- reflection lens.

Description                                        Tolerances: unmodified   Tolerances: with PPE
CaF2 substrate thickness (nominal = 5 mm)          ± 5 µm                   ± 10 µm
Departure from flat surface (front planar side)    0.5λ (546 nm)            0.5λ (546 nm)
Departure from aspheric surface equations          0.25λ (546 nm)           0.25λ (546 nm)
Zone shift, aspheric surface 1 (outermost)         ± 15 µm                  ± 20 µm
Zone shift, aspheric surface 2                     ± 10 µm                  ± 10 µm
Zone shift, aspheric surface 3                     ± 10 µm                  ± 10 µm
Zone shift, aspheric surface 4 (innermost)         ± 15 µm                  ± 20 µm
Zone tilt, aspheric surface 1 (outermost)          ± 0.010°                 ± 0.010°
Zone tilt, aspheric surface 2                      ± 0.010°                 ± 0.025°
Zone tilt, aspheric surface 3                      ± 0.020°                 ± 0.070°
Zone tilt, aspheric surface 4 (innermost)          ± 0.030°                 ± 0.200°

IV.B.2 Experimental Results

Measured Point-Spread Function

In the assembly of the PPE eight-reflection camera, it was necessary to align the optical

element with respect to the rows and columns of the sensor array. This was done because of the

circular asymmetry of the PPE surface function, and was necessary in order to allow us to use a


convolutional decoding filter aligned with respect to the orientation of the simulated PSF. The clear gel facilitates this alignment by allowing the CaF2 lens to be rotated while imaging a point

source at the nominal object distance, positioned close to the center of the FOV. The correct

orientation was found when one of the legs of the PSF was aligned with respect to one of the

rows of the sensor array.

Figure IV-5 shows a PSF measured at the best-focus position after focus adjustment. The

PSF was measured by imaging a 15 µm pinhole illuminated by a bright white-light source

positioned 2.5 meters away from the eight-reflection camera. The best-focus position then was

found by varying the distance between the detector array and the eight-reflection camera using a

micrometer. The position was varied until we found the most compact PSF possible.

Figure IV-5: Best-focus PSF measured using the PPE eight-reflection imaging system.

As shown in the figure, the measured PSF is quite a bit larger than the predicted PSF. The central lobe alone is about 3x3 pixels wide and the size of each leg varies from 10 to 12 pixels wide. Moreover, the legs are uneven and asymmetric, showing quite a large discrepancy between the expected and measured PSFs. This discrepancy is mostly attributed to fabrication defects associated with a PPE design with tight tolerances and, as we will see, it negatively impacts the imaging quality of our system.


Filters

It is usually preferred to derive filters for decoding the PPE images using predicted PSFs.

This is the case because the calculated PSFs are free from noise and aliasing, and the filters

derived from them render good results when the fabricated parts are close to their respective

designs. Unfortunately in this case the measured PSFs turned out to be considerably different

from the expected ones, forcing us to use the measured PSFs in the synthesis of the filter. More

than 30 different filters were produced using an adaptive matrix inversion algorithm, each filter

slightly different in one or more of its design parameters (e.g. bandwidth, noise gain, tolerance to

imaging artifacts, etc). Then, each one of the filters was tested and a human observer selected the

best filter, shown in Figure IV-6. The filter is limited in size to 55x55 pixels (considerably large,

in order to accommodate the large PSF). Its associated noise gain is 3.64 (quite large, meaning

that noisy images should be expected as a result) and its maximum bandwidth is 87% of the

Nyquist frequency (considerably large, in an attempt to recover as much detail as possible).

Figure IV-6: Filter designed using measured PSF. The filter noise gain is 3.64, indicating that noisy images should be expected.


Performance Comparison

Figure IV-7 shows resolution targets at best-focus for (a) an unmodified eight-reflection camera, and (b) a PPE eight-reflection camera. Both images were captured using a resolution target placed 2.5 meters away from the system. The color images were white-balanced and then converted from raw to YUV images. We are showing the Y-channel (luminance) only. In the case of the PPE image, the Y-channel information was convolved with the filter shown in Figure IV-6 before producing the images shown.
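The restoration step amounts to a 2-D convolution of the luminance plane with the decoding filter. A minimal sketch is below; the BT.601 luma weights and the `convolve2d` helper are standard/illustrative choices, not taken from the prototype software.

```python
def luminance(rgb):
    """ITU-R BT.601 luma from a white-balanced RGB triple."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def convolve2d(image, kernel):
    """'Same'-size 2-D convolution with zero padding, standing in for
    the decoding-filter step applied to the Y channel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * iw for _ in range(ih)]
    for y in range(ih):
        for x in range(iw):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    yy, xx = y + j - oy, x + i - ox
                    if 0 <= yy < ih and 0 <= xx < iw:
                        acc += image[yy][xx] * kernel[j][i]
            out[y][x] = acc
    return out

# an identity kernel leaves the Y plane unchanged
y_plane = [[luminance((0.5, 0.5, 0.5))] * 3 for _ in range(3)]
restored = convolve2d(y_plane, [[0, 0, 0], [0, 1, 0], [0, 0, 0]])
print(round(restored[1][1], 6))  # 0.5
```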

Figure IV-7: USAF targets imaged through eight-reflection cameras at best-focus. (a) Unmodified. (b) PPE.

At best-focus the traditional and PPE imaging systems have nearly the same resolution

(about 1.587 line-pairs per mm). However, one also notices quite a bit of noise in the PPE image, which should be expected given the noise gain of the reconstruction filter.

Figure IV-8 shows the same bar targets imaged at the same distance away from the camera but this time the detector has been moved 10 µm away from the eight-reflection lens, resulting in 3.53 waves of defocus. Note that the unmodified eight-reflection image has lost some resolution, being now capable of resolving up to 1.414 line-pairs per mm in the horizontal direction and about 1.260 line-pairs per mm in the vertical direction. The PPE image has also lost some resolution in the vertical dimension, but not in the horizontal dimension. It is now capable


of resolving up to 1.414 line-pairs per mm in the vertical dimension while maintaining 1.587 line- pairs per mm of resolution in the horizontal dimension.

Figure IV-8: USAF targets imaged through eight-reflection cameras at 10 microns away from best-focus. (a) Unmodified. (b) PPE.

Thus, we see that fabrication defects provided us with PSFs that are quite a bit larger than expected. This forced us to use measured data to produce our decoding filters, and the resulting filters had high noise gains, resulting in noisy images. Nevertheless, we have shown that even under these unfavorable conditions PPE was still capable of providing some advantage in imaging resolution over an unmodified CMR system. Future designs can be improved by 1) using fabrication processes with tighter tolerances and 2) taking into account the sensitivity of the design to tolerances other than defocus when designing the PPE surface, thereby further increasing the fabrication tolerance of the resulting PPE system.

IV.C PPE Four-Reflection Camera Prototype

In this section I describe the design and experimental implementation of PPE and post-processing for the four-reflection camera of Section III.C. In this case, PPE and post-processing


were applied to extend the DOF of the four-reflection camera, and help correct the small amount of axial chromatic aberration caused by the dispersive index-matching gel.

IV.C.1 PPE Design and Nominal Performance

We applied PPE to the four-reflection camera of Chapter III, Section C by modifying the nominally flat annular aperture on the front side of the four-reflection lens. In our four-reflection lens design, we optimized the specific surface figure given by

\[ \mathrm{Sag}(r,\theta) = \sum_{i=1}^{8} a_i (r - 0.81)^i \cos(3\theta) \quad \text{for } 0.81 \le r \le 1.0 \tag{4.3} \]

where the range of r values represent a scaled radius from the limiting radius of obscuration

through the outer aperture radius. This surface profile is applied to the nominally planar front

annular aperture of the four-reflection lens (see Appendix C).

The goal of our optimization was to extend the depth of focus of our four-reflection

camera to ±10 µm, thus increasing the depth of focus of the four-reflection camera by a factor of

approximately 3x and creating a defocus tolerance large enough to compensate for the axial color

aberration present in our design. In contrast to the design of the PPE eight-reflection surface

which was designed in collaboration with CDM Optics using their proprietary software, the PPE

design for the four-reflection lens was carried out using a combination of commercial ray-tracing

software, ZEMAX, and commercial numerical computing software, Matlab, for optimization. An

intermediate software link provided us with the flexibility to incorporate our own image sensor

modeling, post-processing and optimization routines in Matlab with calls to ZEMAX to perform

the necessary ray-traces through our four-reflection lens design. We defined a custom merit

function to optimize the optical system such that a single restoration filter could be applied to

increase the depth of focus. The goals of the merit function were: (1) to maximize the average

incoherent MTF values across the fields, focal plane positions and wavelengths of the design up


to the Nyquist frequency of the intended image sensor, (2) to reduce the variation between incoherent MTFs across the fields, focal plane positions and wavelengths, and (3) to maintain a minimum threshold incoherent MTF for all fields, focal plane positions and wavelengths up to the

Nyquist frequency of the intended image sensor. Our merit function to be minimized for optimum system performance can be described as

\[ \text{merit value} = \text{offset} - \alpha_1 A + \alpha_2 B + \alpha_3 C \tag{4.4} \]
where α1, α2, α3 are the weights on the mean, standard deviation and minimum threshold terms respectively, and offset is a user-specified term to maintain a positive merit function value. The mean term, A, is defined as

\[ A = \operatorname*{mean}_{c,f,\lambda}\left( \operatorname*{mean}_{u_x,u_y}\left( \mathrm{MTF}_{c,f,\lambda}(u_x,u_y) \right) \right) \tag{4.5} \]
where c is the configuration (focal plane position), f is the field position, λ is the wavelength, and u_x, u_y are the spatial frequency coordinates. Due to symmetry, only the first quadrant of the

MTF_{c,f,λ}(u_x,u_y) function is needed in the optimization calculations. In Equation (4.5), mean indicates the arithmetic mean. Since a large mean value is desirable, this term is assigned a negative sign in the merit function, Equation (4.4). The standard deviation term, B, in Equation

(4.4) is described as

\[ B = \operatorname*{mean}_{u_x,u_y}\left( \operatorname*{stdev}_{c,f,\lambda}\left( \mathrm{MTF}_{c,f,\lambda}(u_x,u_y) \right) \right) \tag{4.6} \]

Here incoherent MTF values are compared across the configurations, fields and wavelengths to enforce similarity between optimized incoherent MTF values. The arithmetic mean is used to average the standard deviation values over the spatial frequency coordinates to give a single value to the merit value equation. Finally, the threshold term, C, in Equation (4.4) is described as

\[ C = \operatorname*{mean}_{c,f,\lambda}\left( \beta_{c,f,\lambda} \right) \tag{4.7} \]

where


\[ \beta_{c,f,\lambda} = \begin{cases} \left( \dfrac{\delta_{\mathrm{thresh}} - \mathrm{MTF}_{\mathrm{min};c,f,\lambda}}{\delta_{\mathrm{thresh}}/10} \right)^{\!n} & \text{if } \delta_{\mathrm{thresh}} - \mathrm{MTF}_{\mathrm{min};c,f,\lambda} > 0 \\ 0 & \text{otherwise} \end{cases} \tag{4.8} \]

In Equation (4.8), δthresh is the chosen minimum incoherent MTF value threshold, n is the exponent power on the threshold term and MTFmin;c,f,λ is the calculated minimum incoherent MTF value within the first quadrant for each configuration, field position and wavelength up to the

Nyquist frequency of the image sensor. Depending on the value of minimum incoherent MTF value below threshold, Equation (4.8) is designed to give positive values between zero and ten raised to the nth power. This term in the merit value therefore enforces a minimum incoherent

MTF across the configurations, field positions and wavelengths for image restoration and penalizes zero crossings where image information is lost and phase inversion occurs.
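The three merit terms can be sketched numerically as follows. The MTF samples, weights, offset, and threshold values are illustrative placeholders; the actual design values are not given in the text.

```python
import math

def _mean(xs):
    return sum(xs) / len(xs)

def _stdev(xs):
    m = _mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def merit_value(mtf, alphas=(1.0, 5.0, 5.0), offset=10.0,
                thresh=0.10, n=2):
    """Merit value of Eqs. (4.4)-(4.8) for a small table of MTF samples.

    mtf maps a (configuration, field, wavelength) key to a list of
    first-quadrant MTF samples up to the sensor Nyquist frequency.
    """
    keys = list(mtf)
    nfreq = len(mtf[keys[0]])
    # A (Eq. 4.5): mean over conditions of the mean MTF
    A = _mean([_mean(mtf[k]) for k in keys])
    # B (Eq. 4.6): mean over frequency of the spread across conditions
    B = _mean([_stdev([mtf[k][u] for k in keys]) for u in range(nfreq)])
    # C (Eqs. 4.7-4.8): penalty when the minimum MTF drops below threshold
    betas = []
    for k in keys:
        short = thresh - min(mtf[k])
        betas.append((short / (thresh / 10)) ** n if short > 0 else 0.0)
    C = _mean(betas)
    a1, a2, a3 = alphas
    return offset - a1 * A + a2 * B + a3 * C

# two identical, above-threshold conditions: B = C = 0, merit = offset - A
mtf = {("c0", "f0", 550): [0.5, 0.4, 0.3],
       ("c1", "f0", 550): [0.5, 0.4, 0.3]}
print(round(merit_value(mtf), 6))  # 9.6
```

The structure makes the design intent explicit: A rewards high average contrast, B penalizes variation across conditions (so one filter can serve the whole image volume), and C sharply penalizes MTFs that sag below the restoration threshold.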

Optimization using the merit value described by Equations (4.4) through (4.8) was carried out utilizing several different optimization routines such as the Nelder-Mead simplex

(direct search) method, gradient/finite-difference, and simulated annealing to investigate solutions offered by the different methods [63][64][65]. In addition, the weights of the terms in Equation

(4.4) were adjusted to investigate their influence on the results of optimization. In general, the standard deviation term, B, and the minimum threshold term, C, received much stronger weighting than the mean term, A. The threshold term in the merit value equation was chosen to maintain a modulation of greater than 10% up to the Nyquist frequency of our sensor (167 cycles/mm) across the operating range. Typically, a modulation of 10% is sufficient for effective image restoration with post-processing. We found the simplex method in general to be the most useful and reliable in terms of convergence and speed; however, significant user interaction was required and a large number of optimization starting points were tried to steer the optimization toward a usable solution. Modeled results of our final optimization solution are displayed in Figure IV-9, where the axial point spread function (PSF) over ±10 µm defocus is shown for both the unmodified and


PPE designs. A simple inverse-filter is used to generate the processed PSFs shown in the bottom row of Figure IV-9 [61].
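A minimal direct-search loop of the kind used in this optimization can be sketched as follows. This is a compass search rather than a full Nelder-Mead simplex, and a toy quadratic stands in for the actual merit value; both simplifications are assumptions for illustration.

```python
def compass_search(f, x0, step=0.5, tol=1e-6, max_iter=10000):
    """Simplified direct-search minimizer: try each coordinate in both
    directions, keep any improvement, halve the step when stuck."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# toy stand-in for the merit value: minimum at (1, -2)
best, val = compass_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                           [0.0, 0.0])
print([round(c, 3) for c in best])  # [1.0, -2.0]
```

Like the simplex method used in the actual design, this derivative-free loop only ever evaluates the merit function, which is why it tolerates the noisy, non-smooth merit landscapes produced by ray-tracing but benefits from multiple starting points.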

Figure IV-9: Simulated PSFs at best focus and ±10 µm defocus (±3.5 waves defocus at 550 nm). Unmodified PSFs (top row), PPE PSFs before filtering (middle row), and inverse-filtered PPE PSFs (bottom row).

For display purposes, the intensity scale on the unmodified PSFs (top row) and the PPE

PSFs (middle and bottom rows) differ by a factor of ~4x. This value represents the trade-off between signal-to-noise ratio (SNR) and depth of focus with this technique. The trade-off is minimized as much as possible in the optimization by maximizing the average incoherent MTF across configurations, fields and wavelengths; however, the significant variation between field positions in the baseline four-reflection lens design required a relatively strong phase modulation, lower average incoherent MTF, and a more significant trade-off of SNR to achieve our desired spatial invariance over the image volume (depth of focus and field).


IV.C.2 Experimental Results

To demonstrate the optimized PPE surface, we had the front element of the four-reflection lens refabricated with the optimized PPE surface profile diamond turned onto the previously flat annular aperture. The modified front element was then assembled with the standard rear element as described in Section III.C to make a PPE four-reflection camera. To test the addition of the PPE surface, we set up a comparison test with the PPE four-reflection camera and the unmodified four-reflection camera at the same object distance. Figure IV-10 shows axial PSFs measured with both the unmodified four-reflection camera and the PPE four-reflection camera.

Figure IV-10: Measured PSFs at 3.9 m (best focus) and ±0.3 m (~ ±3.5 waves defocus at 550 nm). Unmodified PSFs (top row), PPE PSFs before filtering (middle row), and inverse-filtered PPE PSFs (bottom row).

We measured the PSF by imaging a 25 µm pinhole illuminated by a bright white-light

LED positioned 3.9 m away from the two cameras. Best focus was found by adjusting the pinion


screw which varies the separation of the front and back optical elements until the most compact

PSF was found. We measured PSFs at best focus and ±0.3 m to characterize the PSF variations through focus for the two cameras. The measured object depth of 0.6 m corresponds to a depth of approximately 20 µm (±3.5 waves of defocus at 550 nm) in image space. Examining the PSFs for the unmodified camera, we found that although the camera forms a small, bright PSF at best focus, the out-of-focus measured PSFs display an asymmetry associated with the measured decentration of 5 µm between optical parts. In this case, we find the PSF at 3.6 m to be better than the simulated results shown in Figure IV-9. The asymmetry in the PSF leads to significant power concentration in a small PSF as the object distance is reduced from best focus. However, this behavior is not symmetric through focus: increasing the object distance from best focus causes the PSF to spread more quickly on the far side of best focus. The PSFs for the PPE four-reflection camera also show some asymmetry from a measured decentration of 11 µm (assembly error) between the lens elements. This decentration is large enough to cause significant degradation in image quality in the PPE four-reflection camera; the decentration tolerance for the PPE four-reflection lens is approximately 8 µm. Using matched pixel gains and sensor settings, the exposure times used for the PSF measurements of Figure IV-10 are 0.78 msec and 3.3 msec for the unmodified and PPE four-reflection cameras respectively. The bottom row in Figure IV-10 shows filtered results of the PPE four-reflection camera where a Wiener filter has been applied to restore contrast [62]. This Wiener filter is based on the measured axial PSF at 3.9 m, with its threshold and SNR balance parameters adjusted to subjectively provide the best-looking images to a human observer.
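The Wiener restoration step can be sketched in one dimension. The prototype uses a 2-D filter built from the measured PSF; the 1-D DFT-based version and the noise-to-signal constant below are illustrative simplifications.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def wiener_restore(blurred, psf, nsr=0.01):
    """1-D Wiener deconvolution: R = B * conj(H) / (|H|^2 + NSR)."""
    H, B = dft(psf), dft(blurred)
    R = [b * h.conjugate() / (abs(h) ** 2 + nsr) for b, h in zip(B, H)]
    return [v.real for v in idft(R)]

# blurring an impulse (circularly) with the PSF just yields the PSF;
# restoration should concentrate the energy back toward one sample
psf = [0.5, 0.25, 0, 0, 0, 0, 0, 0.25]
restored = wiener_restore(psf, psf)
print(restored[0] > restored[1])  # True: peak restored at the origin
```

The NSR term plays the role of the threshold/SNR balance parameters mentioned above: raising it suppresses noise amplification at frequencies where the PSF transfer function is weak, at the cost of less complete contrast restoration.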

Figure IV-11 shows comparison images of a USAF 1951 resolution chart acquired with the unmodified and PPE four-reflection cameras at best focus (3.9 m). The color images of the

PPE camera were white balanced and converted to luminance-bandwidth-chrominance (YUV) image data before the restoration filter was applied to the Y-channel only. At best focus the


resolution limit is 0.707 lp/mm in the horizontal and vertical directions for the unmodified camera, and 0.63 lp/mm and 0.561 lp/mm in the vertical and horizontal directions respectively for the PPE camera. The Wiener filter was able to restore much of the resolution of the PPE system; however, the significant misalignment of the lens elements resulted in some resolution loss in the horizontal direction. The noise evident in Figure IV-11b is introduced by the restoration filter due to the significant noise gain of the filter.

Figure IV-11: Experimental comparison images at 3.9 m (best focus). (a) Unmodified camera, and (b) processed PPE camera.

Figure IV-12 shows another comparison of the unmodified and PPE four-reflection cameras with the object distance reduced by 30 cm to 3.6 m. At this position the resolution of the unmodified camera is found to have fallen to 0.561 lp/mm in the vertical direction, and 0.500 lp/mm in the horizontal direction. The PPE camera however shows little perceivable resolution loss over this distance, still resolving 0.63 lp/mm in the vertical direction and 0.561 lp/mm in the horizontal direction.


Figure IV-12: Experimental comparison images at 3.6m (~3.5 waves defocus at 550 nm). (a) Unmodified camera, and (b) processed PPE camera.

Increasing the object distance by 30 cm to 4.2 m created a more pronounced difference between the two four-reflection cameras. At 4.2 m the unmodified camera's quality is degraded, resolving 0.315 lp/mm in the vertical direction and 0.353 lp/mm in the horizontal direction. In contrast, the PPE camera shows a smaller amount of degradation, resolving 0.561 lp/mm in the vertical direction and 0.445 lp/mm in the horizontal direction.

With the variations in assembly alignment of the two camera systems, it is difficult to directly quantify the improvement gained with PPE and post-processing in this case. However, even with a measurably larger decentration in the PPE four-reflection camera (11 µm) compared to the unmodified four-reflection camera (5 µm), we find some improvement in the DOF.

Additional effort is required to improve alignment accuracy to more clearly examine the benefit of PPE and post-processing for a multi-element CMR lens such as the four-reflection lens.

Chapter IV, in part, is a reprint of the material as it appears in 1) E. J. Tremblay, J.

Rutkowski, I. Tamayo, P. E. X. Silveira, R. A. Stack, R. L. Morrison, M. A. Neifeld, Y. Fainman, and J. E. Ford, "Relaxing the alignment and fabrication tolerances of thin annular folded imaging systems using wavefront coding," Appl. Opt. 46, 6751-6758 (2007) and 2) E. J. Tremblay, R. A.

Stack, R. L. Morrison, J. H. Karp and J. E. Ford, "Ultrathin four-reflection imager," Appl. Opt.


doc. ID 101823 (posted 4 November 2008, in press). The dissertation author was the primary researcher and author.

Chapter V

Ongoing Work

In this chapter I describe some of the preliminary work we have done in new directions using the CMR lens approach. The three areas described represent distinct, potentially advantageous applications of the CMR approach, each with its own significant challenges to be overcome in future research.

V.A Four-Reflection 7-Element Camera Array: 13 Megapixel Visible-Light Camera

In this section I describe some preliminary work I participated in with collaborators at

Distant Focus Corporation to build a thin 7-element array of four-reflection cameras embedded into a hemispherical surface to illustrate how this technology could be used to conformally deploy

"flat" cameras on the surface of a vehicle such as an aircraft for high-resolution, enlarged-FOV imaging. This imaging system was a demonstration for phase 2B of the MONTAGE program described in Section I.B.2. My role in this work was the assembly and optical characterization of the individual four-reflection cameras. I also assisted our collaborators Ron Stack and Rick


Morrison from Distant Focus Corporation with the integration, alignment, calibration and test of the 7-element array.

V.A.1 Four-Reflection 7-Element Array Design

Figure V-1 shows the concept and arrangement of the seven four-reflection cameras used in the array. Here each four-reflection camera (as described in Section III.C) is oriented such that the center of its FOV is at an angle of 7.5° to that of the central camera. With image

stitching provided in post-processing, a final high resolution image is obtained with a FOV

greater than 30º.

Seven Forza 1.93 Mpixel CMOS image sensors are used with the four-reflection lenses of the array. In total, these seven image sensors combine to form a 13.4 Mpixel image; the supporting hardware is intended to process and stitch the data from all seven image sensors and output the final image at video rate (15 frames/sec).

Figure V-1: (a) 7-element array camera diagram: seven four-reflection cameras are oriented to capture different portions of a large FOV. (b) Seven 1.93 Mpixel image sensors capture and stitch the seven 17° fields for a total FOV of >30°.


V.A.2 Implementation & Demonstration of a 7-Element Camera Prototype

Four-reflection lens fabrication and assembly

The final procedure for assembly of the seven four-reflection cameras is described in detail in Section III.C.2. However, this assembly process was partially developed during the assembly of the seven four-reflection lenses for the array. As such, there is significant variation in the performance of the array cameras, with noticeable misalignment present in some of them.

Some of the issues encountered during the development of the assembly procedure are shown in Figure V-2a. During the assembly process it was necessary not only to align the centers of the front and back lens elements for centration, but also to make sure that the two elements laid flat under gravity during the curing process. The top PSF of Figure V-2a shows the severe tilt that resulted when a small amount of excess UV glue prevented a close fit between the two lens elements during assembly. This issue was resolved by careful inspection and removal of excess glue from the flat side of the front lens surface before setting and curing the back lens on top.

Another issue that required attention was physical distortion of the package. This issue arose from shrinkage in the 5-minute epoxy that we initially used to hold the front lens package to the steel parallels, which were in turn glued to the measuring microscope. Shrinkage in the epoxy applied undue stress to the steel lens package and lens, which can be seen in the middle PSF of Figure V-2a. This issue was resolved by switching to a low-shrinkage UV epoxy and slowing the UV curing process by reducing the UV exposure power and extending the exposure time.

The measured on-axis incoherent PSFs of the seven four-reflection cameras used in the 7-element array are shown in Figure V-2b. These PSFs were measured by focusing each camera on a 25 µm pinhole illuminated with a bright white-light LED located at an object distance of 4 meters. As shown, performance of these cameras ranges from almost ideal (bottom-right) to poor (bottom, bottom-left, middle).

Figure V-2: (a) Issues and results encountered during the development of the four-reflection assembly process as displayed by the on-axis PSF: severe tilt (top), physical distortion of the lens package (middle) and nearly ideal with major issues resolved (bottom). (b) Measured PSFs of the cameras used in the arrays.

Electronics and Processing for the 7-element array

Distant Focus Corporation developed a custom hardware processor to support the multi-sensor camera system of the 7-element array. This hardware processor, called the DFC Vision Engine, consists of 2 Virtex 4 FPGA units, 1 GB of DDR2 SDRAM memory, 128 MB of DDR1 SDRAM memory, a DVI video port to drive a high-resolution LCD monitor, a USB 2.0 port, and a serial sensor interface that allows up to seven simultaneous image sensor connections. The system is capable of capturing, processing and stitching the images from all seven image sensors simultaneously at a frame rate of 15 frames/sec. Individual portions, or the stitched and processed areas of interest, may be displayed in real time on the attached LCD monitor. The system is also capable of independent operation without a computer attached.


The 7-element camera system produces a large pixel-count composite image formed from the slightly overlapping FOVs of the array elements. DFC created the software tools to register and stitch the reference views. Using data generated from this analysis, a lookup table (LUT) process was developed to achieve high-rate image stitching for the project demonstrator [66]. The composite demonstration was developed for both PC-based and FPGA-based operation so that processing and stitching could be accomplished without an externally attached PC.
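The LUT idea can be made concrete with a small sketch. Everything here (function names, the single-sensor identity mapping, array shapes) is my own illustrative assumption, not the DFC code: each composite pixel stores which sensor and which source pixel it draws from, so per-frame stitching reduces to one table-driven gather.

```python
import numpy as np

# Hypothetical LUT-based stitching sketch: each entry of the composite
# lookup table holds (sensor_id, source_row, source_col), precomputed from
# the registration step, so stitching a frame is a single gather.
def build_identity_lut(h, w, sensor_id):
    """Trivial LUT mapping a composite tile straight to one sensor."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.stack([np.full((h, w), sensor_id), ys, xs], axis=-1)

def stitch(frames, lut):
    """frames: dict sensor_id -> (H, W) array; lut: (h, w, 3) int array."""
    out = np.empty(lut.shape[:2], dtype=frames[0].dtype)
    for sid, frame in frames.items():
        mask = lut[..., 0] == sid                       # composite pixels fed by this sensor
        out[mask] = frame[lut[mask][:, 1], lut[mask][:, 2]]  # table-driven gather
    return out

frames = {0: np.arange(12).reshape(3, 4)}   # one toy "sensor" frame
lut = build_identity_lut(3, 4, 0)
composite = stitch(frames, lut)
```

In a real seven-sensor system the LUT would encode the warped, overlapping mappings found during registration; the per-frame cost stays the same.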

Array Demonstration

The assembled 7-element array and system are shown in Figure V-3a and Figure V-3b respectively. Each of the seven image sensors connects to an interface board attached to the array with a short flex jumper. This interface board then connects to the DFC Vision Engine using a 1 meter ultra-high density SCSI cable.

Development of the FPGA code was a major challenge for the implementation of the camera array, and by the end of the Montage program all of the major modules were coded and most were functional. However, timing problems with the DDR1 controller prevented proper troubleshooting of the code on FPGA 2 and, as a result, prevented the stitching algorithms from operating on the FPGA board. The demonstration of the array was therefore modified to use the FPGA board in pass-through mode, where the data from each sensor was transferred to an external PC and the stitching was performed in MDOSim, DFC's custom camera interface software. This problem prevented the stitched images from appearing at video frame rate on the directly attached LCD; instead, they had to be stitched and processed at slow speed on the external PC.

Although there were difficulties associated with the hardware implementation of the image processing and display, and with the assembled quality of the individual four-reflection cameras, we managed to assemble, align, characterize and demonstrate the 7-element arrayed imaging system. Two stitched images, one captured at close distance in the lab during alignment and one captured at large distance during the Montage final demonstration, are shown in Figure V-3c and Figure V-3d respectively. With recent improvements and repairs to the FPGA code and the now improved four-reflection lens assembly process, we believe that modifications or a second generation of this system would eliminate the most serious issues described here.

Figure V-3: (a) 13.4 Mpixel 7-element array camera with >30° FOV in a conformal 5.5 mm thick optical track. (b) 7-element array camera with FPGA processor and LCD display. (c) Indoor image at 4 m during stitching alignment. (d) A stitched outdoor image (grayscale) of sailboats in San Diego Bay.


V.B Infrared Concentric Multi-Reflection Camera Design

As discussed previously in Section II.E.3, air-spaced CMR lens designs are particularly interesting for infrared (IR) applications due to their completely achromatic performance. Most long-wave infrared (LWIR) lenses on the commercial market today use heavy and expensive refractive lens materials. While catadioptric LWIR lenses are also commonplace, all-reflective (catoptric) designs are not common for IR objective lenses due to the typically small FOV available with catoptric lens designs [67]. An air-spaced four-reflection CMR lens, which is essentially an air-spaced, scaled-up version of the lens described in Section III.C, has the potential to be an attractive LWIR lens design due to its reduced thickness, bulk, weight and cost compared to other commercially available LWIR lenses. In addition, the all-reflective geometry of an air-spaced CMR lens provides potential for extremely broadband imaging, provided that broadband or dual-band image sensors become available.

In this section I describe the preliminary design of a visible to LWIR camera using an air- spaced four-reflection CMR lens. This lens design is proposed for use with dual-band near- infrared (NIR)/LWIR image sensors that are currently in development.

V.B.1 Comparison to a Conventional IR Lens

One of the most important attributes of an IR camera is its sensitivity and SNR over the wavelength range of interest, determined from both the sensitivity of the image sensor and the light collection of the IR lens. To compare the relative energy collection of an obscured air-spaced CMR lens to an unobscured refractive LWIR lens, the unnormalized incoherent MTF technique described in Section II.B.2 can be used. As an example of this technique, and to get an idea of the performance of an obscured CMR lens, we can compare a first-pass air-spaced four-reflection lens to a high-end refractive LWIR lens of the same focal length. Figure V-4 shows the diffraction-limited relative energy collection versus spatial frequency for an obscured four-reflection air-spaced CMR lens and an unobscured IR refractive lens, both with focal lengths of 125 mm. The specific values for diameter and obscuration for the four-reflection lens were taken from the working design described in detail in the next section. In this case the outer diameter is 134 mm and the obscuration ratio is 0.72. The refractive lens for comparison was chosen to be an ideal F/1.4 LWIR lens with a diameter of 89 mm. As shown in Figure V-4, the obscuration of the CMR lens becomes a more serious concern for longer wavelengths, since the scale of the incoherent MTF, and therefore the relative energy collection (and SNR), will be reduced. Although there will be an SNR penalty compared to a low-F/# refractive lens, we see that a well-corrected CMR lens can provide good performance across a very wide range of wavelengths, as well as other advantages due to its geometry and construction: reduced thickness, reduced weight, potentially reduced cost, and broadband achromatic performance compared to a refractive IR lens.

Figure V-4: Diffraction limited relative energy collection versus spatial frequency comparing a conventional F/1.4 IR lens to an F/1.34 four-reflection air-spaced CMR IR lens of the same focal length (125 mm). The four-reflection IR lens has an outer diameter of 134 mm and an obscuration ratio of 0.72. Note: wavelength dependent transmission losses in the refractive material lens were not considered.
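The unnormalized-MTF comparison can be sketched numerically. The following is a minimal construction of my own (grid size and sampling are arbitrary choices), using the fact that the diffraction-limited incoherent OTF is the normalized autocorrelation of the pupil, so scaling each curve by pupil area compares relative energy collection.

```python
import numpy as np

# Sketch: incoherent MTF of an annular (0.72 obscuration) vs. clear pupil
# via the Wiener-Khinchin autocorrelation, computed with FFTs.
N = 512
x = np.linspace(-2.0, 2.0, N)            # pupil plane, outer radius = 1
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

clear = (R <= 1.0).astype(float)                     # unobscured circular pupil
annular = ((R <= 1.0) & (R >= 0.72)).astype(float)   # 0.72 obscuration ratio

def mtf(pupil):
    """Radial slice of the pupil autocorrelation, normalized to 1 at DC."""
    F = np.fft.fft2(pupil)
    ac = np.fft.fftshift(np.real(np.fft.ifft2(np.abs(F) ** 2)))
    ac /= ac.max()
    return ac[N // 2, N // 2:]

mtf_clear = mtf(clear)
mtf_annular = mtf(annular)
# Relative energy collection of the annulus vs. the clear aperture:
rel_energy = annular.sum() / clear.sum()             # ~ 1 - 0.72**2
```

Multiplying `mtf_annular` by `rel_energy` gives the unnormalized curve used for the comparison in Figure V-4.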


V.B.2 Infrared Four-Reflection Air-Spaced Design

A preliminary four-reflection NIR/LWIR dual-band CMR lens design is shown in Figure V-5. The baseline four-reflection LWIR lens design shown in Figure V-5 is not fully optimized but still provides an accurate indication of the physical layout of the first prototype optics. The outer and inner diameters of the aperture are 134 and 96 mm, respectively, an area equivalent to a 92.6 mm diameter clear aperture. The effective F/# based on energy collection is 1.34, but the effective numerical aperture at the image plane is 0.47, defined by the maximum ray angle of incidence on the sensor.
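These numbers can be cross-checked directly. A quick sketch (my own arithmetic, using the annulus area to define a clear-aperture-equivalent diameter, which lands close to the quoted value):

```python
import math

# Consistency check of the quoted first-pass design numbers:
# outer diameter 134 mm, inner diameter 96 mm, focal length 125 mm.
D_out, D_in, f = 134.0, 96.0, 125.0

obscuration = D_in / D_out                   # obscuration ratio, ~0.72
D_eq = math.sqrt(D_out**2 - D_in**2)         # clear-aperture-equivalent diameter (mm)
F_eff = f / D_eq                             # effective F/# from energy collection, ~1.34
```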

Figure V-5: (a) A schematic drawing and raytrace of the four-reflection MWIR-LWIR camera scaled to 125 mm focal length and evaluated incoherent MTF for (b) 1 µm light and (c) 10 µm light.


The lens elements are diamond-turned, but unlike the CMR lenses described in Chapter III, these will be first-surface reflectors fabricated in mirror-coated metal, facing an air-filled cavity. This helps reduce the overall weight of the lens, and also makes it possible to separately optimize the mechanical and optical properties of the primary elements. The lens aperture is covered with a Zinc Selenide (ZnSe) window to prevent contamination. Since ray angles at this surface are small, chromatic aberrations will not be a problem (unlike the window covering the sensor).

Rays incident upon the sensor plane enter within a hollow cone that has a maximum angle of 40°, corresponding to a conventional F/0.8 lens. The basic pixel structure of the focal plane requires some form of microlenses to work efficiently in both NIR and LWIR bands, and a conventional microlens is not compatible with such steep ray angles. The fundamental challenge is to match the optical structure to the sensor requirements. Future research should explore two approaches: (1) modifying the basic structure of the CMR lens to restrict the ray incidence angles at the sensor plane, and (2) modifying the microlens to act differently on the NIR and LWIR bands.
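For reference, the correspondence between the maximum ray angle and the equivalent conventional F-number follows from F = 1/(2·NA) = 1/(2 sin θ); a one-line check of my own, assuming the standard definition:

```python
import math

# Equivalent F-number for the 40-degree maximum ray angle quoted above:
# F = 1 / (2 * sin(theta)) gives ~0.78, i.e. roughly a conventional F/0.8.
theta = math.radians(40.0)
f_number = 1.0 / (2.0 * math.sin(theta))
```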

This dual-band CMR lens is proposed as a lens solution for the dual-band NIR/LWIR image sensors that are currently being investigated. The physical structure of the dual-band sensor will require the NIR and LWIR light to be split and detected independently at the image plane. This fact will also have to be further considered in the design of the dual-band CMR lens. The optical component layout and information processing algorithms will be driven by this structure, and we expect that the result will be substantially different in form than would be optimal for a co-planar and fully overlapping sensor structure.

The fundamental approach to lens design used to create the visible-light CMR cameras of Chapter III was to include the tolerances and the specific fabrication methods (SPDT) in the original design. Similarly, this NIR/LWIR CMR lens will be designed so that SPDT can provide reference points for accurate alignment within an optomechanical housing compatible with the existing focal plane array test system. The conceptual physical construction of the four-reflection NIR/LWIR camera and packaging is shown in Figure V-6.


Figure V-6: Preliminary optomechanical design for the first prototype NIR/LWIR CMR camera incorporating the baseline four-reflection camera, showing (a) the front face with central obscuration and (b) the rear face. The lower element in a metal housing mounts onto the focal plane holder, with tip/tilt and focus adjustments. The top reflector (the central obscuration) mounts into a housing on a 2 mm thick ZnSe window reinforced with narrow struts.

V.C Compressive Imaging with a Concentric Multi- Reflection Lens

Recently a great deal of attention has been directed toward the new theory of compressed sensing (CS), which aims to reduce the overall complexity required by a large variety of measurement systems by introducing signal compression into the measurement process [19][20][68]. This theory states that "sparse signal statistics can be recovered from a small number of non-adaptive linear measurements". In more general terms, CS refers to any measurement process in which the total number of measurements is smaller than the dimensionality of the signals of interest. The sparse nature of most signals of interest allows high-fidelity reconstructions to be made using this approach.
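In symbols, a CS measurement takes y = Φx with far fewer rows in Φ than entries in x. A toy sketch of that counting argument (dimensions and the Gaussian projection matrix are arbitrary choices of mine):

```python
import numpy as np

# Toy compressed-sensing measurement: M linear measurements of an
# N-dimensional but K-sparse signal, with M << N.
rng = np.random.default_rng(0)
N, M, K = 256, 64, 5
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)  # K-sparse signal
Phi = rng.standard_normal((M, N)) / np.sqrt(M)               # random projections
y = Phi @ x                                                   # M compressive measurements
```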


The field of digital imaging is a good candidate for CS due to the large amount of raw data acquired by conventional image sensors. It is often required that this data be immediately compressed for the sake of efficiently storing or transmitting it [69]. Compared to conventional imaging, compressive imaging (CI) offers improved performance with reduced system complexity. This reduced complexity can be important, for example, in mid-wave infrared (MWIR) imaging systems, where photodetector array technology is less developed and much more expensive than the photodetector array technology used in visible imaging. CI also holds an advantage in detector-noise-limited measurement fidelity (SNR) over conventional imaging because the total number of photons can be measured using fewer photodetectors [70][71].

Rather than spatially sampling an image by collecting individual pixel data, a CI system measures linear projections of the object space. The resulting projections can then be processed for applications such as image reconstruction [70][72] or recognition [73]. A large number of candidate linear projections, such as wavelets, principal components, Hadamard, discrete-cosine and pseudo-random projections, have been studied in the context of CI [71][74][75][76][77][78].

Linear and nonlinear reconstruction methods have also been investigated in detail, including linear minimum mean square error (LMMSE) using large training sets [71], and a variety of nonlinear reconstruction methods [79][80][81][82] based on the CS theory of Candes, Romberg, and Tao [19], and Donoho [20].

In this section I describe an experimental setup incorporating a modified arc-sectioned eight-reflection lens in conjunction with a Texas Instruments DLP® DMD for use as a CI test bed. The purposes of this system are to (a) demonstrate a novel reflective CI system based on a CMR lens as a first step toward an all-reflective MWIR compressive camera, and (b) investigate the performance of different CI hardware (position and number of detectors, etc.) and algorithms. My role on this project was the implementation, characterization and testing of the CI hardware, working with Peter Ilinykh and Pavel Shekhtmeyster at UCSD. The algorithm side of the project was handled by our collaborators Jun Ke, Pawan Baheti and Mark Neifeld at the University of Arizona.

V.C.1 Compressive Imaging System Hardware and Set-up

Figure V-7 shows a schematic of our CI system setup. The forward imaging portion of the CI system is composed of a modified arc-sectioned eight-reflection camera and a small right-angle prism to fold the optic axis of the CI system and simplify the optomechanics of the setup. In addition to the arc-sectioning cuts made with a diamond saw, an additional facet has been cut and polished to allow light to propagate through what is usually the last planar mirror, through a glass spacer, to image on a Texas Instruments DMD. The optical assembly shown in Figure V-7 is fit together using index matching gel and UV-curing epoxy to minimize losses at the glass interfaces.

The DMD consists of a 1024x768 array of electrostatically actuated micromirrors, where each mirror of the array is suspended above an individual static random access memory (SRAM) cell. The DMD acts as a spatial light modulator, allowing us to display 2D linear projections by individually setting the position of each of the 13.68 µm square micromirrors in the array. Each micromirror rotates about a hinge to one of two fixed states at ±12° deflection. Light incident on each micromirror may therefore be directed in one of two directions: the two optical paths in our optical setup. The DMD is controlled by a hardware control board and software interface provided by Apogen Technologies. The Apogen board allows us to run the micromirrors of the DMD in an individually addressed binary mode, as opposed to the grayscale mode typically employed by DLP® projectors.

The first optical path through our optical setup, the DMD mirror "off" state, is allowed to diverge through three glass spacers to a silver-coated convex singlet. This mirror-coated singlet acts as a positive reflector to converge the light towards a Newport 818-ST large-area silicon wand detector located under the optical assembly. This path is non-imaging, simply collecting the power transmitted through the projection mask on the DMD. The "on" state of the DMD is sent vertically from the DMD as shown in Figure V-7, through a fold mirror and two doublet relay lenses to the second detector plane. Light on this detector plane is relay imaged from the DMD, which allows us to use either a single photodetector as used in the off-state path, or a low-resolution detector to examine the advantages of spatially binning the image data. To this end we are working on integrating a Hamamatsu 5x5, 1.5 mm pitch silicon detector array with the optical setup. For the initial experiments reported here, we have used a Newport 818-SL large-area silicon photodetector in this position. Having two single-pixel detectors allows us to capture the light from both the on and off states of the DMD, simultaneously capturing all of the light for each measurement (rather than half with the one-detector setup), and improves the image SNR compared to a single-detector setup for a fixed number of features or measurement time [71].
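The dual-rail idea can be illustrated with a few lines of my own (a synthetic image and mask, not the lab data): a binary DMD pattern splits the image-plane power between two bucket detectors, so the pair captures all of the light and their difference gives a bipolar feature value.

```python
import numpy as np

# Dual-rail measurement sketch: the "on" and "off" micromirror populations
# partition the incident power, so on + off conserves the total and
# on - off measures a zero-mean (bipolar) feature.
rng = np.random.default_rng(1)
image = rng.random((64, 64))            # stand-in for the optical image on the DMD
mask = rng.integers(0, 2, (64, 64))     # binary micromirror pattern

on = (image * mask).sum()               # power steered to the "on" detector
off = (image * (1 - mask)).sum()        # power steered to the "off" detector
feature = on - off                      # dual-rail measurement value
```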

Figure V-7: Two-path compressive imaging system setup using an eight-reflection lens. Single-detector path: light directed but not reimaged on a large area silicon detector through a spherical mirror. Relay image path: light is relayed through two lenses to another image plane which may be another single detector or a small pixel number image sensor (5x5 for example).

The experimental demonstration CI system hardware components are shown in Figure V-8. The object used in the experimental demonstrations is generated using a Casio projection display and transmission screen at a distance of 1.65 m from the optical setup. With these preliminary experiments it was important to align the image on the DMD with the projection patterns. For accurate reconstructions, the projector and screen are used to control the size, position and content of the object. In this way, the image of the object on the DMD can be carefully aligned to properly sized test patterns. The projector and diffusing screen are used together in transmission to maximize the luminous exitance from our object (the transmission screen).

Figure V-8: Setup hardware. (a) An arc-sectioned eight-reflection lens is further modified to fit into the CI optical system. (b) A Texas Instruments DLP® DMD with Apogen control board and software. (c) The object is displayed using a Casio projection display and a diffuse transmission screen. (d) The modified eight-reflection lens is combined with a right-angle prism and four additional planar glass pieces for relaying the image from the DMD to the detector planes. (e) Image of the final setup showing the single silicon detector and the relay lenses. (f) Another image of the final setup showing the arc-sectioned eight-reflection assembly, fold mirror and 5x5 detector array (in the background).


V.C.2 Preliminary Experimental results

Alignment and Characterization

To align and calibrate the optical setup of Figure V-8, we first measured the best focus position of the forward imaging optics (the arc-sectioned eight-reflection lens) by examining the PSF formed by an 80 µm pinhole and a bright white LED at a variety of object distances. To measure the PSF on the DMD, a long-working-distance microscope objective and charge-coupled device (CCD) camera were focused onto the DMD surface. The ZEMAX-predicted PSF and measured PSF (on the DMD surface) are shown in Figure V-9a and Figure V-9b respectively, for the best-focus object distance of 1.65 m. As shown, the measured PSF was considerably larger than that predicted from optical simulation, with a blur in one dimension of ~70 µm (5 DMD pixels).

Figure V-9: (a) Simulated on-axis spot diagram on the intermediate image plane (DMD). (b) Experimental image of a bright point source on the DMD plane showing a PSF spread of ~ 5 DMD pixels along the diagonal.

We believe that this linear blur is caused by scattering at the edge of the modified facets on the arc-sectioned lens due to the short, best-focus object distance. The experimentally determined object distance is 35% shorter than the designed object distance for the eight-reflection lens, and subsequently suboptimal in terms of the path geometry through the lens. Although diagonal in Figure V-9 (the CCD plane), the blur occurs in the lengthwise dimension of the arc-sectioned lens. This blur is several times larger than that caused by diffraction from the arc-sectioned aperture alone.

Once best focus of the optical setup was found, we replaced the point source with the transmission diffusing screen and focused the rear-projection Casio projector onto the screen. Next, the object image pixels and DMD pixels had to be properly aligned and registered to each other. To do this we displayed a 192x192 square (a 64x64 object scaled up 3x) on the projection screen and matched a 128x128 square of DMD pixels to the location of the object image. With adjustments to the magnification of the object with the projector controls, and the position of the 128x128 DMD pattern, the two were made as coincident as possible on the DMD surface.

Linear Reconstruction using Hadamard Features

The first experimental results utilize single-pixel photodetectors on both optical paths of the CI system setup. This arrangement requires that the light be transferred efficiently to the detectors after reflecting from the DMD, but does not require the relay path to be well aligned or focused. Our first experimental demonstration was the reconstruction of the binary 64x64 M-object shown in Figure V-10a. To reconstruct this object, 200 sorted Hadamard masks [83] were sequenced to the DMD and the subsequent power measurements from the two photodetectors were recorded. Although Hadamard features are not binary in theory (they contain positive and negative elements), they can be measured experimentally with binary masks by shifting and scaling the masks to collect the data with a dual-rail measurement [70][71]. Linear filtering (LMMSE) was used to reconstruct the object for various levels of truncation of the projection measurements. Figure V-10 shows the simulated and experimental results using 38 features as well as the full 200 Hadamard features. Relative root mean squared error (RMSE) as a function of measurement features is shown in Figure V-10d. Since a full pixel-by-pixel measurement of the 64x64 image would require 4096 measurements, these results show reconstruction with 1% (38 measurements) and 5% (200 measurements) of the object dimensionality.
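A stripped-down numerical stand-in for this measurement chain is sketched below. It is my own simplification: a 64-pixel object rather than 64x64, unsorted Hadamard rows, and plain adjoint reconstruction in place of the trained LMMSE operator used in the experiment.

```python
import numpy as np
from scipy.linalg import hadamard

# Truncated Hadamard measurement and reconstruction sketch.
n = 64                                   # object dimensionality (8x8 here, not 64x64)
H = hadamard(n).astype(float)            # rows are bipolar (+1/-1) Hadamard features
x = np.zeros(n)
x[10:20] = 1.0                           # simple binary object
m = 16                                   # keep only the first m features (truncation)
y = H[:m] @ x                            # truncated projection measurements
x_hat = H[:m].T @ y / n                  # adjoint estimate; H.T @ H = n * I
```

With m = n the adjoint recovers the object exactly by orthogonality; truncation trades reconstruction error for fewer measurements, mirroring the 38- vs. 200-feature comparison above.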

Figure V-10: Two-detector Hadamard reconstruction. (a) 64x64 binary object, (b) simulated linear reconstruction using 38 features, (c) simulated linear reconstruction using 200 features, (d) relative RMSE versus number of features used for simulated and experimental results, (e) experimental linear reconstruction using 38 features, and (f) experimental linear reconstruction using 200 features.

In this first experiment a priori knowledge of the object was used in terms of the specific Hadamard projections used and the linear reconstruction operator. Reconstruction of different scenes requires knowledge of the object class, usually obtained with system training sets to be used with the LMMSE reconstruction operator [84]. These training sets define statistical expectation values for both the object class and noise statistics. In the absence of noise, it is well known that principal component features (the M largest eigenvectors of the object class) minimize the reconstruction RMSE [85]. However, in the presence of noise, the suboptimality of principal component features makes them a less ideal projection basis. Experimental results have shown that Hadamard projections can provide better linear reconstruction RMSE in the presence of noise [75][76].
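How principal-component features are obtained from a training set can be sketched generically (synthetic training data and dimensions of my own choosing; the M largest eigenvectors of the training covariance are the noise-free-optimal projections):

```python
import numpy as np

# Principal-component feature extraction from a training set.
rng = np.random.default_rng(2)
train = rng.random((500, 64))                 # 500 training objects, 64 pixels each
mean = train.mean(axis=0)
cov = np.cov(train - mean, rowvar=False)      # 64x64 object-class covariance
evals, evecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
M = 8
pc_features = evecs[:, -M:][:, ::-1].T        # M largest principal components, one per row
```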

With the two-photodetector arrangement, all of the object photons incident on the DMD are captured with each binary feature, and both the negative and positive Hadamard features can be measured simultaneously. This arrangement increases the object signal by a factor of 2 and reduces reconstruction RMSE by a factor of 2 compared to the single-photodetector setup for a fixed number of measurements (or measurement time). We examined this by reconstructing the data using only one of the photodetectors and comparing the results to the two-photodetector results. Figure V-11a shows the linear reconstruction RMSE versus number of features for the 1-detector and 2-detector setups. In this figure we find only a small reduction in RMSE with the 2-detector setup over the 1-detector setup. This is due to the relatively large SNR levels present in our experiment. If we further reduce SNR with additive white Gaussian noise (AWGN), we find a more apparent advantage with the 2-detector setup. This is shown in Figure V-11b, where AWGN with a standard deviation of 50 was added to the experimental data.

Figure V-11: Relative RMSE (experimental) versus number of features for 1 and 2 detectors. (a) The conditions of the experiment do not immediately show the benefit of two detectors because SNR is too large. (b) If AWGN (standard deviation of 50) is added during processing, the 2-detector advantage becomes more apparent.


Nonlinear Reconstruction using Hadamard Features

Nonlinear reconstruction exploits prior knowledge of wavelet sparsity via a variety of nonlinear candidate algorithms [86]. Our preliminary experimental results using nonlinear reconstruction are shown in Figure V-12, using 200 Hadamard features and the Matching Pursuit reconstruction algorithm [80]. As expected, a slight improvement (visually) was found in reconstruction fidelity using nonlinear reconstruction compared to linear reconstruction. Further work is required to quantify this improvement.


Figure V-12: (a) Linear and (b) non-linear reconstruction of a 64x64 binary object using 200 Hadamard features. In this case non-linear reconstruction shows slight improvement over linear reconstruction.
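A minimal matching-pursuit sketch follows. This is the textbook greedy version, written by me as an illustration rather than the exact algorithm of [80]: at each iteration the dictionary atom most correlated with the residual is selected and its contribution subtracted.

```python
import numpy as np

# Greedy matching pursuit over a dictionary with unit-norm columns.
def matching_pursuit(y, D, n_iter=10):
    """y: measurement vector; D: dictionary with unit-norm columns."""
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        k = np.argmax(np.abs(corr))       # best-matching atom
        coeffs[k] += corr[k]              # accumulate its coefficient
        residual -= corr[k] * D[:, k]     # remove its contribution
    return coeffs

# Tiny demo: recover a 1-sparse signal from an orthonormal dictionary.
D = np.eye(8)
y = 3.0 * D[:, 5]
c = matching_pursuit(y, D, n_iter=3)
```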

Linear and Nonlinear Reconstruction using Binary Random features

As opposed to Hadamard features, random features do not employ explicit knowledge of the object class and do not contain ordered spatial frequency information. Because of this, the exploitation of signal sparsity (usually in the wavelet domain) provides an advantage to the nonlinear reconstruction algorithms. Figure V-13 shows simulated and experimental results using linear and nonlinear reconstruction of the 64x64 binary object with 1000 random features. The pseudoinverse [84][87] of the projection matrix is used for linear reconstruction. Nonlinear reconstruction was once again carried out using the Matching Pursuit algorithm. As expected, the nonlinear reconstruction shows dramatic improvement over the linear reconstruction, but inferior performance compared to the Hadamard projection results of Figure V-12.

Figure V-13: Reconstruction of a 64x64 binary object using 1000 random masks. (a) Simulated linear reconstruction, (b) simulated nonlinear reconstruction, (c) experimental linear reconstruction, (d) experimental nonlinear reconstruction.
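The linear path with random binary masks can be sketched in a few lines (toy dimensions of my choosing; the pseudoinverse gives the minimum-norm least-squares estimate):

```python
import numpy as np

# Binary random projections with pseudoinverse linear reconstruction.
rng = np.random.default_rng(3)
n, m = 64, 48                                   # object pixels, measurements
x = (rng.random(n) > 0.5).astype(float)         # binary object
Phi = rng.integers(0, 2, (m, n)).astype(float)  # binary random masks
y = Phi @ x                                     # compressive measurements
x_lin = np.linalg.pinv(Phi) @ y                 # linear (pseudoinverse) estimate
```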

Chapter VI

Conclusion

This dissertation presents an investigation of CMR lenses for ultra-thin and ultra-compact visible-light imaging systems. I began by discussing the fundamentals of thin CMR lenses, including geometrical and physical optics considerations of the thin annular geometry and aperture, followed by analysis of the light collection and volume of full-aperture CMR lens designs. We have shown that the lens size can be scaled using a normalized incoherent modulation transfer function to match the spatial frequency response of clear-aperture lenses, and shown that for cameras with moderate FOV requirements, CMR lenses provide an effective way to substantially reduce optical track and, potentially, total system weight and volume with large light collection for fast, sharp exposures. Additionally, we found arc-sectioning, a modification to a CMR lens design, to be an effective method of increasing DOF and reducing the volume of a CMR lens design, thereby trading aperture size for added compactness and functionality at close to moderate object distances.

Next, I presented our three specific CMR camera prototypes. For our first prototype camera demonstration we selected an 89% obscured F/1.4 eight-reflection lens design intended for use as a long-range visible-light surveillance camera. We used a lens produced by conventional diamond turning to fabricate a color camera with 1280x960 pixel resolution (0.07 mrad resolution over a 0.116 radian FOV), where a 38 mm EFL is folded into a total optical track (lens face to image sensor) of just 5 mm. The self-contained imager, integrated with a USB interface to a PC, performed close to the optical design performance under laboratory and elevated temperature conditions.

The second prototype camera presented was a 50° arc-sectioned version of the eight-reflection lens. This F/3.76 wedge version of the eight-reflection camera maintained the 38 mm focal length and 5 mm track length, with volume reduced 5x and DOF increased more than 3x compared to the full-aperture eight-reflection camera. This prototype lens was integrated with a USB interface to a PC in a self-contained package, this time allowing for threaded adjustment of the image sensor position. Whereas the narrow DOF limits the usefulness of the full-aperture eight-reflection lens to large object distances, the arc-sectioned version can be used effectively at more moderate object distances without an overly narrow DOF or critical alignment. With the improved DOF and compact size of the arc-sectioned eight-reflection camera prototype, we demonstrated that arc-sectioning can be used to modify an existing CMR lens design for more general-purpose, size-constrained imaging applications such as portable device cameras.

The third prototype presented was a four-reflection visible-light camera. This CMR camera design has a wider FOV, better sensitivity, smaller volume and broader DOF than the eight-reflection camera prototype due to its less extreme focal length extension. This design achieves a focal length of 18.6 mm and an effective F/# of 1.15 in a track length of just 5.5 mm, providing a 17° FOV over the 1600x1200 3 µm pixels of a color image sensor.

Once again, the image sensor is integrated with a USB connection to a PC for controlling the image sensor and for processing and viewing images. The four-reflection lens and image sensor were mated together within a stainless steel package providing an extremely sensitive focus adjustment, with small changes on the order of a few microns to the nominal 300 µm spacing between the front and back lens elements. Despite a small chromatic focal shift caused by the commercially available index matching gel and imperfect alignment in the optical assembly, the self-contained and packaged four-reflection camera performed close to the optical design performance under laboratory conditions. Direct comparison using an identical image sensor mated to a large conventional lens of the same focal length showed that resolution and subjective image quality are comparable between the two lenses despite the dramatically reduced track length of the four-reflection imager lens.

After presenting the performance of the eight- and four-reflection camera prototypes, I discussed the application of PPE and post-processing (a hybrid optical-digital optimization approach) to the full-aperture CMR camera designs for extending the DOF and reducing fabrication and alignment tolerances. We introduced the cosine-form as a surface well suited to the design and fabrication of CMR imaging systems, applying joint optical-digital optimization to both the eight- and four-reflection CMR lens designs discussed previously. Although design and simulation predicted a pronounced improvement in defocus invariance for both the eight- and four-reflection PPE imagers, we found the measured PSFs to be significantly different from simulation due to fabrication error in the eight-reflection PPE lens and misalignment of the optical assembly in the four-reflection PPE imager. Nevertheless, in both cases we found a discernible improvement in defocused performance in the processed PPE images compared to those of the unmodified eight- and four-reflection cameras.

Finally, in the last chapter of this dissertation I discussed three in-progress and future applications of the CMR design approach. The first of these is an array of four-reflection cameras, which orients seven four-reflection camera elements in a hexagonal pattern to extend the FOV to >30 degrees using image stitching and processing. We have completed the first demonstration of this system; however, further effort is required to improve the consistency of the assembled lenses and to operate the system and process the images at video rates with the system's FPGA. The second future application discussed is the use of air-spaced CMR designs for ultra-light and ultra-thin IR optics. As an example I presented a preliminary design using an air-spaced four-reflection lens for use as a broadband NIR-LWIR camera lens with next-generation multiband sensors. The last in-progress application is the use of CMR optics in compressive imaging systems. As a preliminary demonstration we set up and tested a visible-light CI system using an arc-sectioned eight-reflection lens, a DMD and two large-area silicon photodetectors to measure linear projections of a test scene. Preliminary results using linear and nonlinear reconstruction algorithms have yielded promising compressed results with our system.

In summary, CMR lenses and imaging systems such as those presented in this dissertation have potential for compact imagers where long focal lengths must be contained within a limited track. Looking ahead, solid-filled CMR lenses may be suitable for inexpensive mass production using a glass molding process, provided that the design includes additional registration surfaces to aid alignment during assembly. All-reflective air-spaced CMR designs are also particularly attractive for IR imaging applications, where the low weight, potentially low cost, and broadband performance are advantageous.

Appendix A

CMR Lens Prescription Data

The following pages contain lens prescription data for

i. The eight-reflection CMR lens (from Section III.A)

ii. The four-reflection CMR lens (from Section III.C)


Eight-Reflection CMR Lens Prescription

Surf  Type      Radius     Thickness  Glass     Diameter  Conic
OBJ   STANDARD  Infinity   2505       -         291.5068  0
STO   STANDARD  Infinity   10.90811   CAF2      60        0
2     EVENASPH  -291.4691  -10.90811  MIRROR    60.32864  0
3     STANDARD  Infinity   4.05448    MIRROR    53.77869  0
4     EVENASPH  76.15415   -4.05448   MIRROR    44.03402  0
5     STANDARD  Infinity   3.753318   MIRROR    33.92581  0
6     EVENASPH  -113.5592  -3.753318  MIRROR    23.97291  0
7     STANDARD  Infinity   5.06043    MIRROR    19.27952  0
8     EVENASPH  9.475628   -5.06043   MIRROR    14.51987  0
9     STANDARD  Infinity   5          MIRROR    10.02814  0
10    STANDARD  Infinity   0.53139    F_SILICA  5.527568  0
IMA   STANDARD  Infinity   -          CAF2      6.4       0

ADDITIONAL SURFACE DATA DETAIL:

Surface STO: STANDARD
  Aperture: Circular Aperture
  Minimum Rad.: 26.76
  Maximum Rad.: 30

Surface 2: EVENASPH
  Coeff on r^2:  -0.0078162399
  Coeff on r^4:  1.8174671e-006
  Coeff on r^6:  -5.5163424e-010
  Coeff on r^8:  -6.2802457e-013
  Coeff on r^10: 1.1943454e-015
  Coeff on r^12: -4.8564287e-019
  Aperture: Circular Aperture
  Minimum Rad.: 26.6
  Maximum Rad.: 30.2

Surface 4: EVENASPH
  Coeff on r^2:  0.00035313724
  Coeff on r^4:  -2.3610748e-005
  Coeff on r^6:  5.5062915e-008
  Coeff on r^8:  -9.5661063e-011
  Coeff on r^10: 9.920049e-014
  Coeff on r^12: -4.4502127e-017
  Aperture: Circular Aperture
  Minimum Rad.: 16
  Maximum Rad.: 22.2

Surface 6: EVENASPH
  Coeff on r^2:  0.012154546
  Coeff on r^4:  1.1161788e-005
  Coeff on r^6:  -4.0336181e-008
  Coeff on r^8:  -2.8693291e-010
  Coeff on r^10: 1.2633894e-012
  Coeff on r^12: -1.1815159e-015
  Aperture: Circular Aperture
  Minimum Rad.: 7.8
  Maximum Rad.: 12.2

Surface 8: EVENASPH
  Coeff on r^2:  0.049781878
  Coeff on r^4:  0.00018099718
  Coeff on r^6:  1.5660799e-007
  Coeff on r^8:  3.2575128e-008
  Coeff on r^10: -4.2782842e-010
  Coeff on r^12: 4.1524457e-012
  Aperture: Circular Aperture
  Minimum Rad.: 3.1
  Maximum Rad.: 7.3

Surface 10: STANDARD
  Aperture: Circular Aperture
  Minimum Rad.: 0
  Maximum Rad.: 2.9

All units are in mm.


Four-Reflection Lens Prescription

Surf  Type      Radius     Thickness   Glass         Diameter  Conic
OBJ   STANDARD  Infinity   10000       -             2917.684  0
STO   STANDARD  Infinity   2.5         CAF2          28.05     0
2     STANDARD  Infinity   0.3         CARGILLE0608  28.56127  0
3     STANDARD  Infinity   4.8         CAF2          28.62166  0
4     EVENASPH  -27.49186  -4.8        MIRROR        29        0
5     STANDARD  Infinity   -0.3        CARGILLE0608  26.70879  0
6     STANDARD  Infinity   -0.9791882  CAF2          26.30066  0
7     EVENASPH  -939.5068  0.9791882   MIRROR        22.8      0
8     STANDARD  Infinity   0.3         CARGILLE0608  21.38585  0
9     STANDARD  Infinity   3.082       CAF2          21.21693  0
10    EVENASPH  -261.9748  -3.082      MIRROR        19.94     0
11    STANDARD  Infinity   -0.3        CARGILLE0608  15.94771  0
12    STANDARD  Infinity   -2.05236    CAF2          15.29119  0
13    EVENASPH  -36.51965  2.05236     MIRROR        9.62      0
14    STANDARD  Infinity   0.3         CARGILLE0608  8.097108  0
15    STANDARD  Infinity   2.85        CAF2          7.916261  0
16    STANDARD  Infinity   0.5990508   CARGILLE0608  6.3       0
IMA   STANDARD  Infinity   -           CAF2          5.807568  0

SURFACE DATA DETAIL:

Surface STO: STANDARD
  Aperture: Circular Aperture
  Minimum Rad.: 11.48
  Maximum Rad.: 14.025

Surface 4: EVENASPH
  Coeff on r^2:  0.0025853267
  Coeff on r^4:  -1.8917513e-005
  Coeff on r^6:  2.6260473e-007
  Coeff on r^8:  -8.3806942e-010
  Coeff on r^10: 5.6430763e-013
  Coeff on r^12: 4.0468849e-015
  Coeff on r^14: -8.5481374e-018
  Coeff on r^16: 0
  Aperture: Circular Aperture
  Minimum Rad.: 10.9
  Maximum Rad.: 14.5

Surface 7: EVENASPH
  Coeff on r^2:  -0.021295517
  Coeff on r^4:  0.00013882173
  Coeff on r^6:  -6.5270835e-007
  Coeff on r^8:  1.595272e-009
  Coeff on r^10: 5.7534737e-013
  Coeff on r^12: -1.6574257e-014
  Coeff on r^14: 1.1837596e-017
  Coeff on r^16: 0
  Aperture: Circular Aperture
  Minimum Rad.: 6.4
  Maximum Rad.: 11.4

Surface 10: EVENASPH
  Coeff on r^2:  -0.011108209
  Coeff on r^4:  4.8267636e-005
  Coeff on r^6:  -2.0987821e-007
  Coeff on r^8:  -6.7614074e-009
  Coeff on r^10: 8.8051454e-011
  Coeff on r^12: -4.8465219e-013
  Coeff on r^14: 1.0563096e-015
  Coeff on r^16: 0
  Aperture: Circular Aperture
  Minimum Rad.: 4.35
  Maximum Rad.: 9.97

Surface 13: EVENASPH
  Coeff on r^2:  0.0006434863
  Coeff on r^4:  -0.00017924226
  Coeff on r^6:  3.5747186e-006
  Coeff on r^8:  -8.0928741e-007
  Coeff on r^10: 4.223627e-008
  Coeff on r^12: -1.1764457e-009
  Coeff on r^14: 1.1961539e-011
  Coeff on r^16: 0
  Aperture: Circular Aperture
  Minimum Rad.: 0
  Maximum Rad.: 4.81

Surface 16: STANDARD
  Aperture: Circular Aperture
  Minimum Rad.: 0
  Maximum Rad.: 3.15

Appendix B

CMR Lens Example with air-spaced dielectrics and diffractive chromatic correction

This appendix contains a first-pass four-reflection CMR lens design utilizing a small air gap, rather than index-matching gel, between dielectric materials. Due to the refraction present at each oblique crossing of this air gap, large chromatic aberration will be present. To counter this effect, low-power surface relief diffractives have been added to the available aspheric surfaces. It may be possible to fabricate these diffractive surfaces using the same SPDT technology used to fabricate the aspheric surfaces, thereby eliminating the problematic chromatic aberration found in our fabricated four-reflection lens (Section IV.C) as well as the need for index matching gel.

In this design, careful attention was paid to the incidence angles on the dielectric-air boundaries, since total internal reflection (TIR) can be a problem. These boundaries also present a problem for optical throughput, since the oblique incidence angles cause reflection losses. To counteract this, dielectric anti-reflection coatings would need to be used. It remains to be seen whether these coatings could be effectively designed for the range of incidence angles present in the CMR design.

The layout, polychromatic incoherent MTF and spot diagrams are shown in Figure B-1.


Figure B-1: Four-reflection CMR design example using two air-spaced dielectrics and diffractive chromatic aberration correction. (a) Design layout, (b) polychromatic (486, 588, 656 nm) spot diagram, and (c) polychromatic incoherent MTF.

In this design example the diffractive surfaces were implemented using ZEMAX's binary2 surface. Examination of the diffractive surface profiles using the Phaseplot analysis shows that the minimum period used was ~4 µm. These surfaces were optimized without a minimum period constraint and are probably not physically realizable with current fabrication technology. In future designs, the minimum period of the diffractive surfaces will need to be carefully constrained to match current fabrication capabilities.

Appendix C

PPE User Defined Surface

This appendix contains a description of the cosine-form PPE surface applied to the annular aperture of the four-reflection lens as described in Section IV.C. The surface sag of the cosine-form PPE is described by

\[
\mathrm{Sag}(s,\theta) = \left( D_1 s + D_2 s^2 + D_3 s^3 + D_4 s^4 + D_5 s^5 + D_6 s^6 + D_7 s^7 + D_8 s^8 \right) G \cos(RF \cdot \theta) \tag{C.1}
\]

where s is the normalized radius given by

\[
s = \begin{cases} (r - r_{\min})/(r_{\max} - r_{\min}) & \text{for } r_{\min} \le r \le r_{\max} \\ 0 & \text{otherwise} \end{cases} \tag{C.2}
\]

Here r_min and r_max are the minimum and maximum radii of the annular aperture, r is the radial coordinate, and θ is the azimuthal angle in cylindrical coordinates. The optimized parameter values used in the four-reflection lens of Section IV.C are:

G:  0.0022476023   RF: 3              rmin: 11.48        rmax: 14.025
D1: 0.031838361    D2: -3.2203978     D3: 2.065596       D4: 0.42062161
D5: 0.9572307      D6: -0.00087750028 D7: -0.17716305    D8: -0.34930247


The cosine-form PPE surface is implemented in ZEMAX using a user defined surface (UDS). The UDS is a dynamic link library (DLL) written in C and used by ZEMAX's ray tracing engine to trace rays to and from the surface in the same way it traces to and from its built-in surfaces. The C code for the cosine-form PPE surface was written with help from James Sutter and is included below.

ZEMAX UDS C-Code:

#include <windows.h>
#include <math.h>
#include <string.h>
#include "usersurf.h"

/*
PPE_CosineForm.dll
Written by James Sutter & Eric Tremblay
September 14, 2006

This DLL models an even asphere with an added cosine-form PPE term.
The sag is given by:

    z = zstandard + zasphere + zPPE
    zstandard = (c*r^2) / (1 + (1 - ((1+k)*c^2*r^2))^1/2)
    zasphere  = a2*r^2 + a4*r^4 + ... + a12*r^12
    zPPE      = (D1*s^1 + D2*s^2 + D3*s^3 + ... + D8*s^8) * G * cos(RF * theta)

where

    s = (r - rmin)/(rmax - rmin)  if rmin <= r <= rmax
    s = 0                         otherwise
    theta = atan2(y,x)

Input parameters in the extra data editor include G, RF, rmax, rmin,
and D1, D2, ... D8.
*/

int __declspec(dllexport) APIENTRY UserDefinedSurface(USER_DATA *UD, FIXED_DATA *FD);

/* a generic Snell's law refraction routine */
int Refract(double thisn, double nextn, double *l, double *m, double *n,
            double ln, double mn, double nn);

BOOL WINAPI DllMain(HANDLE hInst, ULONG ul_reason_for_call, LPVOID lpReserved)
{
    return TRUE;
}

/* this DLL models an even asphere with an added rotationally asymmetric PPE term */
int __declspec(dllexport) APIENTRY UserDefinedSurface(USER_DATA *UD, FIXED_DATA *FD)
{
    int i;
    double alpha, power, t;
    double x, y, z;
    int loop;
    double r2, tp, dz, sag, mm, mx, my;
    double r, s, rn, sn, rmin, rmax, G, RF;
    double a[9], d[9];
    double theta, gamma, delta;
    double dgammadx, dgammady, ddeltadx, ddeltady, dsdx, dsdy;
    double dzwfsdx, dzwfsdy;

    switch(FD->type)
    {


        case 0:
            /* ZEMAX is requesting general information about the surface */
            switch(FD->numb)
            {
                case 0:
                    /* ZEMAX wants to know the name of the surface */
                    /* do not exceed 12 characters */
                    strcpy(UD->string,"PPE_CosineForm");
                    break;
                case 1:
                    /* ZEMAX wants to know if this surface is rotationally symmetric */
                    /* it is not, so return a null string */
                    UD->string[0] = '\0';
                    break;
                case 2:
                    /* ZEMAX wants to know if this surface is a gradient index media */
                    /* it is not, so return a null string */
                    UD->string[0] = '\0';
                    break;
            }
            break;
        case 1:
            /* ZEMAX is requesting the names of the parameter columns */
            /* the value FD->numb will indicate which value ZEMAX wants */
            /* returning a null string indicates that the parameter is unused */
            switch(FD->numb)
            {
                case 1: strcpy(UD->string, "A2 on r^2");   break;
                case 2: strcpy(UD->string, "A4 on r^4");   break;
                case 3: strcpy(UD->string, "A6 on r^6");   break;
                case 4: strcpy(UD->string, "A8 on r^8");   break;
                case 5: strcpy(UD->string, "A10 on r^10"); break;
                case 6: strcpy(UD->string, "A12 on r^12"); break;
                default: UD->string[0] = '\0'; break;
            }
            break;
        case 2:
            /* ZEMAX is requesting the names of the extra data columns */
            /* the value FD->numb will indicate which value ZEMAX wants */
            /* returning a null string indicates that the extradata value is unused */
            switch(FD->numb)
            {
                case 1:  strcpy(UD->string, "G");         break;
                case 2:  strcpy(UD->string, "RF");        break;
                case 3:  strcpy(UD->string, "r min");     break;
                case 4:  strcpy(UD->string, "r max");     break;
                case 5:  strcpy(UD->string, "D1 on s^1"); break;
                case 6:  strcpy(UD->string, "D2 on s^2"); break;
                case 7:  strcpy(UD->string, "D3 on s^3"); break;
                case 8:  strcpy(UD->string, "D4 on s^4"); break;
                case 9:  strcpy(UD->string, "D5 on s^5"); break;
                case 10: strcpy(UD->string, "D6 on s^6"); break;
                case 11: strcpy(UD->string, "D7 on s^7"); break;
                case 12: strcpy(UD->string, "D8 on s^8"); break;
                default: UD->string[0] = '\0'; break;
            }
            break;
        case 3:
            /* ZEMAX wants to know the sag of the surface */
            /* if there is an alternate sag, return it as well */
            /* otherwise, set the alternate sag identical to the sag */
            /* The sag is sag1, alternate is sag2. */

            /* aspheric terms */
            for (i = 1; i <= 6; i++)
            {
                a[i] = FD->param[i];   // even terms
            }
            /* extra data editor */
            G = FD->xdata[1];
            RF = FD->xdata[2];
            rmin = FD->xdata[3];
            rmax = FD->xdata[4];

            /* PPE aspheric terms */
            for (i = 1; i <= 8; i++)
            {
                d[i] = FD->xdata[i+4];   // odd and even terms
            }

            UD->sag1 = 0.0;
            UD->sag2 = 0.0;

            x = UD->x;
            y = UD->y;

            r2 = x*x + y*y;
            alpha = 1.0 - (1.0+FD->k)*FD->cv*FD->cv*r2;
            if (alpha < 0) return(-1);

            UD->sag1 = (FD->cv*r2)/(1.0 + sqrt(alpha));

            /* even aspheric terms */
            rn = r2;
            for (i = 1; i <= 6; i++) {
                if (a[i] != 0.0) {


                    UD->sag1 += a[i] * rn;
                }
                rn *= r2;
            }

            /* evaluate angle term */
            theta = atan2(y,x);
            gamma = G * cos(RF * theta);
            delta = 0.0;
            r = sqrt(r2);
            if (r >= rmin && r <= rmax && rmax > rmin)
            {
                s = (r-rmin)/(rmax-rmin);
                if (gamma != 0.0)
                {
                    /* evaluate zone terms and add to sag */
                    sn = s;
                    for (i = 1; i <= 8; i++)
                    {
                        if (d[i] != 0.0)
                        {
                            delta += d[i] * sn;
                        }
                        sn *= s;
                    }
                    UD->sag1 += delta * gamma;
                }
            }
            UD->sag2 = UD->sag1;
            break;
        case 4:
            /* ZEMAX wants a paraxial ray trace to this surface */
            /* x, y, z, and the optical path are unaffected, at least for this surface type */
            /* for paraxial ray tracing, the return z coordinate should always be zero */
            /* paraxial surfaces are always planes with the following normals */

            UD->ln = 0.0;
            UD->mn = 0.0;
            UD->nn = -1.0;

            power = (FD->n2 - FD->n1)*(FD->cv + 2.0*FD->param[1]);

            if ((UD->n) != 0.0)
            {
                (UD->l) = (UD->l)/(UD->n);
                (UD->m) = (UD->m)/(UD->n);

                (UD->l) = (FD->n1*(UD->l) - (UD->x)*power)/(FD->n2);
                (UD->m) = (FD->n1*(UD->m) - (UD->y)*power)/(FD->n2);

                /* normalize */
                (UD->n) = sqrt(1/(1 + (UD->l)*(UD->l) + (UD->m)*(UD->m)));
                /* de-paraxialize */
                (UD->l) = (UD->l)*(UD->n);
                (UD->m) = (UD->m)*(UD->n);
            }
            break;
        case 5:
            /* ZEMAX wants a real ray trace to this surface */

            /* iterative intercept */

            /* get aspheric terms */
            for (i = 1; i <= 6; i++)
            {
                a[i] = FD->param[i];   // even terms
            }
            /* get extra data editor */
            G = FD->xdata[1];
            RF = FD->xdata[2];
            rmin = FD->xdata[3];
            rmax = FD->xdata[4];


            /* get PPE aspheric terms */
            for (i = 1; i <= 8; i++)
            {
                d[i] = FD->xdata[i+4];   // odd and even terms
            }

            /* make sure we do at least 1 loop */
            t = 100.0;
            tp = 0.0;
            x = UD->x;
            y = UD->y;
            z = UD->z;
            loop = 0;

            while (fabs(t) > 1e-10)
            {
                /* First, compute the sag using whatever the surface sag
                   expression is, given the x and y starting points.  This
                   block of code will change depending upon the surface
                   shape; the rest of this iteration is typically common
                   to all surface shapes. */

                r2 = x * x + y * y;
                alpha = 1.0 - (1.0 + FD->k)*FD->cv*FD->cv*r2;
                if (alpha < 0.0) return(-1);
                sag = (FD->cv*r2)/(1 + sqrt(alpha));

                /* now the aspheric terms */
                rn = r2;
                for (i = 1; i <= 6; i++)
                {
                    if (a[i] != 0.0)
                    {
                        sag += a[i] * rn;
                    }
                    rn *= r2;
                }

                /* evaluate angle term */
                theta = atan2(y,x);
                gamma = G * cos(RF * theta);
                delta = 0.0;
                r = sqrt(r2);
                if (r >= rmin && r <= rmax && rmax > rmin)
                {
                    s = (r-rmin)/(rmax-rmin);
                    if (gamma != 0.0)
                    {
                        /* evaluate zone terms and add to sag */
                        sn = s;
                        for (i = 1; i <= 8; i++)
                        {
                            if (d[i] != 0.0)
                            {
                                delta += d[i] * sn;
                            }
                            sn *= s;
                        }
                        sag += delta * gamma;
                    }
                }

                /* okay, now with sag in hand, how far are we away in z? */
                dz = sag - z;

                /* now compute how far along the z axis this is */
                /* note this will crash if n == 0!! */
                t = dz / (UD->n);

                /* propagate the additional "t" distance */
                x += UD->l*t;
                y += UD->m*t;
                z += UD->n*t;

                /* add in the optical path */
                tp += t;

                /* prevent infinite loop if no convergence */
                loop++;
                if (loop > 1000) return(-1);
            }

            UD->path = tp;
            UD->x = x;
            UD->y = y;
            UD->z = z;

            /* now do the normals */
            /* this prevents divide by zero for the normals */
            if (fabs(x) < 1e-11)
            {
                x = 1e-11;
                theta = atan2(y,x);
                gamma = G * cos(RF * theta);
            }

            r2 = x * x + y * y;
            if (r2 == 0)
            {
                mx = 0.0;
                my = 0.0;
            }
            else
            {
                alpha = 1.0 - (1.0+FD->k)*FD->cv*FD->cv*r2;
                if (alpha < 0) return(-1);   /* ray misses */
                alpha = sqrt(alpha);

                mm = (FD->cv/(1.0+alpha))*(2.0 + (FD->cv*FD->cv*r2*(1.0+FD->k))/(alpha*(1.0+alpha)));

                /* now the aspheric terms */
                rn = 1.0;
                for (i = 1; i <= 6; i++)
                {
                    if (a[i] != 0.0)
                    {
                        mm += (2.0 * i) * a[i] * rn;
                    }
                    rn *= r2;
                }

                mx = x * mm;
                my = y * mm;
            }

            /* now the PPE zone terms */
            if (gamma != 0.0)
            {
                r = sqrt(r2);
                if (r >= rmin && r <= rmax && rmax > rmin)
                {
                    /* do each term 1 through 8 */
                    s = (r-rmin)/(rmax-rmin);
                    dsdx = (1.0/(rmax-rmin))*(x/r);
                    dsdy = (1.0/(rmax-rmin))*(y/r);
                    /* calculate ddeltadx */
                    sn = 1.0;
                    ddeltadx = 0.0;
                    for (i = 1; i <= 8; i++)
                    {
                        if (d[i] != 0.0)
                        {
                            ddeltadx += d[i] * (i) * sn * dsdx;
                        }
                        sn *= s;
                    }
                    /* calculate ddeltady */
                    sn = 1.0;
                    ddeltady = 0.0;
                    for (i = 1; i <= 8; i++)
                    {
                        if (d[i] != 0.0)
                        {


                            ddeltady += d[i] * (i) * sn * dsdy;
                        }
                        sn *= s;
                    }

                    /* calculate dgammadx */
                    dgammadx = G * -sin(RF*theta) * RF / (1.0 + (y*y)/(x*x)) * -y/(x*x);

                    /* calculate dgammady */
                    dgammady = G * -sin(RF*theta) * RF / (1.0 + (y*y)/(x*x)) / x;

                    dzwfsdx = ddeltadx * gamma + dgammadx * delta;
                    dzwfsdy = ddeltady * gamma + dgammady * delta;

                    mx += dzwfsdx;
                    my += dzwfsdy;
                }
            }

            UD->nn = -sqrt(1.0/(1.0 + (mx*mx) + (my*my)));
            UD->ln = -mx*UD->nn;
            UD->mn = -my*UD->nn;
            if (Refract(FD->n1, FD->n2, &UD->l, &UD->m, &UD->n, UD->ln, UD->mn, UD->nn)) return(-FD->surf);
            break;
        case 6:
            /* ZEMAX wants the index, dn/dx, dn/dy, and dn/dz at the given x, y, z */
            /* this is only required for gradient index surfaces, so return dummy values */
            UD->index = FD->n2;
            UD->dndx = 0.0;
            UD->dndy = 0.0;
            UD->dndz = 0.0;
            break;
        case 7:
            /* ZEMAX wants the "safe" data */
            /* this is used by ZEMAX to set the initial values for all parameters and extra data */
            /* when the user first changes to this surface type */
            /* this is the only time the DLL should modify the data in the FIXED_DATA FD structure */
            for (i = 1; i <= 8; i++) FD->param[i] = 0.0;
            FD->xdata[1] = 0.0;
            FD->xdata[2] = 1.0;
            FD->xdata[3] = 0.0;
            FD->xdata[4] = 1.0;
            for (i = 5; i <= 200; i++) FD->xdata[i] = 0.0;
            break;
    }
    return 0;
}

int Refract(double thisn, double nextn, double *l, double *m, double *n,
            double ln, double mn, double nn)
{
    double nr, cosi, cosi2, rad, cosr, gamma;
    if (thisn != nextn)
    {
        nr = thisn / nextn;
        cosi = fabs((*l) * ln + (*m) * mn + (*n) * nn);
        cosi2 = cosi * cosi;
        if (cosi2 > 1) cosi2 = 1;
        rad = 1 - ((1 - cosi2) * (nr * nr));
        if (rad < 0) return(-1);
        cosr = sqrt(rad);
        gamma = nr * cosi - cosr;
        (*l) = (nr * (*l)) + (gamma * ln);
        (*m) = (nr * (*m)) + (gamma * mn);
        (*n) = (nr * (*n)) + (gamma * nn);
    }
    return 0;
}

Appendix D

Aberration Mapping in Highly Obscured Annular Systems

The large obscuration ratio present in CMR lenses creates some interesting transformations of two third-order aberrations: spherical aberration and coma come to closely resemble defocus and tilt, respectively. This can be seen from the sets of rays displayed in Figure D-1.


Figure D-1: Ray trace comparison of aberrations in a 90% obscured imaging system (dark lines) versus those in a conventional imaging system (gray lines). (a) Spherical aberration translates into defocus. (b) Coma translates into tilt.

A mathematical argument for aberration mapping can be made by evaluating the inner product of spherical aberration with defocus and the inner product of coma with tilt over the unit annulus. The Fringe Zernike polynomial $Z_9 = 6r^4 - 6r^2 + 1$ represents spherical aberration and the Fringe Zernike polynomial $Z_4 = 2r^2 - 1$ represents defocus. These two aberration functions can be compared using inner products over the unit annulus as given in Equation (D.1).

\[
\frac{\langle z_9, z_4 \rangle}{\langle z_4, z_4 \rangle}
= \frac{\int_0^{2\pi}\!\int_a^1 Z_9 Z_4 \, r \, dr \, d\theta}{\int_0^{2\pi}\!\int_a^1 Z_4 Z_4 \, r \, dr \, d\theta}
= \frac{3a^2 - 9a^4 + 9a^6}{1 - 2a^2 + 4a^4} \tag{D.1}
\]

Here the constant a represents the obscuration ratio of the system and can range from zero (no obscuration) to one (complete obscuration). At the extreme values of a, two observations can be made. First, for a value of zero, Equation (D.1) is zero, indicating that these two functions are orthogonal over the unit circle. This is a well known property of Zernike polynomials [88][89].

However, in the limit where a approaches one, Equation (D.1) is unity, indicating similarity between the functions. A similar analysis can be performed for coma and tilt. The Fringe Zernike representation of coma is given by $Z_7 = (3r^3 - 2r)\cos\theta$, and tilt by $Z_2 = r\cos\theta$. These two aberrations are compared in Equation (D.2).

\[
\frac{\langle z_7, z_2 \rangle}{\langle z_2, z_2 \rangle}
= \frac{\int_0^{2\pi}\!\int_a^1 Z_7 Z_2 \, r \, dr \, d\theta}{\int_0^{2\pi}\!\int_a^1 Z_2 Z_2 \, r \, dr \, d\theta}
= \frac{2a^4}{1 + a^2} \tag{D.2}
\]

Again, in the limit as a approaches one, Equation (D.2) approaches unity, displaying the similarity between coma and tilt in the limit of extremely thin annuli.

Implications to PPE Annular Systems

In general, PPE systems are designed to be invariant to defocus over a given operating range. On the other hand, certain aberrations, such as coma, tend to degrade the performance of such systems by reducing the modulation at all values of defocus. The aberration mapping described in the previous section makes the CMR design form a particularly appealing candidate for PPE in terms of the potential to correct for aberrations in the system, because coma translates into simple tilt, no longer producing considerable degradation to the incoherent MTF. Likewise, spherical aberration translates into defocus, which is readily corrected for using PPE.

This appendix, in part, is a reprint of the material as it appears in E. J. Tremblay, J. Rutkowski, I. Tamayo, P. E. X. Silveira, R. A. Stack, R. L. Morrison, M. A. Neifeld, Y. Fainman, and J. E. Ford, "Relaxing the alignment and fabrication tolerances of thin annular folded imaging systems using wavefront coding," Appl. Opt. 46, 6751-6758 (2007). This analysis was contributed to the above publication by Joel Rutkowski and is therefore included here, rather than in the body of the dissertation.

Bibliography

[1] D. Graham-Rowe, “Liquid lenses make a splash,” Nat. Photonics, Sept. 2006.

[2] S. Kuiper and B. H. W. Hendriks, “Variable-focus liquid lens for miniature cameras,” Appl. Phys. Lett. 85, 1128-1130 (2004).

[3] R. Kuwano, T. Tokunaga, Y. Otani, and N. Umeda, “Liquid pressure varifocus lens,” Opt. Rev. 12, 405-408 (2005).

[4] J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, "Thin Observation Module by Bound Optics (TOMBO): Concept and Experimental Verification," Appl. Opt. 40, 1806-1813 (2001).

[5] M. Shankar, R. Willett, N. Pitsianis, T. Schulz, R. Gibbons, R. Te Kolste, J. Carriere, C. Chen, D. Prather, and D. Brady, "Thin infrared imaging systems through multichannel sampling," Appl. Opt. 47, B1-B10 (2008).

[6] M. Sekita, “Photo taking optical system and optical device,” US Patent 5,917,662 (1999).

[7] T. Togino, “Image-forming optical system,” US Patent 6,829,113B2 (2004).

[8] D. W. Davis, M. Walter, M. Takahashi and T. Masaki, “Machining and metrology systems for free-form laser printer mirrors,” Sadhana, 28, 925-932 (2003).

[9] H. –S. Jeong, H. –S. Yoo, S. –H. Lee, H. –R. Oh, “Low profile optic design for mobile camera using dual freeform reflective lenses,” Proc. SPIE 6288, 628808 (2006).

[10] W. J. Smith, Modern Optical Engineering (McGraw-Hill, 2000), Chap. 10.

[11] T. Koyama, “Optics in digital still cameras,” in Image Sensors and Signal Processing for Digital Still Cameras, J. Nakamura, ed. (Taylor & Francis, 2005), pp. 21-42.

[12] J. Govier, “Aspheric optics: technological advances ease use of aspherics,” Laser Focus World, Sept. 2005.


[13] W. Kordonski, A. Shorey and A. Sekeres, “New magnetically assisted finishing method: material removal with magnetorheological fluid jet,” in Optical Manufacturing and Testing V, H. Philip Stahl, ed., Proc. SPIE 5180, 107-114 (2003).

[14] E. J. Tremblay, R. A. Stack, R. L. Morrison, and J. E. Ford, "Ultrathin cameras using annular folded optics," Appl. Opt. 46, 463-471 (2007).

[15] E. J. Tremblay, J. Rutkowski, I. Tamayo, P. E. X. Silveira, R. A. Stack, R. L. Morrison, M. A. Neifeld, Y. Fainman, and J. E. Ford, "Relaxing the alignment and fabrication tolerances of thin annular folded imaging systems using wavefront coding," Appl. Opt. 46, 6751-6758 (2007).

[16] E. J. Tremblay, R. A. Stack, R. L. Morrison, J. H. Karp and J. E. Ford, “Ultrathin four- reflection imager,” Appl. Opt. doc. ID 101823 (posted 4 November 2008, in press).

[17] J. Mait, R. Athale, and J. van der Gracht, "Evolutionary paths in imaging and recent trends," Opt. Express 11, 2093-2101 (2003).

[18] W. T. Cathey, and E. Dowski, "A new paradigm for imaging systems," Appl. Opt. 41, 6080-6092 (2002).

[19] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inform. Theory, 52, 489-509, (2006).

[20] D. L. Donoho, "Compressed sensing," IEEE Trans. Inform. Theory, 52, 1289-1306 (2006).

[21] D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, "A new compressive imaging camera using optical domain compression," Proc. SPIE 6065, 606509 (2006).

[22] M. A. Neifeld and P. Shankar, "Feature-Specific Imaging," Appl. Opt. 42, 3379-3389 (2003).

[23] H. S. Pal, D. Ganotra, and M. A. Neifeld, "Face recognition by using feature-specific imaging," Appl. Opt. 44, 3784-3794 (2005).

[24] R. E. Parks, “Fabrication of infrared optics,” Opt. Eng. 33, 685-691(1994).

[25] Schott Optical Glass Catalog (Schott Glass Technologies Inc., 2000).

[26] G. C. Holst, Common Sense Approach to Thermal Imaging (JCD Publishing & SPIE, 2000), pp. 97-100.

[27] W. J. Smith, Modern Lens Design (McGraw-Hill, 2005), Chap. 18.

[28] D. Korsch, Reflective Optics (Academic, 1991).


[29] V. Dragonov and D. G. James, “Compact telescope for free-space communications,” in Current Developments in Lens Design and Optical Engineering III, Robert E. Fischer, Warren J. Smith, R. Barry Johnson, eds., Proc. SPIE 4767, 151-158 (2002).

[30] M.W. Haney, “Performance scaling in flat imagers,” Appl. Opt. 45, 2901-2910 (2006).

[31] J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 1968), pp 61,113.

[32] E. L. O’Neill, “Transfer function for an annular aperture,” J. Opt. Soc. Am. 46, 285-288 (1956).

[33] V. N. Mahajan, “Imaging with obscured pupils,” Opt. Lett. 1, 128-129 (1977).

[34] J. E. Harvey and C. Ftaclas, "Diffraction effects of telescope secondary mirror spiders on various image-quality criteria," Appl. Opt. 34, 6337-6349 (1995).

[35] A. Rosenfeld and A. C. Kak, Digital picture processing, volume 1 (Academic Press, Inc., Orlando, 1982), pp. 276-281.

[36] Aptina, “CMOS Image Sensors,” http://www.aptina.com/products/image_sensors/.

[37] Omnivision, “2008 Fall Product Guide,” http://www.ovt.com/data/userguide/OmniVision_ProductGuide.pdf.

[38] W. J. Smith, Modern Optical Engineering (McGraw-Hill, 2000), pp. 154-157.

[39] E. R. Dowski and W. T. Cathey, "Extended depth of field through wave-front coding," Appl. Opt. 34, 1859-1866 (1995).

[40] P. B. Catrysse and B. A. Wandell, "Optical efficiency of image sensor pixels," J. Opt. Soc. Am. A, 19, 1610-1620 (2002).

[41] G. Agranov, V. Berezin and R. H. Tsai, "Crosstalk and microlens study in a color CMOS image sensor," IEEE Trans. Electron Devices 50, 4-11 (2003).

[42] J. Adkisson, J. Gambino, T. Hoague, M. Jaffe, J. Kyan, R. Leidy, D. McGrath, R. Rassel, D. Sackett and C. V. Stancampiano, "Optimization of Cu interconnect layers for 2.7 µm pixel image sensor technology: fabrication, modeling, and optical results," in Proceedings of IEEE Workshop on CCD and Advanced Image Sensors (IEEE, 2005), pp. 1-4.

[43] J. Janesick, "Lux transfer: complementary metal oxide semiconductors versus charge-coupled devices," Opt. Eng. 41, 1203-1215 (2002).

[44] ZEMAX, http://www.zemax.com/.

[45] ZEMAX Optical Design Program User’s Guide (ZEMAX Development Corporation, 2008).

[46] M. Laikin, Lens Design (Marcel Dekker, 1991), pp. 73-78.

[47] J. Hall, "F-Number, numerical aperture, and depth of focus," in Encyclopedia of Optical Engineering (Marcel Dekker, 2003), pp. 556-559.

[48] H. B. Wach, E. R. Dowski, Jr., and W. T. Cathey, "Control of Chromatic Focal Shift Through Wave-Front Coding," Appl. Opt. 37, 5359-5367 (1998).

[49] M. Dirjish, "BSI technology flips digital imaging upside down," http://electronicdesign.com/Articles/Index.cfm?AD=1&ArticleID=19160.

[50] http://www.ispoptics.com/.

[51] D. H. Kelly, "Spatial Frequency, Bandwidth, and Resolution," Appl. Opt. 4, 435-435 (1965).

[52] T. Ang, Dictionary of Photography and Digital Imaging: The Essential Reference for the Modern Photographer (Watson–Guptill, 2002).

[53] R. Prescott, “Cassegrainian baffle design,” Appl. Opt. 7, 479-481 (1968).

[54] C. Leinert and D. Klüppelberg, “Stray light suppression in optical space experiments,” Appl. Opt. 13, 556-564 (1974).

[55] G. Peterson, “Stray light calculation methods with optical ray trace software,” Proc. SPIE 3780, 132-137 (1999).

[56] K. Kubala, E. Dowski, J. Kobus, and R. Brown, “Aberration and error invariant space telescope systems,” in Novel Optical Systems Design and Optimization VII, J. M. Sasian, R. J. Koshel, P. K. Manhart, and R. C. Juergens, eds., Proc. SPIE 5524, 54-65 (2004).

[57] K. Kubala, E. Dowski, and W. T. Cathey, “Reducing complexity in computational imaging systems,” Opt. Express 11, 2102-2108 (2003).

[58] W. Chi and N. George, “Electronic imaging using a logarithmic asphere,” Opt. Lett. 26, 875-877 (2001).

[59] S. Prasad, T. C. Torgersen, V. P. Pauca, R. J. Plemmons, and J. van der Gracht, “Engineering the pupil phase to improve image quality,” in Visual Information Processing XII, Z. Rahman, R. Schowengerdt, and S. Reichenbach, eds., Proc. SPIE 5108, 1-12 (2003).

[60] S. Prasad, V. P. Pauca, R. J. Plemmons, T. C. Torgersen, and J. van der Gracht, "Pupil-phase optimization for extended focus, aberration corrected imaging systems," in Advanced Signal Processing Algorithms, Architectures, and Implementations XIV, F. T. Luk, ed., Proc. SPIE 5559, 335-345 (2004).

[61] B. R. Frieden, “Image enhancement and restoration,” in Topics in Applied Physics, Vol. 6 of Picture Processing and Digital Filtering, T.S. Huang, ed. (Springer-Verlag, New York, 1979), pp. 177-248.

[62] H. C. Andrews and B. R. Hunt, Digital Image Restoration (Prentice-Hall, New Jersey, 1977), Chap. 8, pp. 147-152.

[63] J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright, "Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions," SIAM Journal on Optimization 9, 112-147 (1998).

[64] T. F. Coleman and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization 6, 418-445 (1996).

[65] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science 220, 671-680 (1983).

[66] Y. Gong, D. LaRose and G. Proietti, “A robust image mosaicing technique capable of creating integrated panoramas,” in IEEE 1999 International Symposium on Cyber Worlds (IEEE, 1999), pp. 14-16.

[67] E. Friedman and J. L. Miller, Photonics Rules of Thumb (McGraw-Hill, 2004), p. 258.

[68] E. Candès and T. Tao, "Near optimal signal recovery from random projections: Universal encoding strategies?" IEEE Trans. Inf. Theory 52, 5406-5425 (2006).

[69] D. S. Taubman and M. W. Marcellin, JPEG 2000: Image Compression Fundamentals, Standards and Practice (Kluwer, Norwell, MA, 2001).

[70] M. A. Neifeld and P. Shankar, "Feature-Specific Imaging," Appl. Opt. 42, 3379-3389 (2003).

[71] M. A. Neifeld and J. Ke, "Optical architectures for compressive imaging," Appl. Opt. 46, 5293-5303 (2007).

[72] H. Pal and M. Neifeld, "Multispectral principal component imaging," Opt. Express 11, 2118-2125 (2003).

[73] H. S. Pal, D. Ganotra, and M. A. Neifeld, "Face recognition by using feature-specific imaging," Appl. Opt. 44, 3784-3794 (2005).

[74] P. Baheti and M. A. Neifeld, “Feature-specific structured imaging,” Appl. Opt. 45, 7382-7391 (2006).

[75] J. Ke, M. Stenner, and M. A. Neifeld, “Minimum reconstruction error in feature-specific imaging,” in Visual Information Processing XIV, Proc. SPIE 5817 (2005).

[76] J. Ke, P. Shankar, and M. A. Neifeld, “Distributed imaging using an array of compressive cameras,” Opt. Commun., preprint (2008).

[77] D. J. Brady, N. P. Pitsianis, and X. Sun, “Sensor-layer image compression based on the quantized cosine transform,” in Visual Information Processing XIV, Proc. SPIE 5817 (2005).

[78] A. Portnoy, X. Sun, T. Suleski, M. A. Fiddy, M. R. Feldman, N. P. Pitsianis, D. J. Brady, and R. D. TeKolste, “Compressive imaging sensors,” in Intelligent Integrated Microsystems, Proc. SPIE 6232 (2006).

[79] J. Haupt and R. Nowak, "Signal reconstruction from noisy random projections," IEEE Trans. Inf. Theory 52, 4036-4048 (2006).

[80] M. F. Duarte, M. A. Davenport, M. B. Wakin, and R. G. Baraniuk, "Sparse signal detection from incoherent projections," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2006), Vol. 3 (IEEE, 2006).

[81] E. Candès, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Commun. Pure Appl. Math. 59, 1207-1223 (2006).

[82] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Rev. 43, 129-159 (2001).

[83] N. J. A. Sloane and M. Harwit, "Masks for Hadamard transform optics, and weighing designs," Appl. Opt. 15, 107-114 (1976).

[84] H. C. Andrews and B. R. Hunt, Digital Image Restoration, Prentice-Hall Signal Processing Series (Prentice-Hall, 1977).

[85] I. T. Jolliffe, Principal Component Analysis (Springer, 2002).

[86] D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, "A new compressive imaging camera using optical domain compression," Proc. SPIE 6065, 606509 (2006).

[87] M. R. Banham and A. K. Katsaggelos, "Digital image restoration," IEEE Signal Process. Mag. 14, 24-41 (1997).

[88] S. N. Bezdidko, "The Use of Zernike Polynomials in Optics," Sov. J. Opt. Techn. 41, 425 (1974).

[89] A. B. Bhatia and E. Wolf, "On the Circle Polynomials of Zernike and Related Orthogonal Sets," Proc. Cambridge Phil. Soc. 50, 40 (1954).