Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

Total Pages: 16

File Type: PDF, Size: 1020 KB

Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

Andrew D. Kerr and Douglas A. Stow

Storm Hall 307B, Department of Geography, San Diego State University, San Diego, CA 92182-4493 ([email protected]).

Photogrammetric Engineering & Remote Sensing, Vol. 84, No. 3, March 2018, pp. 149–158. 0099-1112/17/149–158. doi: 10.14358/PERS.84.3.149. © 2018 American Society for Photogrammetry and Remote Sensing.

Abstract

Our objectives are to analyze the radiometric characteristics and best practices for maximizing radiometric fidelity of digital single lens reflex (DSLR) cameras for aerial image-based change detection. Control settings, exposure values, white balance, light metering, ISO, and lens aperture are evaluated for several bi-temporal imagery datasets. These variables are compared for their effects on dynamic range, intra-frame brightness variation, acuity, temporal consistency, and detectability of simulated cracks. Testing was conducted from a terrestrial, rather than airborne, platform, due to the large number of images collected, and to minimize inter-image misregistration. Results point to exposure biases in the range of −0.7 or −0.3 EV (i.e., slightly less than the auto-exposure selected levels) being preferable for change detection and noise minimization, by achieving a balance between full dynamic range and high acuity. DSLR cameras exhibit high radiometric fidelity and can effectively support low-cost aerial image-based change detection, such as for post-hazard damage assessment.

Introduction

Consumer-oriented digital single lens reflex (DSLR) cameras of ever increasing spatial resolution and quality are an economical and readily available sensor option for airborne remote sensing. Increasing coverage per image frame and higher image fidelity, along with low barriers to entry (i.e., affordability and ease of use) inherent to DSLR cameras, make them viable for many remote sensing applications. Consumer DSLR cameras have features and incorporate processes that differ from traditional aerial cameras, as they are designed with photometry in mind, rather than the high radiometric fidelity associated with many aerial imaging sensors.

Photometry seeks to measure light as closely as possible to how it is perceived by the human eye, whereas radiometry seeks to normalize or even measure the absolute spectral radiance. Having photometric accuracy as the primary concern of camera manufacturers has led to the development of several photometric materials and processes, including specific lens or sensor spectral coatings, and onboard image processing steps, to achieve greater photometric accuracy, sometimes at the expense of radiometric linearity or greater radiometric accuracy (Lebourgeois et al., 2008).

A dearth of research articles exists exploring remote sensing based on DSLR digital cameras (Clemens, 2015). These studies have tended to focus on vegetation remote sensing (Dean et al., 2000; Ahrends et al., 2008; Lebourgeois et al., 2008; Richardson et al., 2009), change detection for wide area aerial surveillance (Coulter and Stow, 2008), and generation of color indices for soil identification (Levin et al., 2005). In investigating these topics, past studies have outlined the benefits of replicating solar ephemeris (Coulter et al., 2012; Ahrends et al., 2008), specific shutter and aperture settings (Ahrends et al., 2008; Lebourgeois et al., 2008), using RAW files (Dean et al., 2000; Coulter et al., 2012; Ahrends et al., 2008; Lebourgeois et al., 2008), vignetting abatement (Dean et al., 2000), and maintaining intra-frame white balance (WB) consistency (Richardson et al., 2009; Levin et al., 2005).

Through this study we seek to identify, and determine how to compensate and account for, the photometric aspects of image capture and postprocessing with DSLR cameras, to achieve high radiometric fidelity within and between digital multi-temporal images. The overall goal is to minimize the effects of these factors on the radiometric consistency of multi-temporal images and inter-frame brightness, and to capture images with high acuity and dynamic range.

The applications contexts for conducting this study are detecting post-hazard damage and monitoring changes in urban infrastructure. The technical context is Repeat Station Imaging (RSI), where image capture over time occurs at nearly identical station points in the sky, and image registration and change detection are performed on a frame-by-frame basis (Coulter et al., 2003; Stow et al., 2003). Due to the dynamic nature of urban scenes, a challenge is to minimize noise sources due to variations in illumination characteristics, scene conditions and features, sensor variability, and apparent image motion (AIM) blur, through the selection of an appropriate shutter speed. The goal is to automate image processing and analysis as much as possible, but with the expectation that a human analyst will make the final analysis of whether damage or other land surface changes have occurred.

This study was conducted in such a way that the characteristics of the lens used, such as the specific or various focal length(s) and relative aperture(s), had minimal bearing on the replicability and adaptability to other users; it can be performed without specialized equipment, and can be tailored to specific collection parameters. The study design also should allow for future camera models to be tested, provided that they have a variable aperture lens, Bayer array, and image sensor similar to a CMOS sensor. We captured high quality images with varying exposure parameters using consumer grade DSLR cameras, and the resultant image sets were utilized to empirically address the following research questions, in the context of RSI-based change detection.

1. With what combination of exposure settings can the dynamic range of image brightness values be maximized, while achieving high image acuity?
2. What is the characteristic spatial trend in brightness response within image frames, how do these trends vary with different exposure parameters, and how well can within-image trends be balanced or normalized?
3. How can between-image differences in radiometric brightness response due to noise be minimized, and those due to signal be maximized, for multi-temporal RSI pairs, by proper selection of exposure parameters?

We find no previous studies in the literature that evaluate the radiometric characteristics of consumer grade DSLR cameras for capturing aerial imagery in support of change detection applications, nor for determining best practices for optimal radiometric fidelity through optimization of camera control parameters. In addition, this study is novel in both its ground-based image capture procedures that emulate airborne imaging, including simulation of vertical path radiance in the horizontal plane and near replication of solar geometry, and its damage change detection metrics, which were developed for measuring change detection using a type of signal-to-noise metric, and two quantitative change detection evaluation methods based on pixel transects/profiles.

Methods

Experimental Variables and Image Capture Strategy

The methods chosen for this study were designed for collection of imagery from a terrestrial, rather than an airborne, platform. The rationale is that for each image collection of 86 unique camera exposure setting combinations, the capture location [...]

[...] atmospheric path radiance, and transmittance conditions. The sites chosen for these image collections (Figures 1 and 2) have urban scene composition, and an image capture azimuth nearly parallel to the solar azimuth, so as to create illumination and shadow conditions similar to those for aerial imaging. Figure 3 is the camera and tripod set-up for stationary oblique imaging. Additionally, images were collected on a clear and a hazy day, so the dataset contains two different atmospheric conditions.

The experimental variables that pertain to the research questions in this study are: image exposure value and light meter measurement zones, for automatically estimating the appropriate exposure level for particular ambient light magnitudes; aperture size and shutter speed, for controlling the amount of light reaching the detector; ISO (a carry-over acronym from film speed metrics of the International Organization for Standardization), for controlling the sensitivity of the detector; and white balance, for inter-band radiometric consistency. Specific variables pertain to each of the research questions, as not all of the variables are relevant for addressing each research question.

The first research question pertains to maximizing dynamic range of a given image frame, while achieving high image acuity, for which the image exposure value (EV), light meter measurement zones, relative aperture, shutter speed, and ISO are the five relevant variables. The frame-specific EV is a major determinant of dynamic range, as it sets the camera exposure controls to yield an image that is not over- or under-exposed, thus ensuring a maximized dynamic range in the resulting image. The light meter measurement zone configuration may have an impact on a maximized dynamic range as well, as the "overall" and "center weighted" settings for determining [...]
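The exposure biases discussed in the abstract map directly onto standard APEX exposure-value arithmetic, EV = log2(N²/t). As a hedged illustration (this sketch is ours, not code or data from the paper), a negative EV bias shortens the metered shutter time while aperture and ISO are held fixed:

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """APEX exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

def apply_ev_bias(shutter_s: float, bias_ev: float) -> float:
    """Shutter time after exposure compensation: a negative bias
    under-exposes relative to the metered time, a positive bias over-exposes."""
    return shutter_s * 2.0 ** bias_ev

# Illustrative values: f/8 at a metered 1/125 s is EV ~13;
# a -0.7 EV bias shortens the exposure to about 1/203 s.
metered = 1 / 125
biased = apply_ev_bias(metered, -0.7)
```

A −0.7 EV bias thus admits about 39 percent less light than the auto-exposure choice, which is the sense in which the paper's preferred settings sit "slightly less than" the metered levels.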
Recommended publications
  • Breaking Down the “Cosine Fourth Power Law”
    Breaking Down The “Cosine Fourth Power Law” By Ronian Siew, inopticalsolutions.com. Why are the corners of the field of view in the image captured by a camera lens usually darker than the center? For one thing, camera lenses by design often introduce “vignetting” into the image, which is the deliberate clipping of rays at the corners of the field of view in order to cut away excessive lens aberrations. But it is also known that corner areas in an image can get dark even without vignetting, due in part to the so-called “cosine fourth power law” [1]. According to this “law,” when a lens projects the image of a uniform source onto a screen, in the absence of vignetting, the illumination flux density (i.e., the optical power per unit area) across the screen from the center to the edge varies according to the fourth power of the cosine of the angle between the optic axis and the oblique ray striking the screen. Actually, optical designers know this “law” does not apply generally to all lens conditions [2–10]. Fundamental principles of optical radiative flux transfer in lens systems allow one to tune the illumination distribution across the image by varying lens design characteristics. In this article, we take a tour into the fascinating physics governing the illumination of images in lens systems. Relative Illumination In Lens Systems: In lens design, one characterizes the illumination distribution across the screen where the image resides in terms of a quantity known as the lens’ relative illumination, the ratio of the irradiance (i.e., the power per unit area) at any off-axis position of the image to the irradiance at the center of the image.
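The cosine-fourth falloff the article describes is easy to tabulate. A minimal sketch (the function name and the sample field angle are our illustrative choices, not from the article):

```python
import math

def cos4_falloff(field_angle_deg: float) -> float:
    """Relative illumination predicted by the cosine-fourth 'law':
    irradiance at field angle theta divided by on-axis irradiance."""
    return math.cos(math.radians(field_angle_deg)) ** 4

# At a 30-degree field angle the prediction is (sqrt(3)/2)^4 = 9/16
# of the on-axis illumination, i.e. roughly a 0.83-stop corner falloff.
falloff = cos4_falloff(30.0)
stops = -math.log2(falloff)
```

As the article notes, real lenses can be designed to beat (or worsen) this prediction, so the formula is a baseline, not a guarantee.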
  • Denver CMC Photography Section Newsletter
    MARCH 2018 DENVER CMC PHOTOGRAPHY SECTION NEWSLETTER

    2018 Monthly Meetings: 2nd Wednesday of the month, 7:00 p.m., AMC, 710 10th St. #200, Golden, CO. $20 Annual Dues.

    Steering Committee: Frank Burzynski, CMC Liaison, [email protected]; Jao van de Lagemaat, Education Coordinator, [email protected]; Janice Bennett, Newsletter and Communication Coordinator, [email protected]; Ron Hileman, Hike and Event Coordinator, [email protected]; Selma Kristel, Presentation Coordinator, [email protected]; Alex Clymer, Social Media Coordinator, [email protected]; Mark Haugen, Facilities Coordinator, [email protected]. CMC Photo Section Email: [email protected].

    March Meeting: Join us Wednesday, March 14, from 7:00 to 9:00 p.m. Connie Rudd will present Photography with a Purpose: Conservation Photography that not only inspires, but can also tip the balance in favor of the protection of public lands. For our meeting on March 14, each member may submit two images from National Parks anywhere in the country. Please submit images to Janice Bennett, [email protected], by Tuesday, March 13. Please see the next page for more information about Connie Rudd.
  • Sample Manuscript Showing Specifications and Style
    Information capacity: a measure of potential image quality of a digital camera. Frédéric Cao, Frédéric Guichard, Hervé Hornung, DxO Labs, 3 rue Nationale, 92100 Boulogne Billancourt, FRANCE. ABSTRACT The aim of the paper is to define an objective measurement for evaluating the performance of a digital camera. The challenge is to mix different flaws involving geometry (such as distortion or lateral chromatic aberrations), light (such as luminance and color shading), or statistical phenomena (such as noise). We introduce the concept of information capacity that accounts for all the main defects that can be observed in digital images, and that can be due either to the optics or to the sensor. The information capacity describes the potential of the camera to produce good images. In particular, digital processing can correct some flaws (like distortion). Our definition of information takes possible correction into account and the fact that processing can neither retrieve lost information nor create some. This paper extends some of our previous work where the information capacity was only defined for RAW sensors. The concept is extended for cameras with optical defects such as distortion, lateral and longitudinal chromatic aberration or lens shading. Keywords: digital photography, image quality evaluation, optical aberration, information capacity, camera performance database. 1. INTRODUCTION The evaluation of a digital camera is a key factor for customers, whether they are vendors or final customers. It relies on many different factors such as the presence or absence of some functionalities, ergonomics, price, or image quality. Each separate criterion is itself quite complex to evaluate, and depends on many different factors. The case of image quality is a good illustration of this topic.
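DxO's actual metric involves optics and processing models beyond this excerpt, but the flavor of an information-capacity measure can be sketched with Shannon's channel capacity, treating each pixel as a noisy Gaussian channel. The sensor size and SNR below are illustrative assumptions, not DxO data:

```python
import math

def pixel_capacity_bits(snr: float) -> float:
    """Shannon capacity of one pixel modeled as a Gaussian channel,
    with snr given as a signal-to-noise amplitude ratio."""
    return 0.5 * math.log2(1.0 + snr ** 2)

def sensor_capacity_megabits(n_megapixels: float, snr: float) -> float:
    """Toy whole-sensor capacity: pixels x bits-per-pixel."""
    return n_megapixels * pixel_capacity_bits(snr)

# A hypothetical 24 MP sensor at SNR 100 (40 dB) in this toy model
# carries roughly 160 megabits of distinguishable image information.
capacity = sensor_capacity_megabits(24.0, 100.0)
```

The paper's point that processing "can neither retrieve lost information nor create some" corresponds to the data-processing inequality in this framing.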
  • Section 10: Vignetting
    OPTI-502 Optical Design and Instrumentation I, © Copyright 2019 John E. Greivenkamp.

    Section 10: Vignetting

    The stop determines the size of the bundle of rays that propagates through the system for an on-axis object. As the object height increases, one of the other apertures in the system (such as a lens clear aperture) may limit part or all of the bundle of rays. This is known as vignetting.

    Ray Bundle – On-Axis: The ray bundle for an on-axis object is a rotationally-symmetric spindle made up of sections of right circular cones. Each cone section is defined by the pupil and the object or image point in that optical space. The individual cone sections match up at the surfaces and elements. At any z, the cross section of the bundle is circular, and the radius of the bundle is the marginal ray value. The ray bundle is centered on the optical axis.

    Ray Bundle – Off Axis: For an off-axis object point, the ray bundle skews, and is comprised of sections of skew circular cones which are still defined by the pupil and object or image point in that optical space.
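The clipping the notes describe can be quantified geometrically: the off-axis bundle's circular footprint shifts relative to a clear aperture, and the passed fraction is the area of the circle-circle overlap. A hedged sketch (the radii and shift are illustrative inputs, not values from the notes):

```python
import math

def circle_overlap_area(r1: float, r2: float, d: float) -> float:
    """Intersection area of two circles with radii r1, r2 and
    center separation d (standard lens-area geometry)."""
    if d >= r1 + r2:
        return 0.0                          # disjoint: bundle fully blocked
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2   # one circle inside the other
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - a3

def unvignetted_fraction(bundle_r: float, aperture_r: float, shift: float) -> float:
    """Fraction of a ray bundle of radius bundle_r passed by a clear
    aperture of radius aperture_r whose center is displaced by shift."""
    return circle_overlap_area(bundle_r, aperture_r, shift) / (math.pi * bundle_r ** 2)
```

On axis (shift 0) the fraction is 1; as the skewed bundle walks off the clear aperture, the fraction falls toward 0, which is the vignetting behavior the figures in the original notes illustrate.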
  • Seeing Like Your Camera
    Accessing Lynda.com: free to the Mason community. Set your browser to lynda.gmu.edu and log in using your Mason ID and password. Playlists: my list of specific videos I recommend for homework, i.e., pre- and post-session viewing. Clicking on the name of a video segment will bring you immediately to Lynda.com (or the login window). I recommend that you eventually watch the entire video class, since we will only use small segments of each video class. Seeing Like Your Camera, PART 2 - FALL 2016. Stan Schretter, [email protected].

    Ways To Take This Course: each class will cover one or two topics in detail; the Lynda.com videos cover a lot more material. I will email the video playlist and my charts before each class. My scale of value: maximum benefit, review videos before class, attend lectures, and practice after each class; less benefit, do not look at the videos, but attend lectures and practice after each class; some benefit, look at the videos but don't attend lectures.

    What Creates a Photograph: light, camera, composition, camera setup, post processing.

    This Course - "The Shot": camera setup (exposure: "proper" light on the sensor, depth of field, stop or show the action; focus; getting the color right: white balance); composition (key photographic element(s), moving the eye through the frame, negative space, perspective, story); post processing.

    Outline of This Class: PART 1 - Summer 2016: shutter speed, aperture, ISO, composition. PART 2 - Fall 2016: review of Part 1 (shutter speed, aperture, ISO and white balance); increasing your vision; seeing the light (color, dynamic range, histograms, backlighting, etc.).
  • Multi-Agent Recognition System Based on Object Based Image Analysis Using Worldview-2
    [...] intervals, between −3 and +3, for a total of 19 different EVs captured (see Table 1). WB variable images were captured utilizing the camera presets listed in Table 1 at the EVs of 0, −1 and +1, for a total of 27 images. Variable light metering tests were conducted using the “center weighted” light meter setting, at the EVs listed in Table 1, for a total of seven images. The variable ISO tests were carried out at the EVs of −1, 0, and +1, using the ISO values listed in Table 1, for a total of 18 images. The variable aperture tests were conducted using 19 different apertures ranging from f/2 to f/16, utilizing a different camera system from all the other tests, with ISO = 50, f = 105 mm, four different degrees of onboard vignetting control, and EVs of −⅓, ⅔, 0, and +⅔, for a total of 228 images.

    To mimic airborne imagery collection from a terrestrial location, we attempted to approximate an equivalent path radiance to match a range of altitudes (750 to 1500 m) above ground level (AGL) that we commonly use as platform altitudes for our simulated damage assessment research. Optical depth increases horizontally along with the zenith angle, from a factor of one at zenith angle 0° to around 40 at zenith angle 90° (Allen, 1973); and vertically, with a zenith angle of 0°, the magnitude of an object at the top of the atmosphere decreases by 0.28 at sea level, 0.24 at 500 m, and 0.21 at 1,000 m above sea level, to 0 at around 100 km (Green, 1992). With 25 percent of atmospheric scattering due to atmosphere located [...]
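The image totals in this excerpt follow from simple combinatorics over the test variables. A sketch of the arithmetic (the per-variable counts marked as inferred are our assumptions, since Table 1 is not reproduced here):

```python
from fractions import Fraction

# 19 EVs: -3 to +3 in 1/3-EV intervals (Fraction avoids float step error)
evs = [Fraction(-3) + i * Fraction(1, 3) for i in range(19)]
assert evs[0] == Fraction(-3) and evs[-1] == Fraction(3)

# WB test: presets x EVs {0, -1, +1} -> 27 images (9 presets inferred from the total)
wb_images = 9 * 3

# ISO test: ISO values x EVs {-1, 0, +1} -> 18 images (6 ISO values inferred)
iso_images = 6 * 3
```

Working the totals backward this way is a useful sanity check when a table of settings is unavailable.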
  • Holographic Optics for Thin and Lightweight Virtual Reality
    Holographic Optics for Thin and Lightweight Virtual Reality. ANDREW MAIMONE, Facebook Reality Labs; JUNREN WANG, Facebook Reality Labs.

    Fig. 1. Left: Photo of full color holographic display in benchtop form factor. Center: Prototype VR display in sunglasses-like form factor with display thickness of 8.9 mm. Driving electronics and light sources are external. Right: Photo of content displayed on prototype in center image. Car scenes by komba/Shutterstock.

    We present a class of display designs combining holographic optics, directional backlighting, laser illumination, and polarization-based optical folding to achieve thin, lightweight, and high performance near-eye displays for virtual reality. Several design alternatives are proposed, compared, and experimentally validated as prototypes. Using only thin, flat films as optical components, we demonstrate VR displays with thicknesses of less than 9 mm, fields of view of over 90° horizontally, and form factors approaching sunglasses. In a benchtop form factor, we also demonstrate a full color display using wavelength-multiplexed holographic lenses that uses laser illumination to provide a large gamut and highly saturated color.

    [...] small text near the limit of human visual acuity. This use case also brings VR out of the home and in to work and public spaces where socially acceptable sunglasses and eyeglasses form factors prevail. VR has made good progress in the past few years, and entirely self-contained head-worn systems are now commercially available. However, current headsets still have box-like form factors and provide only a fraction of the resolution of the human eye. Emerging optical design techniques, such as polarization-based optical folding, [...]
  • Introduction to Metering on a DSLR
    Getting more from your Camera. Topic 4 - Introduction to Metering on a DSLR. Learning Outcomes: In this lesson, we will look at another important feature on a DSLR camera called “Metering Mode”. By the end of this lesson, you will have a better idea of the role that metering plays when thinking about exposure in your photography. Introduction to Metering: “Metering Mode” may also be called “Camera Metering”, “Exposure Metering” or even “Metering”. One of the things that might have already frustrated you is a scenario in which some photographs come out too bright or, in some cases, too dark. By understanding the metering modes, you will be better equipped to tackle this. Let us first talk about what metering is, before moving on to see how it works and how you can use your understanding of it to enhance your photography. 1) What is Metering? Metering, in its simplest meaning, is basically how your camera determines what the correct shutter speed and aperture should be, depending on the amount of light that goes into the camera and the sensitivity of the sensor. In the age of digital technology, we are fortunate enough that every DSLR camera is built with an integrated light meter. This device is clever in that it automatically measures the reflected light and determines the optimal exposure. It’s important to look at the most common metering modes that are found in digital cameras today: 1. Matrix Metering (Nikon), also known as Evaluative Metering (Canon) 2. Center-weighted Metering 3.
  • Exposure Metering and Zone System Calibration
    Exposure Metering Relating Subject Lighting to Film Exposure By Jeff Conrad A photographic exposure meter measures subject lighting and indicates camera settings that nominally result in the best exposure of the film. The meter calibration establishes the relationship between subject lighting and those camera settings; the photographer’s skill and metering technique determine whether the camera settings ultimately produce a satisfactory image. Historically, the “best” exposure was determined subjectively by examining many photographs of different types of scenes with different lighting levels. Common practice was to use wide-angle averaging reflected-light meters, and it was found that setting the calibration to render the average of scene luminance as a medium tone resulted in the “best” exposure for many situations. Current calibration standards continue that practice, although wide-angle average metering largely has given way to other metering techniques. In most cases, an incident-light meter will cause a medium tone to be rendered as a medium tone, and a reflected-light meter will cause whatever is metered to be rendered as a medium tone. What constitutes a “medium tone” depends on many factors, including film processing, image postprocessing, and, when appropriate, the printing process. More often than not, a “medium tone” will not exactly match the original medium tone in the subject. In many cases, an exact match isn’t necessary—unless the original subject is available for direct comparison, the viewer of the image will be none the wiser. It’s often stated that meters are “calibrated to an 18% reflectance,” usually without much thought given to what the statement means.
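The calibration the article discusses is codified in ISO 2720: a reflected-light meter solves N²/t = L·S/K for camera settings, where L is scene luminance, S is the ISO speed, and K is the calibration constant, commonly near 12.5. A sketch under those assumptions (the scene luminance below is an illustrative value):

```python
import math

K = 12.5  # typical reflected-light calibration constant (cd/m^2 basis)

def metered_ev(luminance_cd_m2: float, iso: float = 100.0) -> float:
    """Exposure value satisfying N^2/t = L*S/K, i.e. EV = log2(L*S/K)."""
    return math.log2(luminance_cd_m2 * iso / K)

def shutter_for(luminance_cd_m2: float, f_number: float, iso: float = 100.0) -> float:
    """Shutter time t = N^2*K/(L*S) that renders the metered patch a medium tone."""
    return f_number ** 2 * K / (luminance_cd_m2 * iso)

# A sunlit scene of ~4000 cd/m^2 at ISO 100 and f/16 meters to about
# 1/125 s, which reproduces the familiar "sunny 16" rule.
t = shutter_for(4000.0, 16.0)
```

Whether that metered result is actually "best" for a given subject is exactly the judgment the article assigns to the photographer's skill and technique.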
  • EXIF: a Format Is Worth a Thousand Words
    NATIONAL LAW ENFORCEMENT AND CORRECTIONS TECHNOLOGY CENTER, a program of the Office of Justice Programs’ National Institute of Justice. From Winter 2007 TechBeat, dedicated to reporting developments in technology for law enforcement, corrections, and forensic sciences.

    EXIF: A Format Is Worth A Thousand Words

    The National Center for Missing & Exploited Children revealed in a June 2005 study that 40 percent of arrested child pornography possessors had both sexually victimized children and were in possession of child pornography. Due in part to the increasing prevalence of child exploitation and pornography, the digital photograph has now become a fixture in gathering and examining forensic evidence in such cases. Investigators who frequently handle child pornography cases usually have (or know where to access) the tools and the knowledge to obtain evidence associated with contraband images. Nevertheless, law enforcement officers who do not handle these cases on a regular basis may be unaware of the important data that can be derived from digital images.

    [...] if they are used only to open the file. If, however, these programs are used to modify an image, they can destroy the Exif data. The most important data may be the thumbnail image linked to the photograph. Thumbnails are saved in their own hidden file (a thumbs.db file placed in folders containing images on the computer), and changes to an image may not always transfer to the corresponding thumbnail. If an original image is wiped from a disk using a program such as Secure Clean™ or BCWipe®, the thumbnail may still be available. Officers have encountered situations in which the victim’s or perpetrator’s face was blurred or concealed in the full image, but the thumbnail depicted an older version that revealed the obscured [...]
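The Exif data the article describes lives in a JPEG APP1 marker segment. A minimal, hedged sketch of locating it without any imaging library (segment layout per the JPEG and Exif specifications; error handling for malformed files is omitted):

```python
def find_exif_payload(jpeg_bytes: bytes):
    """Scan JPEG marker segments and return the TIFF-structured Exif
    payload from the first APP1 segment, or None if absent."""
    i = 2  # skip the SOI marker (0xFF 0xD8)
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        # the 2-byte length field counts itself plus the payload
        seg_len = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        payload = jpeg_bytes[i + 4:i + 2 + seg_len]
        if marker == 0xE1 and payload.startswith(b"Exif\x00\x00"):
            return payload[6:]  # strip the "Exif\0\0" header
        if marker == 0xDA:      # start-of-scan: no more metadata segments
            return None
        i += 2 + seg_len
    return None
```

Because this reads the byte stream directly, it also illustrates the article's point that viewers which merely open a file leave this region untouched, while editors that rewrite the stream may drop or regenerate it.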
  • Understanding Metering and Metering Modes
    Understanding Metering and Metering Modes Every modern DSLR has something called “Metering Mode”, also known as “Camera Metering”, “Exposure Metering” or simply “Metering”. Knowing how metering works and what each of the metering modes does is important in photography, because it helps photographers control their exposure with minimum effort and take better pictures in unusual lighting situations. In this understanding metering modes article, I will explain what metering is, how it works and how you can use it for your digital photography. What is Metering? Metering is how your camera determines what the correct shutter speed and aperture should be, depending on the amount of light that goes into the camera and the ISO. Back in the old days of photography, cameras were not equipped with a light “meter”, which is a sensor that measures the amount and intensity of light. Photographers had to use hand-held light meters to determine the optimal exposure. Obviously, because the work was shot on film, they could not preview or see the results immediately, which is why they religiously relied on those light meters. Today, every DSLR has an integrated light meter that automatically measures the reflected light and determines the optimal exposure. The most common metering modes in digital cameras today are: 1. Matrix Metering (Nikon), also known as Evaluative Metering (Canon) 2. Center-weighted Metering 3. Spot Metering Some Canon EOS models also offer “Partial Metering”, which is similar to Spot Metering, except the covered area is larger (approximately 8% of the viewfinder area near the center vs 3.5% in Spot Metering).
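The three modes described above can be mimicked on a 2-D luminance array. A toy sketch (the window sizes and the 75/25 center weighting are our illustrative choices, not any vendor's algorithm):

```python
def meter(luminance, mode="matrix"):
    """Average scene luminance as seen by three simplified metering modes.
    `luminance` is a 2-D list of per-pixel luminance values."""
    h, w = len(luminance), len(luminance[0])

    def region_mean(frac):
        # Mean over a centered window spanning `frac` of each dimension.
        dh, dw = max(1, int(h * frac)), max(1, int(w * frac))
        r0, c0 = (h - dh) // 2, (w - dw) // 2
        cells = [luminance[r][c] for r in range(r0, r0 + dh)
                                 for c in range(c0, c0 + dw)]
        return sum(cells) / len(cells)

    if mode == "matrix":            # whole-frame average (real matrix metering is smarter)
        return region_mean(1.0)
    if mode == "spot":              # small central patch, a few percent of the frame
        return region_mean(0.2)
    if mode == "center-weighted":   # blend of a central region and the whole frame
        return 0.75 * region_mean(0.5) + 0.25 * region_mean(1.0)
    raise ValueError(mode)
```

On a frame with a bright center and dark surround, the spot reading comes out well above the matrix reading, which is why the article recommends different modes for different lighting situations.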
  • Aperture Efficiency and Wide Field-Of-View Optical Systems Mark R
    Aperture Efficiency and Wide Field-of-View Optical Systems Mark R. Ackermann, Sandia National Laboratories Rex R. Kiziah, USAF Academy John T. McGraw and Peter C. Zimmer, J.T. McGraw & Associates Abstract Wide field-of-view optical systems are currently finding significant use for applications ranging from exoplanet search to space situational awareness. Systems ranging from small camera lenses to the 8.4-meter Large Synoptic Survey Telescope are designed to image large areas of the sky with increased search rate and scientific utility. An interesting issue with wide-field systems is the known compromises in aperture efficiency. They either use only a fraction of the available aperture or have optical elements with diameters larger than the optical aperture of the system. In either case, the complete aperture of the largest optical component is not fully utilized for any given field point within an image. System costs are driven by optical diameter (not aperture), focal length, optical complexity, and field-of-view. It is important to understand the optical design trade space and how cost, performance, and physical characteristics are influenced by various observing requirements. This paper examines the aperture efficiency of refracting and reflecting systems with one, two and three mirrors. Copyright © 2018 Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS) – www.amostech.com Introduction Optical systems exhibit an apparent falloff of image intensity from the center to edge of the image field as seen in Figure 1. This phenomenon, known as vignetting, results when the entrance pupil is viewed from an off-axis point in either object or image space.