Single-Image Vignetting Correction


Yuanjie Zheng∗    Stephen Lin    Sing Bing Kang
Shanghai Jiaotong University    Microsoft Research Asia    Microsoft Research

∗This work was done while Yuanjie Zheng was a visiting student at Microsoft Research Asia.

Abstract

In this paper, we propose a method for determining the vignetting function given only a single image. Our method is designed to handle both textured and untextured regions in order to maximize the use of available information. To extract vignetting information from an image, we present adaptations of segmentation techniques that locate image regions with reliable data for vignetting estimation. Within each image region, our method capitalizes on frequency characteristics and physical properties of vignetting to distinguish it from other sources of intensity variation. The vignetting data acquired from regions are weighted according to a presented reliability measure to promote robustness in estimation. Comprehensive experiments demonstrate the effectiveness of this technique on a broad range of images.

1. Introduction

Vignetting refers to the phenomenon of brightness attenuation away from the image center, and is an artifact that is prevalent in photography. Although not objectionable to the average viewer at low levels, it can significantly impair computer vision algorithms that rely on precise intensity data to analyze a scene. Applications in which vignetting distortions can be particularly damaging include photometric methods such as shape from shading, appearance-based techniques such as object recognition, and image mosaicing.

Several mechanisms may be responsible for vignetting effects. Some arise from the optical properties of camera lenses, the most prominent of which is off-axis illumination falloff or the cos⁴ law. This contribution to vignetting results from foreshortening of the lens when viewed from increasing angles from the optical axis [7]. Other sources of vignetting are geometric in nature. For example, light arriving at oblique angles to the optical axis may be partially obstructed by the field stop or lens rim.

To determine the vignetting effects in an image, the most straightforward approach involves capturing an image completely spanned by a uniform scene region, such that brightness variations can solely be attributed to vignetting [12, 1, 6, 14]. In such a calibration image, ratios of intensity with respect to the pixel on the optical axis describe the vignetting function. Suitable imaging conditions for this approach, however, can be challenging to produce due to uneven illumination and camera tilt, and the vignetting measurements are valid only for images captured by the camera under the same camera settings. Moreover, a calibration image can be recorded only if the camera is at hand; consequently, this approach cannot be used to correct images captured by unknown cameras, such as images downloaded from the web.

A vignetting function can alternatively be computed from image sequences with overlapping views of an arbitrary static scene [5, 9, 4]. In this approach, point correspondences are first determined in the overlapping image regions. Since a given scene point has a different position in each image, its brightness may be differently attenuated by vignetting. From the aggregate attenuation information from all correspondences, the vignetting function can be accurately recovered without assumptions on the scene.

These previous approaches require either a collection of overlapping images or an image of a calibration scene. However, often in practice only a single image of an arbitrary scene is available. The previous techniques gain information for vignetting correction from pixels with equal scene radiance but differing attenuations of brightness. For a single arbitrary input image, this information becomes challenging to obtain, since it is difficult to identify pixels having the same scene radiance while differing appreciably in vignetting attenuation.

In this paper, we show that it is possible to correct or reduce vignetting given just a single image. To maximize the use of available information in the image, our technique extracts vignetting information from both textured and untextured regions. Large image regions appropriate for vignetting function estimation are identified by proposed adaptations to segmentation methods. To counter the adverse effects of vignetting on segmentation, our method iteratively re-segments the image with respect to progressively refined estimates of the vignetting function. Additionally, spatial variations in segmentation scale are used in a manner that enhances collection of reliable vignetting data.

In extracting vignetting information from a given region, we take advantage of physical vignetting characteristics to diminish the influence of textures and other sources of intensity variation. With the joint information of disparate image regions, we describe a method for computing the vignetting function. The effectiveness of this vignetting correction method is supported by experiments on a wide variety of images.

2. Vignetting model

Most methods for vignetting correction use a parametric vignetting model to simplify estimation and minimize the influence of image noise. Typically used are empirical models such as polynomial functions [4, 12] and hyperbolic cosine functions [14]. Models based on physical considerations include that of Asada et al. [1], which accounts for off-axis illumination and light path obstruction, and that of Kang and Weiss [6], which additionally incorporates scene-based tilt effects. Tilt describes intensity variations within a scene region that are caused by differences in distance from the camera, i.e., closer points appear brighter due to the inverse square law of illumination. Although not intrinsic to the imaging system, the intensity attenuation effects caused by tilt must be accounted for in single-image vignetting estimation.

Besides having physically meaningful parameters, an important property of physical models is that their highly structured and constrained form facilitates estimation in cases where data is sparse and/or noisy. In this work, we use an extension of the Kang-Weiss model, originally designed for a single planar surface of constant albedo, to multiple surfaces of possibly different color. Additionally, we generalize its linear model of geometric vignetting to a polynomial form.

2.1. Kang-Weiss model

We consider an image with zero skew, an aspect ratio of 1, and principal point at the image center with image coordinates (u, v) = (0, 0). In the Kang-Weiss vignetting model [6], brightness ratios are described in terms of an off-axis illumination factor A, a geometric factor G, and a tilt factor T. For a pixel i at (u_i, v_i) with distance r_i from the image center, the vignetting function ϕ is expressed as

    ϕ_i = A_i G_i T_i = ϑ_{r_i} T_i,   for i = 1···N,   (1)

where

    A_i = 1 / (1 + (r_i/f)^2)^2,
    G_i = (1 − α_1 r_i),
    ϑ_{r_i} = A_i G_i,
    T_i = cos τ [1 + (tan τ / f)(u_i sin χ − v_i cos χ)]^3.   (2)

N is the number of pixels in the image, f is the effective focal length of the camera, and α_1 represents a coefficient in the geometric vignetting factor. The tilt parameters χ, τ respectively describe the rotation angle of a planar scene surface around an axis parallel to the optical axis, and the rotation angle around the x-axis of this rotated plane, as illustrated in Fig. 1.

Figure 1. Tilt angles τ and χ in the Kang-Weiss vignetting model.

The model ϕ in Eq. (1) can be decomposed into the global vignetting function ϑ of the camera and the natural attenuation caused by local tilt effects T in the scene. Note that ϑ is rotationally symmetric; thus, it can be specified as a 1D function of the radial distance r_i from the image center.

2.2. Extended vignetting model

In an arbitrary input image, numerous regions with different local tilt factors may exist. To account for multiple surfaces in an image, we present an extension of the Kang-Weiss model in which different image regions can have different tilt angles. The tilt factor of Eq. (2) is modified to

    T_i = cos τ_{s_i} [1 + (tan τ_{s_i} / f)(u_i sin χ_{s_i} − v_i cos χ_{s_i})]^3,   (3)

where s_i indexes the region containing pixel i.

We also extend the linear geometric factor to a more general polynomial form:

    G_i = (1 − α_1 r_i − ··· − α_p r_i^p),   (4)

where p represents a polynomial order that can be arbitrarily set according to a desired precision. This generalized representation provides a closer fit to the geometric vignetting effects that we have observed in practice. In contrast to using a polynomial as the overall vignetting model, representing only the geometric component by a polynomial allows the overall model to explicitly account for local tilt effects and global off-axis illumination.

Figure 2. Vignetting over multiple regions. Top row: without and with vignetting for a single uniform region. Bottom row: without and with vignetting for multiple regions.

Figure 3. Overview of vignetting function estimation.

…optimizing the underlying global vignetting parameters f, α_1, ···, α_p. With the estimated parameters, the vignetting-corrected image is then given by z_i / ϑ_{r_i}. We note that the estimated local tilt factors may contain other effects that can appear similar to tilt, such as non-uniform illumination or shading. In the vignetting-corrected image, these tilt and tilt-like factors are all retained so as not to produce an unnatural-looking result. Only the attenuation attributed to the imaging system itself (off-axis illumination and geometric factors) is corrected.

In this formulation, the scene is assumed to contain some piecewise planar Lambertian surfaces that are uniformly illuminated and occupy significant portions of the image. Although typical scenes are considerably more complex than uniform planar surfaces, we will later describe how vignetting data in an image can be separated from other in-

2.3.
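As a concrete check of the Kang-Weiss model of Eqs. (1)–(2), the per-pixel factors can be evaluated directly. The sketch below is our own illustration (the function and argument names are not from the paper) and assumes the pixel coordinates (u, v) are already expressed relative to the principal point:

```python
import numpy as np

def kang_weiss(u, v, f, alpha1, tau, chi):
    """Kang-Weiss vignetting model, Eqs. (1)-(2):
    phi_i = A_i * G_i * T_i, with (u, v) relative to the image center."""
    r = np.hypot(u, v)                       # radial distance r_i
    A = 1.0 / (1.0 + (r / f) ** 2) ** 2      # off-axis illumination factor A_i
    G = 1.0 - alpha1 * r                     # linear geometric factor G_i
    T = np.cos(tau) * (1.0 + (np.tan(tau) / f)
                       * (u * np.sin(chi) - v * np.cos(chi))) ** 3  # tilt T_i
    return A * G * T
```

At the principal point (r = 0) with no tilt, all three factors equal 1, so ϕ = 1, and attenuation grows with r as the model requires.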
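Once the global parameters are estimated, the correction z_i / ϑ_{r_i} is a per-pixel division by ϑ(r) = A(r) G(r), with G given by the polynomial of Eq. (4). A minimal sketch, assuming f and the coefficients α_1 … α_p have already been estimated (names are illustrative, not from the paper):

```python
import numpy as np

def correct_vignetting(img, f, alphas):
    """Divide out the global vignetting function theta(r) = A(r) * G(r),
    with the polynomial geometric factor of Eq. (4)."""
    h, w = img.shape[:2]
    v, u = np.mgrid[0:h, 0:w].astype(float)
    u -= (w - 1) / 2.0                       # coordinates relative to the
    v -= (h - 1) / 2.0                       # principal point (image center)
    r = np.hypot(u, v)
    A = 1.0 / (1.0 + (r / f) ** 2) ** 2
    G = 1.0 - sum(a * r ** (k + 1) for k, a in enumerate(alphas))
    theta = A * G
    if img.ndim == 3:
        theta = theta[..., None]             # broadcast over color channels
    return img / theta
```

Note that the tilt factors T are deliberately not divided out, matching the paper's choice to correct only the attenuation caused by the imaging system itself.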