Measuring Camera Shannon Information Capacity with a Siemens Star Image


https://doi.org/10.2352/ISSN.2470-1173.2020.9.IQSP-347
© 2020, Society for Imaging Science and Technology

Norman L. Koren, Imatest LLC, Boulder, Colorado, USA

Abstract

Shannon information capacity, which can be expressed as bits per pixel or megabits per image, is an excellent figure of merit for predicting camera performance in a variety of machine vision applications, including medical and automotive imaging systems. Its strength is that it combines the effects of sharpness (MTF) and noise, but it has not been widely adopted because it has been difficult to measure and has never been standardized.

We have developed a method for conveniently measuring information capacity from images of the familiar sinusoidal Siemens star chart. The key is that noise is measured in the presence of the image signal, rather than in a separate location where image processing may be different—a commonplace occurrence with bilateral filters. The method also enables measurement of SNRI, which is a key performance metric for object detection.

Information capacity is strongly affected by sensor noise, lens quality, ISO speed (Exposure Index), and the demosaicing algorithm, which affects aliasing. Information capacity of in-camera JPEG images differs from that of corresponding TIFF images from raw files because of different demosaicing algorithms and nonuniform sharpening and noise reduction.

Introduction

In electronic communications systems, Shannon information capacity C defines the maximum rate in bits per second at which data can be transmitted through a channel without error. For additive white gaussian noise, it is given by the deceptively simple Shannon-Hartley equation:

    C = W \log_2(1 + S/N) = W \log_2\left(\frac{S+N}{N}\right)    (1)

While it is quite logical to extend this definition to imaging systems, where C has units of bits/pixel, a number of questions arise. How should bandwidth W (closely related to sharpness) be defined? What should be used as signal S? Can we trust measurements from consumer camera JPEG images, which often have nonuniform image processing (bilateral filtering) [1] that reduces noise in smooth areas (lowpass filtering) but sharpens the image near contrasty features such as edges (high frequency boost)?

Because nonuniform image processing is so commonly applied, it is highly desirable to measure signal and noise at the same location in the image, i.e., to measure noise in the presence of signal. We have developed a method to accomplish this with the well-known Siemens star chart, which is part of the ISO 12233:2014/2017 standard [2].

Measurement background

To measure signal and noise at the same location, we use an image of a sinusoidal Siemens-star test chart consisting of n_cycles total cycles, which we analyze by dividing the star into k radial segments (32 or 64), each of which is subdivided into m angular segments (8, 16, or 24) of length P_seg. The number of sine wave cycles in each angular segment is n = n_cycles/m.

Figure 1. 144-cycle Siemens star pattern.

For radius r in pixels, the spatial frequency in cycles/pixel is f = n_cycles/(2πr). This means that the Nyquist frequency, f_Nyq = 0.5 C/P, is located at r = 46 pixels for a 144-cycle star and 23 pixels for a 72-cycle star [3].

We calculate the ideal (pure sine wave + second harmonic) signal s_ideal(φ) for each segment from the actual (noisy) input signal s_input(φ), using Fourier series coefficients a_j and b_j [4]:

    s_{ideal}(\varphi) = \sum_{j=1}^{2} \left[ a_j \cos\left(\frac{2\pi j n \varphi}{P_{seg}}\right) + b_j \sin\left(\frac{2\pi j n \varphi}{P_{seg}}\right) \right]    (2)

where

    a_j = \frac{2}{P_{seg}} \int_{P_{seg}} s_{input}(x) \cos\left(\frac{2\pi j n x}{P_{seg}}\right) dx, \qquad
    b_j = \frac{2}{P_{seg}} \int_{P_{seg}} s_{input}(x) \sin\left(\frac{2\pi j n x}{P_{seg}}\right) dx    (3)

The second harmonic term (j = 2) improves results for JPEG images from cameras, which frequently have "shoulders" (regions of reduced contrast) in their tonal response curves, making them difficult to linearize with a simple gamma formula.

The spatial domain noise in each segment is

    N(\varphi) = s_{ideal}(\varphi) - s_{input}(\varphi)    (4)

Figure 2. Illustration of s_input(φ) (blue), s_ideal(φ) (brown), and noise N(φ) = s_ideal(φ) − s_input(φ) (green).

The roughness of s_input and N(φ) in Figure 2, which is derived from all points in the segment (sorted), does not represent the actual frequency distribution of the noise and does not affect the calculation of the frequency domain noise power N(f). The noise spectrum is more closely related to the noise image (with signal removed), illustrated in Figure 8.

The mean signal power used to calculate C is S(f) = mean(a_n² + b_n²)/2 over all angular segments at radius r (where f = n_cycles/(2πr)). The frequency domain noise power N(f) is the mean of the variance (σ²) of the middle 80% of N(φ) for each angular segment (there can be irregularities near the ends).
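A minimal numerical sketch may help make the per-segment computation of Equations (2)–(4) concrete. This is not the Imatest implementation: the function name, the discrete approximation of the integrals, and the DC-removal step are our own assumptions.

```python
import numpy as np

def analyze_segment(s_input, n, j_max=2):
    """Fit the ideal signal (fundamental + 2nd harmonic, Eqs. 2-3) to one
    angular segment and derive the spatial-domain noise N(phi) (Eq. 4).

    s_input: 1-D array of linearized pixel values sampled along the segment.
    n:       sine-wave cycles in the segment, n = n_cycles / m.
    """
    P = len(s_input)                        # segment length P_seg, in samples
    s_ac = s_input - s_input.mean()         # assumption: remove DC so only the AC part is fitted
    phi = np.arange(P)
    a = np.zeros(j_max + 1)
    b = np.zeros(j_max + 1)
    s_ideal = np.zeros(P)
    for j in range(1, j_max + 1):           # j = 1 (fundamental), j = 2 (harmonic)
        cos_j = np.cos(2 * np.pi * j * n * phi / P)
        sin_j = np.sin(2 * np.pi * j * n * phi / P)
        a[j] = (2.0 / P) * np.sum(s_ac * cos_j)   # discrete form of Eq. (3)
        b[j] = (2.0 / P) * np.sum(s_ac * sin_j)
        s_ideal += a[j] * cos_j + b[j] * sin_j
    noise = s_ideal - s_ac                  # N(phi), Eq. (4)
    signal_power = (a[1]**2 + b[1]**2) / 2  # fundamental power for S(f)
    trim = P // 10                          # middle 80%: segment ends can be irregular
    noise_power = np.var(noise[trim:P - trim])
    return signal_power, noise_power, s_ideal, noise
```

S(f) at radius r is then the mean of signal_power over the m angular segments at that radius, and N(f) the corresponding mean of noise_power.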
We started the analysis with the integral form of the Shannon information capacity equation [5], which is appropriate for one-dimensional systems:

    C = \int_0^B \log_2\left(1 + \frac{S(f)}{N(f)}\right) df = \int_0^B \log_2\left(\frac{S(f)+N(f)}{N(f)}\right) df    (5)

where C is the channel capacity in bits/pixel; B is the Nyquist frequency f_Nyq; S(f) is the signal power spectrum; and N(f) is the noise power spectrum (which we interpret as broadband noise in the presence of signal S(f)).

The difficulty with this equation is that information capacity applies to two-dimensional images, whereas all sharpness-related measurements (MTF and S(f)) refer to one dimension; for example, MTF in units of cycles/pixel actually means cycles per pixel length. To overcome this limitation, we express C as a double integral:

    C = \iint_0^B \log_2\left(\frac{S(f_x,f_y)+N(f_x,f_y)}{N(f_x,f_y)}\right) df_x \, df_y    (6)

where f_x and f_y are frequencies in the x- and y-directions, respectively. To evaluate this integral, we transform x and y into polar coordinates r and θ:

    C = \int_0^{2\pi} \int_0^B \log_2\left(\frac{S(f_r,f_\theta)+N(f_r,f_\theta)}{N(f_r,f_\theta)}\right) f_r \, df_r \, df_\theta    (7)

Since S(f_r, f_θ) and N(f_r, f_θ) are only weakly dependent on θ, (7) can be rewritten in one dimension:

    C = 2\pi \int_0^B \log_2\left(\frac{S(f)+N(f)}{N(f)}\right) f \, df    (8)

The minimum star frequency is f_min = n_cycles/(2πr_star), where r_star is the radius of the star in pixels. The contribution of frequencies below f_min is calculated by extrapolation—by assuming that S(f) = S(f_min) and N(f) = N(f_min) for f < f_min. This assumption is reasonable for N(f) but somewhat pessimistic for S(f). However, the effect on C is minimal because of the presence of f_r in the integrand of Equation (8).

C is strongly dependent on the noise N(φ), which is calculated from the difference between the input signal (which includes noise) and the perfect sine wave signal. This causes C to decrease in the presence of artifacts such as aliasing and saturation. Aliasing has little effect on standard MTF measurements, and saturation (or clipping) actually improves measurements even though it degrades system performance. We will discuss aliasing in more detail.
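The one-dimensional integral of Equation (8) is straightforward to evaluate numerically. The sketch below is our own (the trapezoidal rule is an assumed quadrature, not specified by the paper) and includes the low-frequency extrapolation described above.

```python
import numpy as np

def information_capacity(f, S, N, f_nyq=0.5):
    """Evaluate Eq. (8): C = 2*pi * integral_0^B log2((S+N)/N) * f df,
    returning bits/pixel, with B = f_nyq.

    f:    measured star frequencies in cycles/pixel, ascending; f[0] = f_min.
    S, N: signal and noise power measured at those frequencies.
    """
    f = np.asarray(f, dtype=float)
    S = np.asarray(S, dtype=float)
    N = np.asarray(N, dtype=float)
    # Extrapolate below f_min: S(f) = S(f_min), N(f) = N(f_min) for f < f_min.
    f = np.concatenate(([0.0], f))
    S = np.concatenate(([S[0]], S))
    N = np.concatenate(([N[0]], N))
    keep = f <= f_nyq                       # integrate only up to B = f_Nyq
    integrand = np.log2((S[keep] + N[keep]) / N[keep]) * f[keep]
    return 2 * np.pi * np.trapz(integrand, f[keep])
```

Because the integrand is weighted by f, the extrapolated region near f = 0 contributes little, consistent with the observation that the extrapolation has minimal effect on C.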
Measurement method

A key challenge in measuring information capacity is how to define the mean signal power S. Ideally, the definition should be based on a widely-used test chart. For convenience, the chart should be scale-invariant (so precise chart magnification does not need to be measured). And, as we indicated, signal and noise should be measured at the same location. For different observers to obtain the same result, the chart design and contrast should be standardized.

To that end we recommend a sinusoidal Siemens star chart similar to the chart specified in ISO 12233:2014/2017, Annex E. Contrast should be as close as possible to 50:1 (the minimum specified in the standard; close to the maximum achievable with matte media). Higher contrast can make the star image difficult to linearize. Lower contrast is acceptable, but should be reported with the results. The chart should have 144 cycles for high resolution systems; 72 cycles is sufficient for low resolution systems. The center marker (quadrant pattern), used to center the image for analysis, should be 1/20 of the star diameter.

Acquire a well-exposed image of the Siemens star in even, glare-free light. Exposures should be reasonably consistent when multiple cameras are tested. The mean pixel level of the linearized image inside the star should be in the range of 0.16 to 0.36. (The optimum has yet to be determined.)

The center of the star should be located as close as possible to the center of the image to minimize measurement errors caused by optical distortion (if present).

The size of the star in the image should be set so that the maximum spatial frequency, corresponding to the minimum radius r_min, is larger than the Nyquist frequency f_Nyq and, if possible, no larger than 1.3 f_Nyq, so that sufficient lower frequencies are available for the channel capacity calculation. This means that a 144-cycle star with a 1/20 inner marker should have a diameter of 1400–1750 pixels, and a 72-cycle star should have a diameter of 700–875 pixels. For high-quality inkjet printers, the physical diameter of the star should be at least 9 (preferably 12) inches (23 to 30 cm).
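These sizing rules follow directly from f = n_cycles/(2πr). A small helper (ours, not from the paper; it assumes the minimum analysis radius r_min is the edge of the 1/20 center marker, which reproduces the recommended ranges only approximately) can sanity-check a proposed star size:

```python
import numpy as np

def star_frequency(r_px, n_cycles):
    """Spatial frequency in cycles/pixel at radius r: f = n_cycles / (2*pi*r).
    E.g., a 144-cycle star reaches Nyquist (0.5 C/P) at r ≈ 46 px."""
    return n_cycles / (2 * np.pi * r_px)

def star_size_ok(diameter_px, n_cycles, marker_fraction=1/20, f_nyq=0.5):
    """Check that the frequency at the minimum analysis radius (assumed to be
    the inner-marker edge) is above f_Nyq and ideally below 1.3 * f_Nyq."""
    r_min = marker_fraction * diameter_px / 2
    f_max = star_frequency(r_min, n_cycles)
    return f_nyq <= f_max <= 1.3 * f_nyq, f_max

ok, f_max = star_size_ok(1500, 144)   # a 1500 px, 144-cycle star
print(ok, round(f_max, 3))            # True 0.611 -- inside [0.5, 0.65]
```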