Structured Light in Sunlight

Mohit Gupta, Qi Yin, Shree K. Nayar
Columbia University, New York, NY 10027
[email protected], [email protected], [email protected]

Abstract

Strong ambient illumination severely degrades the performance of structured light based techniques. This is especially true in outdoor scenarios, where the structured light sources have to compete with sunlight, whose power is often 2-5 orders of magnitude larger than that of the projected light. In this paper, we propose the concept of light concentration to overcome strong ambient illumination. Our key observation is that, given a fixed light (power) budget, it is always better to allocate it sequentially to several portions of the scene than to spread it over the entire scene at once. For a desired level of accuracy, we show that by distributing light appropriately, the proposed approach requires 1-2 orders of magnitude lower acquisition time than existing approaches. Our approach is illumination-adaptive, as the optimal light distribution is determined based on a measurement of the ambient illumination level. Since current light sources have a fixed light distribution, we have built a prototype light source that supports flexible light distribution by controlling the scanning speed of a laser scanner. We show several high quality 3D scanning results in a wide range of outdoor scenarios. The proposed approach will benefit 3D vision systems that need to operate outdoors under extreme ambient illumination levels on a limited time and power budget.

Figure 1. Effect of ambient illumination on structured light 3D scanning. (a) An object placed outdoors on a clear day receives strong ambient illuminance Ra from the sun and the sky. (b) Image of the sky at 9 am. (c-e) 3D reconstructions using conventional methods at different times of the day (6 am, 9 am and 12 pm). From left to right, as the day progresses, Ra increases (2,000 lux, 24,000 lux and 90,000 lux, respectively) and the reconstruction quality degrades.

1. Introduction

Structured light 3D scanning, because of its accuracy and simplicity, is the method of choice for 3D reconstruction in several applications, including factory automation, performance capture, digitization of cultural heritage and autonomous vehicles. In many real-world settings, structured light sources have to compete with strong ambient illumination. In these scenarios, because of the limited dynamic range of image sensors, the signal (intensity due to structured light) in the captured images can be extremely low, resulting in poor 3D reconstructions.

This is especially true outdoors, where sunlight is often 2-5 orders of magnitude brighter than the projected structured light. For instance, it is known that Kinect, a popular structured light device, cannot recover 3D shape in strong sunlight [1]. While several optical techniques for ambient light reduction have been proposed [10], they achieve only moderate success. An example using an off-the-shelf laser 3D scanner is shown in Figure 1. Under strong ambient illumination, the reconstruction quality of an object placed outdoors degrades, even when spectral filtering is used.

One obvious solution to the ambient light problem is to increase the power of the light source. Unfortunately, this is not always possible. Especially in outdoor scenarios, vision systems often operate on a limited power budget. Moreover, low-cost hand-held projectors (e.g., pico projectors) are increasingly becoming popular as structured light sources. For these low-power devices to be useful outdoors, it is important to be able to handle strong ambient illumination on a tight power budget.

In this paper, we introduce the concept of light concentration in order to deal with strong ambient illumination. The key idea is that even with a small light budget, the signal level can be increased by concentrating the available projector light on a small portion of the scene. This is illustrated in Figure 2 (center). At first glance, it may appear that concentrating the light will require more measurements, as only a fraction of the scene is illuminated and encoded at a time. However, we show that it is possible to achieve significantly lower acquisition times by concentrating light, as compared to existing approaches that spread the available light over the entire projector image plane and then reduce image noise by frame-averaging. We call this the light-concentration advantage.

Figure 2. Light redistribution for structured light. We consider different light distributions for designing structured light systems that perform under strong ambient illumination. Given a fixed light budget, as the light spread decreases (from left to right), the intensity of each projected stripe increases: 1/C when the light is spread over all C projector columns (left), K/C when it is concentrated into one of K blocks at a time (center), and 1 when it is concentrated into a single column (right). Existing structured light techniques lie at the two extremes of the power distribution scale. (Left) Systems where light is distributed over the entire projector image plane yield low signal strength and hence poor reconstruction quality. (Right) Systems where all the light is concentrated into a single column require a large number of measurements. (Center) We show that by concentrating the light appropriately, it is possible to achieve fast and high-quality 3D scanning even in strong ambient illumination.

The light-concentration advantage arises from the fact that frame-averaging increases the signal-to-noise ratio (SNR) by a factor of the square root of the number of averaged frames, whereas the same time and power budget, if allocated to smaller scene regions, increases the SNR linearly with the number of measurements. We show that, for the same accuracy (SNR) level, while the number of measurements required by existing approaches is linear in the ambient illuminance level Ra, i.e., O(Ra), the proposed approach requires only O(√Ra) measurements. For outdoor ambient illuminance levels, this translates into 1-2 orders of magnitude (10-100 times) lower acquisition time.

Scope and contributions: This paper introduces light redistribution as a new dimension in the design of structured light systems. We do not introduce a new structured light coding scheme. Instead, we show that by managing the light budget appropriately, it is possible to perform fast and accurate 3D scanning outdoors on a limited power budget. After determining the optimal light distribution based on the ambient illuminance level, any one of the existing high-SNR structured light coding schemes [14, 4, 6] can be used. The proposed approach can adapt to the ambient light level: for instance, as ambient illuminance decreases, the acquisition time required by our approach decreases. The proposed techniques are not restricted to ambient illumination due to sunlight; they are applicable in any scenario that has a wide range of ambient illumination.

Hardware prototype and practical implications: Existing projectors distribute light over the entire image plane; they do not have the ability to distribute light in a flexible manner. We have developed a prototype projector with flexible light distribution ability by using an off-the-shelf laser scanner. Different light distributions over the projector image plane are achieved by varying the scanning speed of the scanner. The proposed approach achieves fast and high-quality (pixel-level) 3D reconstruction even in the most extreme scenarios (direct sunlight, low-powered light source). These features make our approach especially suitable for moving platforms such as autonomous cars, which need to operate outdoors under varying ambient illuminance levels on a limited power budget.

2. Related Work

Structured light 3D scanning: Structured light techniques are classified based on the coded patterns that they project on the scene. Some typical examples are single line stripes [3], sinusoidal patterns [13], binary patterns [11] and de Bruijn codes [15]. For a comprehensive survey of existing coding schemes, the reader is referred to [12].

Significant work has been done towards designing high-SNR structured light coding schemes [14, 4, 6]. It has been shown that in scenarios with extremely low SNR (such as strong ambient illumination), optimal SNR is achieved by using patterns with the fewest possible intensity levels (binary patterns with two intensity levels) [6]. In Figure 1, despite binary Gray code patterns being used, result quality degrades as ambient illumination increases. This is because using high-SNR patterns without considering light redistribution is not sufficient to achieve high-quality results under strong ambient illumination.

Optical methods for suppressing ambient illumination: Examples of such methods include using a narrow spectral bandwidth laser (sunlight has broad bandwidth) with a narrow-band spectral filter [10], and a polarized light source (sunlight is unpolarized) with a polarization filter [10].

This paper proposes a different approach. Given a fixed level of ambient illuminance (after optical suppression), we determine the optimal distribution of the light (of the structured light source) in order to maximize the SNR. The increase in SNR achieved by our method is in addition to, and much higher than, that achieved by the optical methods. In order to deal with extreme ambient illumination scenarios, optical suppression techniques can be used in a complementary manner to our method.

Under strong ambient illumination, Ra >> Rl, and the dominant source of noise is the signal-dependent photon noise, i.e., σr² << βRa/g. Then, the SNR is approximated as

    SNR ≈ λ · Rl / √Ra,    (4)

where λ is a constant.