
Structured Light 3D Scanning in the Presence of Global Illumination

Mohit Gupta†, Amit Agrawal‡, Ashok Veeraraghavan‡ and Srinivasa G. Narasimhan†
† Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
‡ Mitsubishi Electric Research Labs, Cambridge, USA

Abstract

Global illumination effects such as inter-reflections, diffusion and sub-surface scattering severely degrade the performance of structured light-based 3D scanning. In this paper, we analyze the errors caused by global illumination in structured light-based shape recovery. Based on this analysis, we design structured light patterns that are resilient to individual global illumination effects using simple logical operations and tools from combinatorial mathematics. Scenes exhibiting multiple phenomena are handled by combining results from a small ensemble of such patterns. This combination also allows us to detect any residual errors, which are corrected by acquiring a few additional images. Our techniques do not require explicit separation of the direct and global components of scene radiance and hence work even in scenarios where the separation fails or the direct component is too low. Our methods can be readily incorporated into existing scanning systems without significant overhead in terms of capture time or hardware. We show results on a variety of scenes with complex shape and material properties and challenging global illumination effects.

1. Introduction

Structured light triangulation has become the method of choice for shape measurement in several applications, including industrial automation, graphics, human-computer interaction and surgery. Since the early work in the field about 40 years ago [18, 12], research has been driven by two factors: reducing the acquisition time and increasing the depth resolution. Significant progress has been made on both fronts (see the survey by Salvi et al. [16]), as demonstrated by systems that can recover shapes at close to 1000 Hz [21] and at a depth resolution better than 30 microns [5].

Despite these advances, most structured light techniques make an important assumption: scene points receive illumination only directly from the light source. For many real-world scenarios, this is not true. Imagine a robot trying to navigate an underground cave or an indoor scenario, a surgical instrument inside the human body, a robotic arm sorting a heap of metallic machine parts, or a movie director wanting to image the face of an actor. In all these settings, scene points receive illumination indirectly in the form of inter-reflections, sub-surface or volumetric scattering. Such effects, collectively termed global or indirect illumination¹, often dominate the direct illumination and strongly depend on the shape and material properties of the scene. Not accounting for these effects results in large and systematic errors in the recovered shape (see Figure 1b).

¹Global illumination should not be confused with the oft-used "ambient illumination" that is subtracted by capturing an image with the structured light source turned off.

The goal of this paper is to build an end-to-end system for structured light scanning under a broad range of global illumination effects. We begin by formally analyzing the errors caused by different global illumination effects. We show that the types and magnitude of errors depend on the region of influence of global illumination at any scene point. For instance, some scene points may receive global illumination only from a local neighborhood (sub-surface scattering); we call these short range effects. Other points may receive global illumination from a larger region (inter-reflections or diffusion); we call these long range effects.

The key idea is to design patterns that modulate global illumination and prevent the errors at capture time itself. Short and long range effects place contrasting demands on the patterns. Whereas low spatial frequency patterns are best suited for short range effects, long range effects require the patterns to have high frequencies. Since most currently used patterns (e.g., binary and sinusoidal codes) contain a combination of both low and high spatial frequencies, they are ill-equipped to prevent errors. We show that such patterns can be converted to patterns with only high frequencies by applying simple logical operations, making them resilient to long range effects. Similarly, we use tools from combinatorial mathematics to design patterns consisting solely of frequencies that are low enough to make them resilient to short range effects.

But how do we handle scenes that exhibit more than one type of global illumination effect (such as the one in Figure 1a)? To answer this, we observe that it is highly unlikely for two different patterns to produce the same erroneous decoding. This observation allows us to project a small ensemble of patterns and use a simple voting scheme to compute the correct decoding at every pixel, without any prior knowledge about the types of effects in the scene (Figure 1d). For very challenging scenes, we present an error detection scheme based on a simple consistency check over the results of the individual codes in the ensemble. Finally, we present an error correction scheme that requires collecting only a few additional images. We demonstrate accurate reconstructions on scenes with complex geometry and material properties, such as shiny brushed metal, translucent wax and marble, and thick plastic diffusers (like shower curtains).

Our techniques do not require explicit separation of the direct and global components of scene radiance and hence work even in scenarios where the separation fails (e.g., strong inter-reflections among metallic objects) or where the direct component is too low and noisy (e.g., translucent objects or in the presence of defocus). Our techniques consistently outperform many traditional coding schemes as well as techniques that require explicit separation of the global component, such as modulated phase-shifting [4]. Our methods are simple to implement and can be readily incorporated into existing systems without significant overhead in terms of acquisition time or hardware.

Figure 1. Measuring shape for the 'bowl on marble-slab' scene: (a) bowl on a translucent marble slab; (b) conventional Gray codes (11 images); (c) modulated phase shifting [4] (162 images); (d) our ensemble codes (41 images); (e) error map for the codes. This scene is challenging because of strong inter-reflections inside the concave bowl and sub-surface scattering on the translucent marble slab. (b-d) Shape reconstructions; parentheses contain the number of input images. (b) Conventional Gray codes result in incorrect depths due to inter-reflections. (c) Modulated phase-shifting results in errors on the marble slab because of the low direct component. (d) Our technique uses an ensemble of codes optimized for individual light transport effects and results in the best shape reconstruction. (e) By analyzing the errors made by the individual codes, we can infer qualitative information about light transport: points marked in green correspond to translucent materials, points marked in light blue receive heavy inter-reflections, and maroon points do not receive much global illumination. For more results and detailed comparisons to existing techniques, please see the project web-page [1].

2. Related Work

In this section, we summarize the works that address the problem of shape recovery under global illumination. The seminal work of Nayar et al. [13] presented an iterative approach for reconstructing the shape of Lambertian objects in the presence of inter-reflections. Gupta et al. [8] presented methods for recovering depths using projector defocus [20] under global illumination effects. Chandraker et al. [2] use inter-reflections to resolve the bas-relief ambiguity inherent in shape-from-shading techniques. Holroyd et al. [10] proposed an active multi-view stereo technique where high-frequency illumination is used as scene texture that is invariant to global illumination. Park et al. [15] move the camera or the scene to mitigate the errors due to global illumination in a structured light setup. Hermans et al. [9] use a moving projector in a variant of structured light triangulation; the depth measure used in this technique (the frequency of the intensity profile at each pixel) is invariant to global light transport effects. In this paper, our focus is on designing structured light systems while avoiding the overhead due to moving components.

Recently, it was shown that the direct and global components of scene radiance can be efficiently separated [14] using high-frequency illumination patterns. This has led to several attempts to perform structured light scanning under global illumination [3, 4]. All these techniques rely on subtracting or reducing the global component and then apply conventional approaches on the residual direct component. While these approaches have shown promise, three issues prevent them from being broadly applicable: (a) the direct component estimation may fail due to strong inter-reflections (as with shiny metallic parts); (b) the residual direct component may be too low and noisy (as with translucent surfaces, milk and murky water); and (c) they require a significantly higher number of images than traditional approaches, or rely on weak cues like polarization. In contrast, we explicitly design ensembles of illumination patterns that are resilient to a broader range of global illumination effects, using significantly fewer images.

3. Errors due to Global Illumination

The type and magnitude of errors due to global illumination depend on the spatial frequencies of the patterns and the global illumination effect. As shown in Figures 2 and 3, long range effects and short range

Figure 2. Conventional coding and decoding (a-e) versus logical coding and decoding (f-j), with results and comparison: (k) depth from conventional Gray codes (mean absolute error = 28.8 mm); (l) depth from our XOR-04 codes (mean absolute error = 1.4 mm); (m) comparison with the ground truth.
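The logical-coding idea referenced above (converting conventional Gray codes into all-high-frequency patterns via XOR, as in the XOR-04 codes of Figure 2) can be illustrated with a minimal NumPy sketch. This is our own idealized rendition, not the authors' code: function names are ours, binarization of the captured images is assumed to be perfect, and noise and thresholding are ignored. The key property used is that XOR is an involution, so XOR-ing each pattern with a high-frequency base pattern at projection time can be undone at decode time.

```python
import numpy as np

def gray_patterns(num_bits, width):
    # Conventional Gray-code stripe patterns; patterns[k, x] is the
    # k-th Gray bit (k = 0 is the lowest-frequency pattern) of column x.
    cols = np.arange(width)
    bits = (cols[:, None] >> np.arange(num_bits - 1, -1, -1)) & 1
    gray = bits.copy()
    gray[:, 1:] ^= bits[:, :-1]          # g_k = b_k XOR b_(k-1)
    return gray.T                        # shape (num_bits, width)

def to_logical_codes(patterns, base_index=-1):
    # XOR every pattern with a chosen high-frequency base pattern, so
    # that all projected patterns are high-frequency; the base itself is
    # projected unchanged. Using the second-highest-frequency Gray
    # pattern as the base corresponds to the XOR-04 codes.
    base = patterns[base_index]
    logical = patterns ^ base            # broadcasts base across rows
    logical[base_index] = base
    return logical

def decode(binarized_images, base_index=-1):
    # Undo the XOR (its own inverse) to recover the Gray bits, then
    # convert Gray code back to projector column indices.
    base = binarized_images[base_index]
    gray = binarized_images ^ base
    gray[base_index] = base
    bits = np.zeros_like(gray)
    bits[0] = gray[0]
    for k in range(1, gray.shape[0]):
        bits[k] = bits[k - 1] ^ gray[k]  # b_k = b_(k-1) XOR g_k
    weights = 1 << np.arange(gray.shape[0] - 1, -1, -1)
    return (bits * weights[:, None]).sum(axis=0)
```

Under ideal (direct-only) imaging, decoding the logical codes recovers each projector column exactly; the point of the construction is that, unlike the original low-frequency Gray patterns, every projected pattern is high-frequency and therefore resilient to long range effects such as inter-reflections.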