Parallel and Efficient Boolean on Polygonal Solids

Total pages: 16

File type: PDF, size: 1020 KB

Parallel and efficient Boolean on polygonal solids
Vis Comput (2011) 27: 507–517, DOI 10.1007/s00371-011-0571-1, ORIGINAL ARTICLE

Hanli Zhao · Charlie C.L. Wang · Yong Chen · Xiaogang Jin
Published online: 22 April 2011, © Springer-Verlag 2011

Abstract  We present a novel framework that efficiently evaluates approximate Boolean set operations for B-rep models using highly parallel algorithms. This is achieved by taking the axis-aligned surfels of Layered Depth Images (LDI) as a bridge and performing the Boolean operations on these structured points. Compared with prior surfel-based approaches, this paper makes three improvements. First, we adopt key-data pairs to store the LDI more compactly. Second, robust depth peeling is investigated to overcome the bottleneck of layer complexity. Third, an out-of-core tiling technique is presented to overcome the limitation of memory. Real-time feedback is provided by streaming the proposed pipeline on many-core graphics hardware.

Keywords  Boolean operations · Layered depth images · Depth peeling · Out-of-core · CUDA

H. Zhao, College of Physics & Electronic Information Engineering, Wenzhou University, Wenzhou 325035, China. e-mail: [email protected]
C.C.L. Wang (corresponding author), Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong, China. e-mail: [email protected]
Y. Chen, Epstein Department of Industrial and Systems Engineering, University of Southern California, Los Angeles, CA, USA. e-mail: [email protected]
X. Jin, State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310058, China. e-mail: [email protected]

1 Introduction

Boolean operations, such as union (∪), subtraction (−), and intersection (∩), are useful for combining simple models into complex solid objects on a Constructive Solid Geometry (CSG) tree, which has a variety of applications in computer-aided design and manufacturing (CAD/CAM), virtual reality, and computer graphics (e.g., [4, 17, 18]). Polygonal meshes are the most popular boundary representation (B-rep) for 3D models. Although many commercial CAD systems, such as Rhinoceros [14] and ACIS [13], are able to compute Booleans on polygons, they have difficulty with very complex models (e.g., the scaffold of the Bone model shown in Fig. 1). Polygons provide the simplest way to approximate freeform solids, and in many areas, such as biomedical engineering, the jewelry industry, and the game industry, the Boolean computation does not need to be exact. This gives us an opportunity to overcome the robustness problem [11] and to speed up the computation with an approximate method. Recent research has shown that dense oriented point sets are sufficient for such a task. Several algorithms [1, 23, 32] perform Booleans on oriented points, but they do not operate on B-rep models directly. Some methods [3, 31] can process B-rep models using the well-structured points of Layered Depth Images (LDI). However, both their efficiency and their memory usage still have much room for improvement.

Fig. 1  A high-quality osseous scaffold is modeled with our system: (left) input models (Bone, offset of Bone, and cellular scaffold), (middle) the layout of solids where 3 × 3 × 3 cellular-scaffold models are tiled together, resulting in a CSG tree with a depth of 28, (mid-bottom) the surfel-represented Boolean result, where surfels with different colors come from different input solids, and (right) the cross-section view of the output osseous tissue. The approximation resolution is 512 × 512 in this example.

In this paper, we aim to develop a framework that fully exploits the advantages of LDI [27] and the power of the Graphics Processing Unit (GPU) for evaluating approximate Boolean operations on polygonal meshes. Our algorithm is efficient, parallel, and scalable compared with prior approaches. These properties are achieved by the following work.
First of all, we adopt a compact representation for LDI that contains axis-aligned surfels sampled from the boundaries of the input solids. By using atomic operations on graphics global memory, we develop a programmable rasterizer that captures all layers of the LDI in a single pass. Moreover, the problem of limited sampling resolution is avoided by splitting the whole working envelope into multiple smaller tiles and processing them independently with an out-of-core strategy. Note that the computation in the whole pipeline is highly parallel, which allows a GPU-based implementation. To the best of our knowledge, this is the first algorithm capable of computing Boolean combinations of freeform polygonal shapes in real time (see the examples shown in Fig. 11). Even for a very complex CSG tree such as the one shown in Fig. 1, we are able to obtain the result of the high-quality osseous tissue in 1.7 s on an NVIDIA GeForce GTX 480 GPU while consuming only 600 MB of graphics memory.
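As a rough illustration of how such a single-pass sampler can be organized on the GPU, the CUDA sketch below scatters candidate surfels into a compact per-pixel layout using atomic counters and stores each sample as a key-data pair. It only shows the generic count/scan/scatter pattern under invented names (Sample, Surfel, countSamples, scatterSamples); it is not the authors' implementation.

    #include <cuda_runtime.h>

    struct Sample {                 // one ray/triangle intersection from rasterization
        unsigned int pixel;         // x + y * resolution along the current axis
        float depth;                // hit distance along the axis-aligned ray
        float nx, ny, nz;           // unit normal of the intersected triangle
    };

    struct Surfel {                 // key-data pair kept in the LDI buffer
        unsigned int key;           // pixel index; the layer becomes implicit after sorting
        float depth;
        float nx, ny, nz;
    };

    // Pass 1: count samples per pixel with atomics in global memory
    // ('count' is assumed to be zeroed beforehand).
    __global__ void countSamples(const Sample* s, int n, unsigned int* count)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) atomicAdd(&count[s[i].pixel], 1u);
    }

    // Pass 2: after an exclusive prefix sum over 'count' has produced 'offset',
    // write each sample into its pixel's contiguous slice of one flat buffer.
    __global__ void scatterSamples(const Sample* s, int n,
                                   const unsigned int* offset,
                                   unsigned int* cursor, Surfel* ldi)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        unsigned int p    = s[i].pixel;
        unsigned int slot = atomicAdd(&cursor[p], 1u);   // next free layer slot
        Surfel out;
        out.key   = p;
        out.depth = s[i].depth;
        out.nx = s[i].nx;  out.ny = s[i].ny;  out.nz = s[i].nz;
        ldi[offset[p] + slot] = out;
    }

Each pixel's slice can then be sorted by depth (the shell sort of Algorithm 1 below), which makes the layer index implicit and keeps the storage compact regardless of how many layers a pixel has.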
In summary, this paper addresses the following important questions in geometric modeling:
– How to support real-time approximate geometric feedback for a CSG tree with polygonal models?
– How to support highly accurate approximation that may be beyond the memory capacity of graphics hardware?
The solution to these problems is the major contribution of our approach.

The rest of the paper is organized as follows. After briefly reviewing related previous work in the next section, Sect. 3 gives an overview of our new Boolean evaluation framework. Section 4 presents the parallelization techniques on the GPU in detail. Experimental results and related discussions are given in Sect. 5. Finally, Sect. 6 concludes with the advantages of our approach and suggests future work.

2 Related work

Boolean operations on solids have been investigated in computer graphics for many years. For a comprehensive survey, we refer readers to [26]. Here, we only review Boolean algorithms based on surfels [24] and oriented points.

Many approaches [6–9, 25] employ image-based depth-layering or depth-peeling techniques to achieve interactive rendering of CSG models. They gradually peel candidate points from the boundaries of the CSG primitives and only display the nearest boundary points of the Boolean result. The points can be sampled quickly using graphics hardware [5] and encoded as pixels in an LDI. However, these approaches only solve the fast rendering problem; they do not provide the geometric shape of the resultant solids.

Surfels are a representation that approximates the shape of a solid model with oriented sample points. Boolean operations on freeform solids represented by surfels have been developed in [1, 23]. Since most CAD/CAM applications, such as CAE analysis, CNC machining, and rapid prototyping, need the B-rep of solid models, an additional surface reconstruction from the surfel-represented solid is required. Zhou et al. [32] introduced a parallel surface reconstruction algorithm by extending the marching cubes algorithm [20] with a GPU-based octree. They also presented an interactive Boolean tool for oriented point sets. However, their adaptive marching cubes algorithm tends to smooth sharp features in the underlying geometry. Moreover, their in-core reconstruction can only handle octrees with a limited number of depths.

Chen and Wang [3] presented an approximate Boolean approach that preserves sharp features in the resultant solids. They used a variant of LDI, named Layered Depth-Normal Images (LDNI), for efficient volumetric tests [29], and employed the quadratic error function (QEF) of dual contouring [15] for surface generation. However, only the sampling of the LDNI is performed with GPU-based depth peeling; the other algorithms run on the CPU. Although multi-threaded schemes are designed, the parallelism is limited. Wang et al. [31] further moved the LDNI-based algorithm to the GPU to improve the efficiency. Unfortunately, as the workload on each thread is not well balanced, their algorithm still cannot provide real-time feedback. In addition, it cannot process models with more than 256 LDI layers, because the stencil buffer used in their sampling method has only 8 bits on current graphics hardware.

Algorithm 1  Streaming Boolean operations
 1: Input: Boolean tree accompanied with polygonal meshes
 2: for each volumetric tile in the bounding volume do
 3:   for each axis in {x, y, z} do
 4:     for each node n by depth-first tree traversal do
 5:       if n is a leaf then
 6:         Sample axis-aligned surfels by rasterizing n
 7:         Shell-sort the per-pixel surfels
 8:       else
 9:         Merge-sort the candidate surfels from the children
10:         Boolean on the sorted candidate surfels
11:       end if
12:     end for
13:     Obtain boundary surfels from the root node
14:     Accumulate the QEF matrices for boundary cells
15:     Mark interior/exterior cells along the axis
16:   end for
17:   Output vertices by solving the QEF matrices
18:   Output indices by connecting boundary cells
19: end for
20: return merge all tiles of polygons into a closed manifold

For each leaf node, we convert the B-rep model into a solid bounded by surfels using axis-aligned ray-casting. For each inner node, we first collect all candidate surfels from its children nodes and sort the merged samples according to their depth values. Next, valid boundary surfels are extracted by performing the Boolean on the candidate surfels in an efficient way. After traversing the CSG tree, the resultant surfel-represented solid is obtained at the root node.

Efficient surface reconstruction is conducted to output B-rep models as the result of the Boolean. We develop a modified version of dual contouring in this paper. The optimal position of each boundary cell is determined by the QEF matrix, which can be accumulated from all the boundary surfels in the …
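The per-ray Boolean step (line 10 of Algorithm 1) can be pictured as a simple inside/outside classification over the merged, depth-sorted samples of one ray. The sketch below is our own illustration of that idea, not the paper's code; it assumes each candidate surfel already records whether the ray enters or leaves its source solid (the sign of the normal component along the sampling axis) and which child of the CSG node it came from.

    enum Op { UNION, INTERSECTION, SUBTRACTION };

    struct RaySurfel {
        float depth;     // position along the axis-aligned ray (sorted ascending)
        bool  enters;    // true if the ray enters the source solid at this sample
        int   solid;     // 0 = left child A, 1 = right child B
    };

    __host__ __device__ static bool insideResult(Op op, bool inA, bool inB)
    {
        switch (op) {
            case UNION:        return inA || inB;
            case INTERSECTION: return inA && inB;
            case SUBTRACTION:  return inA && !inB;
        }
        return false;
    }

    // Keep only the samples where the Boolean result switches between inside
    // and outside; these are the boundary surfels passed up the CSG tree.
    __host__ __device__ int booleanOnRay(Op op, const RaySurfel* in, int n,
                                         RaySurfel* out)
    {
        bool inA = false, inB = false;   // the ray starts outside both solids
        bool inside = false;
        int  kept = 0;
        for (int i = 0; i < n; ++i) {
            if (in[i].solid == 0) inA = in[i].enters;
            else                  inB = in[i].enters;
            const bool now = insideResult(op, inA, inB);
            if (now != inside) {         // result boundary crossed at this sample
                out[kept++] = in[i];
                inside = now;
            }
        }
        return kept;
    }

For subtraction, the surfels kept from solid B lie on an inverted boundary, so a full method would also flip their normals; the sketch only shows the selection of boundary samples.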
Recommended publications
  • Rendering of Complex Heterogenous Scenes Using Progressive Blue Surfels
Rendering of Complex Heterogenous Scenes using Progressive Blue Surfels
SASCHA BRANDT, Paderborn University; CLAUDIUS JÄHN, DeepL.com; MATTHIAS FISCHER, Paderborn University; FRIEDHELM MEYER AUF DER HEIDE, Paderborn University
Fig. 1. 20 donut production lines with ∼50M polygons each (∼1G total) rendered at interactive frame rates of ∼120 fps at a resolution of 2560×1440.
We present a technique for rendering highly complex 3D scenes in real-time by generating uniformly distributed points on the scene's visible surfaces. The technique is applicable to a wide range of scene types, like scenes directly based on complex and detailed CAD data consisting of billions of polygons (in contrast to scenes handcrafted solely for visualization). This allows to visualize such scenes smoothly even in VR on a HMD with good image quality, while maintaining the necessary frame-rates. In contrast to other point based rendering methods, we place points in an approximated blue noise distribution only on visible surfaces and store them in a highly GPU efficient data structure, allowing to progressively refine the number of rendered points to maximize the image quality for a given target frame rate.
  • Multi-View Scene Capture by Surfel Sampling: from Video Streams to Non-Rigid 3D Motion, Shape & Reflectance
Multi-View Scene Capture by Surfel Sampling: From Video Streams to Non-Rigid 3D Motion, Shape & Reflectance
Rodrigo L. Carceroni, Departamento de Ciência da Computação, Universidade Federal de Minas Gerais, Belo Horizonte, MG, CEP 31270-010, Brazil, [email protected]
Kiriakos N. Kutulakos, Department of Computer Science, University of Toronto, Toronto, ON M5S 3H5, Canada, [email protected]
Abstract
In this paper we study the problem of recovering the 3D shape, reflectance, and non-rigid motion properties of a dynamic 3D scene. Because these properties are completely unknown and because the scene's shape and motion may be non-smooth, our approach uses multiple views to build a piecewise-continuous geometric and radiometric representation of the scene's trace in space-time. A basic primitive of this representation is the dynamic surfel, which (1) encodes the instantaneous local shape, reflectance, and motion of a small and bounded region in the scene, and (2) enables accurate prediction of the region's dynamic appearance under known illumination conditions. We show that complete surfel-based reconstructions can be created by repeatedly applying an algorithm called Surfel Sampling that combines sampling and parameter estimation to fit a single surfel to a small, bounded region of space-time. Experimental results with the Phong reflectance model and complex real scenes (clothing, shiny objects, skin) illustrate our method's ability to explain pixels and pixel variations in terms of their underlying causes — shape, reflectance, motion, illumination, and visibility.
Keywords: Stereoscopic vision (3D reconstruction, multiple-view geometry, multi-view stereo, space carving); Motion analysis (multi-view motion estimation, direct estimation methods, image warping, deformation analysis); 3D motion capture; Reflectance modeling (illumination modeling, Phong reflectance model)
This research was conducted while the authors were with the Departments of Computer Science and Dermatology at the University of Rochester, Rochester, NY, USA.
  • Multi-Resolution Surfel Maps for Efficient Dense 3D Modeling And
Journal of Visual Communication and Image Representation 25(1):137-147, Springer, January 2014, corrected Table 3 & Sec 6.2.
Multi-Resolution Surfel Maps for Efficient Dense 3D Modeling and Tracking
Jörg Stückler and Sven Behnke
Autonomous Intelligent Systems, Computer Science Institute VI, University of Bonn, Friedrich-Ebert-Allee 144, 53113 Bonn, Germany
stueckler, [email protected]
Abstract
Building consistent models of objects and scenes from moving sensors is an important prerequisite for many recognition, manipulation, and navigation tasks. Our approach integrates color and depth measurements seamlessly in a multi-resolution map representation. We process image sequences from RGB-D cameras and consider their typical noise properties. In order to align the images, we register view-based maps efficiently on a CPU using multi-resolution strategies. For simultaneous localization and mapping (SLAM), we determine the motion of the camera by registering maps of key views and optimize the trajectory in a probabilistic framework. We create object models and map indoor scenes using our SLAM approach which includes randomized loop closing to avoid drift. Camera motion relative to the acquired models is then tracked in real-time based on our registration method. We benchmark our method on publicly available RGB-D datasets, demonstrate accuracy, efficiency, and robustness of our method, and compare it with state-of-the-art approaches. We also report on several successful public demonstrations where it was used in mobile manipulation tasks.
Keywords: RGB-D image registration, simultaneous localization and mapping, object modeling, pose tracking
  • Point-Based Rendering
Point-Based Rendering
Matthias Zwicker, Computer Graphics Lab, ETH Zürich
Outline: • Introduction and motivation • Surface elements • Rendering • Antialiasing • Hardware Acceleration • Conclusions
Motivation 1
• Performance of 3D hardware has exploded (e.g., GeForce4: 136 million vertices per second)
• Projected triangles are very small (i.e., cover only a few pixels)
• Overhead for triangle setup increases (initialization of texture filtering, rasterization)
(images: Quake 2, 1998; Nvidia GeForce4, 2002)
A simpler, more efficient rendering primitive than triangles?
Motivation 2
• Modern 3D scanning devices (e.g., laser range scanners) acquire huge point clouds
• Generating consistent triangle meshes is time consuming and difficult
(images: triangle mesh with textures; point cloud, 4 million pts. [Levoy et al. 2000])
A rendering primitive for direct visualization of point clouds, without the need to generate triangle meshes?
Points as Rendering Primitives
• Point clouds instead of triangle meshes [Levoy and Whitted 1985, Grossman and Dally 1998, Pfister et al. 2000]
Point-Based Surface Representation
• Points are samples of the surface
• The point cloud describes: the 3D geometry of the surface and the surface reflectance properties (e.g., diffuse color, etc.)
• There is no additional information, such as connectivity (i.e., explicit neighborhood information between points), texture maps, bump maps, etc.
Surface Elements - Surfels
• Each point corresponds to a surface element, or surfel, describing the surface in a small neighborhood
• Basic surfels:
    BasicSurfel {
        position;
        color;
        ...
    }
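The truncated BasicSurfel above lists only position and color; a normal and a radius for the small neighborhood the surfel covers are the usual additions (compare the surfel attributes in the Pfister et al. abstract further down this list). The following struct is an illustrative sketch, not the definition from the original slides:

    struct ExtendedSurfel {
        float px, py, pz;            // position: sample point on the surface
        float nx, ny, nz;            // surface normal at the sample
        float radius;                // extent of the neighborhood the surfel represents
        unsigned char r, g, b, a;    // prefiltered diffuse color
    };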
  • Efficient High-Quality Volume Rendering of SPH Data
Efficient High-Quality Volume Rendering of SPH Data
Roland Fraedrich, Stefan Auer, and Rüdiger Westermann
Fig. 1. A novel technique for order-dependent volume rendering of SPH data is presented. It provides rendering options like direct volume rendering (left), iso-surface rendering (middle), and mixed modes (right), and it renders data sets consisting of millions of particles at high quality and speed (42M @ 0.1 fps, 2.2M @ 4.5 fps, 2M @ 1.7 fps, from left to right) on a 1024×1024 viewport.
Abstract—High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows effectively reducing the amount of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.
Index Terms—Particle visualization, volume rendering, ray-casting, GPU resampling.
  • The State of the Art in Interactive Global Illumination
Volume 0 (1981), Number 0 pp. 1–27, COMPUTER GRAPHICS forum
The State of the Art in Interactive Global Illumination
Tobias Ritschel (1,2), Carsten Dachsbacher (3), Thorsten Grosch (4), Jan Kautz (5)
(1) Télécom ParisTech / CNRS (LTCI), (2) Intel Visual Computing Institute, (3) Karlsruhe Institute of Technology, (4) University of Magdeburg, (5) University College London
Figure 1: Photography of a scene with global illumination: Multiple diffuse and specular bounces, caustics and scattering.
Abstract
The interaction of light and matter in the world surrounding us is of striking complexity and beauty. Since the very beginning of computer graphics, adequate modeling of these processes and efficient computation is an intensively studied research topic and still not a solved problem. The inherent complexity stems from the underlying physical processes as well as the global nature of the interactions that let light travel within a scene. This article reviews the state of the art in interactive global illumination computation, that is, methods that generate an image of a virtual scene in less than one second with an as exact as possible, or plausible, solution to the light transport. Additionally, the theoretical background and attempts to classify the broad field of methods are described. The strengths and weaknesses of different approaches, when applied to the different visual phenomena, arising from light interaction are compared and discussed. Finally, the article concludes by highlighting design patterns for interactive global illumination and a list of open problems.
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture / Radiosity / Raytracing
  • Raycaching: Amortized Isosurface Rendering for Virtual Reality
DOI: 10.1111/cgf.13762, COMPUTER GRAPHICS forum, Volume 00 (2019), number 00 pp. 1–11
RayCaching: Amortized Isosurface Rendering for Virtual Reality
F. Nysjö, F. Malmberg and I. Nyström
Centre for Image Analysis, Uppsala University, Uppsala, Sweden
{fredrik.nysjo, filip.malmberg, ingela.nystrom}@it.uu.se
Abstract
Real-time virtual reality requires efficient rendering methods to deal with high-resolution stereoscopic displays and low latency head-tracking. Our proposed RayCaching method renders isosurfaces of large volume datasets by amortizing raycasting over several frames and caching primary rays as small bricks that can be efficiently rasterized. An occupancy map in form of a clipmap provides level of detail and ensures that only bricks corresponding to visible points on the isosurface are being cached and rendered. Hard shadows and ambient occlusion from secondary rays are also accumulated and stored in the cache. Our method supports real-time isosurface rendering with dynamic isovalue and allows stereoscopic visualization and exploration of large volume datasets at framerates suitable for virtual reality applications.
Keywords: ray tracing, rendering, point-based graphics, rendering, immersive VR, virtual environments
ACM CCS: • Computing methodologies → Ray tracing; Visibility; Point-based models; Virtual reality
  • A Compact RGB-D Map Representation Dedicated to Autonomous Navigation Tawsif Ahmad Hussein Gokhool
A compact RGB-D map representation dedicated to autonomous navigation
Tawsif Ahmad Hussein Gokhool
To cite this version: Tawsif Ahmad Hussein Gokhool. A compact RGB-D map representation dedicated to autonomous navigation. Other. Université Nice Sophia Antipolis, 2015. English. NNT: 2015NICE4028. tel-01171197
HAL Id: tel-01171197, https://tel.archives-ouvertes.fr/tel-01171197, submitted on 3 Jul 2015
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Université de Nice - Sophia Antipolis, École Doctorale STIC (Sciences et Technologies de l'Information et de la Communication). Thesis for the degree of Docteur en Sciences of the Université de Nice-Sophia Antipolis, specialty: automatic control and signal and image processing. Presented and defended by Tawsif Gokhool: Cartographie dense basée sur une représentation compacte RGB-D dédiée à la navigation autonome (dense mapping based on a compact RGB-D representation dedicated to autonomous navigation). Thesis supervised by Patrick Rives, prepared at INRIA Sophia Antipolis, LAGADIC-SOP team, defended on 5 June 2015. Jury: reviewers: Pascal Vasseur (Université de Rouen), Frédéric Chausse (Institut …
  • Surfels: Surface Elements As Rendering Primitives
Surfels: Surface Elements as Rendering Primitives
Hanspeter Pfister, Matthias Zwicker, Jeroen van Baar, Markus Gross
Figure 1: Surfel rendering examples.
Abstract
Surface elements (surfels) are a powerful paradigm to efficiently render complex geometric objects at interactive frame rates. Unlike classical surface discretizations, i.e., triangles or quadrilateral meshes, surfels are point primitives without explicit connectivity. Surfel attributes comprise depth, texture color, normal, and others. As a pre-process, an octree-based surfel representation of a geometric object is computed. During sampling, surfel positions and normals are optionally perturbed, and different levels of texture colors are prefiltered and stored per surfel. During rendering, a hierarchical forward warping algorithm projects surfels to a z-buffer. A novel method called visibility splatting determines visible surfels and holes in the z-buffer. Visible surfels are shaded using texture filtering, Phong illumination, and environment mapping using …
  • Online Surfel-Based Mesh Reconstruction
SurfelMeshing: Online Surfel-Based Mesh Reconstruction
Thomas Schöps, Torsten Sattler, and Marc Pollefeys
IEEE Transactions on Pattern Analysis and Machine Intelligence; final version of record: http://dx.doi.org/10.1109/TPAMI.2019.2947048
Abstract—We address the problem of mesh reconstruction from live RGB-D video, assuming a calibrated camera and poses provided externally (e.g., by a SLAM system). In contrast to most existing approaches, we do not fuse depth measurements in a volume but in a dense surfel cloud. We asynchronously (re)triangulate the smoothed surfels to reconstruct a surface mesh. This novel approach enables to maintain a dense surface representation of the scene during SLAM which can quickly adapt to loop closures. This is possible by deforming the surfel cloud and asynchronously remeshing the surface where necessary. The surfel-based representation also naturally supports strongly varying scan resolution. In particular, it reconstructs colors at the input camera's resolution. Moreover, in contrast to many volumetric approaches, ours can reconstruct thin objects since objects do not need to enclose a volume. We demonstrate our approach in a number of experiments, showing that it produces reconstructions that are competitive with the state-of-the-art, and we discuss its advantages and limitations. The algorithm (excluding loop closure functionality) is available as open source at https://github.com/puzzlepaint/surfelmeshing.
  • Efficient Surfel-Based SLAM Using 3D Laser Range Data in Urban Environments
Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments
Jens Behley and Cyrill Stachniss
Institute of Geodesy and Geoinformation, University of Bonn, Germany
Abstract—Accurate and reliable localization and mapping is a fundamental building block for most autonomous robots. For this purpose, we propose a novel, dense approach to laser-based mapping that operates on three-dimensional point clouds obtained from rotating laser sensors. We construct a surfel-based map and estimate the changes in the robot's pose by exploiting the projective data association between the current scan and a rendered model view from that surfel map. For detection and verification of a loop closure, we leverage the map representation to compose a virtual view of the map before a potential loop closure, which enables a more robust detection even with low overlap between the scan and the already mapped areas. Our approach is efficient and enables real-time capable registration. At the same time, it is able to detect loop closures and to perform map updates in an online fashion. Our experiments show that we are able to estimate globally consistent maps in large scale environments solely based on point cloud data.