AUTOMATED TEXTURE MAPPING OF LASER BASED RANGE IMAGES

by RATTASAK SRISINROONGRUANG, B.S.

A THESIS IN COMPUTER SCIENCE

Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for the Degree of MASTER OF SCIENCE

Approved: Eric D. Sinzinger (Chairperson of the Committee), Gopal D. Lakhani, Hector J. Hernandez
Accepted: John Borrelli, Dean of the Graduate School

August, 2005

ACKNOWLEDGEMENTS

I would like to thank Mom and Dad for everything they’ve done and had to go through to give their children the best kind of life that they could. I only hope I will be as kind and giving as you throughout my life.

I would like to thank Dr. Eric Sinzinger for all the help he has given me during my research. The suggestions and advice made the completion of this work possible.

CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF TABLES
1 INTRODUCTION
2 RELATED WORK
  2.1 Texture Mapping
  2.2 Laser Range Data
  2.3 Data Registration
  2.4 Image Segmentation
3 BACKGROUND
  3.1 3D Transformations
  3.2 Camera Model
  3.3 Texture Mapping Overview
  3.4 Texture Mapping Techniques
  3.5 Texture Mapping Types
  3.6 Texture Mapping Effects
  3.7 Aliasing and Filtering
  3.8 Image Segmentation
4 METHODOLOGY
  4.1 Automated Mesh Alignment
  4.2 Stencil Calculation
    4.2.1 Translation Alignment
    4.2.2 Scale Alignment
    4.2.3 Rotation Alignment
    4.2.4 Alignment Metric
    4.2.5 Field of View Alignment
    4.2.6 Combined Transform Alignment
  4.3 Texture Coordinate Mapping
    4.3.1 Orthographic Projection
    4.3.2 Perspective Projection
5 RESULTS
6 CONCLUSION AND FUTURE WORK
  6.1 Advantages and Disadvantages
  6.2 Future Work and Improvements
    6.2.1 Stencil calculations
    6.2.2 Extended borders
    6.2.3 Lighting and shading
REFERENCES

ABSTRACT

Texture mapping is the process of applying a 2D image onto a 3D planar surface. This requires the generation of a mapping that defines the relationship between the 2D coordinates of the image and the 3D coordinates of the surface. The goal of this research is to provide a method of automatically generating this mapping given a 3D object at arbitrary orientation and a 2D image that may contain unwanted background information. A review of the current methods of texture mapping, image segmentation, and basic 3D viewing transforms is given. An algorithm to compute this alignment given the proper segmentation of the 2D image is then proposed and tested with five different models. The results of the generated alignment and mapping are then discussed, showing the level of accuracy of the final texture mapped model.

LIST OF FIGURES

3.1 Viewing frustum
3.2 Texture Mapping Example
3.3 Segmentation using clustering with color. Reprinted from "Computer Vision: A Modern Approach," by Forsyth and Ponce, Prentice Hall, 2003.
3.4 Segmentation using clustering with color and position. Reprinted from "Computer Vision: A Modern Approach," by Forsyth and Ponce, Prentice Hall, 2003.
5.1 Segmentation Images
5.2 Teapot Alignment Results for δr = 45°
5.3 Teapot Alignment Results for δr = 20°
5.4 Teapot Initial Orientation and Result Error for δr = 20°
5.5 Face Initial Orientation and Result Error for δr = 20°
5.6 Face Alignment Results for δr = 45°
5.7 Face Alignment Results for δr = 20°
5.8 Mechanical Part Initial Orientation and Result Error for δr = 20°
5.9 Mechanical Part Alignment Results for δr = 45°
5.10 Mechanical Part Alignment Results for δr = 20°
5.11 Cessna Initial Orientation and Result Error for δr = 20°
5.12 Cessna Alignment Results for δr = 45°
5.13 Cessna Alignment Results for δr = 20°
5.14 Will Rogers Initial Orientation and Result Error for δr = 20°
5.15 Will Rogers Alignment Results for δr = 45°
5.16 Will Rogers Alignment Results for δr = 20°

LIST OF TABLES

5.1 Model Sizes
5.2 Alignment Calculations with δr = 45°
5.3 Alignment Calculations with δr = 20°

CHAPTER 1
INTRODUCTION

Texture mapping has become an integral component of computer-generated scenes, whether in movies, games, or other forms of graphical rendering. In the past, when meshes composed of thousands of polygons were considered complex, manual assignment of texture coordinates to 3D model coordinates, though time consuming, was a manageable task. With increased processing power and memory, however, 3D models can now be composed of hundreds of thousands of polygons. An automated method of assigning texture coordinates with quick visual feedback would expedite the artistic pipeline.

Texture mapping is the process of applying an image (usually 2D) to a planar surface (usually in 3D space). This can be used to increase the visual interest and complexity of a scene without adding geometric data. The most common component to add is surface color. However, texture mapping is also used to apply lighting effects, shadow effects, reflective effects, and surface irregularity effects onto a surface. The process of applying and warping a texture on a planar surface is computationally inexpensive compared to the cost of transforming a geometrically complex scene.

This project involves the automated mapping of textures to 3D geometric data, requiring as little user input as possible. The traditional process of applying a non-tiled texture map to a 3D model requires extensive user input to select the proper "binding" of texture coordinate points to 3D model vertices. Traditionally, such a mapping is created in one of two ways. One method requires the user to manually assign each vertex in the model to a specific point in the 2D texture; for increasingly complex 3D models, this process takes a long time. The other method is "reverse skinning," which involves obtaining the unwrapped "outline" of the 3D model on a 2D texture. The user can then paint color properties onto the 2D texture as desired, knowing where each point on the 2D texture is mapped on the 3D model. Both methods require significant user input and can take a long time to accomplish. In the former case, the process does not scale well with increasing geometric complexity in terms of the amount of input needed from the user. In the latter case, working around the 3D model outline usually means a stock piece of texture cannot be applied to the model, as the outline (usually generated automatically by an unwrapping tool) will have an orientation not in alignment with the texture of the object.

If the process of texture mapping can be automated as proposed in this research, the time needed to correctly apply a texture to a 3D model may be significantly reduced, and the problems of the two traditional methods discussed above can be mitigated. This automated process should scale well with increased geometric complexity of models because the user is not required to manually set the mapping between texture points and model points for every vertex. The issue with reverse skinning, where the orientation of the model outline does not align with the stock texture, may also be eliminated. Given a stock texture containing an image of the object whose 3D representation the user wishes to have mapped, this new process of texture mapping would ideally determine the transformation needed to align the 3D model so that its orientation closely matches that present in the texture. From this, the mapping of texture coordinates to 3D model vertices can be automatically computed.
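As a toy illustration of the kind of mapping this research aims to compute automatically, the following sketch assigns texture coordinates to vertices by orthographic projection, normalizing the projected extent to the unit texture square. It is illustrative code, not from the thesis, and it assumes the alignment step has already oriented the model toward the texture plane; the function name and sample data are hypothetical.

import numpy as np

def orthographic_uvs(vertices):
    """Assign (u, v) texture coordinates to aligned 3D vertices by
    dropping the depth axis and normalizing the remaining extent to
    [0, 1]^2. Assumes the model already faces the texture plane."""
    xy = vertices[:, :2]                    # orthographic projection: discard z
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    return (xy - lo) / (hi - lo)            # normalize into texture space

# Four vertices of a tilted quad; its corners map to the texture corners.
verts = np.array([[0.0, 0.0, 2.0],
                  [1.0, 0.0, 2.5],
                  [1.0, 1.0, 2.5],
                  [0.0, 1.0, 2.0]])
print(orthographic_uvs(verts))  # [[0 0], [1 0], [1 1], [0 1]]

Section 4.3 covers the orthographic and perspective variants of this final mapping step in detail.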
CHAPTER 2
RELATED WORK

2.1 Texture Mapping

Texture mapping is the process whereby a 2D texture is applied to a 3D object. The process involves the parametrization of a surface and the subsequent application of the texture onto that surface [3]. Texture mapping gained popularity because of its ability to add realism and interest to a scene without the increased geometric complexity that would lead to significantly longer processing times. Such effects include color, reflection, shadow, and surface irregularities. The reflection effect is not a true reflection; it requires an environment map that is "wrapped" around the 3D object as rays are cast outward from the object to determine the intersection point with the environment map [3]. Surface irregularities are modeled with a bump map that defines surface normal offsets. These offsets are used during lighting calculations to give the illusion of surface irregularities [2].

Combined transforms of 3D points are facilitated by encoding the transformations as matrix operations. This requires that 3D points be represented as the four components of a homogeneous coordinate system so that the points can be manipulated by multiplication with the transformation matrix [15]. With the advent of modern graphics processing units that allow efficient matrix operations, this has the added bonus of allowing the transformations to be performed on specialized hardware.
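To make the matrix encoding concrete, here is a minimal sketch (illustrative code, not from the cited sources) that composes a rotation and a translation as 4×4 homogeneous matrices and applies the single combined transform to a point with w = 1:

import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotation_z(theta):
    """4x4 homogeneous rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[0, 0], m[0, 1] = c, -s
    m[1, 0], m[1, 1] = s, c
    return m

# Compose once, then transform any number of points with one multiply.
combined = translation(1.0, 0.0, 0.0) @ rotation_z(np.pi / 2)
point = np.array([1.0, 0.0, 0.0, 1.0])   # (x, y, z) with w = 1
print((combined @ point)[:3])            # rotate then translate: ~[1, 1, 0]

Because the composition collapses into one matrix, an entire mesh can be transformed with a single matrix-vertex multiplication per vertex, which is exactly the operation GPUs accelerate.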
The most common type of texture mapping is perspective texture mapping, a transformation related to the perspective transformation used when manipulating the point of view [8]. This type of mapping limits the distortions apparent when using an affine texture mapping transformation.

2.2 Laser Range Data

Laser range scanner devices emit a beam that is transmitted and recaptured to determine the distance to an object.
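As a hedged sketch of the underlying measurement (the exact principle varies by device; this assumes a simple time-of-flight scanner, which the text does not specify), the distance follows from halving the round-trip travel time of the beam:

C = 299_792_458.0  # speed of light, m/s

def time_of_flight_range(round_trip_seconds):
    """Distance to the target for an assumed time-of-flight scanner:
    the beam travels out and back, so halve the round-trip path."""
    return C * round_trip_seconds / 2.0

# A pulse echoing back after ~66.7 ns places the surface about 10 m away.
print(time_of_flight_range(66.7e-9))  # ~10.0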