A Real-Time Photo-Realistic Visual Flythrough

Total Pages: 16

File Type: PDF, Size: 1020 KB

A Real-Time Photo-Realistic Visual Flythrough

Daniel Cohen-Or (1,2), Eran Rich (1), Uri Lerner (1) and Victor Shenkar (1)
1 Tiltan System Engineering, Bnei-Brak, Israel
2 School of Mathematical Sciences, Tel-Aviv University, Ramat-Aviv, Israel

Abstract. In this paper we present a comprehensive flythrough system which generates photo-realistic images in true real-time. The high performance is due to an innovative rendering algorithm based on a discrete ray casting approach, accelerated by ray coherence and multiresolution traversal. The terrain, as well as the 3D objects, is represented by a texture-mapped voxel-based model. The system is based on a pure software algorithm and is thus portable. It was first implemented on a workstation and then ported to a general-purpose parallel architecture to achieve real-time performance.

Keywords: Terrain Visualization, Parallel Rendering, Flight Simulator, Visual Simulations, Voxel-Based Modeling, Ray Casting

Introduction

The quest for real-time photo-realistic rendering has been one of the major goals of 3D computer graphics in recent years. Techniques for adding realism to the image, such as shading, shadows, textures and transparency, have been developed, and the generation of realistic images in real-time is an active research topic. Flight simulator applications have always led the way in real-time performance. Special-purpose machines dedicated to flight simulation have been developed; these machines generate images with reasonable realism in real-time, but are expensive (more than a few million US dollars). The main contribution of the work presented in this paper is that real-time performance was achieved on a commercial general-purpose parallel architecture, as opposed to specialized rendering hardware.

Generating images of arbitrarily complex scenes is not within the reach of current technology. However, image generation in flight simulation can reach real-time rates because the scenes viewed from the sky are not too complex: typical views contain terrain, which is merely 2.5D, and 3D objects which appear as relatively simple, featureless shapes. Nevertheless, simulating photo-realistic aerial views in real-time is by no means easy.

The term visual flythrough should be distinguished from flight simulation. A visual flythrough generates simulated images as seen from a video camera attached to a flying object. The camera produces photo-realistic images, although not necessarily in color, since many video cameras have a grey-level output. It should be emphasized that the generation of true photo-realistic images is critical for applications where the user needs to recognize the area or identify objects on the ground (e.g., targeting or mission rehearsal). See the photo-realistic impression of the images presented in Figure 1.

In a typical flythrough scenario the camera views a very large area, especially when the camera pitch angle is high (i.e., towards the horizon). In many applications the camera flies at high speed over long distances, and the area covered during a few seconds of flight is vast. This means it is not possible to load the entire terrain data into main memory; for some applications even a Gigabyte of RAM is not enough, and it is safe to say that no size will ever suffice, since application demands always grow to fill the available space. Flythroughs therefore require large secondary storage together with a fast paging mechanism.
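The paper does not spell out its paging mechanism at this point, but the idea of streaming terrain from secondary storage on demand can be illustrated with a minimal sketch. The tile size, field names and the least-recently-used eviction policy below are assumptions for illustration only, not the authors' design.

```cpp
// Minimal sketch (assumed tile size, names and LRU policy) of a fixed-budget
// cache of terrain tiles paged in from secondary storage on demand.
#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

struct TerrainTile {
    static constexpr int kSize = 256;        // 256x256 voxel columns per tile (assumed)
    std::vector<uint16_t> height;            // elevation per column
    std::vector<uint32_t> color;             // photo texel per column
};

class TilePager {
public:
    explicit TilePager(size_t maxTiles) : maxTiles_(maxTiles) {}

    // Returns the tile containing voxel column (x, y), loading it if needed.
    const TerrainTile& fetch(int x, int y) {
        uint64_t key = tileKey(x / TerrainTile::kSize, y / TerrainTile::kSize);
        auto it = cache_.find(key);
        if (it != cache_.end()) {             // cache hit: mark as most recently used
            lru_.splice(lru_.begin(), lru_, it->second.second);
            return it->second.first;
        }
        if (cache_.size() >= maxTiles_ && !lru_.empty()) {
            cache_.erase(lru_.back());        // evict the least recently used tile
            lru_.pop_back();
        }
        lru_.push_front(key);
        auto& slot = cache_[key];
        slot.first = loadTileFromDisk(key);   // hypothetical secondary-storage read
        slot.second = lru_.begin();
        return slot.first;
    }

private:
    static uint64_t tileKey(int tx, int ty) {
        return (static_cast<uint64_t>(static_cast<uint32_t>(tx)) << 32)
             | static_cast<uint32_t>(ty);
    }
    TerrainTile loadTileFromDisk(uint64_t /*key*/) { return TerrainTile{}; } // stub

    size_t maxTiles_;
    std::list<uint64_t> lru_;
    std::unordered_map<uint64_t,
                       std::pair<TerrainTile, std::list<uint64_t>::iterator>> cache_;
};
```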
An image of an aerial view gains its realistic impression from mapping a digital photograph onto the terrain model. In order to achieve high quality, the full resolution of both the digital terrain model and the corresponding aerial photograph needs to be employed. This places a major load on conventional graphics hardware based on a geometric pipeline. First, a high-resolution polygonal terrain model contains a vast number of polygons which must be processed by the geometric pipeline, and processing tiny polygons loses the cost-effectiveness of the rasterization hardware. Second, the high-resolution photograph that must be texture-mapped during rasterization creates a further problem, since large photographic maps need to be loaded into an expensive texture cache. For example, the RealityEngine board cannot hold more than a few Kilobytes of texture, so larger textures need to be paged from main memory in real-time. To avoid this, many flight simulators use repetitive patterns as ground texture, but for applications where a specific target area is a vital requirement, a true photograph has to be mapped onto the terrain. These photographs are huge and must be loaded on the fly to the rasterization hardware, forming a serious bottleneck in the rendering pipeline.

Instead of using a polygonal model and a geometric pipeline, we have favored a software solution in which the model is represented by a voxel-based representation. The texture-mapping of the photograph over the model is a preprocessing stage which is decoupled from the rendering stage. Voxel-based modeling also lends itself to representing fine-grained geometry. The voxel data is regular, internally represented in an array, and offers easy and fast access. Each voxel stores some discrete local attributes of the model: voxels representing the terrain contain a height value and a color value, while voxels representing a 3D object contain a texture photograph, as will be described below in Section 4.

A voxel-based visual flight simulator with real-time performance has been developed at Hughes Training Inc. Their flight simulator runs on special-purpose hardware but yields poor results on a graphics workstation. The visual flythrough that we have developed is hardware-independent and thus portable. Portability is important since it enables the flythrough system to follow the rapid progress in hardware platforms. However, a software rendering algorithm must be fast enough (around a second or two per frame) running on a sequential machine, so that on a parallel machine with enough processors it achieves a rate of several frames per second. That is, of course, assuming little overhead is imposed on the parallel version of the algorithm.

[Figure 1: Two aerial photo-realistic images generated by the flythrough.]

Although we have employed a parallel machine, the real-time performance is mainly due to an innovative rendering algorithm. The new algorithm generates a photo-realistic image, such as those in Figure 1, within two seconds on a common workstation. The implementation of a parallel version of the algorithm on a multiprocessor architecture has sped up the rendering to the desired real-time rates. It should be noted that other hardware-independent ray casting algorithms have reached reasonable speeds, but only for point sampling; avoiding aliasing artifacts is quite involved and time-costly. The algorithm presented here resembles the principles of an earlier projection algorithm; however, that algorithm is based on a forward mapping method and was designed to be implemented in hardware.
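As a concrete illustration of the voxel representation just described, here is a minimal sketch of a regular terrain grid and the preprocessing pass that bakes the aerial photograph onto it, decoupled from rendering. Field names and types are assumptions; the paper states only that terrain voxels hold a height value and a color value.

```cpp
// Minimal sketch (assumed names/types): a regular grid of terrain voxel
// columns, each holding an elevation and a color sampled once, in a
// preprocessing pass, from the registered aerial photograph.
#include <cstdint>
#include <vector>

struct TerrainVoxel {
    uint16_t height;   // elevation value from the digital terrain model
    uint32_t color;    // packed RGB texel baked from the aerial photograph
};

struct TerrainMap {
    int width = 0, depth = 0;
    std::vector<TerrainVoxel> cells;     // row-major, width * depth entries

    TerrainVoxel& at(int x, int y) { return cells[y * width + x]; }
};

// Preprocessing: map the ortho-rectified photograph onto the elevation grid.
// Assumes the photo and the elevation map are already registered to the same grid.
inline void bakePhotograph(TerrainMap& terrain,
                           const std::vector<uint32_t>& photo /* width*depth texels */)
{
    for (int y = 0; y < terrain.depth; ++y)
        for (int x = 0; x < terrain.width; ++x)
            terrain.at(x, y).color = photo[y * terrain.width + x];
}
```

Because the color is baked in this preprocessing step, no photographic texture has to be streamed to rasterization hardware at render time, which is exactly the bottleneck the paper argues against.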
The algorithm presented in the next section is a simple ray casting algorithm, accelerated by ray coherence and multiresolution traversal, and highly optimized for a hardware-independent implementation.

[Figure 2: The image footprint over the terrain is defined by the viewing parameters.]

The remainder of this paper is structured as follows. Section 2 describes the rendering algorithm. Section 3 presents the IBM Power Visualization System, the current platform of the parallel algorithm implementation, details concerning the parallelization of the algorithm, and some results. We discuss the generation of voxel-based objects in Section 4 and conclude with a brief discussion of our current activity and some final remarks.

The Rendering Algorithm

The images in the sequence generated by the rendering algorithm are mutually independent. Each image is defined by the location of the camera in 3D space, the camera field of view, its orientation (namely the pitch, roll and yaw angles), and the image resolution. Figure 2 depicts the image footprint defined by the projection of the image frame over the terrain. The terrain model is represented by a voxel-based map generated from a discrete elevation map colored by a corresponding aerial photo map.

The rendering algorithm is based on a backward mapping approach known as ray casting. The image is generated by casting a ray of sight, emanating from the viewpoint through each of the image pixels, towards the model (see Figure 3). The ray traverses above the terrain voxels until it intersects the terrain; the terrain color is then sampled and mapped back to the source pixel. Since the model is discrete, there is no explicit intersection calculation, but rather a sequential search for a hit between the ray and a voxel.

[Figure 3: Discrete ray casting of a voxel-based terrain.]

The speed of the ray traversal is crucial for achieving real-time performance. The technique we employ is based on a discrete grid traversal, where the steps along the ray are performed on the projection of the ray onto the plane rather than in 3D. The heights along the ray are incrementally and uniformly sampled and compared to the height of the terrain below, until a hit occurs and the color of the terrain at the hit point is mapped back to the source pixel. If there is no hit, the background color of the sky is mapped instead. This apparently naive traversal is flat, in contrast to a hierarchical traversal, in which a pyramidal elevation map is used and the multiresolution pyramid is treated as a hierarchy of bounding boxes through which the ray traverses.
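The flat traversal described above can be sketched as follows. This is a simplified reconstruction under assumed names and unit step lengths, not the authors' optimized code; ray coherence, the multiresolution pyramid and antialiasing are omitted.

```cpp
// Simplified sketch of the flat traversal (assumed interfaces): step along the
// ray's projection on the ground plane, advance the ray height incrementally,
// and stop at the first voxel column whose terrain height reaches the ray.
#include <cmath>
#include <cstdint>
#include <vector>

struct Heightfield {
    int w = 0, h = 0;
    std::vector<uint16_t> height;   // elevation per column, row-major
    std::vector<uint32_t> color;    // baked photo texel per column

    uint16_t elev(int x, int y)  const { return height[y * w + x]; }
    uint32_t texel(int x, int y) const { return color[y * w + x]; }
};

// eye*: viewpoint, dir*: normalized ray direction (z is up). Returns the color
// mapped back to the source pixel, or skyColor if the ray never hits terrain.
uint32_t castRay(const Heightfield& map,
                 float eyeX, float eyeY, float eyeZ,
                 float dirX, float dirY, float dirZ,
                 float maxSteps, uint32_t skyColor)
{
    // Uniform unit steps on the ground-plane projection of the ray, not in 3D.
    float len2d = std::sqrt(dirX * dirX + dirY * dirY);
    if (len2d < 1e-6f) return skyColor;            // near-vertical ray: skipped in sketch
    float sx = dirX / len2d, sy = dirY / len2d;
    float dz = dirZ / len2d;                       // height change per unit 2D step

    float x = eyeX, y = eyeY, z = eyeZ;
    for (float t = 0.0f; t < maxSteps; t += 1.0f) {
        x += sx; y += sy; z += dz;                 // purely incremental update
        int ix = static_cast<int>(x), iy = static_cast<int>(y);
        if (ix < 0 || iy < 0 || ix >= map.w || iy >= map.h) break;
        if (map.elev(ix, iy) >= z)                 // hit: terrain reaches the ray
            return map.texel(ix, iy);              // sample color back to the pixel
    }
    return skyColor;                               // no hit: background sky color
}
```

Per pixel, the cost is dominated by the number of 2D steps taken before a hit, which is what the ray-coherence and pyramidal-traversal accelerations discussed in the paper are meant to reduce.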
Recommended publications
  • Advanced Computer Graphics to Do Motivation Real-Time Rendering
    To Do Advanced Computer Graphics § Assignment 2 due Feb 19 § Should already be well on way. CSE 190 [Winter 2016], Lecture 12 § Contact us for difficulties etc. Ravi Ramamoorthi http://www.cs.ucsd.edu/~ravir Motivation Real-Time Rendering § Today, create photorealistic computer graphics § Goal: interactive rendering. Critical in many apps § Complex geometry, lighting, materials, shadows § Games, visualization, computer-aided design, … § Computer-generated movies/special effects (difficult or impossible to tell real from rendered…) § Until 10-15 years ago, focus on complex geometry § CSE 168 images from rendering competition (2011) § § But algorithms are very slow (hours to days) Chasm between interactivity, realism Evolution of 3D graphics rendering Offline 3D Graphics Rendering Interactive 3D graphics pipeline as in OpenGL Ray tracing, radiosity, photon mapping § Earliest SGI machines (Clark 82) to today § High realism (global illum, shadows, refraction, lighting,..) § Most of focus on more geometry, texture mapping § But historically very slow techniques § Some tweaks for realism (shadow mapping, accum. buffer) “So, while you and your children’s children are waiting for ray tracing to take over the world, what do you do in the meantime?” Real-Time Rendering SGI Reality Engine 93 (Kurt Akeley) Pictures courtesy Henrik Wann Jensen 1 New Trend: Acquired Data 15 years ago § Image-Based Rendering: Real/precomputed images as input § High quality rendering: ray tracing, global illumination § Little change in CSE 168 syllabus, from 2003 to
  • Sun Opengl 1.3 for Solaris Implementation and Performance Guide
    Sun™ OpenGL 1.3 for Solaris™ Implementation and Performance Guide Sun Microsystems, Inc. www.sun.com Part No. 817-2997-11 November 2003, Revision A Submit comments about this document at: http://www.sun.com/hwdocs/feedback Copyright 2003 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved. Sun Microsystems, Inc. has intellectual property rights relating to technology that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at http://www.sun.com/patents and one or more additional patents or pending patent applications in the U.S. and in other countries. This document and the product to which it pertains are distributed under licenses restricting their use, copying, distribution, and decompilation. No part of the product or of this document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers. Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and in other countries, exclusively licensed through X/Open Company, Ltd. Sun, Sun Microsystems, the Sun logo, SunSoft, SunDocs, SunExpress, and Solaris are trademarks, registered trademarks, or service marks of Sun Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc.
  • Projective Texture Mapping with Full Panorama
EUROGRAPHICS 2002 / G. Drettakis and H.-P. Seidel Volume 21 (2002), Number 3 (Guest Editors) Projective Texture Mapping with Full Panorama Dongho Kim and James K. Hahn Department of Computer Science, The George Washington University, Washington, DC, USA Abstract Projective texture mapping is used to project a texture map onto scene geometry. It has been used in many applications, since it eliminates the assignment of fixed texture coordinates and provides a good method of representing synthetic images or photographs in image-based rendering. But conventional projective texture mapping has limitations in the field of view and the degree of navigation because only simple rectangular texture maps can be used. In this work, we propose the concept of panoramic projective texture mapping (PPTM). It projects cubic or cylindrical panorama onto the scene geometry. With this scheme, any polygonal geometry can receive the projection of a panoramic texture map, without using fixed texture coordinates or modeling many projective texture mappings. For fast real-time rendering, a hardware-based rendering method is also presented. Applications of PPTM include a panorama viewer similar to QuicktimeVR and navigation in the panoramic scene, which can be created by image-based modeling techniques. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Viewing Algorithms; I.3.7 [Computer Graphics]: Color, Shading, Shadowing, and Texture 1. Introduction Texture mapping has been used for a long time in computer graphics imagery, because it provides much visual detail without complex models. Image-based rendering (IBR) draws more applications of projective texture mapping. When photographs or synthetic images are used for IBR, those images contain scene information.
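The panoramic projection summarized in this abstract can be illustrated with a small sketch that maps a world-space point to coordinates in a cylindrical panorama centered at the projector. It is not taken from that paper; the names and the vertical parameterization are assumptions.

```cpp
// Minimal sketch (assumed conventions) of the core lookup in cylindrical
// panoramic projective texture mapping: a world-space point is mapped to
// (u, v) in the panorama as seen from the projector's center.
#include <cmath>

struct Vec3 { float x, y, z; };

// center: position where the cylindrical panorama was captured (y is up).
// tanHalfVfov: tangent of half the panorama's vertical field of view.
// u wraps around the vertical axis in [0, 1); v is 0.5 at the horizon.
inline bool panoramaUV(const Vec3& p, const Vec3& center, float tanHalfVfov,
                       float& u, float& v)
{
    const float kPi = 3.14159265358979f;
    float dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;

    float radial = std::sqrt(dx * dx + dz * dz);
    if (radial < 1e-6f) return false;              // directly above/below: undefined

    // Azimuth around the cylinder axis gives the horizontal coordinate.
    u = (std::atan2(dz, dx) + kPi) / (2.0f * kPi);

    // Perspective projection onto the cylinder wall gives the vertical coordinate.
    v = 0.5f + 0.5f * (dy / radial) / tanHalfVfov;
    return v >= 0.0f && v <= 1.0f;                 // false if outside the panorama
}
```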
  • Rendering from Compressed Textures
Rendering from Compressed Textures. Andrew C. Beers, Maneesh Agrawala, and Navin Chaddha. Computer Science Department / Computer Systems Laboratory, Stanford University. Abstract: We present a simple method for rendering directly from compressed textures in hardware and software rendering systems. Textures are compressed using a vector quantization (VQ) method. The advantage of VQ over other compression techniques is that textures can be decompressed quickly during rendering. The drawback of using lossy compression schemes such as VQ for textures is that such methods introduce errors into the textures. We discuss techniques for controlling these losses. We also describe an extension to the basic VQ technique for compressing mipmaps. We have observed compression rates of up to 35:1, with minimal loss in visual quality and a small impact on rendering time. 1 Introduction: One way to alleviate these memory limitations is to store compressed representations of textures in memory. A modified renderer could then render directly from this compressed representation. In this paper we examine the issues involved in rendering from compressed texture maps and propose a scheme for compressing textures using vector quantization (VQ)[4]. We also describe an extension of our basic VQ technique for compressing mipmaps. We show that by using VQ compression, we can achieve compression rates of up to 35:1 with little loss in the visual quality of the rendered scene. We observe a 2 to 20 percent impact on rendering time using a software renderer that renders from the compressed format. However, the technique is so simple that incorporating it into hardware
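The reason VQ-compressed textures can be decompressed quickly during rendering, as this abstract claims, is that sampling reduces to a table lookup. The sketch below is illustrative only; the block size, codebook layout and names are assumptions, not that paper's actual format.

```cpp
// Minimal sketch (assumed layout) of sampling a VQ-compressed texture: the
// texture is stored as a small codebook of 4x4-texel blocks plus one codebook
// index per block, so decompression is a single lookup per sample.
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

struct VQTexture {
    int width = 0, height = 0;              // texels
    static constexpr int kBlock = 4;        // 4x4-texel codewords (assumed size)
    std::vector<RGB> codebook;              // e.g. 256 codewords * 16 texels each
    std::vector<uint8_t> indices;           // one byte per 4x4 block
};

inline RGB sampleVQ(const VQTexture& tex, int x, int y)
{
    int bx = x / VQTexture::kBlock, by = y / VQTexture::kBlock;
    int blocksPerRow = tex.width / VQTexture::kBlock;
    uint8_t code = tex.indices[by * blocksPerRow + bx];

    int lx = x % VQTexture::kBlock, ly = y % VQTexture::kBlock;
    // Each codeword stores kBlock*kBlock texels contiguously.
    return tex.codebook[code * VQTexture::kBlock * VQTexture::kBlock
                        + ly * VQTexture::kBlock + lx];
}
```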
  • Research on Improving Methods for Visualizing Common Elements in Video Game Applications ビデオゲームアプリケーショ
Research on Improving Methods for Visualizing Common Elements in Video Game Applications ビデオゲームアプリケーションにおける共通的な要素の視覚化手法の改良に関する研究 June 2013 Graduate School of Global Information and Telecommunication Studies Waseda University Title of the project: Research on Image Processing II Candidate's name: Sven Dierk Michael Forstmann Acknowledgements Foremost, I would like to express my sincere thanks to my advisor Prof. Jun Ohya for his continuous support of my Ph.D study and research. His guidance helped me in all the time of my research and with writing of this thesis. I would further like to thank the members of my PhD committee, Prof. Shigekazu Sakai, Prof. Takashi Kawai, and Prof. Shigeo Morishima for their valuable comments and suggestions. For handling the formalities and always being helpful when I had questions, I would like to thank the staff of the Waseda GITS Office, especially Yumiko Kishimoto san. For their financial support, I would like to thank the DAAD (Deutscher Akademischer Austauschdienst), the Japanese Government for supporting me with the Monbukagakusho Scholarship and ITO EN Ltd, for supporting me with the Honjo International Scholarship. For their courtesy of allowing me the use of some of their screenshots, I would like to thank John Olick, Bernd Beyreuther, Prof. Ladislav Kavan and Dr. Cyril Crassin. Special thanks are given to my friends Dr. Gleb Novichkov, Prof. Artus Krohn-Grimberghe, Yutaka Kanou, and Dr. Gernot Grund for their encouragement and insightful comments. Last, I would like to thank my family: my parents Dierk and Friederike Forstmann, and my brother Hanjo Forstmann for their support and encouragement throughout the thesis.
  • Modern Software Rendering
    Production Systems and Information Engineering 57 Volume 6 (2013), pp. 57-68 MODERN SOFTWARE RENDERING PETER MILEFF University of Miskolc, Hungary Department of Information Technology [email protected] JUDIT DUDRA Bay Zoltán Nonprofit Ltd., Hungary Department of Structural Integrity [email protected] [Received January 2012 and accepted September 2012] Abstract. The computer visualization process, under continuous changes has reached a major milestone. Necessary modifications are required in the currently applied physical devices and technologies because the possibilities are nearly exhausted in the field of the programming model. This paper presents an overview of new performance improvement methods that today CPUs can provide utilizing their modern instruction sets and of how these methods can be applied in the specific field of computer graphics, called software rendering. Furthermore the paper focuses on GPGPU based parallel computing as a new technology for computer graphics where a TBR based software rasterization method is investigated in terms of performance and efficiency. Keywords: software rendering, real-time graphics, SIMD, GPGPU, performance optimization 1. Introduction Computer graphics is an integral part of our life. Often unnoticed, it is almost everywhere in the world today. The area has evolved over many years in the past few decades, where an important milestone was the appearance of graphic processors. Because the graphical computations have different needs than the CPU requirements, a demand has appeared early for a fast and uniformly programmable graphical processor. This evolution has opened many new opportunities for developers, such as developing real-time, high quality computer simulations and analysis. The appearance of the first graphics accelerators radically changed everything and created a new basis for computer rendering.
  • Performance Comparison on Rendering Methods for Voxel Data
Performance comparison on rendering methods for voxel data Oskari Nousiainen School of Science Thesis submitted for examination for the degree of Master of Science in Technology. Espoo July 24, 2019 Supervisor Prof. Tapio Takala Advisor Dip.Ins. Jukka Larja Aalto University, P.O. BOX 11000, 00076 AALTO www.aalto.fi Abstract of the master's thesis Author Oskari Nousiainen Title Performance comparison on rendering methods for voxel data Degree programme Master's Programme in Computer, Communication and Information Sciences Major Computer Science Code of major SCI3042 Supervisor Prof. Tapio Takala Advisor Dip.Ins. Jukka Larja Date July 24, 2019 Number of pages 50 Language English Abstract There are multiple ways in which 3 dimensional objects can be presented and rendered to screen. Voxels are a representation of scene or object as cubes of equal dimensions, which are either full or empty and contain information about their enclosing space. They can be rendered to image by generating surface geometry and passing that to rendering pipeline or using ray casting to render the voxels directly. In this paper, these methods are compared to each other by using different octree structures for voxel storage. These methods are compared in different generated scenes to determine which factors affect their relative performance as measured in real time it takes to draw a single frame. These tests show that ray casting performs better when resolution of voxel model is high, but rasterisation is faster for low resolution voxel models. Rasterisation scales better with screen resolution and on higher screen resolutions most of the tested voxel resolutions were faster to render with rasterisation.
  • GPU Virtualization on Vmware's Hosted I/O Architecture
    GPU Virtualization on VMware’s Hosted I/O Architecture Micah Dowty, Jeremy Sugerman VMware, Inc. 3401 Hillview Ave, Palo Alto, CA 94304 [email protected], [email protected] Published in the USENIX Workshop on I/O Virtualization 2008, Industrial Practice session. Abstract limited documentation. Modern high-end GPUs have more transistors, draw Modern graphics co-processors (GPUs) can produce more power, and offer at least an order of magnitude high fidelity images several orders of magnitude faster more computational performance than CPUs. At the than general purpose CPUs, and this performance expec- same time, GPU acceleration has extended beyond en- tation is rapidly becoming ubiquitous in personal com- tertainment (e.g., games and video) into the basic win- puters. Despite this, GPU virtualization is a nascent field dowing systems of recent operating systems and is start- of research. This paper introduces a taxonomy of strate- ing to be applied to non-graphical high-performance ap- gies for GPU virtualization and describes in detail the plications including protein folding, financial modeling, specific GPU virtualization architecture developed for and medical image processing. The rise in applications VMware’s hosted products (VMware Workstation and that exploit, or even assume, GPU acceleration makes VMware Fusion). it increasingly important to expose the physical graph- We analyze the performance of our GPU virtualiza- ics hardware in virtualized environments. Additionally, tion with a combination of applications and microbench- virtual desktop infrastructure (VDI) initiatives have led marks. We also compare against software rendering, the many enterprises to try to simplify their desktop man- GPU virtualization in Parallels Desktop 3.0, and the na- agement by delivering VMs to their users.
  • Appendix: Graphics Software Took
    Appendix: Graphics Software Took Appendix Objectives: • Provide a comprehensive list of graphics software tools. • Categorize graphics tools according to their applications. Many tools come with multiple functions. We put a primary category name behind a tool name in the alphabetic index, and put a tool name into multiple categories in the categorized index according to its functions. A.I. Graphics Tools Listed by Categories We have no intention of rating any of the tools. Many tools in the same category are not necessarily of the same quality or at the same capacity level. For example, a software tool may be just a simple function of another powerful package, but it may be free. Low4evel Graphics Libraries 1. DirectX/DirectSD - - 248 2. GKS-3D - - - 278 3. Mesa 342 4. Microsystem 3D Graphic Tools 346 5. OpenGL 370 6. OpenGL For Java (GL4Java; Maps OpenGL and GLU APIs to Java) 281 7. PHIGS 383 8. QuickDraw3D 398 9. XGL - 497 138 Appendix: Graphics Software Toois Visualization Tools 1. 3D Grapher (Illustrates and solves mathematical equations in 2D and 3D) 160 2. 3D Studio VIZ (Architectural and industrial designs and concepts) 167 3. 3DField (Elevation data visualization) 171 4. 3DVIEWNIX (Image, volume, soft tissue display, kinematic analysis) 173 5. Amira (Medicine, biology, chemistry, physics, or engineering data) 193 6. Analyze (MRI, CT, PET, and SPECT) 197 7. AVS (Comprehensive suite of data visualization and analysis) 211 8. Blueberry (Virtual landscape and terrain from real map data) 221 9. Dice (Data organization, runtime visualization, and graphical user interface tools) 247 10. Enliten (Views, analyzes, and manipulates complex visualization scenarios) 260 11.
  • A Framework for Creating a Distributed Rendering Environment on the Compute Clusters
    (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 4, No. 6, 2013 A Framework for Creating a Distributed Rendering Environment on the Compute Clusters Ali Sheharyar Othmane Bouhali IT Research Computing Science Program and IT Research Computing Texas A&M University at Qatar Texas A&M University at Qatar Doha, Qatar Doha, Qatar Abstract—This paper discusses the deployment of existing depend on any other frame. In order to reduce the total render farm manager in a typical compute cluster environment rendering time, rendering of individual frames can be such as a university. Usually, both a render farm and a compute distributed to a group of computers on the network. An cluster use different queue managers and assume total control animation studio, a company dedicated to production of over the physical resources. But, taking out the physical animated films, typically has a cluster of computers dedicated resources from an existing compute cluster in a university-like to render the virtual scenes. This cluster of computers is called environment whose primary use of the cluster is to run numerical a render farm. simulations may not be possible. It can potentially reduce the overall resource utilization in a situation where compute tasks B. Objectives are more than rendering tasks. Moreover, it can increase the In a university environment, it can be complicated to do the system administration cost. In this paper, a framework has been rendering because many researchers do not have access to a proposed that creates a dynamic distributed rendering dedicated machine for rendering [6]. They do not have access environment on top of the compute clusters using existing render to a rendering machine for a long time as it may become farm managers without requiring the physical separation of the unavailable for other research use.
  • Programming of Graphics
    Peter Mileff PhD Programming of Graphics Introduction University of Miskolc Department of Information Technology Task of the semester ⦿ Implement a graphics based demo application ● Any application which focuses mainly to 2D or 3D graphics ● Preferred: simple game or technological demo ○ e.g. collision test, screensaver, animation, etc ● Use the OpenGL API if possible ● Any platform: ○ Android, iOS, PC, Java, C++, etc ⦿ Deadline: end of the semester The main topics ⦿ Fundamentals and evolution of computer graphics ⦿ Overview of GPU technologies ⦿ Game and graphics engines ⦿ Practical 2D visualisation ● Basis and difficulty of rendering - software rendering ● Moving objects, animation ● Collision detection ● Tilemap, bitmap fonts ⦿ Practical 3D graphics ● Object representation, structure of a model, rasterization algorithms ● Programmable Pipeline, applying shaders ● Lights and shadows ● Effects: bump mapping, normal mapping, ambient occlusion, etc ● Billboarding, terrain rendering, particle effect etc ⦿ Raytracing, Voxel based visualisation Fundamentals of computer graphics… Introduction ⦿ Computer graphics forms an integral part of our lives ● Often unnoticed, but almost everywhere in the world today ○ E.g: Games, Films, Advertisement, Posters, etc ⦿ The area has evolved over many years in the past few decades ● Multiple platforms appeared: C64, ZX Spectrum, Plus 4, Atari, Amiga, Nintendo, SNES, etc ● The appearance of the PCs was a big step ⦿ The video game industry started to grow dramatically because of PCs ● Possibility to play game
  • Protected Interactive 3D Graphics Via Remote Rendering
    Protected Interactive 3D Graphics Via Remote Rendering David Koller∗ Michael Turitzin∗ Marc Levoy∗ Marco Tarini† Giuseppe Croccia† Paolo Cignoni† Roberto Scopigno† ∗Stanford University †ISTI-CNR, Italy Abstract available for constructive purposes, were the digital 3D model of the David to be distributed in an unprotected fashion, it would soon be pirated, and simulated marble replicas would be manufactured Valuable 3D graphical models, such as high-resolution digital scans outside the provisions of the parties authorizing the creation of the of cultural heritage objects, may require protection to prevent piracy model. or misuse, while still allowing for interactive display and manipu- lation by a widespread audience. We have investigated techniques Digital 3D archives of archaeological artifacts are another example for protecting 3D graphics content, and we have developed a re- of 3D models often requiring piracy protection. Curators of such mote rendering system suitable for sharing archives of 3D mod- artifact collections are increasingly turning to 3D digitization as a els while protecting the 3D geometry from unauthorized extrac- way to preserve and widen scholarly usage of their holdings, by al- tion. The system consists of a 3D viewer client that includes low- lowing virtual display and object examination over the Internet, for resolution versions of the 3D models, and a rendering server that example. However, the owners and maintainers of the artifacts of- renders and returns images of high-resolution models according to ten desire to maintain strict control over the use of the 3D data and client requests. The server implements a number of defenses to to guard against theft.