State of the Art and Scenarios


PUODARSI: Product User-Oriented Development based on Augmented Reality and Interactive Simulation
http://www.kaemart.it/puodarsi

STATE OF THE ART AND SCENARIOS

Summary: This document describes the technologies and libraries in the fields of interest of the project.

Deliverable n°: D1
Version n°: 0.1
File name: D1.doc
Release date: 31/10/2007
Keywords: Virtual Prototyping, Virtual Reality, Augmented Reality, Mixed Reality, scientific visualization, haptic, Reverse Engineering, multimodal interaction, test cases

Authors:
- Monica Bordegoni, Francesco Ferrise (Politecnico di Milano)
- Giuseppe Monno, Antonello Uva, Michele Fiorentino (Politecnico di Bari)
- Fabio Bruno, Francesco Caruso (Università della Calabria)
- Piero Mussio, Stefano Valtolina, Loredana Paralisiti (Università degli Studi di Milano)
- Francesco Caputo, Giuseppe Di Gironimo, Salvatore Gerbino, Massimo Martorelli, Adelaide Marzano, Stefano Papa, Fabrizio Renno, Domenico Speranza, Andrea Tarallo (Università di Napoli Federico II)

Index

1 Introduction
2 Augmented Reality and Mixed Reality systems
  2.1 AR and MR systems
    2.1.1 Tracking systems
    2.1.2 Visualization systems
  2.2 Visualization libraries
    2.2.1 Java 3D
    2.2.2 Open Inventor
    2.2.3 VTK
    2.2.4 OpenSG
    2.2.5 OpenSceneGraph
  2.3 References
3 Haptic systems
  3.1 Haptic technology
    3.1.1 A taxonomy of current haptic technologies
    3.1.2 Possible dimensions in the taxonomy
    3.1.3 Size scales
    3.1.4 Degrees of freedom (DOFs)
    3.1.5 Grounding and kinematics
    3.1.6 Drive type
    3.1.7 Control type
    3.1.8 Contact type
  3.2 Haptic devices
    3.2.1 Existing force feedback displays
    3.2.2 1-DOF and 2-DOF displays
    3.2.3 3-DOF and 6-DOF displays
    3.2.4 Exoskeleton or humanoid type
    3.2.5 Existing grasping displays
    3.2.6 Existing vibro-tactile and friction displays
    3.2.7 Conclusion on the state of the art in large-scale haptic displays
  3.3 Haptic libraries
    3.3.1 CHAI3D
    3.3.2 OpenHaptics
    3.3.3 H3D/VHTK
    3.3.4 Haptik
    3.3.5 OpenSceneGraph Haptics
    3.3.6 Haptic libraries overview
  3.4 References
4 Interactive simulation systems
  4.1 CFD analysis technologies
    4.1.1 deal.II
    4.1.2 OpenFlower
    4.1.3 Comsol Multiphysics
    4.1.4 Benchmark of the selected CFD solvers
  4.2 FEM analysis technologies
    4.2.1 Introduction
    4.2.2 Software
5 Reverse Engineering systems
  5.1 Introduction
  5.2 3D Scanning techniques
    5.2.1 Contact digitizers
    5.2.2 Mixed CMM-Optical digitizers
    5.2.3 Line and Spot Scanners (based on triangulation)
    5.2.4 Probes based on Conoscopic Holography
    5.2.5 Dual-Capability Systems
    5.2.6 Other Types of Laser Systems
    5.2.7 Other Types of Tracking Systems
    5.2.8 Photogrammetry
    5.2.9 Specifications and application criteria
  5.3 Critical issues related to "Puodarsi" RE systems
  5.4 Conclusions
  5.5 References
  5.6 Reverse Engineering Software: technical specs
  5.7 Reverse Engineering Hardware: technical specs
    5.7.1 Mechanical Touch Probe Systems
    5.7.2 Line Scanners/Triangulation
    5.7.3 Laser Trackers
    5.7.4 Optical Radar
    5.7.5 Color Capable Systems
    5.7.6 3D Metrology Systems for Manufacturing
    5.7.7 Scanners for Very Large Objects and Surveying Applications
6 Multimodal Annotations
  6.1 Introduction: paper annotation, electronic annotation and web annotation
  6.2 Annotation in 2D environments
    Tools for Collaborative Annotation: a comparison among three annotation styles developed by UniMi (University of Milano)
      A. SyMPA annotation activities
      B. T.Arc.H.N.A annotation activities
      C. BANCO annotation activities
    Del.icio.us, Digg, BlinkList
    Pliny and traditional scholarly practice
    DesignDesk ViewLink
    Eroiica Edit
    eReview
  6.3 Annotation in 3D environments
    The Virtual Annotation System
    CATIA 3D Functional Tolerancing & Annotation 2 (FTA)
    Annotation Authoring in Collaborative 3D Virtual Environments
    Composing PDF Documents with 3D Content from MicroStation
    A Direct-Manipulation Tool for JavaScripting Animated Exploded Views Using Acrobat 7.0 Professional
    NX I-deas Master Notation: for documenting solid model designs
    Immersive redlining and annotation of 3D design models on the Web
    Post Processing Tips & Hints: Annotation in ANSYS
    Boom Chameleon: simultaneous capture of 3D viewpoint, voice and gesture annotations on a spatially-aware display
    ANNOT3D description
    Drawing for Illustration and Annotation in 3D
    Markup and Drawing Annotation Tools
Recommended publications
  • Stardust: Accessible and Transparent GPU Support for Information Visualization Rendering
    Eurographics Conference on Visualization (EuroVis) 2017, Volume 36 (2017), Number 3. J. Heer, T. Ropinski and J. van Wijk (Guest Editors). Stardust: Accessible and Transparent GPU Support for Information Visualization Rendering. Donghao Ren (University of California, Santa Barbara), Bongshin Lee (Microsoft Research, Redmond), and Tobias Höllerer (University of California, Santa Barbara).
    Abstract: Web-based visualization libraries are in wide use, but performance bottlenecks occur when rendering, and especially animating, a large number of graphical marks. While GPU-based rendering can drastically improve performance, that paradigm has a steep learning curve, usually requiring expertise in the computer graphics pipeline and shader programming. In addition, the recent growth of virtual and augmented reality poses a challenge for supporting multiple display environments beyond regular canvases, such as a Head Mounted Display (HMD) and Cave Automatic Virtual Environment (CAVE). In this paper, we introduce a new web-based visualization library called Stardust, which provides a familiar API while leveraging the GPU's processing power. Stardust also enables developers to create both 2D and 3D visualizations for diverse display environments using a uniform API. To demonstrate Stardust's expressiveness and portability, we present five example visualizations and a coding playground for four display environments. We also evaluate its performance by comparing it against the standard HTML5 Canvas, D3, and Vega.
  • Data Management for Augmented Reality Applications
    Technische Universität München, Fakultät für Informatik. Diplomarbeit: Data Management for Augmented Reality Applications. ARCHIE: Augmented Reality Collaborative Home Improvement Environment. Marcus Tönnis. Supervisor (Aufgabenstellerin): Prof. Gudrun Klinker, Ph.D. Advisor (Betreuer): Dipl.-Inf. Martin Bauer. Submission date: 15 July 2003.
    Declaration (translated from German): I affirm that I wrote this thesis independently and used only the sources and aids cited. Munich, 15 July 2003, Marcus Tönnis.
    Abstract (translated from German): Augmented Reality (AR) is a new technology that attempts to combine real and virtual environments. Commonly, glasses with built-in computer displays are used to achieve a visual augmentation of the user's surroundings. Virtual objects can be projected into the displays of these glasses, called Head Mounted Displays, so that they appear to the user to be fixed in space. The DWARF project, based at the Chair for Applied Software Engineering of the Technische Universität München, attempts to use software engineering methods to speed up the prototypical implementation of new components through reusable components. DWARF consists of a collection of software services that run on mobile, distributed hardware and can communicate with one another over wireless or wired networks. This communication allows personalized components with embedded services to be carried on the body, while services in the surroundings can provide intelligent environments. The services discover one another and can cooperate dynamically to provide the functionality desired for Augmented Reality applications.
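    The service model described in the abstract (personal services carried on the body, environment services, dynamic discovery and cooperation) can be illustrated with a toy matcher. The sketch below is an invention for illustration only; Service, needs/abilities and matchServices are not DWARF's actual interfaces, which the literature describes as CORBA-based.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Toy model of the described architecture: each service announces what it
// needs and what it provides, and a registry wires matching pairs together.
// Invented names; not the DWARF API.
struct Service {
    std::string name;
    std::vector<std::string> needs;     // e.g. a pose stream from a tracker
    std::vector<std::string> abilities; // what this service can provide
};

// Dynamic cooperation: whenever a need matches an ability, the two services
// are connected (here we only print the match).
void matchServices(const std::vector<Service> &services)
{
    for (const Service &consumer : services)
        for (const std::string &need : consumer.needs)
            for (const Service &provider : services)
                for (const std::string &ability : provider.abilities)
                    if (need == ability)
                        std::cout << consumer.name << " gets " << need
                                  << " from " << provider.name << "\n";
}

int main()
{
    matchServices({
        {"HMD Viewer",      {"6DPose", "Scene"}, {}},
        {"Optical Tracker", {},                  {"6DPose"}},
        {"Model Server",    {},                  {"Scene"}},
    });
}
```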
  • OpenSG Starter Guide 1.2.0
    OpenSG Starter Guide 1.2.0. Generated by Doxygen 1.3-rc2, Wed Mar 19 06:23:28 2003.
    Contents: 1 Introduction (What is OpenSG?; What is OpenSG not?; Compilation; System Structure; Installation; Making and executing the test programs; Making and executing the tutorials; Extending OpenSG; Where to get it; Scene Graphs). 2 Base (Base Types; Log; Time & Date; Math; System; Fields; Creating New Field Types; Base Functors; Socket; StringConversion). 3 Fields & Field Containers (Creating a FieldContainer instance; Reference counting; Manipulation; FieldContainer attachments; Data separation & Thread safety). 4 Image. 5 Nodes & NodeCores. 6 Groups (Group; Switch; Transform; ComponentTransform; DistanceLOD; Lights). 7 Drawables (Base Drawables; Geometry; Slices; Particles). 8 State Handling (BlendChunk; ClipPlaneChunk; CubeTextureChunk; LightChunk; LineChunk; PointChunk; MaterialChunk; PolygonChunk; RegisterCombinersChunk; ...).
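    As a concrete taste of the API this guide documents, here is a minimal scene-construction sketch in the style of OpenSG 1.x, touching the guide's chapters on FieldContainers (creation, reference counting, edit blocks) and on nodes, cores and groups. It is a sketch written from the 1.x conventions, so exact headers and field masks should be checked against the guide itself.

```cpp
// Minimal OpenSG 1.x-style scene construction (sketch; verify against the guide).
#include <OpenSG/OSGNode.h>
#include <OpenSG/OSGTransform.h>
#include <OpenSG/OSGSimpleGeometry.h>

osg::NodePtr buildScene()
{
    using namespace osg;

    // FieldContainers are created through factories, not operator new (ch. 3.1).
    NodePtr      root  = Node::create();
    TransformPtr xform = Transform::create();

    // All changes happen inside edit blocks, which enable the data
    // separation / thread safety described in ch. 3.5.
    beginEditCP(xform, Transform::MatrixFieldMask);
    {
        Matrix m;
        m.setIdentity();
        m.setTranslate(0.f, 1.f, 0.f);
        xform->setMatrix(m);
    }
    endEditCP(xform, Transform::MatrixFieldMask);

    // A node gets its behavior from its core (ch. 5); makeTorus is one of
    // the simple-geometry convenience builders (ch. 7).
    NodePtr geo = makeTorus(0.5f, 2.f, 16, 16);

    beginEditCP(root, Node::CoreFieldMask | Node::ChildrenFieldMask);
    {
        root->setCore(xform);
        root->addChild(geo);
    }
    endEditCP(root, Node::CoreFieldMask | Node::ChildrenFieldMask);

    addRefCP(root); // explicit reference counting keeps the subtree alive (ch. 3.2)
    return root;
}
```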
  • A Survey of Technologies for Building Collaborative Virtual Environments
    The International Journal of Virtual Reality, 2009, 8(1):53-66. A Survey of Technologies for Building Collaborative Virtual Environments. Timothy E. Wright and Greg Madey, Department of Computer Science & Engineering, University of Notre Dame, United States.
    Abstract: What viable technologies exist to enable the development of so-called desktop virtual reality (desktop-VR) applications? Specifically, which of these are active and capable of helping us to engineer a collaborative virtual environment (CVE)? A review of the literature and numerous project websites indicates an array of both overlapping and disparate approaches to this problem. In this paper, we review and perform a risk assessment of 16 prominent desktop-VR technologies (some building-blocks, some entire platforms) in an effort to determine the most efficacious tool or tools for constructing a CVE. Index Terms: Collaborative Virtual Environment, Desktop Virtual Reality, VRML, X3D.
    Whereas desktop virtual reality (desktop-VR) typically uses nothing more than a keyboard, mouse, and monitor, a Cave Automated Virtual Environment (CAVE) might include several display walls, video projectors, a haptic input device (e.g., a "wand" to provide touch capabilities), and multidimensional sound. The computing platforms to drive these systems also differ: desktop-VR requires a workstation-class computer, mainstream OS, and VR libraries, while a CAVE often runs on a multi-node cluster of servers with specialized VR libraries and drivers. At first, this may seem reasonable: different levels of immersion require different hardware and software. However, the same problems are being solved by both the desktop-VR and CAVE systems, with specific issues including the management and display of a three dimensional ...
  • First 6-Monthly EPOCH Pipeline Description
    IST-2002-507382. EPOCH: Excellence in Processing Open Cultural Heritage, a Network of Excellence of the Information Society Technologies programme. D3.3.2: 2nd 6-monthly EPOCH Pipeline Description. Due date of deliverable: 29 April 2005. Actual submission date: 27 April 2005. Start date of project: 15 March 2004. Duration: 4 years. University of Leuven. Project co-funded by the European Commission within the Sixth Framework Programme (2002-2006). Dissemination level: PU (Public).
    Table of Contents: Work package 3, activity 3.3: Common infrastructure (Objectives; Description of work). Input from other EPOCH teams (Input from the Stakeholder Needs team; Input from the Standards team). From the Pipeline to a Common Infrastructure ...
  • A Scalable Spatial Sound Library for OpenSG
    OpenSG Symposium (2003), D. Reiners (Editor). TRIPS: A Scalable Spatial Sound Library for OpenSG. Tomas Neumann, Christoph Fünfzig and Dieter Fellner, Institute of Computer Graphics, Technical University of Braunschweig, Germany.
    Abstract: We present a Sound Library that is scalable on several computers and that brings the idea of a self-organized scenegraph to the Sound Library's interface. It supports the implementation of audio features right from the start at a high productivity level, for rapid prototypes as well as for professional applications in the immersive domain. The system is built on top of OpenSG [12], which offers a high level of functionality for visual applications and research but does not come with audio support. We show and compare the effort to implement audio in an OpenSG application with and without TRIPS. Today's audio systems only accept raw 3D coordinates and are limited to run on the same computer and the same operating system as the application. Breaking these constraints could give developers more freedom and ease in adding high-quality spatial sound to their software. Therefore, users benefit from the promising potential OpenSG offers. Categories and Subject Descriptors (according to ACM CCS): H.5.1 [Multimedia Information Systems]: Audio input/output; H.5.5 [Sound and Music Computing]: Systems; I.3.7 [Computer Graphics]: Virtual Reality. Keywords: high-quality spatial sound, 3D audio, cluster system, sound API, FieldContainer, rapid prototyping, game engine, immersive system.
    1. Introduction: Current developments on VR applications in general, and of OpenSG [12] applications in particular, demand a solution for 3D audio support, as positional sound is more important [...] As applications and computer hardware rapidly developed in the last 10 years, so did the sound and multimedia field. But in a Unix-based environment nearly all sound engines lack the ability to benefit from today's consumer soundboards, which have heavily progressed over the last years as well.
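    The abstract's central point is that conventional audio APIs accept only raw 3D coordinates, so an application must recompute every source position itself each frame, while a scenegraph-integrated sound layer derives positions from the graph. The sketch below illustrates that bookkeeping in a generic OpenSG 1.x setting; AudioBackend, SceneSound and syncSounds are names invented for this illustration and are not the TRIPS API.

```cpp
#include <OpenSG/OSGNode.h>
#include <OpenSG/OSGMatrix.h>
#include <cstdio>
#include <vector>

// Stand-in for a consumer 3D sound API that, as the abstract notes, only
// accepts raw 3D coordinates. Invented for this sketch.
struct AudioBackend {
    void setSourcePosition(int id, float x, float y, float z) {
        std::printf("source %d -> (%g, %g, %g)\n", id, x, y, z);
    }
};

// A sound source attached to a node in the scene graph.
struct SceneSound {
    int          sourceId;
    osg::NodePtr node;
};

// The per-frame bookkeeping a scenegraph-integrated sound layer automates:
// derive each source's world position from the accumulated transforms and
// forward it to the audio API.
void syncSounds(const std::vector<SceneSound> &sounds, AudioBackend &audio)
{
    for (const SceneSound &s : sounds) {
        osg::Matrix toWorld;
        s.node->getToWorld(toWorld); // accumulated model transform of the node
        audio.setSourcePosition(s.sourceId,
                                toWorld[3][0], toWorld[3][1], toWorld[3][2]);
    }
}
```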
  • Scene Graph Rendering
    Scene Graph Rendering. Dirk Reiners, OpenSG Forum, [email protected]. March 5, 2002.
    This part of the course will introduce you to scenegraph systems for rendering. Scene graphs can help simplify application development and make optimal use of the available graphics hardware. It is assumed that you have some basic knowledge about 3D computer graphics. If the words polygon, directional light source and texture mean nothing to you, you might have problems following the course. The ideal entry level would be having written some programs using OpenGL, or having read the "OpenGL Programming Guide" [1] or "Computer Graphics" [2]. The first section describes the basic structure of a scenegraph and the difference from a standard OpenGL program. As there are a large number of scene graphs around, Open Source and commercial, section 2 gives a short overview of the most commonly used ones. The remainder of this chapter will describe general concepts applying to most scene graphs and use OpenSG as a specific example. The next two sections describe the general node structure and how the graph is traversed. The most important leaf node, the Geometry, is described in section 5. The other part of specifying the displayed geometry, namely the state of transformations and materials etc., is covered in section 6. Some more scene-graph-specific functionality is hidden inside other node types, as described in section 7. To minimize the memory footprint of a scene graph, data can be shared in different ways; section 8 gives details. Section 9 touches on the importance of multi-threading and what is done in scene graphs to support it.
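    The two structural ideas the course opens with (nodes arranged in a hierarchy, and a traversal that accumulates transforms on the way down) can be shown without any particular library. The following toy sketch is library-neutral and is not OpenSG code; the transform is reduced to a translation to keep it short.

```cpp
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

// Transform reduced to a translation to keep the sketch self-contained;
// a real scene graph stores a full 4x4 matrix here.
struct Vec3 { float x = 0, y = 0, z = 0; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

struct Node {
    std::string name;
    Vec3        local;                            // transform relative to the parent
    bool        drawable = false;                 // leaf geometry flag
    std::vector<std::unique_ptr<Node>> children;  // the scene hierarchy
};

// Depth-first traversal: the accumulated transform plays the role of the
// OpenGL matrix stack that a scene graph manages for the application.
static void draw(const Node &n, Vec3 parentWorld)
{
    const Vec3 world = parentWorld + n.local;
    if (n.drawable)
        std::printf("draw %s at (%g, %g, %g)\n",
                    n.name.c_str(), world.x, world.y, world.z);
    for (const auto &c : n.children)
        draw(*c, world);
}

int main()
{
    Node root{"root"};
    auto car   = std::make_unique<Node>(Node{"car", {5, 0, 0}});
    auto wheel = std::make_unique<Node>(Node{"wheel", {1, -0.5f, 0}, true});
    car->children.push_back(std::move(wheel));
    root.children.push_back(std::move(car));
    draw(root, {}); // wheel is drawn at (6, -0.5, 0): transforms compose down the graph
}
```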
  • Short167.Pdf (571.2Kb)
    EUROGRAPHICS 2003 / M. Chover, H. Hagen and D. Tost, Short Presentations. A Framework for Video-based and Hardware-Accelerated Remote 3D-Visualization. Frank Goetz, Gitta Domik. Computer Graphics, Visualization and Image Processing, Fuerstenallee 11, D-33102 Paderborn, Germany. (frank.goetz|domik)@uni-paderborn.de
    Abstract: This paper presents a framework for video-based and hardware-accelerated remote 3D-visualization that produces and delivers high quality video streams at an accurate frame rate. The framework is based on the portable scenegraph system OpenSG, the MPEG4IP package and the Apple Darwin Streaming Server. Visualizations generated in real time will be multicast as an ISO MPEG-4 compliant video stream over the Real-Time Streaming Protocol from a server to a client computer. On the client computer a Java program and an ISO MPEG-4 compliant video player are used to interact with the delivered visualization. While using MPEG-4 video streams we can achieve high image quality at a low bandwidth. Categories and Subject Descriptors (according to ACM CCS): I.3.1 [Computer Graphics]: Distributed/network; C.2.4 [Distributed Systems]: Distributed applications.
    1. Introduction: Not so long ago the lack of communication was hampering the progress of research. Today, researchers share information over the internet and cooperation is demanded in all areas of the industry. Particularly computer-generated visualizations took a firm place in many fields, like geosciences, medicine, architecture, automotive ... A common method to transport visualizations is based on the streaming of compressed meshes and textures from a server to a client computer. While visualizing complex scenes with a huge amount of polygons this method often needs too much network capacity and has high demands on the capabilities of the client computers.
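    The pipeline the abstract describes (render with OpenSG, encode to MPEG-4, stream via RTSP) reduces to a per-frame loop on the server. The sketch below shows one plausible shape of that loop; Mpeg4Encoder and RtspFeed are hypothetical stand-ins for the MPEG4IP and Darwin Streaming Server interfaces, which the preview does not show, and only glReadPixels is a real API call.

```cpp
#include <GL/gl.h>
#include <cstdint>
#include <vector>

// Hypothetical stand-ins: the paper uses the MPEG4IP package for encoding
// and the Apple Darwin Streaming Server for RTSP delivery; neither API is
// shown in the preview, so these two types are illustrative only.
struct Mpeg4Encoder {
    void encodeFrame(const std::uint8_t * /*rgb*/, int /*w*/, int /*h*/) {}
};
struct RtspFeed {
    void push() {} // hand the encoded access unit to the streaming server
};

// One iteration of the server loop: the scene graph has already rendered
// the frame into the current GL framebuffer.
void streamOneFrame(int width, int height, Mpeg4Encoder &enc, RtspFeed &feed)
{
    std::vector<std::uint8_t> rgb(static_cast<std::size_t>(width) * height * 3);

    // Read the rendered image back from the framebuffer. A fully
    // hardware-accelerated variant would avoid this CPU round trip.
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, rgb.data());

    enc.encodeFrame(rgb.data(), width, height); // MPEG-4 compression step
    feed.push();                                // RTSP multicast to clients
}
```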
  • The Vjavatar Library: a Toolkit for Development of Avatars in Virtual Reality Applications
    Iowa State University Capstones, Theses and Dissertations. Retrospective Theses and Dissertations, 1-1-2004. The vjAvatar library: a toolkit for development of avatars in virtual reality applications. Justin Allen Hare, Iowa State University. Available at: https://lib.dr.iastate.edu/rtd
    Recommended Citation: Hare, Justin Allen, "The vjAvatar library: a toolkit for development of avatars in virtual reality applications" (2004). Retrospective Theses and Dissertations. 20575. https://lib.dr.iastate.edu/rtd/20575
    A thesis submitted to the graduate faculty in partial fulfillment of the requirements for the degree of Master of Science. Major: Computer Science. Program of Study Committee: Carolina Cruz-Neira (Major Professor), Yan-bin Jia, Adrian Sannier, James Oliver. Iowa State University, Ames, Iowa, 2004. Copyright © Justin Allen Hare, 2004. All rights reserved.
    Graduate College, Iowa State University. This is to certify that the Master's thesis of Justin Allen Hare has met the thesis requirements of Iowa State University. Signatures have been redacted for ...
  • The OpenSG Forum White Paper
    opensg.org. The OpenSG Forum White Paper.
    The Challenge: Especially with industrially used VR systems, there is a great demand for a rendering kernel system of the following architecture: [layer diagram: Large Models and CAD-Data Integration / Scenegraph / OpenGL / IRIX, Windows, Linux, etc.] On the lowest level, several operating systems (above all SGI IRIX, Windows systems, and Linux) are supported. Above, there is a scenegraph layer guaranteeing the performance and quality that are necessary for VR applications (including the support of parallel processes and of several graphics sub-systems). The top level, finally, consists of all routines for the handling of complex models and for the integration of CAD data.
    The Starting Point: Today, there exist some rendering systems, but they meet the above demands only partially. Moreover, in the past, several systems and extensions have been announced and not realized, or even delivered in prototype state and not advanced any further. Great hopes were placed in the rendering API "Fahrenheit" that was announced by SGI in cooperation with Microsoft; in August 1999, even those contracts were canceled. The current situation is equally depressing and hopeless for VR developers, VR users, and VR research: for several years now, most VR developers have been prepared to realize a redesign on a new rendering platform, in order to be able to support all the above requirements in their products. The lack of availability was extremely obstructive to the development of VR in general (e.g., missing connections to the industry's CAx environments, etc.). Developments that had already been started proved to be bad investments, and a decision for the integration of a new rendering system cannot be made at the moment.
  • DELIVERABLE REPORT Doc
    DELIVERABLE REPORT D 4.1. Document identifier: V-Must.net - D 4.1. Due date of delivery to EC: end of month 9, 30 October 2011. Actual date of delivery to EC: 28/11/2011. Document date: 22/11/2011. Deliverable title: Available platform components and platform specification. Work package: WP 4. Lead beneficiary: CNR. Other beneficiaries: FHG, CULTNAT, KCL, ULUND, INRIA, SEAV, CINECA, VISDIM, ETF. Authors [contributors]: Massimiliano Corsini, Roberto Scopigno, Luigi Calori, Holger Graf, Sorin Hermon, Daniel Pletinckx. Document link: http://www.v-must.net/media_pubs/documents. Grant Agreement 270404. CNR. Public.
    Changes: version 1.1 (02/11/2011, first version); 1.2 (16/11/2011, second version); 1.3 (22/11/2011, third version).
    Copyright notice: Copyright © V-Must.net. For more information on V-Must.net, its partners and contributors please see http://www.v-must.net/. The information contained in this document reflects only the author's views and the Community is not liable for any use that may be made of the information contained therein.
    Table of Contents: 1. Executive summary. 2. Introduction ...
  • Chapter 4 Procedural Modeling of Complex Objects Using the GPU
    Chapter 4. Procedural Modeling of Complex Objects using the GPU.
    4.1 Introduction: Procedural modeling involves the use of functional specifications of objects to generate complex geometry from a relatively compact set of descriptors, allowing artists to build detailed models without manually specifying each element, with applications to organic modeling, natural scene generation, city layout, and building architecture. Early systems such as those by Lindenmayer and Stiny used grammar rules to construct new geometric elements. A key feature of procedural models is that geometry is generated as needed by the grammar or language [Lindenmayer, 1968] [Stiny and Gips, 1971]. This may be contrasted with scene graphs, which were historically used in systems such as IRIS Performer to group static geometric primitives for efficient hardware rendering of large scenes [Strauss, 1993] [Rohlf and Helman, 1994]. Scene graphs take advantage of persistent GPU buffers, and reordering of graphics state switches, to render these scenes in real time, and are ideally suited to static geometry with few scene changes. A more recent trend is to transmit simplified geometry to graphics hardware and allow the GPU to dynamically build detailed models without returning to the CPU. This technique has been successfully applied to mesh refinement, character skinning, displacement mapping, and terrain rendering [Lorenz and Dollner, 2008] [Rhee et al., 2006] [Szirmay-Kalos and Umenhoffer, 2006]. However, it remains an open question as to how to best take advantage of the GPU for generic procedural modeling. Several early procedural systems developed from the study of nature.
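    A minimal example of the grammar-driven generation the chapter traces back to Lindenmayer: repeated rule rewriting expands a compact axiom into a long instruction string, which a later stage interprets as geometry. This is a generic CPU-side illustration, not code from the chapter; the GPU techniques it discusses apply the same idea on graphics hardware.

```cpp
#include <iostream>
#include <map>
#include <string>

// L-system expansion: each pass rewrites every symbol by its production
// rule; symbols without a rule are copied unchanged.
std::string rewrite(const std::string &axiom,
                    const std::map<char, std::string> &rules,
                    int iterations)
{
    std::string current = axiom;
    for (int i = 0; i < iterations; ++i) {
        std::string next;
        for (char c : current) {
            auto it = rules.find(c);
            next += (it != rules.end()) ? it->second : std::string(1, c);
        }
        current = next;
    }
    return current;
}

int main()
{
    // Koch-curve-style rule: geometry is generated as needed by the grammar.
    // 'F' = draw forward, '+'/'-' = turn left/right in the interpreter stage.
    const std::map<char, std::string> rules = {{'F', "F+F-F-F+F"}};
    std::cout << rewrite("F", rules, 2) << "\n"; // 25 draw/turn instructions
}
```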