Massively Parallel Algorithms for Point Cloud Based Object Recognition on Heterogeneous Architecture

Pages: 16
File type: PDF, size: 1020 KB

Michigan Technological University
Digital Commons @ Michigan Tech
Dissertations, Master's Theses and Master's Reports
2017

MASSIVELY PARALLEL ALGORITHMS FOR POINT CLOUD BASED OBJECT RECOGNITION ON HETEROGENEOUS ARCHITECTURE

Linjia Hu, Michigan Technological University, [email protected]
Copyright 2017 Linjia Hu

Recommended Citation: Hu, Linjia, "MASSIVELY PARALLEL ALGORITHMS FOR POINT CLOUD BASED OBJECT RECOGNITION ON HETEROGENEOUS ARCHITECTURE", Open Access Dissertation, Michigan Technological University, 2017. https://doi.org/10.37099/mtu.dc.etdr/508
Follow this and additional works at: https://digitalcommons.mtu.edu/etdr
Part of the Computer Sciences Commons

By Linjia Hu

A DISSERTATION submitted in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY in Computer Science

MICHIGAN TECHNOLOGICAL UNIVERSITY
2017

© 2017 Linjia Hu

This dissertation has been approved in partial fulfillment of the requirements for the Degree of DOCTOR OF PHILOSOPHY in Computer Science.

Department of Computer Science
Dissertation Advisor: Dr. Saeid Nooshabadi
Committee Member: Dr. Timothy Havens
Committee Member: Dr. Scott Kuhl
Committee Member: Dr. Stefaan De Winter
Department Chair: Dr. Min Song

Dedication

To my beloved mother Yuxiang Duan and my father Huazhong Hu, whose affection, love and encouragement enabled me to hew out of the mountain of despair a stone of hope; to my brilliant, gorgeous and supportive wife Xiaoting Cao; and to my sweet, exuberant and kind-hearted little boy Andrew Cao Hu.

Contents

List of Figures
List of Tables
Preface
Acknowledgments
Abstract
1 Introduction
  1.1 Point Cloud Based Object Recognition
    1.1.1 Point Cloud
    1.1.2 Object Recognition Pipeline
      1.1.2.1 Descriptors Computing
      1.1.2.2 Descriptors Matching
      1.1.2.3 Correspondence Grouping
      1.1.2.4 Hypothesis Verification
    1.1.3 Point Cloud Library (PCL)
  1.2 Heterogeneous Parallel Computing
    1.2.1 GPU Hardware Architecture
    1.2.2 CUDA Programming Model and Memory Model
  1.3 Scope of This Work
    1.3.1 Parallelization of Image Descriptor Computing
    1.3.2 Parallelization of the ANNS
    1.3.3 Parallelization of the Construction of KD-tree and the BANNS
    1.3.4 Parallel and Distributed BANNS on the Forest of Randomized KD-trees
  1.4 Overview of Chapters

2 Massively Parallel 3D Local Image Descriptor G-SHOT
  2.1 Introduction
  2.2 Mathematical Model of the SHOT Descriptor
    2.2.1 Repeatable Local Reference Frame (LRF)
    2.2.2 Descriptor Organization
  2.3 Implementation of the G-SHOT Algorithms
    2.3.1 Exact G-SHOT Algorithm
    2.3.2 Approximate G-SHOT Algorithm
    2.3.3 Performance Optimizations
  2.4 Experiment Results and Discussion
  2.5 Comparative Performance Evaluation of the Two Parallel G-SHOT Algorithms
  2.6 Conclusion

3 Massive Parallelization of the ANNS on the KD-tree
  3.1 Introduction
  3.2 Background
    3.2.1 KD-tree
    3.2.2 NNS on the KD-tree
  3.3 Related Work and Proposed Innovation
    3.3.1 Related Work
      3.3.1.1 Linear NNS
      3.3.1.2 Non-linear NNS
    3.3.2 Proposed Innovation
  3.4 Massive Parallel Implementation
    3.4.1 Parallel Algorithm Design
    3.4.2 Performance Optimization
    3.4.3 Memory Usage
  3.5 Experiments and Results
    3.5.1 Performance Evaluation on Real-world Image Descriptors
      3.5.1.1 Performance Comparison with Related Works
      3.5.1.2 Performance Comparison Between the Serial and the Parallel Algorithms
      3.5.1.3 Effect of Divergence
      3.5.1.4 Speedup Impact Factors
        3.5.1.4.1 Effect of the Dataset Size and the Dimensionality
        3.5.1.4.2 Effect of the Node Size
        3.5.1.4.3 Effect of the Backtracking
        3.5.1.4.4 Effect of the Number of Nearest Neighbors of Single Query Point (P)
    3.5.2 Experimental Results for Local Descriptor SHOT Matching
  3.6 Conclusion

4 Highly Parallel KD-tree Construction for the BANNS
  4.1 Introduction
  4.2 Background
    4.2.1 P-NNS on the KD-tree
  4.3 Related Work and Proposed Innovation
    4.3.1 Related Work
    4.3.2 Proposed Innovation
  4.4 Massive Parallel Implementation
    4.4.1 Parallel Construction of the KD-tree
    4.4.2 Parallel P-BANNS on the KD-tree
  4.5 Performance Optimizations
  4.6 Experiments and Results
    4.6.1 Evaluation on Real-world Image Descriptors
      4.6.1.1 Performance Comparison with Related Works
      4.6.1.2 Comparison Between the Serial and Parallel Algorithms
    4.6.2 Speedup and Accuracy Impact Factors
      4.6.2.1 Effect of the Dataset Size and the Dimensionality
      4.6.2.2 Effect of the KD-tree Height
      4.6.2.3 Effect of the Number of Backtracks
      4.6.2.4 Effect of the Number of Nearest Neighbors of Single Query Point (P)
    4.6.3 Evaluation of the Sliding Windows
    4.6.4 Experiments on the SHOT Matching
  4.7 Conclusion

5 Parallel and Distributed BANNS on the Forest of Randomized KD-trees
  5.1 Introduction
  5.2 Background
    5.2.1 NNS on the Forest of Randomized KD-trees
  5.3 Related Work
  5.4 Massively Parallel Implementations on the GPU Cluster
    5.4.1 Construction of Forest of Randomized KD-trees
    5.4.2 BANNS on the Forest of Randomized KD-trees
      5.4.2.1 Nodes Distribution
      5.4.2.2 BANNS Queries
  5.5 Performance Optimization
  5.6 Experiments and Results
    5.6.1 Evaluation on Real-world Image Descriptors
      5.6.1.1 Performance Comparison with Related Works
      5.6.1.2 Comparison Between the Parallel Algorithms on the GPU and CPU Clusters
    5.6.2 Speedup and Accuracy Impact Factors
      5.6.2.1 Effect of the Dataset Size and the Dimensionality
      5.6.2.2 Effect of the KD-tree Height
      5.6.2.3 Effect of the Number of Backtracks
      5.6.2.4 Effect of the Number of Nearest Neighbors of Single Query Point (P)
      5.6.2.5 Effect of Global Synchronization to the Number of Backtracks
    5.6.3 Evaluation of the Sliding Windows
    5.6.4 Experiments on the SHOT Matching
  5.7 Conclusion

6 Conclusion
  6.1 Completed Work
  6.2 Constraints and Limitations
    6.2.1 SM Occupancy
      6.2.1.1 SM Occupancy of Parallel Descriptor Computing
      6.2.1.2 SM Occupancy of Parallel KD-Tree Construction
      6.2.1.3 SM Occupancy of the Parallel ANNS
      6.2.1.4 SM Occupancy of the Parallel BANNS
    6.2.2 Memory Utilization
    6.2.3 Communication Cost
    6.2.4 Other Constraints
  6.3 Future Work
    6.3.1 Data Compression
    6.3.2 Binarization and Hashing

References

A Letters of Permission

List of Figures

1.1 Point cloud of bunny
1.2 Point cloud of mountains
1.3 3D object recognition pipeline
1.4 Model and scene descriptors matching
1.5 Model and scene correspondence grouping
1.6 Initial and final correspondences of hypothesis verification
1.7 Heterogeneous computing architecture (HCA)
1.8 Fermi GPU architecture
1.9 Streaming multiprocessor architecture
1.10 CUDA programming model
1.11 CUDA memory model
2.1 Azimuth/radial partition
2.2 Elevation partition
2.3 Local histogram for each volume
Recommended publications
  • Surface Recovery: Fusion of Image and Point Cloud
    Surface Recovery: Fusion of Image and Point Cloud
    Siavash Hosseinyalamdary, Alper Yilmaz
    The Ohio State University, 2070 Neil Avenue, Columbus, Ohio, USA 43210
    pcvlab.engineering.osu.edu

    Abstract: The point cloud of the laser scanner is a rich source of information for high-level tasks in computer vision such as traffic understanding. However, cost-effective laser scanners provide noisy, low-resolution point clouds, and they are prone to systematic errors. In this paper, we propose two surface recovery approaches based on the geometry and brightness of the surface. The proposed approaches are tested in realistic outdoor scenarios and the results show that both approaches have superior performance over the state-of-the-art methods.

    1. Introduction: Point cloud is a valuable source of information for scene ... construct surfaces. Implicit (or volumetric) representation of a surface divides the three-dimensional Euclidean space into voxels, and the value of each voxel is defined based on an indicator function which describes the distance of the voxel to the surface. The value of every voxel inside the surface has negative sign, the value of the voxels outside the surface is positive, and the surface is represented as the zero-crossing values of the indicator function. Unfortunately, this representation is not applicable to open surfaces, and some modifications should be applied to reconstruct open surfaces. Least squares and partial differential equation (PDE) based approaches have also been developed to implicitly reconstruct surfaces. The moving least squares (MLS) [22, 1] and Poisson surface reconstruction [16] approaches, the latter used in this paper for comparison, are particularly popular. Lim and Haron review different surface reconstruction techniques in more detail [20].
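The implicit (volumetric) surface representation described in the excerpt above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's method: the surface is a sphere, the indicator function is a signed distance (negative inside, positive outside), and the surface is recovered as the voxels where the indicator changes sign.

```python
import numpy as np

# Minimal sketch of an implicit (volumetric) surface representation:
# each voxel stores an indicator value (here, the signed distance to a
# sphere), negative inside the surface and positive outside; the surface
# is the set of zero crossings. Names and the sphere are illustrative.

def indicator_grid(n=32, radius=0.5):
    """Sample a signed-distance indicator for a sphere on an n^3 voxel grid."""
    axis = np.linspace(-1.0, 1.0, n)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.sqrt(x**2 + y**2 + z**2) - radius  # < 0 inside, > 0 outside

def zero_crossing_voxels(grid):
    """Mark voxels where the indicator changes sign along any axis."""
    crossing = np.zeros(grid.shape, dtype=bool)
    for ax in range(3):
        sign_change = np.diff(np.sign(grid), axis=ax) != 0
        pad = [(0, 0)] * 3
        pad[ax] = (0, 1)  # pad back to full shape; marks the lower voxel of each pair
        crossing |= np.pad(sign_change, pad)
    return crossing

grid = indicator_grid()
surface = zero_crossing_voxels(grid)
```

Open surfaces, as the excerpt notes, break this scheme: with no well-defined inside, the sign of the indicator is ambiguous.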
  • Experimental Validation of Autodesk 3Ds Max Design 2009 and Daysim
    Experimental Validation of Autodesk® 3ds Max® Design 2009 and Daysim 3.0
    NRC Project # B3241
    Submitted to: Autodesk Canada Co., Media & Entertainment
    Submitted by: Christoph Reinhart (1, 2)
    1) National Research Council Canada - Institute for Research in Construction (NRC-IRC), Ottawa, ON K1A 0R6, Canada (2001-2008)
    2) Harvard University, Graduate School of Design, Cambridge, MA 02138, USA (2008- )
    February 12, 2009

    Table of Contents: Abstract; 1 Introduction; 2 Methodology; 2.1 Daylighting Test Cases; 2.2 Daysim Simulations; 2.3 Autodesk 3ds Max Design Simulations; 3 Results; 3.1 Façade Illuminances; 3.2 Base Case (TC1) and Lightshelf (TC2)
  • Open Source Software for Daylighting Analysis of Architectural 3D Models
    19th International Congress on Modelling and Simulation, Perth, Australia, 12–16 December 2011
    http://mssanz.org.au/modsim2011

    Open Source Software for Daylighting Analysis of Architectural 3D Models
    Terrance Mc Minn
    Curtin University of Technology, School of Built Environment, Perth, Australia
    Email: [email protected]

    Abstract: This paper examines the viability of using open source software for the architectural analysis of solar access and overshading of building projects. For this paper, open source software also includes freely available closed source software. The Computer Aided Design software Google SketchUp (Free), while not open source, is included as it is freely available, though with restricted import and export abilities. A range of software tools is used to provide an effective procedure to aid the architect in understanding the scope of sun penetration and overshadowing on a site and within a project. The technique can also be used for lighting analysis of both external (to the building) and internal spaces. An architectural model built in SketchUp (free) CAD software is exported in two different forms for the Radiance Lighting Simulation Suite to provide the lighting analysis. The different export formats allow the 3D CAD model to be accessed directly via Radiance for full lighting analysis, or via the Blender animation program for a graphical-user-interface limited-option analysis. The Blender Modelling Environment for Architecture (BlendME) add-on exports the model and runs Radiance in the background.

    Keywords: Lighting Simulation, Open Source Software

    INTRODUCTION: The use of daylight in buildings has the potential for reduction in the energy demands and increasing thermal comfort and well-being of the buildings' occupants (Cutler, Sheng, Martin, Glaser, et al., 2008), (Webb, 2006), (Mc Minn & Karol, 2010), (Yancey, n.d.) and others.
  • An Advanced Path Tracing Architecture for Movie Rendering
    RenderMan: An Advanced Path Tracing Architecture for Movie Rendering
    PER CHRISTENSEN, JULIAN FONG, JONATHAN SHADE, WAYNE WOOTEN, BRENDEN SCHUBERT, ANDREW KENSLER, STEPHEN FRIEDMAN, CHARLIE KILPATRICK, CLIFF RAMSHAW, MARC BANNISTER, BRENTON RAYNER, JONATHAN BROUILLAT, and MAX LIANI, Pixar Animation Studios

    Fig. 1. Path-traced images rendered with RenderMan: Dory and Hank from Finding Dory (© 2016 Disney•Pixar). McQueen's crash in Cars 3 (© 2017 Disney•Pixar). Shere Khan from Disney's The Jungle Book (© 2016 Disney). A destroyer and the Death Star from Lucasfilm's Rogue One: A Star Wars Story (© & ™ 2016 Lucasfilm Ltd. All rights reserved. Used under authorization.)

    Abstract: Pixar's RenderMan renderer is used to render all of Pixar's films, and by many film studios to render visual effects for live-action movies. RenderMan started as a scanline renderer based on the Reyes algorithm, and was extended over the years with ray tracing and several global illumination algorithms. This paper describes the modern version of RenderMan, a new architecture for an extensible and programmable path tracer with many features that are essential to handle the fiercely complex scenes in movie production. Users can write their own materials using a bxdf interface, and their own light transport algorithms using an integrator interface – or they can use the materials and light transport algorithms provided with RenderMan.

    1 INTRODUCTION: Pixar's movies and short films are all rendered with RenderMan. The first computer-generated (CG) animated feature film, Toy Story, was rendered with an early version of RenderMan in 1995. The most recent Pixar movies – Finding Dory, Cars 3, and Coco – were rendered using RenderMan's modern path tracing architecture. The two left images in Figure 1 show high-quality rendering of two challenging CG movie scenes with many bounces of specular reflections and
  • Getting Started (Pdf)
    GETTING STARTED - PHOTO REALISTIC RENDERS OF YOUR 3D MODELS
    Available for Ver. 1.01 © Kerkythea 2008 Echo
    Date: April 24th, 2008
    Written by: The KT Team

    Preface: Kerkythea is a standalone render engine, using physically accurate materials and lights, aiming for the best quality rendering in the most efficient timeframe. The target of Kerkythea is to simplify the task of quality rendering by providing the necessary tools to automate scene setup, such as staging using the GL real-time viewer, material editor, general/render settings, editors, etc., under a common interface. Now reaching its 4th year of development and gaining popularity, I want to believe that KT can now be considered among the top freeware/open source render engines and can be used for both academic and commercial purposes. In the beginning of 2008, we have a strong and rapidly growing community and a website that is more "alive" than ever! KT2008 Echo is a very powerful release with a lot of improvements. Kerkythea has grown constantly over the last year into a standard rendering application among architectural studios and is extensively used within educational institutes. Of course there are a lot of things that can be added and improved. But we are really proud of reaching a high-quality and stable application that is more than usable for commercial purposes at an amazing zero cost! Ioannis Pantazopoulos, January 2008

    As the heading says, this is a Getting Started "step-by-step guide" designed to get you started using Kerkythea 2008 Echo.
  • Sony Pictures Imageworks Arnold
    Sony Pictures Imageworks Arnold
    CHRISTOPHER KULLA, Sony Pictures Imageworks
    ALEJANDRO CONTY, Sony Pictures Imageworks
    CLIFFORD STEIN, Sony Pictures Imageworks
    LARRY GRITZ, Sony Pictures Imageworks

    Fig. 1. Sony Imageworks has been using path tracing in production for over a decade: (a) Monster House (©2006 Columbia Pictures Industries, Inc. All rights reserved); (b) Men in Black III (©2012 Columbia Pictures Industries, Inc. All Rights Reserved.) (c) Smurfs: The Lost Village (©2017 Columbia Pictures Industries, Inc. and Sony Pictures Animation Inc. All rights reserved.)

    Abstract: Sony Imageworks' implementation of the Arnold renderer is a fork of the commercial product of the same name, which has evolved independently since around 2009. This paper focuses on the design choices that are unique to this version and have tailored the renderer to the specific requirements of film rendering at our studio. We detail our approach to subdivision surface tessellation, hair rendering, sampling and variance reduction techniques, as well as a description of our open source texturing and shading language components. We also discuss some ideas we once implemented but have since discarded to highlight the evolution of the software over the years.

    CCS Concepts: • Computing methodologies → Ray tracing; General Terms: Graphics, Systems, Rendering

    ... and robustness of path tracing indicated to the studio there was potential to revisit the basic architecture of a production renderer which had not evolved much since the seminal Reyes paper [Cook et al. 1987]. After an initial period of co-development with Solid Angle, we decided to pursue the evolution of the Arnold renderer independently from the commercially available product. This motivation is twofold. The first is simply pragmatic: software development in service of film production must be responsive to tight deadlines (less the film release date than internal deadlines determined by the production schedule).
  • Semantic Segmentation of Surface from Lidar Point Cloud 3
    Semantic Segmentation of Surface from Lidar Point Cloud
    Aritra Mukherjee (1), Sourya Dipta Das (2), Jasorsi Ghosh (2), Ananda S. Chowdhury (2), Sanjoy Kumar Saha (1)

    Abstract: In the field of SLAM (Simultaneous Localization And Mapping) for robot navigation, mapping the environment is an important task. In this regard the Lidar sensor can produce a near-accurate 3D map of the environment in the format of a point cloud, in real time. Though the data is adequate for extracting information related to SLAM, processing millions of points in the point cloud is computationally quite expensive. The methodology presented proposes a fast algorithm that can be used to extract semantically labelled surface segments from the cloud, in real time, for direct navigational use or higher-level contextual scene reconstruction. First, a single scan from a spinning Lidar is used to generate a mesh of subsampled cloud points online. The generated mesh is further used for surface normal computation of those points, on the basis of which surface segments are estimated. A novel descriptor to represent the surface segments is proposed and utilized to determine the surface class of the segments (semantic label) with the help of a classifier. These semantic surface segments can be further utilized for geometric reconstruction of objects in the scene, or can be used for optimized trajectory planning by a robot. The proposed methodology is compared with a number of point cloud segmentation methods and state-of-the-art semantic segmentation methods to emphasize its efficacy in terms of speed and accuracy.
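The surface normal computation mentioned in the abstract above is commonly implemented with PCA over a local neighbourhood: the eigenvector of the neighbourhood covariance with the smallest eigenvalue approximates the normal. The sketch below illustrates that general idea only; it is not the paper's mesh-based method, and the function name is invented for this example.

```python
import numpy as np

# Illustrative PCA-based surface normal estimation for a point cloud.
# This is a generic sketch, not the mesh-based computation the paper uses.

def estimate_normal(points, idx, k=8):
    """Estimate the normal at points[idx] from its k nearest neighbours."""
    dists = np.linalg.norm(points - points[idx], axis=1)
    neighbours = points[np.argsort(dists)[:k]]
    cov = np.cov(neighbours.T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]  # direction of least variance ~ surface normal

# Points scattered on the z = 0 plane: the normal should be close to (0, 0, +/-1).
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(-1, 1, (50, 2)), np.zeros(50)])
normal = estimate_normal(plane, 0)
```

The brute-force neighbour search here is O(n) per point; at the millions-of-points scale the paper targets, that is exactly the cost its mesh-based pipeline avoids.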
  • 2014 3-4 Acta Graphica.Indd
    Vidmar et al.: Performance Assessment of Three Rendering..., acta graphica 25(2014)3–4, 101–114

    Performance Assessment of Three Rendering Engines in 3D Computer Graphics Software
    Authors: Žan Vidmar, Aleš Hladnik, Helena Gabrijelčič Tomc*
    University of Ljubljana, Faculty of Natural Sciences and Engineering, Slovenia
    *E-mail: [email protected]

    Abstract: The aim of the research was the determination of testing conditions and the visual and numerical evaluation of renderings made with three different rendering engines in Maya software, which is widely used for educational and computer art purposes. In the theoretical part an overview of light phenomena and their simulation in virtual space is presented. This is followed by a detailed presentation of the main rendering methods and the results and limitations of their applications to 3D objects. At the end of the theoretical part the importance of a proper testing scene and especially the role of the Cornell box are explained. In the experimental part the terms and conditions as well as the hardware and software used for the research are presented. This is followed by a description of the procedures, where we focused on rendering quality and time, which enabled the comparison of settings of different render engines and the determination of conditions for further rendering of testing scenes. The experimental part continued with rendering a variety of simple virtual scenes including the Cornell box and virtual objects with different materials and colours. Apart from visual evaluation, which was the starting point for comparison of renderings, a procedure for numerical estimation of colour deviations of renderings using selected regions of interest in the final images is presented.
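A numerical comparison of renderings over regions of interest, as described in the abstract above, can be sketched as follows. The metric here (mean absolute per-channel difference inside a rectangular ROI) is an assumption for illustration; the study's actual metric and ROI selection are not specified in this excerpt.

```python
import numpy as np

# Illustrative ROI-based colour deviation between two renderings.
# The metric (mean absolute difference) is assumed for this sketch.

def roi_color_deviation(img_a, img_b, roi):
    """Mean absolute per-channel difference inside roi = (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    a = img_a[y0:y1, x0:x1].astype(float)
    b = img_b[y0:y1, x0:x1].astype(float)
    return float(np.abs(a - b).mean())

render_a = np.zeros((64, 64, 3))
render_b = np.zeros((64, 64, 3))
render_b[16:32, 16:32] = 30.0  # this patch differs by 30 levels per channel
deviation = roi_color_deviation(render_a, render_b, (16, 32, 16, 32))
```

Restricting the comparison to ROIs, rather than whole frames, keeps differences in the areas of interest from being diluted by large identical backgrounds.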
  • Simulations and Visualisations with the VI-Suite
    Simulations and Visualisations with the VI-Suite
    For VI-Suite Version 0.6 (document version 0.6.0.1)
    Dr Ryan Southall - School of Architecture & Design - University of Brighton

    Contents: 1 Introduction; 2 Installation; 3 Configuration; 4 The VI-Suite Interface; 4.1 Changes from v0.4; 4.2 Collections; 4.3 File structure; 4.4 The Node System (4.4.1 VI-Suite Nodes; 4.4.2 EnVi Material nodes; 4.4.3 EnVi Network nodes); 4.5 Other panels (4.5.1 Object properties panel; 4.5.2 Material properties panel; 4.5.3 Visualisation panel); 4.6 Common Nodes (4.6.1 The VI Location node; 4.6.2 The ASC Import node; 4.6.3 The VI Chart node; 4.6.4 The VI CSV Export node; 4.6.5 The Text Edit Node; 4.6.6 The VI Metrics node); 4.7 Specific
  • Image-Based Lighting
    Tutorial: Image-Based Lighting
    Paul Debevec, USC Institute for Creative Technologies

    Image-based lighting (IBL) is the process of illuminating scenes and objects (real or synthetic) with images of light from the real world. It evolved from the reflection-mapping technique [1, 2] in which we use panoramic images as texture maps on computer graphics models to show shiny objects reflecting real and synthetic environments. IBL is analogous to image-based modeling, in which we derive a 3D scene's geometric structure from images, and to image-based rendering, in which we produce the rendered appearance of a scene from its appearance in images. When used effectively, IBL can produce realistic

    2. mapping the illumination onto a representation of the environment;
    3. placing the 3D object inside the environment; and
    4. simulating the light from the environment illuminating the computer graphics object.

    "This tutorial shows how image-based lighting can illuminate synthetic objects with measurements of real light, making objects appear"

    Figure 1 shows an example of an object illuminated entirely using IBL. Gary Butcher created the models in 3D Studio Max, and the renderer used was the Arnold global illumination system written by Marcos Fajardo. I captured the light in a kitchen so it includes light from a ceiling fixture; the blue sky from the windows; and the indirect light from the room's walls, ceiling, and cabinets. Gary mapped the light from this room onto a large sphere and placed the model of the microscope on the table in the middle of the sphere.
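Step 2 in the excerpt above maps illumination onto a representation of the environment. One common representation (assumed here for illustration; the excerpt does not fix one) is a latitude-longitude panorama, where every pixel corresponds to a direction on the sphere, so a renderer can look up the incoming light for any direction. The mapping below is a sketch under that assumption.

```python
import math

# Map a unit direction to pixel coordinates in a latitude-longitude
# environment image. The y-up convention and the panorama layout are
# assumptions for this sketch, not mandated by the tutorial.

def direction_to_latlong(dx, dy, dz, width, height):
    """Map a unit direction (y up) to (u, v) pixel coordinates in a lat-long image."""
    theta = math.acos(max(-1.0, min(1.0, dy)))  # polar angle from +y
    phi = math.atan2(dz, dx)                    # azimuth in [-pi, pi]
    u = (phi + math.pi) / (2.0 * math.pi) * (width - 1)
    v = theta / math.pi * (height - 1)
    return u, v

# Straight up maps to the top row of the panorama.
u, v = direction_to_latlong(0.0, 1.0, 0.0, 512, 256)
```

With this lookup in place, step 4 (simulating the environment light on the object) amounts to sampling such directions and accumulating the corresponding pixel radiances.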
  • Poser Pro 2014 Quickly & Easily Design with 3D Figures!
    Poser Pro 2014 Quickly & Easily Design with 3D Figures! Poser Pro 2014 is the fully featured choice for creating new poser content and adding 3D characters to Max, Maya, Lightwave and C4D projects. Poser Pro 2014 provides the fastest way for professionals to create content and render 3D character images and animations from Poser scenes. COLLADA support enables Poser content integration with game engines and other 2D/3D tools. Fully 64-bit native and optimised for multi-core systems, Poser Pro 2014 saves time by efficiently using system memory and processing resources. This is the ideal 3D program for the professional artist and 3D production teams. RRP: £299.99 Product Code: AVQ-SPP14-DVD Barcode: 5 016488 126854 Poser Pro 2014 Uses:- • Supports pro-production workflow – ideal for motion graphics on TV • Medical/forensic reconstruction – crime scene animation • Architectural design – 3D building animation/concept design work • Computer game character design/ flash design for online games • Car/engineering concept construction • Can be used as a standalone or plug-in to other pro graphics packages Poser Pro 2014 Exclusive New Features:- New Fitting Room Interactively fit clothing and props to any Poser figure and create conforming clothing using five intelligent modes that automatically loosen, tighten, smooth and preserve soft and rigid features. Paint maps on the mesh to control the exact areas that you want to modify. Use pre-fit tools to direct the mesh around the goal figure’s shape. With a single button generate a new conforming item using the goal figure’s rig, complete with full morph transfer. New Powerful Parameter Controls Hidden parameters can be displayed so they can be modified automatically, allowing Poser content developers greater power.
  • An Overview of 3D Data Content, File Formats and Viewers
    Technical Report: isda08-002
    Image Spatial Data Analysis Group, National Center for Supercomputing Applications
    1205 W Clark, Urbana, IL 61801

    An Overview of 3D Data Content, File Formats and Viewers
    Kenton McHenry and Peter Bajcsy
    National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, IL
    {mchenry,pbajcsy}@ncsa.uiuc.edu
    October 31, 2008

    Abstract: This report presents an overview of 3D data content, 3D file formats and 3D viewers. It attempts to enumerate the past and current file formats used for storing 3D data and several software packages for viewing 3D data. The report also provides more specific details on a subset of file formats, as well as several pointers to existing 3D data sets. This overview serves as a foundation for understanding the information loss introduced by 3D file format conversions with many of the software packages designed for viewing and converting 3D data files.

    1 Introduction: 3D data represents information in several applications, such as medicine, structural engineering, the automobile industry, architecture, the military, cultural heritage, and so on [6]. There is a gamut of problems related to 3D data acquisition, representation, storage, retrieval, comparison and rendering due to the lack of standard definitions of 3D data content, data structures in memory and file formats on disk, as well as rendering implementations. We performed an overview of 3D data content, file formats and viewers in order to build a foundation for understanding the information loss introduced by 3D file format conversions with many of the software packages designed for viewing and converting 3D files.
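The information loss the report studies can be made concrete with a toy reader for one of the plain-text formats such surveys cover. The sketch below assumes the Wavefront OBJ format as an example: it keeps only vertex positions and face indices, silently discarding the normal data, which is exactly the kind of loss a format conversion can introduce.

```python
# Toy Wavefront OBJ reader illustrating conversion loss: it keeps only
# vertex positions and face vertex indices, dropping the normals
# ('vn' lines and the '//n' references). Format chosen as an example only.

def read_obj(text):
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # keep only the vertex index before the first '/'
            faces.append(tuple(int(p.split("/")[0]) for p in parts[1:]))
    return vertices, faces

obj_text = """
v 0 0 0
v 1 0 0
v 0 1 0
vn 0 0 1
f 1//1 2//1 3//1
"""
vertices, faces = read_obj(obj_text)  # the 'vn' normal is lost
```

Round-tripping a model through a converter that, like this reader, has no slot for some attribute loses that attribute permanently; this is the failure mode the report's comparisons are designed to expose.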