Einführung in die Algorithmische Geometrie und Geometrische Modellierung


Einführung in die Algorithmische Geometrie und Geometrische Modellierung: Scan Processing
Slides in part from Bastian Steder (Univ. Freiburg) and Radu B. Rusu (Willow Garage)
Dr.-Ing. Christoph Fünfzig
Geometrische Modellierung & Algorithmische Geometrie, WS 2013/2014

What Are Point Clouds (slides 2-3)

Scanning Techniques (slide 4)
Passive and active techniques.
Laser Scanning Techniques, http://home.arcor.de/laserscanner/

Scan Technology (slide 5)
LIDAR, SLR, time-of-flight, phase interference/shift, triangulation (passive), triangulation (active), structured light (active), conoscopy; chart of measurement error vs. object size.

Scan Processing (slide 6)
ICP; Zippering, Ball Pivoting, Poisson Reconstruction.

Iterative Closest Point (ICP) (slide 7)
input:  points from two raw scans, an initial estimate of the transform
output: refined transform T(α,β,γ,x,y,z)

while (squared distance d is too large) {
    Compute the nearest neighbor n(p) for each point p.
    Estimate the rotation (3 DOF) and translation (3 DOF) using a mean-square
    cost function:
        d = min over T(α,β,γ,x,y,z) of Σ_i (T·n(p_i) − p_i)²
    Store the transformation T(α,β,γ,x,y,z).
}

Model Fitting (slide 8)
Model fitting: find a model (e.g., a line) that fits a noisy set of points.
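The ICP loop above alternates between finding correspondences and re-estimating the transform. A translation-only variant (dropping the 3 rotational DOF for brevity) can be sketched in a few lines; this is an illustrative Python sketch under those simplifying assumptions, not PCL's full 6-DOF C++ implementation:

```python
def nearest_neighbor(p, cloud):
    """Return the point in `cloud` closest to `p` (brute-force search)."""
    return min(cloud, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

def icp_translation(source, target, max_iters=50, tol=1e-9):
    """Translation-only ICP: iteratively align `source` to `target`.

    Returns the accumulated translation (tx, ty). A full ICP would also
    estimate the rotation; the translation step alone has the closed-form
    solution "mean of the residuals".
    """
    tx, ty = 0.0, 0.0
    src = list(source)
    for _ in range(max_iters):
        # Step 1: nearest neighbor n(p) for each point p.
        pairs = [(p, nearest_neighbor(p, target)) for p in src]
        # Step 2: minimize sum_i (n(p_i) - (p_i + t))^2  ->  t = mean residual.
        dx = sum(n[0] - p[0] for p, n in pairs) / len(pairs)
        dy = sum(n[1] - p[1] for p, n in pairs) / len(pairs)
        # Step 3: apply and accumulate the transform; stop when the update is tiny.
        src = [(p[0] + dx, p[1] + dy) for p in src]
        tx, ty = tx + dx, ty + dy
        if dx * dx + dy * dy < tol:
            break
    return tx, ty

scan_a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
scan_b = [(x + 0.3, y - 0.2) for x, y in scan_a]  # scan_a shifted by (0.3, -0.2)
print(icp_translation(scan_a, scan_b))
```

With the correct correspondences found in the first iteration, the recovered translation converges to the shift between the two scans.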
Three standard approaches:
    Linear Least Squares (uses all points for fitting)
    Hough Transform (histogramming)
    RANSAC (randomized, maximum consensus set)

Random Sample Consensus (RANSAC) (slide 9)
input:
    data: a set of observations
    n: the minimum number of data points required to fit the model
    k: the number of iterations performed by the algorithm
    t: a threshold value for determining when a datum fits a model
    d: the number of close data values required to assert that a model fits the data well
output:
    best_model: the model parameters which best fit the data
    best_consensus_set: the data points from which this model has been estimated
    best_error: the error of this model relative to the data

iterations := 0
best_model := null; best_consensus_set := null; best_error := infinity
while iterations < k do
    maybe_inliers := n randomly selected values from data
    maybe_model := model fitted to maybe_inliers
    consensus_set := maybe_inliers
    for every point in data not in maybe_inliers
        if point fits maybe_model with an error smaller than t then
            add point to consensus_set
    if the number of elements in consensus_set is > d then
        this_model := model parameters fitted to all points in consensus_set
        this_error := a measure of how well this_model fits these points
        if this_error < best_error then
            store this_model, consensus_set and this_error as the new best
    iterations := iterations + 1

Point Cloud Library (PCL) (slide 21)
http://www.ros.org/wiki/pcl

Data Representation (slide 22)
As previously presented: a point is represented as a tuple, and a point cloud is represented as a collection of points.

Data Representation (slide 23)
In terms of data structures, an XYZ point can be represented as three floats:
    float x; float y; float z
An n-dimensional point can be represented as float[] point, which is nothing else but a std::vector<float>.
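The RANSAC pseudocode above can be made concrete for 2D line fitting, where the minimum sample size is n = 2. The following is an illustrative Python sketch whose variable names follow the pseudocode (it is not PCL's C++ Sample Consensus implementation):

```python
import random

def fit_line(points):
    """Least-squares line y = a*x + b through `points` (assumes a non-vertical line)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def ransac_line(data, k=100, t=0.1, d=5, seed=0):
    """RANSAC for a 2D line; n = 2 sample points per iteration."""
    rng = random.Random(seed)
    best_model, best_consensus, best_error = None, None, float("inf")
    for _ in range(k):
        maybe_inliers = rng.sample(data, 2)          # n randomly selected values
        (x1, y1), (x2, y2) = maybe_inliers
        if x1 == x2:
            continue                                  # skip degenerate samples
        a = (y2 - y1) / (x2 - x1)                     # model fitted to maybe_inliers
        b = y1 - a * x1
        consensus = [p for p in data if abs(p[1] - (a * p[0] + b)) < t]
        if len(consensus) > d:                        # model looks good: refit on all inliers
            a, b = fit_line(consensus)
            error = sum((p[1] - (a * p[0] + b)) ** 2 for p in consensus) / len(consensus)
            if error < best_error:
                best_model, best_consensus, best_error = (a, b), consensus, error
    return best_model, best_consensus, best_error

# 20 points exactly on y = 2x + 1, plus two gross outliers.
data = [(x * 0.1, 2 * (x * 0.1) + 1) for x in range(20)] + [(0.5, 9.0), (1.2, -4.0)]
model, consensus, err = ransac_line(data)
print(model, len(consensus))
```

The outliers are never part of the maximum consensus set, so the recovered model is the true line, which is exactly the robustness that distinguishes RANSAC from plain least squares on all points.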
Data Representation (slide 24)
In terms of data structures, a point cloud is Point[] points, or std::vector<Point> points in C++, where Point is the structure/data type representing a single point.

PointCloud vs. PointCloud2 (slide 25)
We distinguish between two data formats for point clouds:
    PointCloud<PointType> with a specific point type (for actual use in the code)
    PointCloud2 as a general representation containing a header defining the point cloud structure (for loading, saving or sending as a ROS message)
Conversion between the two is easy: pcl::fromROSMsg and pcl::toROSMsg.

PointCloud Structure (slide 26)
The PointCloud class is templated over the point type:
    template <typename PointT> class PointCloud;
Important members:
    std::vector<PointT> points;  // the data
    uint32_t width, height;      // scan structure

Point Types (slide 27)
Examples of PointT [pcl/include/pcl/point_types.h]:
    struct PointXYZ { float x; float y; float z; };
or
    struct Normal { float normal[3]; float curvature; };

Range Image Class (slide 28)
A range image:
    is derived from PointCloud<PointWithRange>
    can be created from a normal point cloud, given a viewpoint
    is more convenient to work with: the neighborhood of a point is accessible
    makes methods from computer vision usable
    encodes negative information
    is easy to visualize (have a look at pcl_visualization -> range_image_visualizer)
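The width/height members describe an organized cloud: points stored row-major on a scan grid, which is what makes the neighborhood of a point cheaply accessible in a range image. A minimal sketch of that indexing, using a hypothetical Python class rather than PCL's C++ types:

```python
class PointCloud:
    """Minimal organized point cloud: `points` stored row-major, width x height."""

    def __init__(self, points, width, height):
        assert len(points) == width * height
        self.points, self.width, self.height = points, width, height

    def at(self, col, row):
        """Point at grid position (col, row), addressed like an image pixel."""
        return self.points[row * self.width + col]

    def neighbors(self, col, row):
        """4-neighborhood in the scan grid; cheap because the cloud is organized."""
        result = []
        for dc, dr in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            c, r = col + dc, row + dr
            if 0 <= c < self.width and 0 <= r < self.height:
                result.append(self.points[r * self.width + c])
        return result

# A 3x2 organized cloud (e.g., a tiny range scan), points as (x, y, z) tuples.
pts = [(c * 0.1, r * 0.1, 1.0) for r in range(2) for c in range(3)]
cloud = PointCloud(pts, width=3, height=2)
print(cloud.at(2, 1))               # last grid position
print(len(cloud.neighbors(0, 0)))   # a corner point has two grid neighbors
```

An unorganized cloud has height = 1, in which case only the flat `points` vector carries information and neighbor queries need a spatial data structure such as a kd-tree.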
What Is PCL? (slide 29)
PCL is a fully templated, modern C++ library for 3D point cloud processing. It:
    uses SSE optimizations (Eigen backend) for fast computations on modern CPUs
    uses OpenMP and Intel TBB for parallelization
    passes data between modules (e.g., algorithms) using Boost shared pointers
    will be made independent from ROS in one of the next releases

Algorithms in PCL (slide 30)
Downsampling, outlier removal, indices extraction, projections; normals and curvatures, boundary points, moment invariants, principal curvatures, Point Feature Histograms; Iterative Closest Point (ICP), non-linear optimizations; cluster extraction, polygonal prism extraction; meshing, convex hulls, Moving Least Squares; input/output.

PCL Structure (slide 31)
PCL is a collection of smaller, modular C++ libraries:
    libpcl_features: many 3D features (e.g., normals and curvatures, boundary points, moment invariants, principal curvatures, Point Feature Histograms (PFH), Fast PFH, ...)
    libpcl_surface: surface reconstruction techniques (e.g., meshing, convex hulls, Moving Least Squares, ...)
    libpcl_filters: point cloud data filters (e.g., downsampling, outlier removal, indices extraction, projections, ...)
    libpcl_io: I/O operations (e.g., writing to/reading from PCD (Point Cloud Data) and BAG files)
    libpcl_segmentation: segmentation operations (e.g., cluster extraction, Sample Consensus model fitting, polygonal prism extraction, ...)
    libpcl_registration: point cloud registration methods (e.g., Iterative Closest Point (ICP), non-linear optimizations, ...)
    libpcl_range_image: range image class with specialized methods
PCL also provides unit tests, examples, tutorials, ...

Point Cloud File Format (slide 32)
Point clouds can be stored to disk as files in the PCD format:

    # Point Cloud Data (PCD) file format v.5
    FIELDS x y z rgba
    SIZE 4 4 4 4
    TYPE F F F U
    WIDTH 307200
    HEIGHT 1
    POINTS 307200
    DATA binary
    ...

DATA can be either ascii or binary. If ascii, then:

    DATA ascii
    0.0054216 0.11349 0.040749
    ...

Usage: pcl::io::loadPCDFile and pcl::io::savePCDFile.

Architecture (slide 33)
Inheritance simplifies development and testing:

    pcl::Feature<PointT> feat;
    feat = pcl::Normal<PointT> (input);
    feat = pcl::FPFH<PointT> (input);
    feat = pcl::BoundaryPoint<PointT> (input);
    ...
    feat.compute (&output);

Basic Interface (slide 34)
Filters, features and segmentation all use the same basic interface:
    create the object
    use setInputCloud to give it the input
    set some parameters
    call compute to get the output

Filter Example 1 (slide 35)
    pcl::PassThrough<T> p;   // clamp to an interval
    p.setInputCloud (data);
    p.setFilterLimits (0.0, 0.5);
    p.setFilterFieldName ("z");
Other fields can be selected instead, e.g. filter_field_name = "x" or filter_field_name = "xz".

Filter Example 2 (slide 36)
    pcl::VoxelGrid<T> p;
    p.setInputCloud (data);
    p.setFilterLimits (0.0, 0.5);
    p.setFilterFieldName ("z");
    p.setLeafSize (0.01, 0.01, 0.01);

Filter Example 3 (slide 37)
    pcl::StatisticalOutlierRemoval<T> p;
    p.setInputCloud (data);
    p.setMeanK (50);
    p.setStddevMulThresh (1.0);

Features Example 1 (slide 38)
    pcl::NormalEstimation<T> p;
    p.setInputCloud (data);
    p.setRadiusSearch (0.01);
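The PCD header above is a simple line-oriented key/value format, so its structure is easy to see in code. Below is a minimal, illustrative Python sketch that parses a small ASCII PCD header; `parse_pcd_header` is a hypothetical helper for exposition only, and real I/O should of course go through pcl::io::loadPCDFile:

```python
def parse_pcd_header(text):
    """Parse an ASCII PCD file into (header, points).

    Handles only a subset of PCD v.5 keys and only DATA ascii; a sketch of
    the format shown above, not a replacement for pcl::io::loadPCDFile.
    """
    header, points = {}, []
    in_data = False
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        if in_data:
            points.append(tuple(float(v) for v in line.split()))
            continue
        key, _, value = line.partition(" ")
        if key == "DATA":
            if value != "ascii":
                raise ValueError("only DATA ascii is handled in this sketch")
            in_data = True
        elif key in ("FIELDS", "TYPE"):
            header[key] = value.split()
        elif key == "SIZE":
            header[key] = [int(v) for v in value.split()]
        elif key in ("WIDTH", "HEIGHT", "POINTS"):
            header[key] = int(value)
    return header, points

pcd = """\
# Point Cloud Data (PCD) file format v.5
FIELDS x y z
SIZE 4 4 4
TYPE F F F
WIDTH 2
HEIGHT 1
POINTS 2
DATA ascii
0.0054216 0.11349 0.040749
1.0 2.0 3.0
"""
header, points = parse_pcd_header(pcd)
print(header["FIELDS"], header["POINTS"], len(points))
```

Note how WIDTH and HEIGHT carry the same organized/unorganized distinction as the PointCloud members: HEIGHT 1 marks an unorganized cloud.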
Features Example 2 (slide 39)
    pcl::BoundaryEstimation<T,N> p;
    p.setInputCloud (data);
    p.setInputNormals (normals);
    p.setRadiusSearch (0.01);

Segmentation Example 1 (slide 41)
    pcl::SACSegmentation<T> p;
    p.setInputCloud (data);
    p.setModelType (pcl::SACMODEL_PLANE);
    p.setMethodType (pcl::SAC_RANSAC);
    p.setDistanceThreshold (0.01);

Segmentation Example 2 (slide 42)
    pcl::EuclideanClusterExtraction<T> p;
    p.setInputCloud (data);
    p.setClusterTolerance (0.05);
    p.setMinClusterSize (1);

Segmentation Example 3
    pcl::SegmentDifferences<T> p;
    p.setInputCloud (source);
    p.setTargetCloud (target);
    p.setDistanceThreshold (0.001);
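The Euclidean cluster extraction of segmentation example 2 groups points that are linked by chains of neighbors closer than the cluster tolerance. A minimal flood-fill sketch of that idea, in illustrative Python with brute-force neighbor search (PCL accelerates the neighbor queries with a kd-tree):

```python
from collections import deque

def euclidean_clusters(points, tolerance, min_cluster_size=1):
    """Group point indices into clusters: two points share a cluster if a chain
    of neighbors closer than `tolerance` connects them.

    Brute-force O(n^2) neighbor search for clarity.
    """
    tol2 = tolerance ** 2
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = [seed], deque([seed])
        while queue:  # flood fill over the implicit neighbor graph
            i = queue.popleft()
            near = [j for j in unvisited
                    if sum((a - b) ** 2
                           for a, b in zip(points[i], points[j])) < tol2]
            for j in near:
                unvisited.remove(j)
                cluster.append(j)
                queue.append(j)
        if len(cluster) >= min_cluster_size:
            clusters.append(sorted(cluster))
    return clusters

# Two well-separated blobs; tolerance 0.05 as in segmentation example 2.
pts = [(0.00, 0.0), (0.02, 0.0), (0.04, 0.0),
       (1.00, 0.0), (1.03, 0.0)]
print(euclidean_clusters(pts, tolerance=0.05))
```

With tolerance 0.05 the two blobs come out as two clusters; raising the tolerance above the gap between them would merge everything into one, which is exactly the knob setClusterTolerance exposes.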