INFO-H502 Virtual Reality


INFO-H502 Virtual Reality — 26 Oct. 2020

What is VR?
• HMD (Head-Mounted Device) with tracking and stereoscopic rendering
• 3D content creation, animation & rendering
• https://wiki.epfl.ch/cultural.data.sculpting/week11
• https://www.youtube.com/watch?v=rDgRDrez0pw&app=desktop

Biomed Virtual Reality at ULB (PROJ-H402): from 2D slices to a 3D model

Why standards?
• https://www.iso.org/standards.html
• https://youtu.be/AYBVTeqKahk
• OpenGL/WebGL (GL = Graphics Library) is used in this course

Virtual Reality in surgery: extract 3D from patient laparoscopic data
• Laparoscopic 3D simulator — ACMM2017 submission (non-ULB)

VR = 3D or multi-cam & multi-output: VR on real/natural scenes
• Light Field display = hundreds of projections
• Multi-cam acquisition for Free Navigation — how to compress?
• A. Jones et al., "An automultiscopic projector array for interactive digital humans," ACM SIGGRAPH 2015 Emerging Technologies, 2015. https://vimeo.com/128641902

DIY Light Field Display
• Project hundreds of directional views onto a diffuser
• Transmit a couple of views & synthesize all the other views
• http://gl.ict.usc.edu/Research/PicoArray/

Light Field display: 72 output views (Holografika @ VUB)

Holographic Stereograms
• Holographic stereograms made by ULB using DIBR & RVS

3D games with synthetic content
• OpenGL 3D graphics = 3D mesh + 2D texture
• 6DoF (6 Degrees of Freedom)
• http://soundvenue.com/tech/2015/05/technologik-hvad-er-virtual-reality-briller-148840

360-VR = Image-Based Rendering (IBR) = 3 Degrees of Freedom (rotational only)
• Cybersickness arises when making many translational movements
• https://thenextweb.com/creativity/2015/06/17/getty-teams-up-with-oculus-for-immersive-360-degree-image-viewing/

Stitching artefacts in creating a panoramic video
• GigaEye project, EPFL (2015): http://dx.doi.org/10.5281/zenodo.16544

Looking at the sphere from the inside out
• 3DoF: translational movements do not give parallax ⇒ cybersickness
• 3DoF+: VR with motion parallax ⇒ no cybersickness
• Roadmap: 3DoF → 3DoF+ → 6DoF

Plenoptic camera capture (also in JPEG Pleno)
• Micro-lens image: http://clim.inria.fr/Datasets/RaytrixR8Dataset-5x5/index.html (CC-BY-SA INRIA)

Photogrammetry: 3D reconstruction

People in 3D: reconstructions (OpenGL rendering), MPEG-I test content
• https://renderpeople.com/3d-people/janett-animated-003-standing/
• https://renderpeople.com/3d-people/bundle-walking-animated-001/

Geometric 3D: be careful with the Uncanny Valley (creepy look)
• https://www.newscientist.com/article/dn28432-into-the-uncanny-valley-80-robot-faces-ranked-by-creepiness/

Augmented Reality with Point Clouds
• http://research.microsoft.com/holoportation
• https://www.3ders.org/articles/20180305-russians-take-ar-selfies-with-40-ft-vladimir-putin.html
• 6DoF: one can walk around the objects/persons, and even move the objects in the scene

Geometric smoothing of captured 3D
• Fusion4D: real-time performance capture of challenging scenes © Microsoft
• https://www.youtube.com/watch?v=2dkcJ1YhYw4

3D mesh vs. light fields (DIBR)
• 3D mesh: photogrammetry with ~500 cameras — many hundreds of images needed
• Light fields / DIBR: "a dozen" cameras — only a couple of images needed

Virtual Reality with DIBR/MIV
• Real-time view synthesis © ULB; courtesy EPFL (2015): http://dx.doi.org/10.5281/zenodo.16544

Creating a MultiPlane Image (MPI) requires depth
• https://www.reddit.com/r/SelfDrivingCars/comments/g9jl4v/singleview_view_synthesis_with_multiplane_images/
• https://www.youtube.com/watch?v=gnZT34DYwyE
• https://www.youtube.com/watch?v=aJqAaMNL2m4

Creating an MPI is a difficult task
• https://openaccess.thecvf.com/content_ICCVW_2019/papers/AIM/Busam_SteReFo_Efficient_Image_Refocusing_with_Stereo_Vision_ICCVW_2019_paper.pdf

DeepView video: MPI with depth groups and meshes
• https://augmentedperception.github.io/deepviewvideo/

RVS-MIV & Multi-Plane Images (MPI)
• https://lisaserver.ulb.ac.be/rvs
• https://single-view-mpi.github.io/view.html?i=1
• https://gitlab.com/mpeg-i-visual/rvs

HoviTron: holographic vision for immersive tele-robotic operation (EU H2020 project no. 951989)
• Cameras capture the site; the robot operates on site
• The operator observes with holographic vision (automatic eye accommodation)
• Heavy calculations are needed to create a dense light field from a sparse one

Acknowledgement
This work was supported by:
• Innoviris, the Brussels Institute for Research and Innovation, Belgium, under contract no. 2015 R39c, 3DLicorneA.
• The Fonds de la Recherche Scientifique-FNRS, Belgium, under Grant no. 33679514, ColibriH.
• The EU project no. 951989 on Interactive Technologies, H2020-ICT-2019-3, HoviTron.
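Once the planes of an MPI exist, a view is produced by alpha-compositing the fronto-parallel planes back to front with the standard "over" operator. A minimal grayscale, single-pixel sketch in plain Python (the plane colors and alphas below are made-up illustration values, not taken from the slides):

```python
def composite_mpi(planes):
    """Back-to-front 'over' compositing of MPI planes.

    planes: list of (color, alpha) pairs ordered from the FARTHEST
    plane to the NEAREST one; color and alpha are in [0, 1].
    """
    color = 0.0  # start from an empty (black, fully transparent) background
    for c, a in planes:  # far to near
        color = a * c + (1.0 - a) * color  # 'over' operator
    return color

# Farthest plane fully opaque (the background), nearer planes semi-transparent.
print(composite_mpi([(1.0, 1.0), (0.5, 0.5), (0.0, 0.2)]))
```

In a real MPI renderer the same recurrence runs per pixel on RGBA plane images after each plane has been homography-warped to the target viewpoint.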
OpenGL pipeline

3D model + 2D texture, flat shading
• https://en.wikipedia.org/wiki/UV_mapping#/media/File:UVMapping.png

The 3D OpenGL pipeline (down to the raster scan on screen)
• Vertex Processor/Shader: projects the vertices, triangle by triangle
• Fragment Processor/Shader: colors the pixels within each triangle, including lighting (some light calculus can also be done in the vertex shader)

Drawing triangles
• A "continuous" triangle is projected through the camera center onto the screen and rasterized into a "discrete" triangle (cf. pixels), stepping over the screen pixels with a constant, linear step.

Z-buffering at the end of the pipeline
• Painting the triangle pixels from rear to front = Painter's Algorithm
• Pipeline order: clipping, Z-buffering, projection on screen
• Overlapping triangles are rendered correctly by also keeping track of the per-pixel depths

View Frustum (orthographic or perspective; 3D → 2D projection)
• Only what lies inside the 3D pyramidal (perspective) or box-shaped (orthographic) region is effectively handled and projected towards the 2D screen

ModelView Transformation (cf. Vertex Shader)
• Model Coordinates --(Model Matrix)--> World Coordinates --(View Matrix)--> Camera Coordinates
• The vertex shader combines rotations and translations: translation P' = P + T, rotation P' = R·P.
• Chaining (R1, T1) then (R2, T2):
$$ P'' = R_2 (R_1 P + T_1) + T_2 = R_2 R_1 P + R_2 T_1 + T_2 $$
• A rotation is a 2×2 matrix in 2D and a 3×3 matrix in 3D, so chaining mixed rotations and translations gets rather complex.

4D Homogeneous Coordinates
In 3D Cartesian coordinates, translation and rotation are distinct operations:
$$ \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix} + \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \qquad \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} $$
In 4D homogeneous coordinates, where (x, y, z, w) represents the physical point (x/w, y/w, z/w) (w is a scale factor), both become 4×4 matrix products:
$$ \begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} R_{11} & R_{12} & R_{13} & 0 \\ R_{21} & R_{22} & R_{23} & 0 \\ R_{31} & R_{32} & R_{33} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} $$
Rotation and translation thus combine into a single 4×4 matrix, which is exactly what OpenGL uses:
$$ \begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} = \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} $$
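The payoff of homogeneous coordinates — one 4×4 matrix replacing a chain of separate rotations and translations — can be checked numerically. A plain-Python sketch (the helper names mat_mul, transform and apply are mine for illustration, not OpenGL API calls):

```python
def mat_mul(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(R, T):
    """4x4 homogeneous matrix for 'rotate by 3x3 R, then translate by T'."""
    return [[R[0][0], R[0][1], R[0][2], T[0]],
            [R[1][0], R[1][1], R[1][2], T[1]],
            [R[2][0], R[2][1], R[2][2], T[2]],
            [0.0,     0.0,     0.0,     1.0]]

def apply(M, p):
    """Apply a 4x4 matrix to a 3D point (w = 1) and return the 3D result."""
    q = [sum(M[i][j] * (p + [1.0])[j] for j in range(4)) for i in range(4)]
    return q[:3]

# Two steps, each a 90-degree rotation about z followed by a translation.
Rz = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
T1, T2 = [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]

P = [3.0, 4.0, 5.0]
# Step by step: R2*(R1*P + T1) + T2 ...
step_by_step = apply(transform(Rz, T2), apply(transform(Rz, T1), P))
# ... versus one pre-multiplied 4x4 matrix.
one_matrix = apply(mat_mul(transform(Rz, T2), transform(Rz, T1)), P)
print(step_by_step, one_matrix)  # identical results
```

This is why the vertex shader can receive a single pre-composed ModelView matrix instead of a list of elementary transforms.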
Lighting equations

Various shading approaches: ray tracing, flat shading, Gouraud & Phong shading.
1. Wireframe
2. Flat shading
3. Gouraud shading
4. Phong shading with textures

Diffuse light equation (uniformly redistribute the incoming light)
I: light intensity, L: light direction, N: normal to the surface. A patch of area S' tilted by an angle a intercepts only S = S'·cos(a) of the beam cross-section, so the received density is
$$ I/S' = (I/S)\cos a = (I/S)\,(N \cdot L), $$
using V1·V2 = |V1|·|V2|·cos(a). Hence
$$ I' = I \, K_d \, \max(N \cdot L,\, 0) $$
with K_d the diffusion coefficient of the material.

Specular light equation
V = E = viewing/eye direction, R = mirrored reflection of L (and Rv = mirrored reflection of V), ns = shininess (typically 20). With cos(b) = R·V,
$$ I' = I \, K_s \, (R \cdot V)^{n_s} $$
where K_s is the specular coefficient of the material. The lobe (cos b)^ns still leaves some reflection around the ideal mirrored direction R; try b = 45° with ns = 1, 5 and 50: the larger ns, the sharper the highlight. This lobe is a simple BRDF (Bidirectional Reflectance Distribution Function).
The reflected ray R can be calculated from L and N only, but at the time OpenGL was invented this was judged too complex, so the halfway vector (below) was preferred.

Specular light equation with the halfway vector
Halfway vector H = (L + V)/2. The angle b2 between N and H follows the same trend as b: when b increases, so does b2. Therefore, instead of I' = I·Ks·(R·V)^ns, we rather use
$$ I' = I \, K_s \, (N \cdot H)^{n_s'} $$

Gouraud versus Phong shading
• Gouraud (mainly vertex shader): 1. calculate the light equation at the 3 vertices; 2. interpolate the obtained color over the triangle.
• Phong (mainly fragment shader): 1. interpolate the normal from the 3 vertices; 2. calculate the light equation in each fragment.
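The diffuse and halfway-vector specular terms combine into one per-fragment intensity. A minimal scalar sketch in plain Python (the vectors and coefficients below are illustrative values of mine, not from the slides):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(I, N, L, V, kd, ks, ns):
    """Diffuse + specular (halfway-vector) intensity for one fragment:
    I' = I*(kd*max(N.L, 0) + ks*max(N.H, 0)**ns), with H = normalize(L + V).
    """
    N, L, V = normalize(N), normalize(L), normalize(V)
    H = normalize(tuple(l + v for l, v in zip(L, V)))  # halfway vector
    diffuse = kd * max(dot(N, L), 0.0)
    specular = ks * max(dot(N, H), 0.0) ** ns
    return I * (diffuse + specular)

# Light and eye symmetric about the normal: H aligns with N, so the
# specular lobe reaches its maximum regardless of ns.
print(shade(1.0, (0, 0, 1), (1, 0, 1), (-1, 0, 1), kd=0.6, ks=0.4, ns=20))
```

Running this per vertex and interpolating the color gives Gouraud shading; running it per fragment on an interpolated normal gives Phong shading.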
Use the inverse-transpose of the transform for normals in shaders
A transform M maps vertices and edges/tangents in the vertex shader: v → Mv, t → Mt. Normals are transformed by some unknown matrix Q: n → Qn. What is Q, knowing that n^T t = 0?
$$ n^T t = (n_x\ n_y\ n_z) \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix} = n_x t_x + n_y t_y + n_z t_z = 0 $$
Orthogonality (a zero scalar product) must be preserved after the transform:
$$ (Qn)^T (Mt) = 0 \;\Rightarrow\; n^T Q^T M\, t = 0 \;\Rightarrow\; Q^T M = I \;\Rightarrow\; Q = (M^{-1})^T $$
PS: one can work with a mix of 3D and 4D dimensions.

Example of shaders: simple lighting in GLSL (the OpenGL Shading Language), to be discussed in the next lessons.

Conclusion
With these concepts, you should be able to start the exercises. Nevertheless, OpenGL is much more than that.

Further OpenGL aspects: a bit of everything

Projection matrices
• View Frustum (orthographic or perspective, 3D → 2D projection): only what lies inside the 3D pyramidal (or box-shaped) region is effectively handled and projected towards the 2D screen.
• Projection matrix parameters: l = left, r = right, b = bottom, t = top, n = near, f = far; the camera projection is characterised by its focal length F and the frustum planes.
• Normalized Device Coordinates (NDC): the hardware works more easily with values within (−1, 1) in floating-point representation.
• Orthographic vs. perspective projection.
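As a sketch of how the frustum parameters (l, r, b, t, n, f) turn into a projection matrix: below is the standard OpenGL-style perspective frustum matrix in plain Python (helper names are mine). After the perspective divide by w, points on the near and far planes land at the NDC boundaries z = −1 and z = +1.

```python
def frustum(l, r, b, t, n, f):
    """OpenGL-style perspective projection matrix (camera looks down -z)."""
    return [
        [2*n/(r-l), 0.0,        (r+l)/(r-l),   0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),   0.0],
        [0.0,       0.0,       -(f+n)/(f-n),  -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,           0.0],
    ]

def project(M, p):
    """Apply M to (x, y, z, 1), then do the perspective divide -> NDC."""
    x, y, z, w = (sum(M[i][j] * (p + [1.0])[j] for j in range(4)) for i in range(4))
    return [x / w, y / w, z / w]

M = frustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0)
# Center of the near plane and of the far plane, in camera coordinates.
print(project(M, [0.0, 0.0, -1.0]), project(M, [0.0, 0.0, -100.0]))
```

The bottom row (0, 0, −1, 0) is what makes the projection perspective: it copies −z into w, so the later divide by w shrinks distant geometry; an orthographic matrix keeps the identity bottom row (0, 0, 0, 1) instead.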