INFO-H502 Virtual Reality
26 Oct. 2020

What is VR?
- HMD = Head-Mounted Display.
- Tracking, 3D content creation, 3D content animation & rendering, stereoscopic rendering.
- https://wiki.epfl.ch/cultural.data.sculpting/week11
- https://www.youtube.com/watch?v=rDgRDrez0pw&app=desktop

Biomed Virtual Reality ULB (PROJ-H402)
- From 2D slices to a 3D model.

Why standards?
- https://www.iso.org/standards.html
- https://youtu.be/AYBVTeqKahk
- OpenGL/WebGL is used in this course (GL = Graphics Library).

Virtual Reality: extract 3D from patient laparoscopic data
- Laparoscopic 3D simulator, ACMM2017 submission (non-ULB).

VR = 3D or multi-cam & multi-output
- VR on real/natural scenes.
- Light-field display = hundreds of projections.
- Multi-cam acquisition for free navigation; how to compress?
- A. Jones et al., "An automultiscopic projector array for interactive digital humans," ACM SIGGRAPH 2015 Emerging Technologies, article no. 6, 2015. https://vimeo.com/128641902

DIY Light-Field Display
- Project hundreds of directional views on a diffuser.
- Transmit a couple of views & synthesize all the other views.
- http://gl.ict.usc.edu/Research/PicoArray/

Light-field display: 72 output views (Holografika @ VUB).

Holographic Stereograms
- Holographic stereograms made by ULB using DIBR & RVS.

3D games with synthetic content
- OpenGL 3D graphics = 3D mesh + 2D texture.
- 6DoF (6 Degrees of Freedom).
- http://soundvenue.com/tech/2015/05/technologik-hvad-er-virtual-reality-briller-148840

360-VR = Image-Based Rendering (IBR)
- 3 Degrees of Freedom (rotational only).
- Cybersickness when making many translational movements.
- https://thenextweb.com/creativity/2015/06/17/getty-teams-up-with-oculus-for-immersive-360-degree-image-viewing/

Stitching artefacts in creating a panoramic video
- GigaEye project, EPFL (2015). http://dx.doi.org/10.5281/zenodo.16544

Looking at the sphere from inside out
- 3DoF: translational movements do not give parallax = cybersickness.
- 3DoF+ = VR with motion parallax: no cybersickness.
- From 3DoF to 3DoF+ to 6DoF.
Plenoptic camera capture (also in JPEG Pleno)
- Micro-lens images. http://clim.inria.fr/Datasets/RaytrixR8Dataset-5x5/index.html (CC-BY-SA INRIA)

Photogrammetry: 3D reconstruction.

People 3D reconstructions (OpenGL rendering)
- MPEG-I test content:
- https://renderpeople.com/3d-people/janett-animated-003-standing/
- https://renderpeople.com/3d-people/bundle-walking-animated-001/

Geometric 3D: be careful with the Uncanny Valley
- Creepy look: https://www.newscientist.com/article/dn28432-into-the-uncanny-valley-80-robot-faces-ranked-by-creepiness/

Augmented Reality with Point Clouds
- http://research.microsoft.com/holoportation
- https://www.3ders.org/articles/20180305-russians-take-ar-selfies-with-40-ft-vladimir-putin.html
- 6DoF with the possibility to turn around the objects/persons, plus the possibility to move the objects in the scene.

Geometric smoothening of captured 3D
- Fusion4D: real-time performance capture of challenging scenes (© Microsoft). https://www.youtube.com/watch?v=2dkcJ1YhYw4

3D mesh (left) vs. light-field DIBR (right)
- Photogrammetry: 500 cameras; light fields: "a dozen" cameras.
- 3D mesh: many hundreds of images needed; light-field DIBR: a couple of images needed.

Virtual Reality with DIBR/MIV
- © ULB
- Real-time view synthesis. Courtesy EPFL (2015), http://dx.doi.org/10.5281/zenodo.16544

Creating a MultiPlane Image (MPI) requires depth
- https://www.reddit.com/r/SelfDrivingCars/comments/g9jl4v/singleview_view_synthesis_with_multiplane_images/
- https://www.youtube.com/watch?v=gnZT34DYwyE
- https://www.youtube.com/watch?v=aJqAaMNL2m4

Creating an MPI is a difficult task
- https://openaccess.thecvf.com/content_ICCVW_2019/papers/AIM/Busam_SteReFo_Efficient_Image_Refocusing_with_Stereo_Vision_ICCVW_2019_paper.pdf

DeepView Video: MPI with depth groups and meshes
- https://augmentedperception.github.io/deepviewvideo/

RVS-MIV (left) & Multi-Plane Images (MPI, right)
- https://lisaserver.ulb.ac.be/rvs
- https://single-view-mpi.github.io/view.html?i=1
- https://gitlab.com/mpeg-i-visual/rvs

HoviTron: holographic vision for immersive tele-robotic operation
- The operator observes the site with holographic vision (automatic eye accommodation).
- Cameras capture the site; the robot operates on the site.
- Heavy calculations to create a dense light field from a sparse light field.
- EU H2020 project no. 951989.

Acknowledgement
This work was supported by:
- Innoviris, the Brussels Institute for Research and Innovation, Belgium, under contract no. 2015 R39c, 3DLicorneA.
- The Fonds de la Recherche Scientifique - FNRS, Belgium, under grant no. 33679514, ColibriH.
- The EU project no. 951989 on Interactive Technologies, H2020-ICT-2019-3, HoviTron.
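Coming back to the MultiPlane Images above: creating the layers is the difficult task, but once they exist, rendering reduces to alpha-compositing the fronto-parallel planes back to front with the standard "over" operator. A minimal per-pixel sketch in Python; the layer colors and alphas are made-up illustration values, not taken from any of the cited datasets:

```python
# Back-to-front alpha compositing of MultiPlane Image (MPI) layers.
# Each layer is a (color, alpha) pair; layers[0] is the farthest plane.
def composite_mpi(layers):
    color = 0.0  # accumulated color, starting from an empty background
    for c, a in layers:                    # walk the planes from back to front
        color = a * c + (1.0 - a) * color  # the 'over' operator
    return color

# Far plane: solid white; near plane: half-transparent black.
layers = [(1.0, 1.0), (0.0, 0.5)]
print(composite_mpi(layers))  # → 0.5
```

In a real MPI renderer the same recurrence runs per pixel on RGBA images, after each plane has been warped to the target viewpoint.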
OpenGL pipeline
- 3D model + 2D texture, flat shading. https://en.wikipedia.org/wiki/UV_mapping#/media/File:UVMapping.png
- The 3D OpenGL pipeline ends in a raster scan on the screen.
- Vertex Processor/Shader: projection of the vertices, triangle by triangle.
- Fragment Processor/Shader: coloring the pixels within each triangle, including lighting (*).
(*) some light calculus can also be done in the vertex shader.

Drawing triangles
- One 'continuous' triangle becomes one 'discrete' triangle of screen pixels.
- On a 'linear' surface, the step from pixel to pixel is always the same.

Z-buffering at the end of the pipeline
- Painter's algorithm: the triangle pixels are painted from rear to front.
- Clipping, z-buffering, projection on screen.
- Overlapping triangles are rendered correctly by also keeping track of the pixel depths.

3D → 2D projection
- Orthographic or perspective view frustum.
- View frustum: only what is in the 3D pyramidal (or cubical) region is effectively handled and projected towards the 2D screen.

ModelView Transformation (cf. Vertex Shader)
- Model Coordinates --(Model Matrix)--> World Coordinates --(View Matrix)--> Camera Coordinates.
- Combination of rotations and translations in the vertex shader:
    Translation: P' = P + T
    Rotation: P' = R.P
    Combination of (T1, R1), (T2, R2), etc.:
    R2.{[R1.(P + T1)] + T2} = R2.R1.P + R2.R1.T1 + R2.T2
- With 2x2 matrices in 2D and 3x3 matrices in 3D, this gets rather complex.

4D Homogeneous Coordinates
In 3D Cartesian (physical) coordinates, translation and rotation are two different operations:

    | x' |   | tx |   | x |        | x' |   | R11 R12 R13 |   | x |
    | y' | = | ty | + | y |        | y' | = | R21 R22 R23 | . | y |
    | z' |   | tz |   | z |        | z' |   | R31 R32 R33 |   | z |

In 4D homogeneous coordinates, with the scale factor w and (x, y, z, w) ≡ (x/w, y/w, z/w, 1), both become matrix products:

    | x' |   | 1 0 0 tx |   | x |        | x' |   | R11 R12 R13 0 |   | x |
    | y' | = | 0 1 0 ty | . | y |        | y' | = | R21 R22 R23 0 | . | y |
    | z' |   | 0 0 1 tz |   | z |        | z' |   | R31 R32 R33 0 |   | z |
    | 1  |   | 0 0 0 1  |   | 1 |        | 1  |   |  0   0   0  1 |   | 1 |

A combined rotation and translation is then a single matrix:

    | x' |   | r11 r12 r13 t1 |   | x |
    | y' | = | r21 r22 r23 t2 | . | y |
    | z' |   | r31 r32 r33 t3 |   | z |
    | 1  |   |  0   0   0  1  |   | 1 |

→ which is why OpenGL works with 4x4 matrices.
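The step from chained rotations/translations to a single 4x4 homogeneous matrix can be checked numerically: applying two rigid transforms one after the other gives the same point as one multiplication by the product of their 4x4 matrices. A pure-Python sketch with the common convention P' = R.P + T; the rotation, translations and test point are arbitrary illustration values:

```python
def mat4(R, T):
    """Pack a 3x3 rotation R and a translation T into a 4x4 homogeneous matrix."""
    return [R[0] + [T[0]], R[1] + [T[1]], R[2] + [T[2]], [0, 0, 0, 1]]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def apply(M, p):
    """Transform a 3D point given in homogeneous form (w = 1), return 3D result."""
    v = [p[0], p[1], p[2], 1]
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(3)]

Rz90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # 90-degree rotation about z
I3   = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

M1 = mat4(Rz90, [1, 0, 0])                  # P'  = R1.P  + T1
M2 = mat4(I3,   [0, 2, 0])                  # P'' = P' + T2

p = [1, 2, 3]
step_by_step = apply(M2, apply(M1, p))      # chain the transforms one by one
combined     = apply(matmul(M2, M1), p)     # single product of the 4x4 matrices
print(step_by_step, combined)               # → [-1, 3, 3] [-1, 3, 3]
```

The real Model and View matrices are built exactly this way, which is why the vertex shader only ever needs one matrix-vector product per vertex.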
Lighting equations
- Various shading approaches: ray tracing, flat shading, Gouraud & Phong shading.
- 1. Wireframe; 2. Flat shading; 3. Gouraud shading; 4. Phong shading with textures.

Diffuse light equation
- Uniformly redistribute the incoming light.
- I: light intensity; L: light direction; N: normal to the surface; a: angle between N and L.
- A beam of cross-section S illuminates a surface S' = S/cos(a), so the intensity per unit surface is I/S' = (I/S).cos(a) = (I/S).N.L, using V1.V2 = |V1|.|V2|.cos(a).
- I' = I . Kd . max(N.L, 0), with Kd the diffusion coefficient of the material.

Specular light equation
- V = E = viewing/eye direction; R = mirrored reflection of L; Rv = mirrored reflection of V; b = angle between R and V; ns = shininess (typically 20).
- There are still some reflections around the ideal mirrored reflection direction R: the lobe (cos b)^ns narrows as ns grows (try b = 45° with ns = 1, 5, 50).
- cos b = R.V → I' = I . Ks . (R.V)^ns, with Ks the specular coefficient of the material.
- BRDF = Bidirectional Reflectance Distribution Function.
- The reflected ray R can be calculated from L and N only, but at the time of inventing OpenGL this was judged too complex; therefore the halfway vector (next slide) was preferred.

Specular light equation with halfway vector
- Halfway vector: H = (L + V)/2 (then normalized).
- The angles b (between R and V) and b2 (between N and H) have the same trend: when b increases, so does b2.
- Therefore, instead of I' = I . Ks . (R.V)^ns, we rather use I' = I . Ks . (N.H)^ns'.

Gouraud versus Phong shading
- Gouraud (mainly vertex shader): 1. calculate the light equation at the 3 vertices; 2. interpolate the obtained color.
- Phong (mainly fragment shader): 1. interpolate the normal from the 3 vertices; 2. calculate the light equation in each fragment.
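The diffuse and halfway-vector specular equations above translate almost line by line into code. A scalar-intensity sketch in Python; Kd, Ks and ns are made-up material values, not constants from the course:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(I, N, L, V, Kd=0.5, Ks=0.5, ns=20):
    """I' = I.Kd.max(N.L, 0) + I.Ks.(N.H)^ns, with H the normalized halfway vector."""
    N, L, V = normalize(N), normalize(L), normalize(V)
    H = normalize([l + v for l, v in zip(L, V)])  # halfway = (L + V)/2, then normalized
    diffuse  = Kd * max(dot(N, L), 0.0)
    specular = Ks * max(dot(N, H), 0.0) ** ns
    return I * (diffuse + specular)

# Light and eye both along the normal: full diffuse + full specular.
print(blinn_phong(1.0, N=[0, 0, 1], L=[0, 0, 1], V=[0, 0, 1]))  # → 1.0
```

With a grazing light (L perpendicular to N) the diffuse term vanishes and only a tiny specular residue remains, which is exactly the narrow (N.H)^ns lobe described above.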
Use the ViewInverseTranspose for normals in shaders
- A transform M transforms vertices (v) and edges/tangents (t) in the vertex shader: v → Mv, t → Mt.
- Normals (n) are transformed with an unknown transform Q: n → Qn. What is Q, knowing that nt.t = 0?
- nt.t = nx.tx + ny.ty + nz.tz = 0: the normal is orthogonal to the tangent (zero scalar product).
- This orthogonality must be preserved after the transform: (Qn)t.(Mt) = 0 → nt.Qt.M.t = 0 → Qt.M = I → Q = (M^-1)t.
- PS: one can work with a mix of 3D and 4D dimensions.

Example of shaders
- GLSL (Graphics Library Shader Language): simple lighting.

Conclusion
- With these concepts, you should be able to start the exercises.
- Nevertheless, OpenGL is much more than that ... to be discussed in the next lessons.

Further OpenGL aspects
- A bit of everything ...

Projection matrices
- Orthographic and perspective 3D → 2D projection.
- View frustum: only what is in the 3D pyramidal (or cubical) region is effectively handled and projected towards the 2D screen.

Projection matrix
- l = left, r = right, b = bottom, t = top, n = near, f = far.
- View frustum, focal length F, camera projection.
- Frustum planes (A), (B).

Normalized Device Coordinates (NDC)
- Easier for the hardware: values within (-1, 1) in floating-point representation.

Orthographic vs.
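Returning to the normal-transform result Q = (M^-1)t: it can be verified on a small example. Under a shear, the naively transformed normal M.n is no longer orthogonal to the transformed tangent M.t, while Q.n is. A pure-Python sketch; the shear matrix and the normal/tangent pair are illustration values:

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def matvec3(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def inv3(M):
    """3x3 inverse via the adjugate formula."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [
        [(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
        [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
        [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det],
    ]

M = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]   # shear: x += y
n = [1, 0, 0]                           # normal of the plane x = 0
t = [0, 1, 0]                           # a tangent of that plane

Mt      = matvec3(M, t)                 # transformed tangent
naive_n = matvec3(M, n)                 # WRONG: normal transformed like a vertex
Qn      = matvec3(transpose(inv3(M)), n)  # RIGHT: Q = (M^-1)t

print(dot(naive_n, Mt), dot(Qn, Mt))    # naive dot != 0, corrected dot = 0
```

This is exactly why shaders receive a separate "normal matrix" (the inverse transpose of the ModelView matrix) rather than reusing the ModelView matrix itself.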
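The perspective frustum-to-NDC mapping can be sanity-checked numerically: with the classic OpenGL glFrustum-style matrix, a corner of the near plane maps to NDC (-1, -1, -1) and the centre of the far plane to (0, 0, 1). A sketch with arbitrary frustum bounds chosen for illustration:

```python
def frustum(l, r, b, t, n, f):
    """Perspective projection matrix in the OpenGL glFrustum convention."""
    return [
        [2 * n / (r - l), 0,               (r + l) / (r - l),  0],
        [0,               2 * n / (t - b), (t + b) / (t - b),  0],
        [0,               0,              -(f + n) / (f - n), -2 * f * n / (f - n)],
        [0,               0,              -1,                  0],
    ]

def to_ndc(M, p):
    """Project a 3D point (camera looks down -z) and apply the perspective divide."""
    v = [p[0], p[1], p[2], 1.0]
    x, y, z, w = (sum(M[i][k] * v[k] for k in range(4)) for i in range(4))
    return [x / w, y / w, z / w]

M = frustum(-1, 1, -1, 1, 1, 10)
print(to_ndc(M, [-1, -1, -1]))  # near-plane corner: maps to NDC (-1, -1, -1)
print(to_ndc(M, [0, 0, -10]))   # far-plane centre: maps to NDC (0, 0, 1)
```

The division by w is the perspective divide; after it, every visible point lies in the (-1, 1) NDC cube that the hardware rasterizes.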