Teaching Visual Storytelling for Virtual Production Pipelines Incorporating Motion Capture and Visual Effects

16 pages · PDF · 1020 KB

Teaching Visual Storytelling for Virtual Production Pipelines Incorporating Motion Capture and Visual Effects

Gregory Bennett (e-mail: [email protected]) and Jan Kruse (e-mail: [email protected]), Auckland University of Technology

SA'15 Symposium on Education, November 02 - 06, 2015, Kobe, Japan. ACM 978-1-4503-3927-8/15/11. http://dx.doi.org/10.1145/2818498.2818516

Figure 1: Performance Capture for Visual Storytelling at AUT.

Abstract

Film, television and media production are subject to constant change due to ever-evolving technological and economic environments. Accordingly, tertiary teaching of subject areas such as cinema, animation and visual effects requires frequent adjustments regarding curriculum structure and pedagogy. This paper discusses a multifaceted, cross-disciplinary approach to teaching Visual Narratives as part of a Digital Design program. Specifically, pedagogical challenges in teaching Visual Storytelling through Motion Capture and Visual Effects are addressed, and a new pedagogical framework using three different modes of moving image storytelling is applied and cited as case studies. Further, ongoing changes in film production environments and their impact on curricula for tertiary education providers are detailed, and appropriate suggestions based on tangible teaching experience are made. This paper also discusses the advantages of teaching Motion Capture in the context of immersive environments.

Keywords: motion capture, visual storytelling, narrative, visual effects, curriculum

1 Outline and Background

Visual Storytelling has been used in various contexts, from marketing [Mileski et al. 2015] to film [McClean 2007] and even games [Howland et al. 2007]. While the Digital Design program at Auckland University of Technology utilizes Visual Narratives in different contexts throughout the curriculum, the focus of this paper is on film and immersive environments.

Teaching Visual Storytelling or Visual Narratives poses significant challenges to educators and students alike. The topic requires a solid theoretical foundation, and could traditionally only be explored through theory and examples in a lecture/lab style context. Programs that aim to deliver content in a studio-based environment particularly suffer from the complexity and cost-time constraints inherent in practical inquiry into storytelling through short film production or visual effects animation. Further, due to the structure and length of Film, Visual Effects and Digital Design degrees, there is normally only time for a single facet of visual narrative to be addressed, for example a practical camera shoot, or alternatively a visual effects or animation project. This means that comparative exploratory learning is usually out of the question, and students might take only a singular view on technical and creative story development throughout their undergraduate years. Normally only postgraduate study offers sufficient time to understand and explore multiple facets of filmic narratives.

Film production pipelines in general, and visual effects workflows in particular, have seen a significant paradigm shift in the past decade. Traditional workflows are based on frequent conversions between two-dimensional images and three-dimensional data sets [Okun and Zwerman 2010]. For example, the acquisition of camera images (digital or on film) projects a 3D world through a lens onto a 2D camera plane (sensor or film back). This digitized 2D image is then used to match-move the camera, effectively reconstructing a 3D scene from 2D data. The 3D scene is manipulated as needed, and then rendered back into a sequence of 2D images, which finally make their way into compositing software. One can see the redundancy within these conversions, but technological limitations and established software tools, data formats and workflows simply dictate the way film (effects) production is traditionally accomplished. Recent developments in the fields of RGB-D cameras [Hach and Steurer 2013], performance capture, deep compositing and real-time stereoscopic rendering, as well as scene-based file formats such as FBX (Filmbox) or ABC (Alembic), are starting to revolutionize the way pipelines in visual effects studios are set up. Fewer conversions and a strong emphasis on 3D data sets instead of images have changed workflows and simplified the interchange of image data.
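The round trip described above can be caricatured in a few lines of Python. This is a minimal sketch: the classes and functions below are invented for illustration and stand in for whole departments of real tooling, not any studio's actual pipeline.

    # Toy sketch (hypothetical names): the traditional image-centric
    # round trip, contrasted with a scene-based hand-off.
    from dataclasses import dataclass, field

    @dataclass
    class Image2D:          # a rendered or photographed frame
        pixels: bytes = b""

    @dataclass
    class Scene3D:          # camera + geometry, as carried by an FBX/Alembic file
        camera: dict = field(default_factory=dict)
        geometry: list = field(default_factory=list)

    def shoot(scene: Scene3D) -> Image2D:
        """3D world -> 2D plate: projection through the lens discards depth."""
        return Image2D()

    def matchmove(plate: Image2D) -> Scene3D:
        """2D plate -> reconstructed 3D camera: the redundant inverse step."""
        return Scene3D(camera={"solved_from": "2D tracks"})

    # Traditional, conversion-heavy flow: 3D -> 2D -> 3D -> 2D -> compositing
    plate = shoot(Scene3D())
    recon = matchmove(plate)     # reconstruct what the projection lost
    render = shoot(recon)        # back to 2D for compositing

    # Scene-based flow: the 3D data set itself travels between departments
    # (serialized once, e.g. as FBX or Alembic), so no 2D round trip is needed.
    shared_scene = Scene3D(camera={"focal_length_mm": 35})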
Within their undergraduate years, students are usually not exposed to these changes in production environments. Students experience a snapshot of current technology, and therefore lack an appreciation and understanding of the implications and benefits of technological change.

One of the most significant workflow changes in commercial environments is probably the way visual storytelling has been detached from a linear production mode. Techniques such as performance capture allow the decoupling of performance from cinematography. While the acting is recorded in the form of a motion and facial capture, the 'virtual camera' can be changed at a later stage during production. This is a seemingly obvious, but important difference from traditional image-based filming. It allows film makers to employ a faster and less restricted construction of the narrative through acting or performance, with the possibility to alter environments, camera, lights and visual style at any point of the subsequent production pipeline. This impacts not just abstract or hyper-realistic visual styles, but also invisible and seamless visual narratives as classified by McClean [2007]. More importantly, it has significant implications for teaching the fundamentals of cinematic storytelling.
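A sketch of that decoupling, with invented data types: the captured take is fixed data, while the virtual camera remains a free parameter that can be revisited at any later stage of production.

    # Toy illustration (hypothetical data layout): the performance is
    # captured once; the camera is chosen afterwards, as often as needed.
    from dataclasses import dataclass

    @dataclass
    class MocapTake:
        frames: list            # joint positions per frame, fixed on the stage

    @dataclass
    class VirtualCamera:
        focal_length_mm: float
        position: tuple

    def frame_shot(take: MocapTake, camera: VirtualCamera, frame: int):
        """Re-photograph the same performance with any camera, any time."""
        return {"performance": take.frames[frame], "camera": camera}

    take = MocapTake(frames=[{"hips": (0.0, 1.0, 0.0)}])
    # The identical take can be 'reshot' later as a wide or a close-up:
    wide = frame_shot(take, VirtualCamera(24.0, (0.0, 1.5, 8.0)), 0)
    close = frame_shot(take, VirtualCamera(85.0, (0.0, 1.6, 2.0)), 0)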
Cinematic or visual storytelling is the basis of any practical moving image teaching, whether for live action, animation or visual effects production. Understanding and practically applying the conventions of cinematography (shot size, framing, camera movement, lighting), editing (shot coverage, continuity and montage editing styles, point-of-view, screen direction, cutting on action) and sound (diegetic/non-diegetic audio) remains a core pedagogical challenge and goal in learning outcomes for students. The taxonomy of these conventions remains the core of foundational film studies [Bordwell and Thompson 2008; Corrigan and White 2012; Kolker 2005] and film making texts [Bowen 2013; Katz 1991; Sijll 2005], and provides both useful frameworks and practical challenges for their application within curriculum design and delivery.

Recent developments in digital production technologies and tools have transformed the possibilities for both technical pipelines and the creative development of visual storytelling. The term 'virtual production' has been coined to describe a confluence of technologies including motion capture (MoCap), gaming software and virtual cinematography. Improved live 3D data streaming, which has enabled an increasingly seamless, realtime 3D production pipeline from conception to delivery of animation and visual effects moving image, has been hailed as 'The New Art of Virtual Moviemaking' [Autodesk 2009]. Teaching strategies are based

2 Identifying pedagogical needs in rapidly changing environments

Here in the Digital Design Department, in the School of Art & Design, Auckland University of Technology (AUT) in New Zealand, our undergraduate programme caters to a largely vocationally oriented cohort who can select pathways in Visual Effects, 3D Animation and Game Design. These areas encourage overlaps and convergences according to the interests and aspirations of individual students. A recently commissioned 24-camera Motion Analysis MoCap studio at AUT (including a Vicon Cara facial MoCap system and an Insight VCS virtual camera) has allowed for the introduction of a comprehensive MoCap minor programme, covering all aspects of the MoCap pipeline including capture, data processing and post-production. In relation to our established pathways, MoCap has been found to be a valuable augmentation to all of the disciplines, and often a point of convergence in issues of performance (digital characters for animation, gaming or visual effects driven by MoCap or key-frame animation or a combination of both) and visual storytelling (3D previz for animation, gaming or visual effects).

This paper focuses on the construction of a foundational lesson for teaching visual storytelling for a visual effects sequence using MoCap. It identifies some of the key factors of teaching and learning strategies based on practical experience with delivering the fundamentals of a film grammar curriculum. The lesson plan incorporates new teaching strategies in acknowledgement of the changing digital production environment and the evolving language of visual effects cinema. It is based on the creation and exploration of the creative possibilities for visual storytelling in a 3D previz sequence, and is designed for students new to the Motion Capture pipeline, but who have some foundational knowledge of the primary 3D software platform used, in this case Autodesk Maya.
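As a concrete taste of what such a previz exercise can involve, here is a minimal sketch using Maya's Python commands. It runs only inside Maya's script editor, not as standalone Python, and all shot values are placeholders; the MoCap performance itself (imported via FBX) is untouched while the camera is re-staged.

    # A minimal previz camera sketch in Maya's Python API.
    import maya.cmds as cmds

    cam, cam_shape = cmds.camera(focalLength=35)   # a 'virtual camera'
    cmds.lookThru(cam)

    # Block a simple dolly-in over 48 frames.
    for frame, z in [(1, 10.0), (48, 3.0)]:
        cmds.setKeyframe(cam, attribute="translateZ", time=frame, value=z)

    # A playblast gives students an immediate, editable previz clip.
    cmds.playblast(startTime=1, endTime=48, percent=50)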
Recommended publications
  • Automated Staging for Virtual Cinematography Amaury Louarn, Marc Christie, Fabrice Lamarche
Automated Staging for Virtual Cinematography. Amaury Louarn, Marc Christie, Fabrice Lamarche (IRISA / INRIA, Rennes, France). MIG 2018 - 11th annual conference on Motion, Interaction and Games, Nov 2018, Limassol, Cyprus, pp. 1-10, 10.1145/3274247.3274500. HAL Id: hal-01883808, https://hal.inria.fr/hal-01883808, submitted on 28 Sep 2018. Figure 1 of the paper shows automated staging for a simple scene, from a high-level language specification to the resulting shot: given the specification "Scene 1: Camera 1, CU on George front screencenter and Marty 3/4 backright screenleft. George and Marty are in chair and near bar.", the system places both actors and camera in the scene, enforcing constraints such as a close-up on George, George seen from the front, and George screencenter with Marty screenleft.
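To make the flavour of such a declarative specification concrete, here is a toy brute-force sketch in Python. The spec dictionary, scoring terms and grid search are all invented for illustration; they are not the authors' language or solver.

    # Score candidate camera floor positions against declarative
    # constraints ('close-up', 'seen from the front') and keep the best.
    import itertools, math

    spec = {
        "subject": (0.0, 0.0),      # George's floor position
        "facing": (0.0, -1.0),      # George faces -y: 'seen from front'
    }

    def score(cam_xy):
        dx = cam_xy[0] - spec["subject"][0]
        dy = cam_xy[1] - spec["subject"][1]
        dist = math.hypot(dx, dy)
        framing = -abs(dist - 2.0)  # close-up: prefer ~2 units away
        # frontal: direction to camera should align with the facing vector
        frontal = (dx * spec["facing"][0] + dy * spec["facing"][1]) / max(dist, 1e-6)
        return framing + frontal

    candidates = itertools.product([x * 0.5 for x in range(-10, 11)], repeat=2)
    best = max(candidates, key=score)
    print("camera placed at", best)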
  • Architectural Model for an Augmented Reality Based Mobile Learning Application Oluwaranti, A. I., Obasa, A. A., Olaoye, A. O. and Ayeni, S.
Journal of Multidisciplinary Engineering Science and Technology (JMEST), ISSN: 3159-0040, Vol. 2 Issue 7, July 2015. Department of Computer Science and Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria. [email protected]. Abstract: The work developed an augmented reality (AR) based mobile learning application for students. It implemented, tested and evaluated the developed AR based mobile learning application, with the view to providing an improved and enhanced learning experience for students. It presents a model that utilizes an Android based smart phone camera to scan 2D templates and overlay the information in real time. The model was implemented and its performance evaluated with respect to its ease of use, learnability and effectiveness. The augmented reality system uses the marker-based technique for the registration of virtual contents. The image tracking is based on scanning by the inbuilt camera of the mobile device, while the corresponding virtual augmented information is displayed on its screen. The recognition of scanned images was based on the Vuforia Cloud Target Recognition System (VCTRS). The developed mobile application was modeled using Object Oriented modeling techniques. From the literature review: augmented reality, commonly referred to as AR, has garnered significant attention in recent years. The term describes the technology behind the expansion or intensification of the real world; to "augment reality" is to "intensify" or "expand" reality itself [4]. Specifically, AR is the ability to superimpose digital media on the real world through the screen of a device such as a personal computer or a smart phone, to create and show users a world full of
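Vuforia's cloud target recognition is a proprietary service, so as a stand-in the sketch below shows the same marker-based registration idea with OpenCV's ArUco module instead; the API shown is from OpenCV 4.7 and later, and the filename is a placeholder.

    # Detect printed 2D markers in a camera frame; a real AR app would
    # render registered virtual content at each marker's pose.
    import cv2

    frame = cv2.imread("template.jpg")   # placeholder for a camera frame
    if frame is not None:
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        detector = cv2.aruco.ArucoDetector(dictionary)
        corners, ids, _rejected = detector.detectMarkers(frame)
        if ids is not None:
            # Drawing the outlines stands in for overlaying virtual content.
            cv2.aruco.drawDetectedMarkers(frame, corners, ids)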
  • Virtual Cinematography Using Optimization and Temporal Smoothing
Alan Litteneker, Demetri Terzopoulos (University of California, Los Angeles). Abstract: We propose an automatic virtual cinematography method that takes a continuous optimization approach. A suitable camera pose or path is determined automatically by computing the minima of an objective function to obtain some desired parameters, such as those common in live action photography or cinematography. Multiple objective functions can be combined into a single optimizable function, which can be extended to model the smoothness of the optimal camera path using an active contour model. Our virtual cinematography technique can be used to find camera paths in either scripted or unscripted scenes, both with and without smoothing, at a relatively low computational cost. From the introduction: artists disagree both on how well a frame matches a particular aesthetic, and on the relative importance of aesthetic properties in a frame. As such, virtual cinematography for scenes that are planned before delivery to the viewer, such as the 3D animated films made popular by companies such as Pixar, is generally solved manually by a human artist. However, when a scene is even partially unknown, such as when playing a video game or viewing a procedurally generated real-time animation, some sort of automatic virtual cinematography system must be used. The type of system with which the present work is concerned uses continuous objective functions and numerical optimization solvers. As an optimization problem, virtual cinematography has some difficult properties: it is generally nonlinear, non-convex, and cannot be efficiently expressed in closed form.
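A minimal sketch of this kind of formulation, with an invented objective: framing and height preferences plus a second-difference smoothness penalty standing in for the active-contour term, handed to SciPy's general-purpose minimizer rather than the authors' solver.

    import numpy as np
    from scipy.optimize import minimize

    subject = np.array([0.0, 0.0, 1.7])   # point to keep framed
    n = 20                                 # number of path samples

    def objective(x):
        path = x.reshape(n, 3)
        dist = np.linalg.norm(path - subject, axis=1)
        framing = np.sum((dist - 3.0) ** 2)              # prefer 3 m from subject
        height = np.sum((path[:, 2] - 1.6) ** 2)         # prefer eye level
        smooth = np.sum(np.diff(path, n=2, axis=0) ** 2) # bend energy of the path
        return framing + height + 10.0 * smooth

    x0 = np.tile([3.0, 0.0, 1.6], n) + 0.1 * np.random.randn(3 * n)
    res = minimize(objective, x0)          # default quasi-Newton solver
    camera_path = res.x.reshape(n, 3)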
  • Resource Materials on the Learning and Teaching of Film
This set of materials aims to develop senior secondary students' film analysis skills and provide guidelines on how to approach a film and develop critical responses to it. It covers the fundamentals of film study and is intended for use by Literature in English teachers to introduce film as a new literary genre to beginners. The materials can be used as a learning task in class to introduce basic film concepts and viewing skills to students before engaging them in close textual analysis of the set films. They can also be used as supplementary materials to extend students' learning beyond the classroom and promote self-directed learning. The materials consist of two parts, each with the Student's Copy and Teacher's Notes. The Student's Copy includes handouts and worksheets for students, while the Teacher's Notes provides teaching steps and ideas, as well as suggested answers for teachers' reference. Part 1 provides an overview of film study and introduces students to the fundamentals of film analysis. It includes the following sections:
A. Key Aspects of Film Analysis
B. Guiding Questions for Film Study
C. Learning Activity – Writing a Short Review
Part 2 provides opportunities for students to enrich their knowledge of different aspects of film analysis and to apply it in the study of a short film. The short film "My Shoes" has been chosen to illustrate and highlight different areas of cinematography (e.g. the use of music, camera shots, angles and movements, editing techniques). Explanatory notes and viewing activities are provided to improve students' viewing skills and deepen their understanding of the cinematic techniques.
  • Fusing Multimedia Data Into Dynamic Virtual Environments
ABSTRACT. Title of dissertation: Fusing Multimedia Data into Dynamic Virtual Environments. Ruofei Du, Doctor of Philosophy, 2018. Dissertation directed by Professor Amitabh Varshney, Department of Computer Science. In spite of the dramatic growth of virtual and augmented reality (VR and AR) technology, content creation for immersive and dynamic virtual environments remains a significant challenge. In this dissertation, we present our research in fusing multimedia data, including text, photos, panoramas, and multi-view videos, to create rich and compelling virtual environments. First, we present Social Street View, which renders geo-tagged social media in its natural geo-spatial context provided by 360° panoramas. Our system takes into account visual saliency and uses maximal Poisson-disc placement with spatiotemporal filters to render social multimedia in an immersive setting. We also present a novel GPU-driven pipeline for saliency computation in 360° panoramas using spherical harmonics (SH). Our spherical residual model can be applied to virtual cinematography in 360° videos. We further present Geollery, a mixed-reality platform to render an interactive mirrored world in real time with three-dimensional (3D) buildings, user-generated content, and geo-tagged social media. Our user study has identified several use cases for these systems, including immersive social storytelling, experiencing the culture, and crowd-sourced tourism. We next present Video Fields, a web-based interactive system to create, calibrate, and render dynamic videos overlaid on 3D scenes. Our system renders dynamic entities from multiple videos, using early and deferred texture sampling. Video Fields can be used for immersive surveillance in virtual environments. Furthermore, we present the VRSurus and ARCrypt projects to explore the applications of gesture recognition, haptic feedback, and visual cryptography for virtual and augmented reality.
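As a rough illustration of one ingredient, maximal Poisson-disc placement, here is a toy dart-throwing sketch in Python; the dissertation's GPU pipeline, saliency weighting and spatiotemporal filters are not reproduced here.

    # Accept a candidate placement only if no earlier one lies within
    # radius r; repeated rejection drives the set toward maximality.
    import random, math

    def poisson_disc(width, height, r, attempts=5000):
        placed = []
        for _ in range(attempts):
            p = (random.uniform(0, width), random.uniform(0, height))
            if all(math.dist(p, q) >= r for q in placed):
                placed.append(p)
        return placed   # near-maximal once attempts stop succeeding

    # e.g. media anchors over an equirectangular panorama's lat/long grid
    markers = poisson_disc(360.0, 180.0, r=15.0)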
  • Joint VR Conference of euroVR and EGVE, 2011
VTT Symposium 269. The Joint Virtual Reality Conference (JVRC2011) of euroVR and EGVE is an international event which brings together people from industry and research, including end-users, developers, suppliers and all those interested in virtual reality (VR), augmented reality (AR), mixed reality (MR) and 3D user interfaces (3DUI). This year it was held in the UK in Nottingham, hosted by the Human Factors Research Group (HFRG) and the Mixed Reality Lab (MRL) at the University of Nottingham. This publication is a collection of the industrial papers and poster presentations at the conference. It provides an interesting perspective into current and future industrial applications of VR/AR/MR. The Industrial Track is an opportunity for industry to tell the research and development communities what they use the technologies for, what they really think, and their needs now and in the future. The Poster Track is an opportunity for the research community to describe current and completed work or unimplemented and/or unusual systems or applications. Here we have presentations from around the world.
  • Mobile Virtual Cinematography System, United States Patent US 9,729,765 B2, Balakrishnan et al.
United States Patent US 9,729,765 B2, granted Aug. 8, 2017. Inventors: Girish Balakrishnan (Santa Monica, CA) and Paul Diefenbach (Collingswood, NJ). Assignee: Drexel University, Philadelphia, PA. Filed Jun. 19, 2014 (Appl. No. 14/309,833), claiming priority to provisional application No. 61/836,829, filed Jun. 19, 2013. Prior publication: US 2014/0378222 A1, Dec. 25, 2014. Abstract: A virtual cinematography system (SmartVCS) is disclosed, including a mobile tablet device, wherein the mobile tablet device includes a touch-sensor screen, a first hand control, a second hand control, and a motion sensor. Cited publications include Lino, C. et al. (2011), "The Director's Lens: An Intelligent Assistant for Virtual Cinematography" (ACM Multimedia), and Elson, D.K. and Riedl, M.O. (2007), "A Lightweight Intelligent Virtual Cinematography System for Machinima Production" (Association for the Advancement of Artificial Intelligence, www.aaai.org).
  • VR As a Content Creation Tool for Movie Previsualisation
Quentin Galvane (Inria, CNRS, IRISA, M2S, France), I-Sheng Lin (National ChengChi University, Taiwan), Fernando Argelaguet (Inria, CNRS, IRISA, France), Tsai-Yen Li (National ChengChi University, Taiwan), Marc Christie (Univ Rennes, Inria, CNRS, IRISA, M2S, France). Figure 1: We propose a novel VR authoring tool dedicated to sketching movies in 3D before shooting them (a phase known as previsualisation). Our tool covers the main stages of film pre-production: crafting storyboards, creating 3D animations, and preparing technical aspects of the shooting. Abstract: Creatives in animation and film productions have forever been exploring the use of new means to prototype their visual sequences before realizing them, by relying on hand-drawn storyboards, physical mockups or, more recently, 3D modelling and animation tools. However, these 3D tools are designed with dedicated animators in mind rather than creatives such as film directors or directors of photography, and remain complex to control and master. In this paper we propose a VR authoring system which provides intuitive ways of crafting visual sequences, both for expert animators and expert creatives in the animation and film industry. From the introduction: hand-drawn storyboards or digital storyboards [8] remain one of the most traditional ways to design the visual and narrative dimensions of a film sequence. With the advent of realistic real-time rendering techniques, the film and animation industries have been extending storyboards by exploring the use of 3D virtual environments to prototype movies, a technique termed previsualisation (or previs). Previsualisation consists in creating a rough 3D mockup of the scene, laying out the elements, staging the characters, placing the cameras, and creating a very early edit of the sequence.
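The previs stages listed above can be pictured as a tiny data model; the field names below are illustrative only, not the tool's actual format.

    # A rough-shot plan: staging, camera placement, and an early edit.
    from dataclasses import dataclass

    @dataclass
    class Shot:
        camera_position: tuple   # placing the camera
        focal_length_mm: float
        characters: dict         # staging: name -> floor position
        duration_frames: int

    # An 'early edit' is just the ordered list of rough shots:
    sequence = [
        Shot((0, 1.6, 5), 35.0, {"hero": (0, 0, 0)}, 72),
        Shot((2, 1.7, 2), 85.0, {"hero": (0, 0, 0)}, 48),
    ]
    runtime_s = sum(s.duration_frames for s in sequence) / 24.0  # at 24 fps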
  • Voluntube Video Training Programme
VOLUNTUBE VIDEO TRAINING PROGRAMME
1.0 - INTRODUCTION TO VIDEO - FILMING BASICS
In this session we will talk about the basic tools of a videomaker and how to use them for our purposes.
1.1 CAMERA SET-UP
- VIDEO FORMAT
This is the first element that needs to be set before doing any other operation on the camera. We will decide, according to the equipment available to all the volunteers, which HD codec, format (1080p, 1080i, 720p, 720i) and video system (PAL, NTSC) is best used as a standard for Voluntube. For all those who have never heard of this technical stuff, we will briefly explain what lies behind concepts like High Definition, resolution, the difference between progressive and interlaced shooting, video systems and compression codecs. We want to choose a single standard that is the same for all the volunteers: HD 1920x1080, 25p, High Quality FH; audio stereo 2CH, NOT 5.1. Setting up the format can be done just once at the beginning of the training and then kept the same throughout the whole period of participation in the project. The choice of the right format is also very important in relation to the initial set-up of the projects in Wondershare Filmora.
- RECORDING MEDIA & FOOTAGE STORAGE
The card onto which the footage is recorded and stored (until editing) must be the first concern of our videomakers as they prepare themselves for a day of shooting, and also the last thing to check before they put their equipment back in place at the end of the day.
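Assuming footage is conformed with a tool such as ffmpeg (not part of the original programme, just one common way to do it), a call matching the chosen standard might look like the following; the filenames are placeholders and ffmpeg must be installed on the machine.

    # Conform a clip to the house standard: 1920x1080, 25p, stereo audio.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "clip_in.mov",
        "-vf", "scale=1920:1080",   # resize to full HD
        "-r", "25",                 # 25 progressive frames per second
        "-c:v", "libx264", "-crf", "18",
        "-c:a", "aac", "-ac", "2",  # stereo, NOT 5.1
        "clip_out.mp4",
    ], check=True)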
  • Choices in the Editing Room
Choices in the Editing Room: How the Intentional Editing of Dialogue Scenes through Shot Choice can Enhance Story and Character Development within Motion Pictures. Presented to the Faculty of Liberty University, School of Communication & Creative Arts, in partial fulfillment of the requirements for the Master of Arts in Strategic Communication, by Jonathan Pfenninger, December 2014. Thesis committee: Carey Martin, Ph.D. (Chair); Stewart Schwartz, Ph.D.; Van Flesher, MFA. Copyright © 2014 Jonathan Ryan Pfenninger. All Rights Reserved. Dedication: To Momma and Daddy: The drive, passion, and love that you have instilled in me has allowed me to reach farther than I thought I would ever be able to. Acknowledgements: I would like to thank my parents, Arlen and Kelly Pfenninger, for their love and support throughout this journey. As you have watched me grow up there have been times when I have questioned whether I was going to make it through, but you both have always stood strong and supported me. Your motivation has helped me know that I can chase my dreams and not settle for mediocrity. I love you. Andrew Travers, I never dreamed of a passion in filmmaking and storytelling before really getting to know you. Thank you for the inspiration and motivation. Dr. Martin, your example as a professor and filmmaker has inspired me over the last three years. I have gained an incredible amount of knowledge and confidence under your teaching and guidance. I cannot thank you enough for the time you have invested in me and this work.
  • Near-Eye Display and Tracking Technologies for Virtual and Augmented Reality
EUROGRAPHICS 2019, Volume 38 (2019), Number 2. STAR – State of The Art Report (Guest Editors: A. Giachetti and H. Rushmeier). G. A. Koulieris (Durham University, United Kingdom), K. Akşit (NVIDIA Corporation, United States of America), M. Stengel (NVIDIA Corporation), R. K. Mantiuk (University of Cambridge, United Kingdom), K. Mania (Technical University of Crete, Greece) and C. Richardt (University of Bath, United Kingdom). Abstract: Virtual and augmented reality (VR/AR) are expected to revolutionise entertainment, healthcare, communication and the manufacturing industries, among many others. Near-eye displays are an enabling vessel for VR/AR applications, which have to tackle many challenges related to ergonomics, comfort, visual quality and natural interaction. These challenges are related to the core elements of these near-eye displays: hardware and tracking technologies. In this state-of-the-art report, we investigate the background theory of perception and vision as well as the latest advancements in display engineering and tracking technologies. We begin our discussion by describing the basics of light and image formation. Later, we recount principles of visual perception by relating them to the human visual system. We provide two structured overviews on state-of-the-art near-eye display and tracking technologies involved in such near-eye displays. We conclude by outlining unresolved research questions to inspire the next generation of researchers. From the introduction: near-eye displays are an enabling head-mounted technology that immerses the user in a virtual world (VR) or augments the real world (AR) by overlaying digital information, or anything in between on the spectrum that is becoming known as 'cross/extended reality' (XR). What does it take to go from VR 'barely working', as Brooks described VR's technological status in 1999, to technologies being seamlessly integrated in the everyday lives of the consumer, going from prototype to production status? The shortcomings of current near-eye displays stem from both im-