Architectural Model for an Augmented Reality Based Mobile Learning Application. Oluwaranti, A.
Recommended publications
-
EyeTap Devices for Augmented, Deliberately Diminished, or Otherwise Altered Visual Perception of Rigid Planar Patches of Real-World Scenes
Steve Mann ([email protected]) and James Fung ([email protected]), University of Toronto, 10 King's College Road, Toronto, Canada. Abstract: Diminished reality is as important as augmented reality, and both are possible with a device called the Reality Mediator. Over the past two decades, we have designed, built, worn, and tested many different embodiments of this device in the context of wearable computing. Incorporated into the Reality Mediator is an "EyeTap" system, which is a device that quantifies and resynthesizes light that would otherwise pass through one or both lenses of the eye(s) of a wearer. The functional principles of EyeTap devices are discussed in detail. The EyeTap diverts into a spatial measurement system at least a portion of light that would otherwise pass through the center of projection of at least one lens of an eye of a wearer. The Reality Mediator has at least one mode of operation in which it reconstructs these rays of light, under the control of a wearable computer system. The computer system then uses new results in algebraic projective geometry and comparametric equations to perform head tracking, as well as to track motion of rigid planar patches present in the scene. We describe how our tracking algorithm allows an EyeTap to alter the light from a particular portion of the scene to give rise to a computer-controlled, selectively mediated reality. An important difference between mediated reality and augmented reality includes the ability to not just augment but also deliberately diminish or otherwise alter the visual perception of reality. For example, diminished reality allows additional information to be inserted without causing the user to experience information overload. Our tracking algorithm also takes into account the effects of automatic gain control, by performing motion estimation in both spatial as well as tonal motion coordinates.
-
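The EyeTap entry above leans on algebraic projective geometry: the apparent motion of a rigid planar patch between views can be modeled by a 3×3 homography. The sketch below is only an illustration of that mapping, not Mann and Fung's actual tracker; the matrix values and function name are made up for the example.

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 projective homography H."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out the projective scale

# A translation-only homography: shifts every corner by (3, 5).
H = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, 5.0],
              [0.0, 0.0, 1.0]])
corners = [[0, 0], [1, 0], [1, 1], [0, 1]]  # unit patch in the image plane
moved = apply_homography(H, corners)
```

Tracking a planar patch then amounts to estimating such an `H` between frames; the bottom row of a general homography also encodes perspective foreshortening, which the identity row above leaves out.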
Augmented Reality Glasses State of the Art and Perspectives
Quentin Bodinier and Alois Wolff, Supélec SERI students. Abstract: This paper aims at delivering a comprehensive and detailed outlook on the emerging world of augmented reality glasses. Through the study of the diverse technical fields involved in the conception of augmented reality glasses, it analyzes the perspectives offered by this new technology and tries to answer the question: gadget or watershed? Index Terms: augmented reality, glasses, embedded electronics, optics. I. INTRODUCTION: Google has recently brought the attention of consumers to a topic that has interested scientists for thirty years: wearable technology, and more precisely "smart glasses". However, this commercial term does not fully take account of the diversity and complexity of existing technologies. Therefore, in these lines, we will try to give a comprehensive view of the state of the art in the different technological fields involved in this topic, for example optics and embedded electronics. Moreover, by presenting some commercial products that began to be released in 2014, we will try to foresee the future of smart glasses and their possible uses, [...] augmented reality devices and the technical challenges they must face, which include optics, electronics, real-time image processing and integration. (Fig. 1 shows different kinds of Mediated Reality.) II. AUGMENTED REALITY: A CLARIFICATION: There is a common misunderstanding about what "Augmented Reality" means. Let us quote a generally accepted definition of the concept: "Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory [input]." III. OPTICS: Optics are the core challenge of augmented reality glasses, as they need to display information on the widest Field Of View (FOV) possible, very close to the user's eyes, and in a very compact device.
-
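Since the optics discussion centers on field of view, a one-line geometric model helps fix ideas: a flat display of width w viewed from distance d subtends an angle of 2·atan(w / 2d). This is our simplifying sketch, not a model of real waveguide or prism optics, and the function name is invented for the example.

```python
import math

def field_of_view_deg(display_width_m, eye_distance_m):
    """Angular field of view (degrees) subtended by a flat display of the
    given width, seen head-on from the given eye distance: 2*atan(w / 2d)."""
    return math.degrees(2 * math.atan(display_width_m / (2 * eye_distance_m)))
```

For instance, a display as wide as twice its viewing distance subtends exactly 90°, which makes concrete why a wide FOV in a compact near-eye device is hard: the closer and wider the virtual screen, the more aggressive the optics must be.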
Automated Staging for Virtual Cinematography Amaury Louarn, Marc Christie, Fabrice Lamarche
To cite this version: Amaury Louarn, Marc Christie, Fabrice Lamarche. Automated Staging for Virtual Cinematography. MIG 2018 - 11th annual conference on Motion, Interaction and Games, Nov 2018, Limassol, Cyprus. pp. 1-10, 10.1145/3274247.3274500. HAL Id: hal-01883808, https://hal.inria.fr/hal-01883808, submitted on 28 Sep 2018. All three authors are with IRISA / INRIA, Rennes, France. Figure 1: Automated staging for a simple scene, from a high-level language specification (a) to the resulting shot (c). The system places both actors and camera in the scene. Example specification: "Scene 1: Camera 1, CU on George front screencenter and Marty 3/4 backright screenleft. George and Marty are in chair and near bar." Three constraints are displayed in (b): close-up on George (green), George seen from the front (blue), and George screencenter, Marty screenleft (red).
-
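The staging entry above turns declarative constraints ("close-up on George", "George seen from the front") into a camera placement problem. A toy version of that idea, with invented cost terms rather than the authors' solver, scores candidate 2D camera positions against a preferred shot distance and bearing and keeps the best one:

```python
import math

def score_camera(cam, subject, ideal_dist=1.5, ideal_bearing=0.0):
    """Lower is better: penalize deviation from a close-up distance and
    from a frontal bearing, mimicking declarative staging constraints."""
    dx, dy = cam[0] - subject[0], cam[1] - subject[1]
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)
    return abs(dist - ideal_dist) + abs(bearing - ideal_bearing)

subject = (0.0, 0.0)
# Exhaustively score a coarse grid of candidate camera positions.
candidates = [(x / 2, y / 2) for x in range(-6, 7) for y in range(-6, 7)]
best = min(candidates, key=lambda c: score_camera(c, subject))
```

A real staging system composes many such terms (visibility, screen position of each actor, collision with the set) and solves over camera and actor placements jointly; the grid search here only shows the cost-driven selection at the core of the idea.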
Virtual Cinematography Using Optimization and Temporal Smoothing
Alan Litteneker and Demetri Terzopoulos, University of California, Los Angeles ([email protected], [email protected]). ABSTRACT: We propose an automatic virtual cinematography method that takes a continuous optimization approach. A suitable camera pose or path is determined automatically by computing the minima of an objective function to obtain some desired parameters, such as those common in live action photography or cinematography. Multiple objective functions can be combined into a single optimizable function, which can be extended to model the smoothness of the optimal camera path using an active contour model. Our virtual cinematography technique can be used to find camera paths in either scripted or unscripted scenes, both with and without smoothing, at a relatively low computational cost. From the introduction: Viewers may disagree both on how well a frame matches a particular aesthetic, and on the relative importance of aesthetic properties in a frame. As such, virtual cinematography for scenes that are planned before delivery to the viewer, such as with 3D animated films made popular by companies such as Pixar, is generally solved manually by a human artist. However, when a scene is even partially unknown, such as when playing a video game or viewing a procedurally generated real-time animation, some sort of automatic virtual cinematography system must be used. The type of system with which the present work is concerned uses continuous objective functions and numerical optimization solvers. As an optimization problem, virtual cinematography has some difficult properties: it is generally nonlinear, non-convex, and cannot be efficiently expressed in closed form.
-
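The optimization view in the entry above can be made concrete with a deliberately tiny example: one framing term (keep an ideal distance to the subject) plus one composition-style penalty, minimized by naive finite-difference gradient descent. The objective, weights, and solver are stand-ins of ours, not the paper's actual aesthetic functions or optimizer.

```python
import math

def objective(cam, target=(2.0, 0.0), ideal_dist=1.0):
    """Combined objective: hold the camera at the ideal distance from the
    target, plus a small penalty for drifting off the line y = 0."""
    d = math.hypot(cam[0] - target[0], cam[1] - target[1])
    return (d - ideal_dist) ** 2 + 0.1 * cam[1] ** 2

def minimize(f, x0, lr=0.1, steps=2000, eps=1e-6):
    """Naive gradient descent with forward finite differences."""
    x = list(x0)
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            xp = x[:]
            xp[i] += eps
            grad.append((f(xp) - f(x)) / eps)
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

pose = minimize(objective, [0.0, 1.0])  # converges near (1, 0), one unit from the target
```

Summing weighted terms like this is what lets multiple aesthetic goals be folded into one optimizable function; the non-convexity the authors mention shows up even here, since every point at the ideal distance on the line y = 0 is a minimum.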
Yonghao Yu. Research on Augmented Reality Technology and Build AR Application on Google Glass
A Master's Paper for the M.S. in I.S. degree, April 2015, 42 pages. Advisor: Brad Hemminger. This article introduces augmented reality technology, some current applications, and augmented reality technology for wearable devices. It then introduces how to use NyARToolKit as a software library to build AR applications, and how to design an AR application for Google Glass. The application can recognize two different images through NyARToolKit's built-in functions. After finding matching pattern files, the application draws different 3D graphics according to the different input images. Headings: Augmented Reality; Google Glass Application - Design; Google Glass Application - Development. A Master's paper submitted to the faculty of the School of Information and Library Science of the University of North Carolina at Chapel Hill in partial fulfillment of the requirements for the degree of Master of Science in Information Science. Chapel Hill, North Carolina, April 2015. Approved by Brad Hemminger.
-
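The Glass application described above boils down to a dispatch: the toolkit matches an input image against pattern files, and the matched pattern selects which 3D graphic to draw. A hypothetical, library-free sketch of that dispatch follows; the pattern file names and graphic names are ours, not the paper's.

```python
# Map each recognized marker pattern to the 3D graphic drawn for it
# (two patterns, two graphics, as in the described application).
MARKER_TO_GRAPHIC = {
    "pattern_a.pat": "cube",     # hypothetical pattern file -> graphic
    "pattern_b.pat": "pyramid",
}

def graphic_for(matched_pattern):
    """Return the graphic to render for a matched pattern file,
    or None when the detector reported no match."""
    return MARKER_TO_GRAPHIC.get(matched_pattern)
```

In the real application the lookup key would come from NyARToolKit's marker detection and the values would be render calls rather than strings, but the control flow is this table lookup.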
University of Southampton Research Repository
Copyright © and Moral Rights for this thesis and, where applicable, any accompanying data are retained by the author and/or other copyright owners. A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This thesis and the accompanying data cannot be reproduced or quoted extensively from without first obtaining permission in writing from the copyright holder/s. The content of the thesis and accompanying research data (where applicable) must not be changed in any way or sold commercially in any format or medium without the formal permission of the copyright holder/s. When referring to this thesis and any accompanying data, full bibliographic details must be given, e.g. Thesis: M. Zinopoulou (2019), "A Framework for improving Engagement and the Learning Experience through Emerging Technologies", University of Southampton, Faculty of Engineering and Physical Sciences, School of Electronics and Computer Science (ECS), PhD Thesis, 119. Data: M. Zinopoulou (2019), "A Framework for improving Engagement and the Learning Experience through Emerging Technologies". Thesis submitted for the degree of Doctor of Philosophy, November 2019. Supervisors: Dr. Gary Wills and Dr. Ashok Ranchhod. Abstract: Advancements in Information Technology and Communication have brought about a new connectivity between the digital world and the real world.
Emerging Technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) and their combination as Extended Reality (XR), Artificial Intelligence (AI), the Internet of Things (IoT) and Blockchain Technology are changing the way we view our world and have already begun to impact many aspects of daily life. -
Teaching Visual Storytelling for Virtual Production Pipelines Incorporating Motion Capture and Visual Effects
Gregory Bennett and Jan Kruse, Auckland University of Technology. Figure 1: Performance Capture for Visual Storytelling at AUT. Abstract: Film, television and media production are subject to consistent change due to ever-evolving technological and economic environments. Accordingly, tertiary teaching of subject areas such as cinema, animation and visual effects requires frequent adjustments regarding curriculum structure and pedagogy. This paper discusses a multifaceted, cross-disciplinary approach to teaching Visual Narratives as part of a Digital Design program. Specifically, pedagogical challenges in teaching Visual Storytelling through Motion Capture and Visual Effects are addressed, and a new pedagogical framework using three different modes of moving image storytelling is applied and cited as case studies. Further, ongoing changes in film [...]. From the introduction: Visual narratives have a solid theoretical foundation, and could traditionally only be explored through theory and examples in a lecture/lab style context. Particularly, programs that aim to deliver content in a studio-based environment suffer from the complexity and cost-time constraints inherently part of practical inquiry into storytelling through short film production or visual effects animation. Further, due to the structure and length of Film, Visual Effects and Digital Design degrees, there is normally only time for a single facet of visual narrative to be addressed, for example a practical camera shoot, or alternatively a visual effects or animation project. This means that comparative exploratory learning is usually out of the question, and students might only take a singular view on technical and creative story development throughout their undergraduate years.
-
Fusing Multimedia Data Into Dynamic Virtual Environments
ABSTRACT. Title of dissertation: FUSING MULTIMEDIA DATA INTO DYNAMIC VIRTUAL ENVIRONMENTS. Ruofei Du, Doctor of Philosophy, 2018. Dissertation directed by: Professor Amitabh Varshney, Department of Computer Science. In spite of the dramatic growth of virtual and augmented reality (VR and AR) technology, content creation for immersive and dynamic virtual environments remains a significant challenge. In this dissertation, we present our research in fusing multimedia data, including text, photos, panoramas, and multi-view videos, to create rich and compelling virtual environments. First, we present Social Street View, which renders geo-tagged social media in its natural geo-spatial context provided by 360° panoramas. Our system takes into account visual saliency and uses maximal Poisson-disc placement with spatiotemporal filters to render social multimedia in an immersive setting. We also present a novel GPU-driven pipeline for saliency computation in 360° panoramas using spherical harmonics (SH). Our spherical residual model can be applied to virtual cinematography in 360° videos. We further present Geollery, a mixed-reality platform to render an interactive mirrored world in real time with three-dimensional (3D) buildings, user-generated content, and geo-tagged social media. Our user study has identified several use cases for these systems, including immersive social storytelling, experiencing the culture, and crowd-sourced tourism. We next present Video Fields, a web-based interactive system to create, calibrate, and render dynamic videos overlaid on 3D scenes. Our system renders dynamic entities from multiple videos, using early and deferred texture sampling. Video Fields can be used for immersive surveillance in virtual environments. Furthermore, we present the VRSurus and ARCrypt projects to explore the applications of gesture recognition, haptic feedback, and visual cryptography for virtual and augmented reality.
-
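The "maximal Poisson-disc placement" mentioned above keeps rendered social-media labels from crowding each other. The greedy dart-throwing variant below captures the core invariant (no two accepted points closer than a minimum distance); it is a simplified stand-in of ours, not the dissertation's GPU-driven, saliency-aware pipeline.

```python
import math
import random

def poisson_disc(candidates, min_dist):
    """Greedy dart throwing: accept each candidate point only if it keeps
    at least min_dist from every previously accepted point."""
    accepted = []
    for p in candidates:
        if all(math.dist(p, q) >= min_dist for q in accepted):
            accepted.append(p)
    return accepted

random.seed(7)
pts = [(random.random(), random.random()) for _ in range(200)]
placed = poisson_disc(pts, 0.2)  # well-spaced subset of the 200 candidates
```

Running the candidates in saliency order (most important media first) would bias the accepted set toward salient content while preserving the spacing guarantee, which is the spirit of the dissertation's spatiotemporal filtering.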
Joint VR Conference of EuroVR and EGVE, 2011
VTT Symposium 269. The Joint Virtual Reality Conference (JVRC2011) of EuroVR and EGVE is an international event which brings together people from industry and research, including end-users, developers, suppliers and all those interested in virtual reality (VR), augmented reality (AR), mixed reality (MR) and 3D user interfaces (3DUI). This year it was held in the UK, in Nottingham, hosted by the Human Factors Research Group (HFRG) and the Mixed Reality Lab (MRL) at the University of Nottingham. This publication is a collection of the industrial papers and poster presentations at the conference. It provides an interesting perspective into current and future industrial applications of VR/AR/MR. The Industrial Track is an opportunity for industry to tell the research and development communities what they use the technologies for, what they really think, and their needs now and in the future. The Poster Track is an opportunity for the research community to describe current and completed work or unimplemented and/or unusual systems or applications. Here we have presentations from around the world.
-
United States Patent No. US 9,729,765 B2, Balakrishnan et al.
US 9,729,765 B2: Mobile Virtual Cinematography System. Date of patent: Aug. 8, 2017. Applicant/Assignee: Drexel University, Philadelphia, PA (US). Inventors: Girish Balakrishnan, Santa Monica, CA (US); Paul Diefenbach, Collingswood, NJ (US). Filed Jun. 19, 2014 (Appl. No. 14/309,833); provisional application No. 61/836,829, filed Jun. 19, 2013. Prior publication: US 2014/0378222 A1, Dec. 25, 2014. Publications cited: Lino, C. et al. (2011), "The Director's Lens: An Intelligent Assistant for Virtual Cinematography", ACM Multimedia; Elson, D.K. and Riedl, M.O. (2007), "A Lightweight Intelligent Virtual Cinematography System for Machinima Production", Association for the Advancement of Artificial Intelligence, available from www.aaai.org. Primary Examiner: Maurice L. McDowell, Jr. Attorney, Agent, or Firm: Saul Ewing LLP; Kathryn Doyle; Brian R. Landry. Abstract: A virtual cinematography system (SmartVCS) is disclosed, including a mobile tablet device, wherein the mobile tablet device includes a touch-sensor screen, a first hand control, a second hand control, and a motion sensor.
-
VR As a Content Creation Tool for Movie Previsualisation
Quentin Galvane (Inria, CNRS, IRISA, M2S, France), I-Sheng Lin (National ChengChi University, Taiwan), Fernando Argelaguet (Inria, CNRS, IRISA, France), Tsai-Yen Li (National ChengChi University, Taiwan), Marc Christie (Univ Rennes, Inria, CNRS, IRISA, M2S, France). Figure 1: We propose a novel VR authoring tool dedicated to sketching movies in 3D before shooting them (a phase known as previsualisation). Our tool covers the main stages in film pre-production: crafting storyboards (left), creating 3D animations (center images), and preparing technical aspects of the shooting (right). ABSTRACT: Creatives in animation and film productions have forever been exploring the use of new means to prototype their visual sequences before realizing them, by relying on hand-drawn storyboards, physical mockups or, more recently, 3D modelling and animation tools. However, these 3D tools are designed with dedicated animators in mind rather than creatives such as film directors or directors of photography, and remain complex to control and master. In this paper we propose a VR authoring system which provides intuitive ways of crafting visual sequences, both for expert animators and expert creatives in the animation and film industry. From the introduction: Storyboards or digital storyboards [8] remain one of the most traditional ways to design the visual and narrative dimensions of a film sequence. With the advent of realistic real-time rendering techniques, the film and animation industries have been extending storyboards by exploring the use of 3D virtual environments to prototype movies, a technique termed previsualisation (or previs). Previsualisation consists in creating a rough 3D mockup of the scene, laying out the elements, staging the characters, placing the cameras, and creating a very early edit of the sequence.