Combining Computer Vision and Knowledge Acquisition to Provide Real-Time Activity Recognition for Multiple Persons Within Immersive Environments

Total Pages: 16

File Type: PDF, Size: 1020 KB

Anuraag Sridhar

School of Computer Science and Engineering, University of New South Wales, Sydney, Australia

A thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy (PhD), 2012

Abstract

In recent years, vision has gained increasing importance as a method of human-computer interaction. Vision techniques are becoming especially popular within immersive environment systems, which rely on innovation and novelty, and in which the large spatial area benefits from the unobtrusive operation of visual sensors. However, despite the vast amount of research on vision-based analysis and on immersive environments, there is a considerable gap in coupling the two fields. In particular, a vision system that can recognize the activities of multiple persons within the environment in real time would be highly beneficial, providing data not just for real-virtual interaction, but also for sociology and psychology research.

This thesis presents novel solutions for two important vision tasks that support the ultimate goal of performing activity recognition of multiple persons within an immersive environment. Although the work within this thesis has been tested in a specific immersive environment, namely the Advanced Visualisation and Interaction Environment, the components and frameworks can be easily carried over to other immersive systems.

The first task is the real-time tracking of multiple persons as they navigate within the environment. Numerous low-level algorithms, which leverage the spatial positioning of the cameras, are combined in an innovative manner into a high-level, extensible framework that provides robust tracking of up to 10 persons within the immersive environment. The framework distributes multiple cameras over multiple computers for efficiency, and supports additional cameras for greater coverage of larger areas.

The second task is that of converting the low-level feature values derived from an underlying vision system into activity classes for each person. Such a system can be used in later stages to recognize increasingly complex activities using symbolic logic. An on-line, incremental knowledge acquisition (KA) philosophy is utilised for this task, which allows the introduction of additional features and classes even during system operation. The philosophy lends itself to a robust software framework, which allows a vision programmer to add activity classification rules to the system ad infinitum. The KA framework provides automated knowledge verification techniques and leverages the power of human cognition to provide computationally efficient yet accurate classification. The final system is able to discriminate between 8 different activities performed by up to 5 persons.

Dedication

I dedicate this thesis to my dear parents. To Amma, for the love and support that she gave me along the long and arduous journey, and for inspiring me to pursue creative skills along with the technical. To Nanna, for encouraging me to pursue education and scientific endeavours from a very young age, and for imbuing within me a deep passion for reading and learning.
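The incremental-KA idea described in the abstract is developed in Chapter 5 using Ripple-Down Rules. As a rough illustration of how such rules grow without disturbing earlier knowledge, here is a minimal single-classification ripple-down-rules (SCRDR) sketch in Python; the feature names (`speed`, `height_change`) and activity labels are invented for the example and are not the thesis's actual rule vocabulary.

```python
class RDRNode:
    """One rule in a single-classification ripple-down tree (SCRDR).

    If the condition fires, the conclusion applies unless a more specific
    'except' child also fires; if it does not fire, evaluation falls through
    to the 'else' child. New rules are only ever added at the leaves, so
    existing behaviour is preserved -- the incremental-KA property.
    """
    def __init__(self, cond, conclusion):
        self.cond = cond              # predicate over a case (dict of features)
        self.conclusion = conclusion  # activity label
        self.except_child = None      # refines this rule when it fires
        self.else_child = None        # tried when this rule does not fire

    def classify(self, case, default=None):
        if self.cond(case):
            if self.except_child:
                refined = self.except_child.classify(case)
                if refined is not None:
                    return refined
            return self.conclusion
        if self.else_child:
            return self.else_child.classify(case, default)
        return default

# Hypothetical rules: walking vs. standing, with a later correction for
# "jumping" added as an exception without touching the earlier rules.
root = RDRNode(lambda c: c["speed"] > 0.5, "walking")
root.else_child = RDRNode(lambda c: True, "standing")
root.except_child = RDRNode(lambda c: c["height_change"] > 0.2, "jumping")

print(root.classify({"speed": 0.8, "height_change": 0.3}))  # jumping
print(root.classify({"speed": 0.8, "height_change": 0.0}))  # walking
print(root.classify({"speed": 0.1, "height_change": 0.0}))  # standing
```

A rule added after a misclassification attaches at exactly the node whose rule fired (as an exception) or at the last node tried (as an alternative), which is why earlier classifications are never disturbed as the knowledge base grows.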
Acknowledgements

Along the long and difficult road that led to the final goal that is this thesis, I have been fortunate to have the support and encouragement of many people, to whom I owe my deepest gratitude. First are my supervisor Arcot Sowmya and co-supervisor Paul Compton. They have both been incredibly supportive and patient, and have inspired in me a great appreciation for the world of academia and the pursuit of knowledge. Without their help, this thesis simply could not have been achieved. I extend my deepest gratitude to Avishkar Misra, who drew me into the concept of Ripple-Down Rules and provided great insights and conversations along the way.

Thanks also go to my PhD review group, Maurice Pagnucco, Claude Sammut, and Alan Blair, who showed a keen interest in my work during my presentations and who asked some very insightful and thought-provoking questions. I would especially like to thank Maurice for providing an immense amount of guidance during some very difficult times.

The people at the iCinema Centre for Interactive Cinema Research provided me with the resources to see my thesis through to the end. For this reason, I thank Jeffrey Shaw and Dennis Del Favero for allowing me to use the iCinema system for my research. The person to whom I owe the most thanks at iCinema, however, is Ardrian Hardjono, who provided me with not just physical resources but also moral guidance and a strong friendship. I have also become highly indebted to the many people at iCinema who started as colleagues but, through their incredible support, have become my dear friends over the course of the PhD. These are Alex Kuptsov, Som Guan, Robin Chow, Marc Yu-San Chee, Densan Obst, Matthew Mcginity, Jared Berghold, Piyush Bedi, Rob Lawther, Sue Midgley, and Volker Kuchelmeister.

Finally, I would like to thank my friends and family, who provided so much encouragement and support to see me through to the end. Thanks to my parents, Kumar and Ambika Sridhar, for their love, support and patience. Thanks to Oleg Sushkov, for many great coffee conversations about coding and life in general. Thanks to Boris Derevnin and Vince Clemente for the many dinners where the question of "Are you finished yet?" kept me on the path. Last but in no manner the least, I thank the many researchers whose shoulders have provided such terrific foundations.

Publications arising from Thesis

1. A. Sridhar and A. Sowmya. SparseSPOT: using a priori 3-D tracking for real-time multi-person voxel reconstruction. In Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology, pages 135–138. ACM, 2009.
2. A. Sridhar and A. Sowmya. Distributed, multi-sensor tracking of multiple participants within immersive environments using a 2-cohort camera setup. Machine Vision and Applications, pages 1–17, 2010.
3. A. Sridhar and A. Sowmya. Multiple Camera, Multiple Person Tracking with Pointing Gesture Recognition in Immersive Environments. In Advances in Visual Computing, pages 508–519. Springer, 2008.
4. A. Sridhar, A. Sowmya, and P. Compton. On-line, Incremental Learning for Real-Time Vision Based Movement Recognition. In The 9th International Conference on Machine Learning and Applications, 2010.

Contents

List of Figures
List of Tables
1 Computer Vision, Activity Recognition, and the Immersive Environment
  1.1 Activity Recognition within Immersive Environments
  1.2 Thesis Scope
  1.3 Research Overview
    1.3.1 Tracking Multiple Persons
    1.3.2 A Knowledge-Based Approach to Activities
  1.4 Thesis Contribution
  1.5 Thesis Organization
2 Activity Recognition within the Large Scale Projection Display
  2.1 Real-Time Performance
  2.2 Tracking Systems
    2.2.1 Lighting and Colour
    2.2.2 Camera Calibration
    2.2.3 Segmentation
    2.2.4 Monocular Tracking Systems
    2.2.5 Stereo Tracking Systems
    2.2.6 Multi-Camera Tracking System
    2.2.7 Bayesian State Estimation
  2.3 Activity Recognition
    2.3.1 Rigid-Body Motion Analysis
    2.3.2 The Use of Joint Models
    2.3.3 The Silhouette and the Visual Hull
    2.3.4 Activity Classes: The Benefits of Cognitive Rules
  2.4 Summary
3 Immersitrack: Tracking Multiple Persons in Real-Time using a 2-Cohort Camera Setup
  3.1 Immersitrack Architecture
    3.1.1 The Slave Process
    3.1.2 The Master Process
  3.2 Camera Calibration
    3.2.1 Camera Parameters
    3.2.2 Calibration Accuracy
  3.3 Overhead Cameras
    3.3.1 2-D Motion Models
    3.3.2 Correspondence
    3.3.3 Tracker Switching
  3.4 Oblique Cameras
    3.4.1 Body Localization
  3.5 Determining 3-D Hypotheses on the Master
    3.5.1 3-D Motion Models
    3.5.2 Assign Blobs to Existing Persons
    3.5.3 Group Unassigned Blobs
    3.5.4 Reconstructing World Position
    3.5.5 Verifying Observed Persons
  3.6 Experimental Results
    3.6.1 Baseline Error Values for Single Person Tracking
    3.6.2 Framerates and Filter Prediction
    3.6.3 Tracking Multiple Persons
  3.7 SparseSPOT: Real-Time Multi-Person Visual Hull Reconstruction
    3.7.1 SPOT
    3.7.2 SparseSPOT: Leveraging Immersitrack for Spatial Coherency
    3.7.3 Performance Comparison between SPOT and SparseSPOT
    3.7.4 Final Frame-rates of Immersitrack and SparseSPOT
  3.8 Summary
4 Real-World Applications of Immersitrack
  4.1 A Real-Time Pointing Gesture Recognition System
    4.1.1 Finger Detection in Overhead Cameras
    4.1.2 Finger Detection in Oblique Cameras
    4.1.3 3-D Reconstruction
    4.1.4 Head Position
    4.1.5 Multiple Persons
    4.1.6 Experimental Results: Pointing Gestures vs. Inertial Sensor
    4.1.7 Application
  4.2 Scenario
  4.3 Mnajdra
  4.4 Summary
5 An Incremental Knowledge Acquisition Framework for Activity Recognition
  5.1 Ripple Down Rules
    5.1.1 Single Classification RDR (SCRDR)
    5.1.2 Knowledge Revision
    5.1.3 Multiple Classifications
    5.1.4 Generalisation
  5.2 Encoding Knowledge for Activity Recognition
    5.2.1 Rule Format
    5.2.2 Static versus Dynamic Attributes
    5.2.3 Cases and the Temporal Nature of Activities
    5.2.4 The Final Rule Format …
Recommended publications
  • A Quality Controllable Multi-View Object Reconstruction Method for 3D Imaging Systems
Wen-Chao Chen (Department of Computer Science, National Chiao Tung University, 1001 University Road, Hsinchu 300, Taiwan), Hong-Long Chou (Altek Corporation, Science-Based Industrial Park, Hsinchu, Taiwan), and Zen Chen (National Chiao Tung University). Journal of Visual Communication and Image Representation 21 (2010) 427–441.

Abstract: This paper addresses a novel multi-view visual hull mesh reconstruction for 3D imaging with a system quality control capability. There are numerous 3D imaging methods, including multi-view stereo algorithms and various visual hull/octree reconstruction methods known as modeling from silhouettes. The octree-based reconstruction methods are conceptually simple to implement, while encountering a conflict between model accuracy and memory size. Since the tree depth is discrete, the system performance measures (in terms of accuracy, memory size, and computation time) generally vary rapidly with the pre-specified tree depth. This jumping system performance is not suitable for practical applications; a desirable 3D reconstruction method must have finer control over the system performance. The proposed method aims at visual quality control along with better management of memory size and computation time. Furthermore, dynamic object modeling is made possible by the new method. Also, progressive transmission of the reconstructed model from coarse to fine is provided.

Keywords: 3D imaging system; modeling from silhouettes; octree model; XOR projection error; system performance; dynamic modeling. Received 2 July 2009; accepted 22 February 2010; available online 21 April 2010.
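To make the octree/modeling-from-silhouettes idea concrete, the sketch below shows the classic carve-and-subdivide loop in Python: a cube is kept, discarded, or subdivided according to how its corners project into the silhouette images. This is a minimal illustration of the general technique, not the paper's quality-controlled algorithm; the corner-sampling test, camera matrices, and depth limit are all simplifying assumptions.

```python
import numpy as np

def project(point, cam):
    """Pinhole projection; cam is a 3x4 projection matrix (hypothetical setup)."""
    p = cam @ np.append(point, 1.0)
    return p[:2] / p[2]

def classify_cube(center, half, silhouettes, cams):
    """Label a cube 'in', 'out', or 'ambiguous' against all silhouette images
    by testing its eight corners (a crude stand-in for exact projection tests)."""
    corners = center + half * np.array([[dx, dy, dz] for dx in (-1, 1)
                                        for dy in (-1, 1) for dz in (-1, 1)])
    fully_inside = []
    for sil, cam in zip(silhouettes, cams):
        hits = 0
        for c in corners:
            u, v = project(c, cam)
            ui, vi = int(round(u)), int(round(v))
            if 0 <= vi < sil.shape[0] and 0 <= ui < sil.shape[1] and sil[vi, ui]:
                hits += 1
        if hits == 0:
            return 'out'                  # outside one silhouette -> carved away
        fully_inside.append(hits == len(corners))
    return 'in' if all(fully_inside) else 'ambiguous'

def carve(center, half, silhouettes, cams, depth, out):
    label = classify_cube(center, half, silhouettes, cams)
    if label == 'out':
        return
    if label == 'in' or depth == 0:
        out.append((center, half))        # keep as a solid leaf
        return
    for dx in (-0.5, 0.5):                # subdivide ambiguous cubes (octree step)
        for dy in (-0.5, 0.5):
            for dz in (-0.5, 0.5):
                carve(center + half * np.array([dx, dy, dz]), half / 2,
                      silhouettes, cams, depth - 1, out)

# Tiny demo with one degenerate all-foreground view and an affine camera.
sil = np.ones((128, 128), dtype=bool)
cam = np.array([[100.0, 0, 0, 64], [0, 100, 0, 64], [0, 0, 0, 1]])
leaves = []
carve(np.zeros(3), 0.5, [sil], [cam], depth=3, out=leaves)
print(len(leaves))  # 1: the whole cube survives as a single 'in' leaf
```

The `depth` cut-off plays the role of the pre-specified tree depth whose discrete jumps in accuracy, memory, and time the paper's quality-control mechanism is designed to smooth out.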
  • Incorporating Dynamic Real Objects Into Immersive Virtual Environments
Benjamin Lok (University of North Carolina at Charlotte), Samir Naik (Disney Corporation), Mary Whitton and Frederick P. Brooks Jr. (University of North Carolina at Chapel Hill).

Abstract: We present algorithms that enable virtual objects to interact with and respond to virtual representations, avatars, of real objects. These techniques allow dynamic real objects, such as the user, tools, and parts, to be visually and physically incorporated into the virtual environment (VE). The system uses image-based object reconstruction and a volume query mechanism to detect collisions and to determine plausible collision responses between virtual objects and the avatars. This allows our system to provide the user natural interactions with the VE. We have begun a collaboration with NASA Langley Research Center to apply the hybrid environment system to a satellite …

From the paper: In the assembly verification example, the ideal VE system would have the participant fully convinced he was actually performing a task [Sutherland 1965]. Parts and tools would have mass, feel real, and handle properly with appropriate visual and haptic feedback. The user would interact with virtual objects as if he were actually doing the task, and virtual objects would respond to the user's actions. Both assembly and servicing are hands-on tasks, and the principal drawback of virtual models — that there is nothing there to feel, to give manual affordances, and to constrain motions — is a serious one for these applications. Using a six degree-of-freedom wand to simulate a wrench, for example, is far from natural or realistic, perhaps too far to be useful.
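The volume query at the heart of this system asks, for a set of 3-D sample points, which of them lie inside the visual hull of the real objects. The paper answers this with graphics hardware; the following Python sketch states the same test on the CPU, purely as an illustration of the idea, with 3x4 camera matrices and boolean silhouette images assumed for the example.

```python
import numpy as np

def in_visual_hull(points, silhouettes, cams):
    """Boolean mask: which 3-D sample points project inside *every*
    silhouette, i.e. lie inside the visual hull. CPU stand-in for the
    paper's framebuffer-based volume query."""
    mask = np.ones(len(points), dtype=bool)
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    for sil, cam in zip(silhouettes, cams):
        proj = (cam @ pts_h.T).T
        uv = proj[:, :2] / proj[:, 2:3]
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        ok = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        inside = np.zeros(len(points), dtype=bool)
        inside[ok] = sil[v[ok], u[ok]]
        mask &= inside            # outside any one view -> outside the hull
    return mask

def collides(virtual_surface_points, silhouettes, cams):
    """Collision test: does any sample of the virtual object's surface
    fall inside the real object's (avatar's) visual hull?"""
    return in_visual_hull(virtual_surface_points, silhouettes, cams).any()

# Demo with a square silhouette and an affine camera (hypothetical values).
sil = np.zeros((128, 128), dtype=bool); sil[32:96, 32:96] = True
cam = np.array([[100.0, 0, 0, 64], [0, 100, 0, 64], [0, 0, 0, 1]])
print(collides(np.array([[0.0, 0, 0], [2.0, 2, 0]]), [sil], [cam]))  # True
```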
  • Multicamera Real-Time 3D Modeling for Telepresence And
Benjamin Petit, Jean-Denis Lesage, Clément Ménier, Jérémie Allard, Jean-Sébastien Franco, Bruno Raffin, Edmond Boyer, François Faure. Multicamera Real-Time 3D Modeling for Telepresence and Remote Collaboration. International Journal of Digital Multimedia Broadcasting (Hindawi), Advances in 3DTV: Theory and Practice, 2010, Article ID 247108, 12 pages. doi:10.1155/2010/247108. HAL Id: inria-00436467, https://hal.inria.fr/inria-00436467v2 (submitted 6 Sep 2010 (v2), last revised 18 Apr 2012 (v3)).
  • Rendering Methods for Augmented Reality
Dissertation at the Faculty of Information and Cognitive Sciences of the Eberhard Karls Universität Tübingen for the degree of Doctor of Natural Sciences (Dr. rer. nat.), submitted by Dipl.-Inform. Jan T. Fischer of Mainz. Tübingen, 2006. Date of oral examination: 14 June 2006. Dean: Prof. Dr. rer. soc. Michael Diehl. Reviewers: Prof. Dr.-Ing. Dr.-Ing. E.h. Wolfgang Straßer; Prof. Dr. rer. nat. Bernd Fröhlich (Bauhaus-Universität Weimar); PD Dr. rer. nat. Dirk Bartz.

Declaration: I hereby declare that I produced this work independently and only with the stated aids, and that all passages taken verbatim or in substance from other works have been marked as borrowings by citing their sources. Tübingen, February 2006, Jan Fischer.

Summary: In recent years, augmented reality has developed into a promising and rapidly growing application of computer graphics. Augmented reality systems combine computer-generated graphical renderings with a view of the real world. Several central problem areas can be identified in augmented reality: the development of dedicated display devices, camera tracking, system design, user interaction, and rendering methods. While the majority of previous work has dealt with …
  • Multi-Layer Skeleton Fitting for Online Human Motion Capture
Christian Theobalt, Marcus Magnor, Pascal Schüler, Hans-Peter Seidel. Max-Planck-Institut für Informatik, Stuhlsatzenhausweg 85, Saarbrücken, Germany. {theobalt, magnor, schueler, hpseidel}@mpi-sb.mpg.de

Abstract: This paper describes a new approach to fitting a kinematic model to human motion data which is a component of a marker-free optical human motion capture system. Efficient vision-based feature tracking and volume reconstruction by shape-from-silhouette are applied to raw image data obtained from several synchronized cameras in real-time. The combination of both sources of information enables the application of a new method for fitting a sophisticated multi-layer humanoid skeleton. We present results with real video data that demonstrate that our system runs at 1-2 fps.

From the introduction: The field of human motion capture is an example for the coalescence of computer vision and computer graphics. … has also been considered. Optical flow and probabilistic body part models were used to fit a hierarchical skeleton to walking sequences [2]. None of the above approaches runs in real-time or comes close to interactive performance, however. If real-time performance is to be achieved, comparably simple models, such as probabilistic region representations and probabilistic filters for tracking [24], or the combination of feature tracking and dynamic appearance models [8], are used. Unfortunately, these approaches fail to support sophisticated body models. New methods for the acquisition and efficient rendering of volumetric scene representations obtained from multiple camera views, known as shape from silhouette or the visual hull [12, 20, 18, 4], have been presented. Recent research shows that it is possible to acquire and render polyhedral visual hulls in real-time [15].
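Skeleton fitting of this kind ultimately reduces to repeatedly solving small rigid-alignment problems: given corresponding 3-D points (tracked features or voxel samples) for a limb in its rest pose and in the current frame, find the best-fit rotation and translation. The Kabsch/SVD solution below is a standard building block for that step, shown in Python as a generic illustration rather than the paper's multi-layer fitting procedure.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q.
    A generic building block for fitting one limb segment to reconstructed
    3-D data; not the paper's multi-layer method."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Example: recover a known rigid motion from noise-free points.
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90 degrees about z
Q = P @ Rz.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch(P, Q)   # R is approximately Rz, t approximately (0.5, -0.2, 1.0)
```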
  • Accelerated Real-Time Reconstruction of 3D Deformable Objects from Multi-View Video Channels
Holger Graf, Leon Hazke, Svenja Kahn (Fraunhofer Institute for Computer Graphics Research IGD, Fraunhoferstr. 5, 64283 Darmstadt, Germany) and Cornelius Malerczyk (University of Applied Sciences Giessen-Friedberg, Faculty of Mathematics, Natural Sciences and Computer Sciences, Wilhelm-Leuschner-Strasse 13, 61169 Friedberg, Germany).

Abstract: In this paper we present a new framework for accelerated 3D reconstruction of deformable objects within a multi-view setup. It is based on a new memory management scheme and an enhanced algorithm pipeline for the well-known Image-Based Visual Hull (IBVH) algorithm, which enables efficient and fast reconstruction and opens up new perspectives for scaling time-consuming computations to larger camera environments. As a result, a significant increase in frame rates for the volumetric reconstruction of deformable objects can be achieved using an optimized CUDA-based implementation on NVIDIA's Fermi GPUs.

Keywords: image-based 3D reconstruction; GPU; CUDA; real-time reconstruction; Image-Based Visual Hull.

1 Introduction: In this paper we present an efficient high-resolution Image-Based Visual Hull (IBVH) algorithm that runs entirely in real-time on a single consumer graphics card. The target application is a real-time motion capture system with full-body 3D reconstruction. The topic of 3D scene reconstruction from multiple images has been investigated over the last twenty years and has produced numerous results in computer graphics and computer vision. In particular, real-time 3D reconstruction of target objects in a GPU environment has become one of the hot topics today.
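The defining trait of IBVH is that it never builds a voxel grid: for every pixel of the desired output view, it intersects the viewing ray with the silhouette cones of all reference cameras, producing depth intervals along the ray. The Python sketch below approximates that per-ray computation by dense sampling instead of the exact 2-D interval intersection used by IBVH; the camera matrices, sampling range, and resolution are assumptions for illustration.

```python
import numpy as np

def ray_hull_intervals(origin, direction, silhouettes, cams,
                       t_min=0.0, t_max=10.0, samples=256):
    """Approximate the IBVH computation for one viewing ray: sample the ray,
    project each sample into every silhouette, and keep the parameter
    intervals where the sample is inside all of them. Cameras are 3x4
    projection matrices (hypothetical setup)."""
    ts = np.linspace(t_min, t_max, samples)
    pts = origin[None, :] + ts[:, None] * direction[None, :]
    pts_h = np.hstack([pts, np.ones((samples, 1))])
    inside = np.ones(samples, dtype=bool)
    for sil, cam in zip(silhouettes, cams):
        proj = (cam @ pts_h.T).T
        uv = proj[:, :2] / proj[:, 2:3]
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        ok = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        hit = np.zeros(samples, dtype=bool)
        hit[ok] = sil[v[ok], u[ok]]
        inside &= hit
    # Collapse the boolean mask into [t_enter, t_exit] intervals.
    intervals, start = [], None
    for t, flag in zip(ts, inside):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            intervals.append((start, t))
            start = None
    if start is not None:
        intervals.append((start, ts[-1]))
    return intervals  # the first interval's start is this ray's hull depth

# Demo: square silhouette, affine camera, ray along +x (hypothetical values).
sil = np.zeros((128, 128), dtype=bool); sil[32:96, 32:96] = True
cam = np.array([[100.0, 0, 0, 64], [0, 100, 0, 64], [0, 0, 0, 1]])
print(ray_hull_intervals(np.array([-1.0, 0, 0]), np.array([1.0, 0, 0]),
                         [sil], [cam], t_min=0.0, t_max=2.0))
```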
  • Marker-less Real Time 3D Modeling for Virtual Reality
Jérémie Allard, Edmond Boyer, Jean-Sébastien Franco, Clément Ménier, Bruno Raffin. Marker-less Real Time 3D Modeling for Virtual Reality. Immersive Projection Technology, May 2004, Ames, United States. HAL Id: inria-00349066, https://hal.inria.fr/inria-00349066 (submitted 23 Dec 2008). Laboratoire Gravir, Laboratoire ID, INRIA Rhône-Alpes, 655 avenue de l'Europe, 38334 Saint Ismier, France.

Abstract: Today, immersive environments mainly use a few 3D positions given by trackers to align the real and virtual worlds. In this paper we present an alternative approach using com…

From the paper: … equipment (and a tight body suit). In immersive environments, most of the time, only a few points are tracked: the user's head and an interaction device, a wand for example. Tracking only a few points shortens calibration procedures and eases passing the equipment from one user to the other.
  • Supporting Multi-User Interaction in Co-Located and Remote Augmented Reality by Improving Reference Performance and Decreasing Physical Interference
Ohan Oda. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences, Columbia University, 2016. © 2016 Ohan Oda. All rights reserved.

Abstract: One of the most fundamental components of our daily lives is social interaction, ranging from simple activities, such as purchasing a donut in a bakery on the way to work, to complex ones, such as instructing a remote colleague how to repair a broken automobile. While we interact with others, various challenges may arise, such as miscommunication or physical interference. In a bakery, a clerk may misunderstand the donut at which a customer was pointing due to the uncertainty of their finger direction. In a repair task, a technician may remove the wrong bolt and accidentally hit another user while replacing broken parts due to unclear instructions and lack of attention while communicating with a remote advisor. This dissertation explores techniques for supporting multi-user 3D interaction in augmented reality in a way that addresses these challenges. Augmented Reality (AR) refers to interactively overlaying geometrically registered virtual media on the real world. In particular, we address how an AR system can use overlaid graphics to assist users in referencing local objects accurately and remote objects efficiently, and prevent co-located users from physically interfering with each other.
  • A Multi-GPU Accelerated Virtual-Reality Interaction Simulation Framework
Xuqiang Shao¹*, Weifeng Xu¹, Lina Lin², Fengquan Zhang³. ¹ School of Control and Computer Engineering, North China Electric Power University, Baoding, Hebei, China; ² 92524 troops, Ningbo, Zhejiang, China; ³ School of Computer Science, North China University of Technology, Beijing, China.

Abstract: In this paper, we put forward a real-time multi-GPU accelerated virtual-reality interaction simulation framework in which objects reconstructed from camera images interact with virtual deformable objects. Firstly, based on an extended voxel-based visual hull (VbVH) algorithm, we design an image-based 3D reconstruction platform for real objects. Then, an improved hybrid deformation model, which couples the geometry-constrained fast lattice shape matching method (FLSM) and the total Lagrangian explicit dynamics (TLED) algorithm, is proposed to achieve efficient and stable simulation of the virtual objects' elastic deformations. Finally, one-way virtual-reality interactions, including virtual cutting of soft tissues with bleeding effects, are successfully simulated. Moreover, with the purpose of significantly improving the computational efficiency of each time step, we propose a complete multi-GPU implementation of the framework using the compute unified device architecture (CUDA). The experimental results demonstrate that our multi-GPU accelerated virtual-reality interaction framework achieves real-time performance at moderate calculation scales, making it a new and effective 3D interaction technique for virtual reality applications.

Citation: Shao X, Xu W, Lin L, Zhang F (2019) A multi-GPU accelerated virtual-reality interaction simulation framework. PLoS ONE 14(4): e0214852. https://doi.org/10.1371/journal.pone.0214852. Editor: Huijia Li, Central University of Finance and Economics, China. Received: September 12, 2018; Accepted: March 10, 2019.
  • Novel View Telepresence with High-Scalability Using Multi-Casted Omni-Directional Videos
Tomoya Ishikawa, Kazumasa Yamazawa and Naokazu Yokoya. Graduate School of Information Science, Nara Institute of Science and Technology, Ikoma, Nara, Japan.

Keywords: Telepresence, Novel view generation, Multi-cast, Network, Image-based rendering.

Abstract: The advent of high-speed networks and high-performance PCs has prompted research on networked telepresence, which allows a user to see virtualized real scenes in remote places. View-dependent representation, which provides a user with arbitrary view images using an HMD or an immersive display, is especially effective in creating a rich telepresence. The goal of our work is to realize a networked novel view telepresence system which enables multiple users to control their viewpoint and view-direction independently by virtualizing real dynamic environments. In this paper, we describe a novel view generation method from multiple omni-directional images captured at different positions. We mainly describe our highly scalable prototype system, which enables multiple users to use the system simultaneously, and some experiments with it. The novel view telepresence system constructs a virtualized environment from real live videos. The live videos are transferred to multiple users using a multi-cast protocol without increasing network traffic. The system synthesizes a view image for each user with a varying viewpoint and view-direction measured by a magnetic sensor attached to an HMD, and presents the generated view on the HMD. Our system can generate the user's view image in real-time by giving correspondences among omni-directional images and estimating camera intrinsic and extrinsic parameters in advance.
  • A 3D Camera-Based System Concept for Safe and Intuitive Use of a Surgical Robot System
Dissertation approved by the KIT Department of Informatics of the Karlsruhe Institute of Technology (KIT) for the academic degree of Doktor der Ingenieurwissenschaften, by Philip Matthias Nicolai of Heidelberg. Date of oral examination: 9 June 2016. First reviewer: Prof. Dr.-Ing. Dr. h.c. Heinz Wörn; second reviewer: Prof. Paolo Fiorini, PhD.

Credits: This thesis would not have been possible without the support of numerous people. First, I'd like to thank my supervisor, Prof. Dr.-Ing. Dr. h.c. Heinz Wörn, for providing me with the opportunity to work in the exciting and fast-moving research field of surgical robot systems and 3D camera systems. My sincere thanks also go to Prof. Paolo Fiorini, PhD, who led the European research project Patient Safety in Robotic Surgery (SAFROS), during which I had the chance to develop many concepts of this thesis, and who graciously agreed to be the second reviewer of my thesis. A further big "Thank you" goes to Dr. Jörg Raczkowsky, leader of the medical research group MeGI at IAR-IPR, for his support, discussions and for his trust that allowed me to freely explore research topics. In parallel with this thesis, the OP:Sense system was created as a collaboration with multiple colleagues from both medical research groups at IAR-IPR. OP:Sense aims to integrate the results from different projects and theses, and I could not have finished this thesis in its final scope without relying on the many parts of OP:Sense contributed by my colleagues.
  • Online Model Reconstruction for Interactive Virtual Environments
Benjamin Lok, University of North Carolina at Chapel Hill, Department of Computer Science.

Abstract: We present a system for generating real-time 3D reconstructions of the user and other real objects in an immersive virtual environment (IVE) for visualization and interaction. For example, when parts of the user's body are in his field of view, our system allows him to see a visually faithful graphical representation of himself, an avatar. In addition, the user can grab real objects, and then see and interact with those objects in the IVE. Our system bypasses an explicit 3D modeling stage, and does not use additional tracking sensors or prior object knowledge, nor do we generate dense 3D representations of objects using computer vision techniques. We use a set of outside-looking-in cameras and a novel visual hull technique that leverages the tremendous recent advances in graphics hardware performance and capabilities.

From the paper: … system to render a visually faithful avatar. Calculating exact models is difficult and not required for many real-time applications. A useful approximation is the visual hull. A shape-from-silhouette concept, the visual hull is the tightest model that can be obtained by examining only object silhouettes [5]. Our system uses graphics hardware to accelerate examining a volume for the visual hull. By using the framebuffer to compute results in a massively parallel manner, the system can generate reconstructions of real scene objects from arbitrary views in real-time. The system discretizes the 3D visual hull problem into a set of 2D problems that can be solved by the substantial yet specialized computational power of graphics hardware. The resulting dynamic representations are used for displaying visually faithful avatars and objects to the user, and as elements for …
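The "set of 2D problems" referred to above can be pictured as a plane sweep: for each depth slice of the working volume, every silhouette is warped onto the slice plane and the warped images are ANDed together, which is exactly the kind of operation a framebuffer performs in parallel. Below is a numpy rendering of one such slice; the plane-induced homography is exact, while the camera matrices and sampling grid are assumptions for illustration.

```python
import numpy as np

def plane_homography(cam, z0):
    """3x3 homography mapping world-plane coordinates (x, y) on z = z0 into
    the image of a camera with 3x4 projection matrix `cam`."""
    return np.stack([cam[:, 0], cam[:, 1], cam[:, 2] * z0 + cam[:, 3]], axis=1)

def hull_slice(silhouettes, cams, z0, xs, ys):
    """One 2-D sub-problem of the visual hull: rasterize the slice z = z0 by
    warping every silhouette onto it and intersecting the results. A numpy
    stand-in for the paper's framebuffer trick."""
    X, Y = np.meshgrid(xs, ys)
    plane_pts = np.stack([X.ravel(), Y.ravel(), np.ones(X.size)])
    occupied = np.ones(X.size, dtype=bool)
    for sil, cam in zip(silhouettes, cams):
        p = plane_homography(cam, z0) @ plane_pts
        u = np.round(p[0] / p[2]).astype(int)
        v = np.round(p[1] / p[2]).astype(int)
        ok = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        inside = np.zeros(X.size, dtype=bool)
        inside[ok] = sil[v[ok], u[ok]]
        occupied &= inside        # logical AND across views, as in the paper
    return occupied.reshape(X.shape)

# Demo: one square silhouette, one affine camera (hypothetical values).
sil = np.zeros((128, 128), dtype=bool); sil[32:96, 32:96] = True
cam = np.array([[100.0, 0, 0, 64], [0, 100, 0, 64], [0, 0, 0, 1]])
xs = ys = np.linspace(-1, 1, 65)
occ = hull_slice([sil], [cam], z0=0.0, xs=xs, ys=ys)
print(occ.sum())  # occupied samples in this 2-D cross-section of the hull
```

Sweeping z0 through the working volume reproduces the full reconstruction slice by slice.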