PhD Dissertation Proposal

Manipulation of 3D Objects in Immersive Virtual Environments

Daniel Filipe Martins Tavares Mendes
[email protected]

September 2015

Abstract

Interactions within virtual environments (VEs) often require the manipulation of 3D virtual objects. To this end, considerable research has been carried out focusing on mouse, touch and gestural input. While existing mid-air gestures in immersive VEs (IVEs) can offer natural manipulations, mimicking interactions with physical objects, they cannot achieve the levels of precision attained with mouse-based applications for computer-aided design (CAD).

Following prevailing trends in previous research, such as degrees-of-freedom (DOF) separation, virtual widgets and scaled user motion, we intend to explore techniques for IVEs that combine the naturalness of mid-air gestures with the precision found in traditional CAD systems. In this document we survey and discuss the state of the art in 3D object manipulation. With the open challenges identified, we present a research proposal and the corresponding work plan.

Contents

1 Introduction
2 Background and Related Work
  2.1 Key players
  2.2 Relevant events and journals
  2.3 Virtual Environments Overview
  2.4 Mouse and Keyboard based 3D Interaction
  2.5 3D Manipulation on Interactive Surfaces
  2.6 Touching Stereoscopic Tabletops
  2.7 Mid-Air Interactions
  2.8 Within Immersive Virtual Environments
  2.9 Discussion
3 Research Proposal
  3.1 Problem
  3.2 Hypotheses
  3.3 Objectives
4 Preliminary Study of Mid-Air Manipulations
  4.1 Implemented Techniques
  4.2 User Evaluation
  4.3 Results
5 Work Plan
  5.1 Past: 2013 / 2014
  5.2 Present: 2015
  5.3 Future: 2016 / 2017
References

1 Introduction

Since the early days of virtual environments, moving, rotating and resizing virtual objects have been targets of research.
In three-dimensional virtual environments, this kind of manipulation is not trivial, mainly due to the required mapping between traditional input devices (2D) and the virtual environment (3D). The most common solutions resort to techniques that somehow relate the actions performed in the two-dimensional space of the input device (e.g. mouse cursor or touch) to three-dimensional transformations.

Since people usually interact with these environments through traditional displays, 3D content is presented as a 2D rendered image, which hinders the perception of that content. To overcome the limitations of both the input and the output devices, mainstream solutions for creating and editing 3D virtual content, namely computer-aided design (CAD) tools, resort to different orthogonal views of the environment. This allows a more direct two-dimensional interaction with limited degrees of freedom. Solutions that offer a single perspective view usually either apply the transformation in a plane parallel to the view plane, or resort to widgets that constrain interactions and ease the 2D-3D mapping. Research has shown that the first approach can sometimes result in unexpected transformations when users are allowed to freely navigate through the virtual environment, and that constrained interactions allow for more accurate manipulations.

Recent technological advances have led to an increased interest in immersive virtual reality settings. Affordable hardware for immersive visualization of virtual environments, such as the Oculus Rift head-mounted display (HMD), eases the perception of three-dimensional content. Moreover, advances in user tracking solutions make it possible to know where users' heads, limbs and hands are in space. This allows for more direct interactions, mimicking those with physical objects. First results of our work showed that mid-air direct interactions with 3D virtual content can reduce task duration and are appealing to users.
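The widget-based approach to the 2D-3D mapping described above can be illustrated with a short sketch: the widget exposes a 3D axis, the axis is projected into screen space, and only the component of the cursor displacement along that projection drives the translation. The function name, signature and gain factor below are hypothetical, chosen for illustration rather than taken from any particular CAD tool:

```python
import numpy as np

def constrained_translation(delta_2d, axis_3d, view_matrix, gain=0.01):
    """Map a 2D cursor displacement to a translation along one 3D axis,
    as a manipulation widget with a single active handle might do.
    Illustrative sketch only, not any specific system's API."""
    # Project the widget's 3D axis into screen space
    # (rotation part of the view matrix only).
    axis_screen = (view_matrix[:3, :3] @ axis_3d)[:2]
    norm = np.linalg.norm(axis_screen)
    if norm < 1e-6:
        # Axis points (almost) straight into the screen:
        # the mapping is ambiguous, so do nothing.
        return np.zeros(3)
    axis_screen /= norm
    # Only cursor motion along the projected axis contributes,
    # which constrains the manipulation to a single DOF.
    amount = float(np.dot(delta_2d, axis_screen)) * gain
    return axis_3d * amount

# With an identity view, dragging the cursor 10 px to the right
# along a widget's world x-axis yields a translation on x only.
move = constrained_translation(np.array([10.0, 0.0]),
                               np.array([1.0, 0.0, 0.0]),
                               np.eye(4))
```

Constraining the result to a single axis is precisely what makes such widgets predictable under free navigation: whatever the viewpoint, the object can only move along the handle the user grabbed.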
Although mid-air interactions show promising results, the accuracy of human spatial interactions is limited. Moreover, the limited dexterity of mid-air hand gestures, aggravated by the low resolution of current HMDs, constrains precise manipulations. This precision is of utmost importance when creating or assembling engineering models or architectural mock-ups.

Inspired by previous research on mouse- and touch-based interactions, we intend to explore different techniques to increase users' precision in mid-air interactions when working with 3D virtual models. Our objective is to combine the naturalness of everyday physical interactions with the precision that is only possible with computer systems. We expect to develop techniques that, albeit less natural than those that strictly copy physical-world interactions, offer more controlled and precise manipulations.

In the remainder of this document we present some background and the state of the art regarding 3D object manipulation within virtual environments. With the open challenges identified, we introduce those we intend to tackle, stating our research hypotheses and objectives. We then propose an approach to our research for the following couple of years.

2 Background and Related Work

The manipulation of objects in virtual environments has been the subject of previous research. In this section we cover the most relevant works in this field, ranging from manipulation using traditional input devices, such as mouse and keyboard, to more recent approaches that rely on mid-air interactions with objects displayed through stereoscopic imagery. Before doing so, we present the main research groups and individuals that have been carrying out this research, and the most relevant venues where this kind of work has been published.

2.1 Key players

Several people have conducted research on how to improve interaction in 3D virtual environments.
Some of the research groups that have made valuable contributions are presented below.

Natural Interaction Research group at Microsoft Research This group1 aims to enrich and reimagine the human-computer interface. Its team explores a wide variety of interaction topics, including sensing and display hardware, touch and stylus input, spatial and augmented reality, and user modeling. Notable names in the team include Bill Buxton, Andy Wilson and Hrvoje Benko.

HCI group at the University of Hamburg This group2 explores perceptually-inspired and (super-)natural forms of interaction to seamlessly couple the space where the flat 2D digital world meets the three dimensions we live in. Their research is driven by understanding human perceptual, cognitive and motor skills and limitations, in order to rethink both interaction and experience in computer-mediated realities. The group is led by Frank Steinicke, who is also involved in the organization of relevant conferences in the field, namely ACM SUI and IEEE 3DUI. Also in this group, Gerd Bruder, a postdoctoral researcher, and Paul Lubos, a PhD candidate, have been working on 3D user interfaces for immersive environments.

Potioc at Inria Bordeaux The overall objective of Potioc3 is to design, develop and evaluate new approaches that provide rich interaction experiences between users and the digital world. They are interested in popular interaction, mainly targeted at the general public. They explore input and output modalities that go beyond standard interaction approaches based on mice/keyboards and (touch)screens. Similarly, they are interested in 3D content, which offers new opportunities compared to traditional 2D contexts.
More precisely, Potioc explores interaction approaches that rely notably on interactive 3D graphics, augmented and virtual reality (AR/VR), tangible interaction, brain-computer interfaces (BCI) and physiological interfaces. Martin Hachet leads this group, and his main area of interest is 3D user interaction. One of the former members of this group, Aurélie Cohé, explored widget-based manipulations of 3D objects in her PhD.

MINT The MINT team4 focuses on gestural interaction, i.e. the use of gesture for human-computer interaction. In the particular context of HCI, they are more specifically interested in users' movements that a computing system can sense and respond to. Their goal is to foster the emergence of new interactions and to further broaden the use of gesture by supporting more complex operations. They are developing the scientific and technical foundations required to facilitate the design, implementation and evaluation of these interactions. The team is led by Laurent Grisoni, and one of their alumni is Anthony Martinet, whose PhD focused on techniques for 3D object manipulation on multi-touch displays.

InnoVis at the University of Calgary InnoVis5 was founded and is directed by Sheelagh Carpendale. They investigate innovations in the areas of Information Visualization and Human-Computer Interaction. Active research topics in Information Visualization include the exploration of effective use of display space, and the navigation, exploration and manipulation of information spaces. In the context of Human-Computer Interaction, members of the lab study how best to support collaborative work on large displays. One of their former PhD students, Mark Hancock, made great contributions to the field.

1 Natural Interaction Research group: research.microsoft.com/en-us/groups/natural/
2 HCI group at the University of Hamburg: www.inf.uni-hamburg.de/en/inst/ab/hci
3 Potioc: team.inria.fr/potioc