
Integrating Real and Virtual Objects in Virtual Environments

Mary Whitton¹, Benjamin Lok², Brent Insko³, Fred Brooks¹
¹University of North Carolina at Chapel Hill, Chapel Hill, NC, {whitton|brooks}@cs.unc.edu
²University of Florida, Gainesville, FL, [email protected]
³3Dlabs, Inc., Austin, TX, [email protected]

Abstract

Integration of real objects into a virtual environment (VE), creating a mixed environment (ME), helps overcome one of a VE's greatest weaknesses: nothing to really touch—no tools to hold, no walls to run into. Avatars, particularly self-avatars, are a special class of such real objects. No VE system today supports immediate and infrastructure-free integration of real objects into the virtual scene. Successful integration of real objects into VEs places demands on the VE system in several areas: it must acquire both shape and appearance (texture) models of the real objects, track the real objects, merge the representation of each real object into the virtual scene, and simulate appropriate behaviors for situations such as object collisions. We draw on work in our own labs to frame a discussion of different approaches to integrating real and virtual objects, including strengths, weaknesses, and open problems.

Keywords: Mixed environments, 3D interaction, cognition in VR, passive haptics, visual hulls, integrating real and virtual, mixed reality, virtual environments, VE systems

1 The Need

This paper examines the challenges of implementing an effective system to integrate virtual objects and representations of real objects in immersive virtual environments (VEs). Our purpose is to present the range of problems that must be solved before we can routinely have systems that will robustly, and without complex, costly, and motion-restricting infrastructure, allow us to interact with both real and virtual objects in a virtual environment.

Importantly, the general problem includes the special case of integrating avatars into the VE. You have only to consider how disconcerting it is not to see your own body in a VE to understand the need. We need to see our bodies so that we don't violate the expectations of seeing ourselves that we have developed in the real world, and so that we can confirm where we are in the VE. We need to see our feet for confident locomotion, including collision avoidance; we need to see our hands so that we can use them to do things. In a VE, you can't perform fine motor tasks that require visual feedback of your finger positions if the hand model has only three states—fully open, midway closed, and fully closed. Having fully functional hands is a startlingly simple goal, but a goal that is startlingly difficult to achieve.

Figure 1. First-person view of a dial being activated (left). User performing the action at the passive haptic barrier (middle), and with the passive haptic barrier, but not the exterior wall, removed (right). Dotted lines added to clarify the position of the virtual barrier. The user's posture reflects his lack of confidence in the location of the virtual dial.

1.1 Fundamental issues in integrating real and virtual

The major challenges of integrating representations of dynamic real objects in virtual environments are:

Acquisition: Make representations of real objects look right in the VE. The VE system must know what the real object looks like—both its shape and its appearance. This is a problem in modeling. Modeling is particularly complicated when the real objects are articulated or deformable, as are human bodies, since the shape of the ensemble of segments depends both on modeling the constraints on the motion of individual segments and on accurate tracking of the segments.

Tracking: Know what's where. The VE system has to know where the real objects are in the real world, and it has to know which object is which. This is a tracking problem and a problem of differentiating among objects once their positions are known.

Merging real and virtual: Put the representation of the real object in the right place, at the right time. To avoid cue conflict, real and virtual instances of the same object must be spatially registered (a minimal registration sketch follows this list). This is a problem of calibration and registration of coordinate systems, and of having sufficient information to resolve occlusion relationships between the purely virtual objects and those with physical correlates. While potentially less noticeable and distracting in immersive VE than in augmented reality, the mis-registration of real and virtual objects due to overall system latency cannot be ignored.

Simulation: Make objects behave properly. This concerns the simulation that controls object behaviors in the application. While collision detection and response is the most common simulation model running in VEs, it is only one of many that may control the behaviors and interactions of real and purely virtual objects. For our purposes, we consider collision response to include generating substitute sensory cues for the haptics that are missing in interactions between real and purely virtual objects, or between pairs of purely virtual objects.
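The registration step named under Merging is, at its core, a rigid-body alignment: given a few corresponding points measured on the real object (say, with a tracked stylus) and identified on its virtual model, solve for the rotation and translation that best map one point set onto the other. The sketch below uses the standard Kabsch least-squares solution; the function name, the toy data, and the numpy dependency are our illustration, not part of any system described in this paper.

```python
import numpy as np

def rigid_registration(real_pts, virtual_pts):
    """Least-squares rigid transform (R, t) mapping tracker-space points
    onto their corresponding virtual-model points (Kabsch method)."""
    P = np.asarray(real_pts, dtype=float)      # N x 3, measured on the real object
    Q = np.asarray(virtual_pts, dtype=float)   # N x 3, same points on the model
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)            # 3 x 3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection solutions
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Toy check: a known rotation and translation should be recovered exactly.
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)  # 90 deg about z
Q = (Rz @ P.T).T + np.array([0.5, 0.25, 0.0])
R, t = rigid_registration(P, Q)
assert np.allclose(R, Rz) and np.allclose(t, [0.5, 0.25, 0.0])
```

The residual distances after applying (R, t) give a direct check against a placement tolerance such as the ¼" goal mentioned in Section 2.1.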
1.2 Scope

This paper is about mixed environment (ME) systems where real objects are used to augment an immersive virtual environment and users wear head-mounted displays. Objects that are forever out of reach of all participants aren't a concern here, as it is sufficient for them to be purely virtual. In this paper, we consider the use of haptics only in its traditional sensory role, and not as a means to display supplemental data. The next sections describe three lab-based systems that provide concrete examples on which to base a discussion of the issues.

2 Integrating Real Objects that have Geometric Models

The simplest, oldest, and still popular way of integrating a real object into a virtual scene is to generate a geometric model of it and simply add it, with all its properties, to the scene description. For static objects, this step is all that is required; for objects that can move, a tracker or trackers must be attached to the object and the object's coordinate system reconciled with that of the rest of the scene.

2.1 The Techniques

Static real objects: passive haptics. Incorporating real representations of static objects was motivated by the desire to provide the user with haptic feedback when he "touches" objects in the VE. The idea of passive haptics is to create quickly assembled, relatively inexpensive, approximate physical models of real objects such as walls and fixtures, and to locate them in the VE system space so that they are registered with the virtual model. The virtual model, viewed by the user in a head-mounted display, supplies the visual details; the physical model provides haptic feedback when the user touches or collides with it. We have built passive haptics from plywood, fiberboard, and polystyrene construction blocks. We strive for, but don't always achieve, ¼" construction and ¼" placement accuracy. Figures 1 and 2 show the passive haptics in use.

Dynamic real objects. Models of dynamic real objects also become part of the scene description. For the virtual representation of a moving object to be positioned properly, the object's changing location must be reported to the system, typically by a magnetic, acoustic, or optical tracker attached to the object. The pose of the model, the position and orientation of each of its component parts, is established on a frame-by-frame basis from the tracker readings. When only a limited number of trackers is available, the real objects must be limited in number and complexity.
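As a concrete illustration of the frame-by-frame update just described, the sketch below composes a tracker report (position plus orientation quaternion, in tracker coordinates) with a fixed tracker-to-scene calibration transform and writes the result into the object's scene node. The function names, the report format, and the dictionary standing in for a scene-graph node are hypothetical; the calibration (R_cal, t_cal) could come from a registration step like the one sketched in Section 1.1.

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def update_object_pose(scene_node, tracker_report, R_cal, t_cal):
    """Once per frame: map the tracker's report into scene coordinates
    via the calibration transform, then pose the object's node."""
    R_obj = quat_to_matrix(tracker_report["orientation"])
    p_obj = np.asarray(tracker_report["position"], dtype=float)
    scene_node["rotation"] = R_cal @ R_obj            # tracker -> scene rotation
    scene_node["translation"] = R_cal @ p_obj + t_cal

# Hypothetical usage, with an identity calibration and an identity quaternion.
node = {"rotation": np.eye(3), "translation": np.zeros(3)}
report = {"position": [0.1, 1.2, -0.4], "orientation": [1.0, 0.0, 0.0, 0.0]}
update_object_pose(node, report, R_cal=np.eye(3), t_cal=np.zeros(3))
```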
2.2 Effectiveness

Passive haptics. Insko showed that the addition of a 1½" plywood ledge corresponding to the virtual ledge in UNC's Pit VE (Figure 2) caused a statistically significant increase in the rise in heart rate that occurs when users enter the room and find themselves on a ledge high over the room below [Insko01, Meehan02]. He also showed that participants who trained for a blindfolded maze-navigation task with a passive haptic representation of the maze navigated the real maze faster and with fewer errors than those trained in a purely virtual environment. Users trained without passive haptics got visual and auditory cues to collisions between their hands and the maze structure; the audio cues signaled even those collisions that occurred outside the user's field of view.

Over 300 individuals have experienced the Pit either with or without the ReddiForm™ block walls, which can guide the user but not provide real support. It is our observation that feeling the walls, in addition to seeing them, provides users with a powerful confirmation of where they are in the space. With the walls, most users walk along the ledge with modest confidence; without the walls, users often don't move at all or take only baby steps.

Dynamic real objects. Tracked real objects have been used as components in user interfaces and in applications such as the treatment of phobias. An early use of dynamic real objects was to represent the avatar of the user's (tracked) hand; later, the technique was used to track real objects that provide tactile augmentation when the user touches the corresponding virtual object [Hoffman96]. Lindeman's work evaluated the impact of having real objects to support GUIs and widgets within the VE [Lindeman01]. A VE system developed and used at the University of Washington was effective in reducing fear of spiders through controlled exposure to virtual spiders in the virtual environment. Midway through treatment, the therapists added a tracked, furry toy spider to the system to further increase realism. At the end of therapy, the client's percentile on a fear-of-spiders scale had decreased from the 99th to the 71st and, demonstrating transfer of training from the VE to the real world, she was able to engage in outdoor activities such as camping [Carlin97].

2.3 Plusses and Minuses of Real Objects with Geometric Models

Acquisition: Look right—shape and appearance.