An Introduction to 3-D User Interface Design
Doug A. Bowman, [email protected], Department of Computer Science, Virginia Polytechnic Institute and State University
Ernst Kruijff, [email protected], InfAR-computer science in architecture, Bauhaus Universitaet Weimar
Joseph J. LaViola, Jr., [email protected], Computer Science Department, Brown University
Ivan Poupyrev, [email protected], Sony Computer Science Lab, Tokyo, Japan

Abstract

Three-dimensional user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of 3-D interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3-D tasks and the use of traditional 2-D interaction styles in 3-D environments. We divide most user-interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3-D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3-D interaction design and some example applications with complex 3-D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.

1 Introduction

User interfaces (UIs) for computer applications are becoming more diverse. Mice, keyboards, windows, menus, and icons (the standard parts of traditional WIMP interfaces) are still prevalent, but nontraditional devices and interface components are proliferating rapidly. These include spatial input devices such as trackers, 3-D pointing devices, and whole-hand devices allowing gestural input. Three-dimensional, multisensory output technologies, such as stereoscopic projection displays, head-mounted displays (HMDs), spatial audio systems, and haptic devices, are also becoming more common.
With this new technology, new problems have also been revealed. People often find it inherently difficult to understand 3-D spaces and to perform actions in free space (Herndon, van Dam, & Gleicher, 1994). Although we live and act in a 3-D world, the physical world contains many more cues for understanding, and constraints and affordances for action, than can currently be represented accurately in a computer simulation. Therefore, great care must go into the design of user interfaces and interaction techniques for 3-D applications. It is clear that simply adapting traditional WIMP interaction styles to three dimensions does not provide a complete solution to this problem. Rather, novel 3-D user interfaces, based on real-world interaction or some other metaphor, must be developed.

This paper is a broad overview of the current state of the art in 3-D user interfaces and interaction. It summarizes some of the major components of tutorials and courses given by the authors at various conferences, including the 1999 Symposium on Virtual Reality Software and Technology. Our goals are to describe some of the major interaction techniques and interface components that have been developed for 3-D systems, especially immersive virtual environments (VEs), and to provide the reader with some practical guidelines for the design of 3-D user interfaces in real-world applications.

It is impractical to list even a small fraction of the reference materials from which this overview is taken, but we refer readers to the "20th Century 3DUI Bibliography" (Poupyrev & Kruijff, 2000), a continuously updated and annotated list of references that is available online. This bibliography was created both by the authors and by other members of the 3DUI mailing list, an online community of researchers in this field.

Presence, Vol. 10, No. 1, February 2001, 96-108. © 2001 by the Massachusetts Institute of Technology.

2 Input and Output Devices

Input and output (I/O) devices are an important component in building 3-D user interfaces for VE applications. Interaction designers must have a thorough understanding of the ergonomics, advantages, and limitations of the devices used so they can find natural and intuitive mappings between interaction techniques and hardware.

Literally hundreds of different I/O devices are used in VE applications, and a thorough analysis of them is beyond the scope of this paper. (See Youngblut, Johnson, Nash, Wienclaw, and Will (1996) for a somewhat dated but thorough discussion of many I/O devices.) However, many of these devices can be classified into groups with common characteristics and interaction design criteria. In the following subsections, we examine these groups and discuss interface design issues within them.

2.1 Output Devices

We commonly use the term display to describe output. Although the most common displays are visual, there are other important displays for the auditory, haptic, tactile, and olfactory channels. In the context of 3-D user interfaces for VEs, visual displays can be roughly categorized into fully immersive and semi-immersive devices.

Fully immersive displays, such as head-mounted displays (HMDs), arm-mounted displays, and virtual retinal displays, occlude the real world. (Augmented reality is an exception to this rule.) Because of this, physical objects require a graphical representation in the virtual world. In addition, more-complicated input devices may be difficult to use because they cannot be seen.

Semi-immersive displays, such as stereo monitors, workbenches, and surround-screen virtual reality systems, allow the user to see both the physical and the virtual world. A number of interface issues arise with semi-immersive displays. For instance, because users can see their own hands in front of the display, they can inadvertently block out virtual objects that should appear to be closer than their hands. This problem occurs frequently during manipulation tasks and can be alleviated if the virtual object is offset from the hand so that it is always visible. Another problem relates to the use of stereo shutter glasses. Because these require emitters, any line-of-sight occlusion will disrupt the stereo effect. Interface designers should try to avoid interaction techniques that cause users to move their hands or physical objects into the line of sight between the glasses and the emitters.

In addition to visual displays, auditory, tactile, and haptic displays are becoming more prominent in 3-D user interfaces for VEs. With auditory displays (Begault, 1994), the main interface goals are localization, the generation of 3-D sound, and sonification (transforming certain types of information into sound). Auditory output is especially useful in collaborative interfaces, in which participants can get a sense of where others are in the virtual world. It can also be used to substitute for missing tactile or haptic feedback; for example, a sound could substitute for the feel of pressing a button. Haptic and tactile displays (Burdea, 1996) are an important component of 3-D user interfaces for VEs, and this is becoming an active area of research. Allowing users to touch and feel in the virtual world can be extremely powerful, especially for object manipulation and collision detection.

2.2 Input Devices

Speech input raises interface issues of its own: letting the computer know when to listen to the user's voice is one such issue. A user may issue a speech command unintentionally when talking to another person.
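One way to avoid acting on such unintended speech is to gate recognition results behind an explicit signal from the user. The sketch below illustrates this idea in Python; the class, the recognizer callback shape, and all names are assumptions for illustration, not an API from the paper.

```python
# Sketch of gating speech commands behind a user signal (hypothetical
# names; the recognizer callback shape is an assumption for illustration).

class SpeechCommandGate:
    """Forward recognized utterances only while the user signals intent."""

    def __init__(self):
        self.listening = False
        self.accepted = []

    def push(self):
        # Called when the "talk" signal arrives, for example embedded in an
        # existing interaction technique such as grabbing a virtual tool.
        self.listening = True

    def release(self):
        self.listening = False

    def on_utterance(self, text):
        # Recognizer callback: drop speech heard outside the signal window,
        # so conversation with another person is not treated as a command.
        if self.listening:
            self.accepted.append(text)
            return True
        return False


gate = SpeechCommandGate()
gate.on_utterance("delete object")   # ignored: user was not addressing the app
gate.push()
gate.on_utterance("paint it red")    # accepted: inside the signal window
gate.release()
```

The design choice here is that the recognizer always runs, but its output is simply discarded unless the user has signaled intent, which keeps the gating logic independent of any particular recognition engine.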
One of the best ways to avoid this problem is to use an implicit or invisible push-to-talk scheme (LaViola, 1999). A push-to-talk interface lets users tell the application when they are speaking to it. To maintain the naturalness of the speech interface, one can embed the "push" into existing interaction techniques, so that the user does not bear the burden of remembering the signal that tells the application a voice command is about to be issued.

A distinction must be made between input devices and interaction techniques. Input devices are just the physical tools used to implement various interaction techniques. In general, many different interaction techniques can be mapped onto a given input device; the question is how naturally, efficiently, and appropriately a given input device will work with a given technique. Input devices are also characterized by the degrees of freedom (DOFs) they have. In general, an input device with a smaller number of DOFs can be used to emulate a device with more DOFs through the addition of buttons or modifier keys.

Input devices can be roughly categorized by the types of events they generate. Devices that generate one event at a time in response to a user action are considered discrete-input devices. A button press or other action signals an event, which is usually a Boolean (up/down) value. Pinch gloves, developed by Fakespace, are an example of a discrete-input device (the user pinches two or more fingers to signal an event). In contrast to discrete-input devices, continuous-input devices generate a stream of events. Two of the most common continuous-input devices are position/orientation trackers and datagloves, which transmit the bend angles of the fingers. Devices that combine discrete and continuous events into a single, more flexible device are called combination or hybrid-input devices.

3 Interaction Techniques

3.1 Navigation

The task of navigation is the most prevalent user action in most large-scale 3-D environments, and it presents challenges such as supporting spatial awareness, providing efficient and comfortable movement between distant locations, and making navigation lightweight so that users can focus on more important tasks. We subdivide navigation into the motor component, called travel, and the cognitive component, called wayfinding.

Navigation tasks can generally be classified into three categories. Exploration is navigation with no explicit target: the user is simply investigating the environment. Search tasks involve moving to a particular target location
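Returning briefly to the device discussion in section 2.2, the point that a low-DOF device plus modifier keys can emulate a higher-DOF device can be made concrete with a small sketch: a 2-DOF mouse delta mapped onto 3-DOF translation, with a modifier key redirecting one axis. The mapping and all names are illustrative, not taken from the paper.

```python
# Sketch: emulating 3-DOF translation with a 2-DOF device plus a modifier
# key. The axis assignment is one illustrative choice among many.

def map_mouse_to_3d(dx, dy, modifier_held):
    """Map a 2-DOF mouse delta (dx, dy) to a 3-D translation.

    Without the modifier, the delta moves the object in the x-y plane;
    holding the modifier redirects vertical mouse motion to the z axis.
    """
    if modifier_held:
        return (dx, 0.0, dy)   # mouse-y now drives depth instead of height
    return (dx, dy, 0.0)       # plain drag moves in the x-y plane


print(map_mouse_to_3d(4, -2, False))  # (4, -2, 0.0)
print(map_mouse_to_3d(4, -2, True))   # (4, 0.0, -2)
```

The same pattern generalizes: each modifier state selects a different subset of the target device's DOFs for the low-DOF input to control.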