VRID: A Design Model and Methodology for Developing Virtual Reality Interfaces

Vildan Tanriverdi and Robert J.K. Jacob
Department of Electrical Engineering and Computer Science
Tufts University
Medford, MA 02155
{vildan | jacob}@eecs.tufts.edu

ABSTRACT
Compared to conventional interfaces, virtual reality (VR) interfaces contain a richer variety and more complex types of objects, behaviors, interactions and communications. Therefore, designers of VR interfaces face significant conceptual and methodological challenges in: a) thinking comprehensively about the overall design of the VR interface; b) decomposing the design task into smaller, conceptually distinct, and easier tasks; and c) communicating the structure of the design to software developers. To help designers deal with these challenges, we propose a Virtual Reality Interface Design (VRID) model and an associated VRID methodology.

Categories and Subject Descriptors
Computing Methodologies: Realism—Virtual reality

General Terms
Design

Keywords
Virtual reality, user interface software, design model, design methodology

1. INTRODUCTION
Virtual Reality (VR) is seen as a promising platform for the development of new applications in many domains such as medicine, science and business [1, 17, 29, 31]. Despite their potential advantages, however, we do not yet see widespread development and use of VR applications in practice. The lack of proliferation of VR applications can be attributed partly to the challenges of building VR applications [2, 8]. In particular, interfaces of VR applications are more complex and challenging to design than interfaces of conventional desktop-based applications [7, 9]. VR interfaces exhibit distinctive visual, behavioral and interaction characteristics.

Visual characteristics. While conventional interfaces mainly use 2D graphical displays, VR interfaces use both 2D and 3D displays. A major goal of virtual environments is to provide users with realistic environments. In order to provide the sense of "being there," VR interfaces heavily use 3D graphical displays. Design and implementation of 3D graphical displays are usually more difficult than for 2D displays.

Conventional interfaces typically contain only virtual, computer-generated objects. VR interfaces, on the other hand, may contain both virtual and physical objects that coexist and exchange data with each other, as in the case of augmented reality systems. The need to properly align virtual and physical objects constitutes an additional challenge in VR interface design [3].

Behavioral characteristics. In conventional interfaces, objects usually exhibit passive behaviors. In general, they have predetermined behaviors that are activated in response to user actions. Therefore, communication patterns among objects are usually deterministic. VR interfaces contain both real world-like objects and magical objects that exhibit autonomous behaviors. Unlike passive objects, autonomous objects can change their own states. They can communicate with each other and change each other's behaviors and communication patterns. Therefore, designing object behaviors is more challenging in VR interfaces.

Interaction characteristics. While conventional interfaces support mainly explicit interactions, VR interfaces usually support both explicit and implicit style interactions [6, 23, 25]. Implicit style interactions allow more natural and easier to use human-computer interactions by allowing arm, hand, head, or eye movement based interactions. However, these implicit style interactions are more complex to design than explicit style interactions.

Table 1 summarizes the differences between the characteristics of conventional and VR interfaces. As the table indicates, VR interfaces contain a richer variety and more complex types of objects, object graphics, behaviors, interactions, and communications. These characteristics bring significant challenges to the design of VR interfaces. In the absence of a conceptual model and a methodology that provide guidance during VR interface design, designers face significant challenges in a) thinking comprehensively about the VR interface characteristics reviewed above; b) decomposing the overall design task into smaller, conceptually distinct, and easier tasks; and c) communicating the design to software developers. In this paper, we propose the VRID (Virtual Reality Interface Design) model and methodology to provide conceptual and methodological guidance to designers in dealing with these challenges.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
VRST'01, November 15-17, 2001, Banff, Alberta, Canada.
Copyright 2001 ACM 1-58113-385-5/01/0011…$5.00.

Table 1. Comparative characteristics of conventional and VR interfaces

Characteristics               Conventional interfaces    VR interfaces
Object graphics               Mainly 2D                  Mainly 3D
Object types                  Mainly virtual objects     Both virtual and physical objects
Object behaviors              Mainly passive objects     Both passive and active objects
Communication patterns        Mainly simple              Mainly complex
Human-computer interactions   Mainly explicit            Both explicit and implicit
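The explicit/implicit distinction in Table 1 can be made concrete with a minimal Python sketch. The names here (`GazeSample`, `select_implicit`, the dwell-time threshold) are illustrative assumptions, not from the paper: the same "select an object" intent is expressed once as an explicit command and once as an implicit, eye-movement-based interaction whose meaning must be inferred from raw input.

```python
# Hypothetical sketch: explicit vs. implicit styles of selecting an object.
from dataclasses import dataclass

@dataclass
class GazeSample:
    target: str       # object the eye tracker reports the user is looking at
    duration_ms: int  # how long the gaze has rested on that object

def select_explicit(command: str) -> str:
    """Explicit interaction: the user issues a direct, unambiguous command."""
    # e.g. a menu click or typed command names the object outright
    return command.removeprefix("select ")

def select_implicit(samples: list, dwell_ms: int = 500):
    """Implicit interaction: intent must be inferred from eye-movement input.

    A toy dwell-time heuristic: the first object fixated for at least
    `dwell_ms` is treated as the selection. Real VR interfaces need far more
    interpretation (filtering, context), which is why the paper calls
    implicit interactions harder to design.
    """
    for s in samples:
        if s.duration_ms >= dwell_ms:
            return s.target
    return None

print(select_explicit("select needle"))               # needle
print(select_implicit([GazeSample("organ", 120),
                       GazeSample("needle", 650)]))   # needle
```

The asymmetry is the point: the explicit path is a one-line lookup, while the implicit path already needs a model of user behavior even in this toy form.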

2. RELATED WORK
In this section, we briefly review relevant previous work to assess whether the existing interface design models and methodologies address the distinctive needs of VR interfaces.

A well-known user interface design model is the Four Level approach developed by Foley and colleagues [11]. This model describes the user interface as a medium that provides a dialogue between the user and the computer. The four levels of the model are organized based on the meaning and the form of the dialogue between the user and the computer. The levels focus mostly on specifications of user interactions using explicit commands. This approach works well for command language and GUI (graphical user interface) style interfaces, but it is not sufficient to meet VR interface needs such as implicit interactions, object dynamism, and physical objects. Communication among objects is not sufficiently addressed either.

Another relevant user interface design model is the Command Language Grammar (CLG) developed by Moran [24]. It provides designers a model to describe and design command language interfaces. As in Foley's model, CLG divides the interface design into levels, but specifications of the levels are more formal and detailed in CLG. Although CLG works well for command language interfaces, its applicability to VR interfaces is limited. Object dynamism, interactions using implicit commands, physical objects, and communication patterns among objects are outside the scope of this model.

A third related interface design model is Shneiderman's Object-Action Interface (OAI) model [27]. It was developed particularly for the design of GUI style interfaces. In order to meet the needs of GUI style interfaces, the OAI model emphasizes the importance of visual representations of objects and their actions. This model focuses on explicit command style interactions using direct manipulation, and keeps the amount of syntax small in interaction specifications. However, OAI does not address the distinctive characteristics of VR interfaces such as object dynamism, implicit style interactions, physical objects, and communication patterns among objects.

Another possibility for designers is to use general-purpose design models and methodologies, such as the object-oriented design model and methodology and the Object Modeling Technique (OMT), which were proposed for software development [5, 26]. However, these models and methodologies do not provide conceptual guidance for addressing specific challenges of VR interface design such as implicit style interactions.

Table 2 summarizes our assessment as to whether the existing interface design models and methodologies meet the distinctive needs of VR interfaces. As the table indicates, none of the four design models and methodologies that we reviewed adequately meets the distinctive needs of VR interfaces.

Table 2. Characteristics of existing design models and methodologies

                              Support for distinctive characteristics of VR interfaces
Design model and methodology  Object graphics  Object dynamism  Communication patterns  Implicit interactions
Four level                    No               No               Minimal                 No
CLG                           No               No               No                      No
OAI                           Yes              No               No                      No
OO                            No               Yes              Yes                     No

Despite the lack of design models and methodologies that comprehensively address the needs of VR interfaces, there is a significant amount of relevant work in the VR field, which can be used as building blocks in developing such models and methodologies. The scope and complexity of VR interfaces have prompted ongoing efforts in the VR field to develop design languages, frameworks and other tools that can support the design of VR interfaces. These frameworks, languages and tools provide solutions for individual characteristics and requirements of VR interfaces, such as the behavioral aspects, the interaction aspects, or a particular issue within these domains.

Table 3 provides a summary of the previous work addressing behavioral and interaction characteristics of VR interfaces. Once the designer figures out how to conceptualize the interface and how to decompose the overall design task into smaller components such as behaviors, interactions, communications, graphics, etc., she or he can draw on previous studies which may provide help in dealing with design challenges within individual components. However, in the absence of higher level conceptual guidance, it is difficult for designers to decompose a complex VR interface into smaller, conceptually distinct components. Therefore, there is a need for design models and methodologies that provide conceptual and methodological guidance to designers at a higher level of abstraction. We propose the VRID model and methodology to address this need.

Table 3. Previous work on behavioral and interaction characteristics of VR interfaces

Behavioral
  Frameworks and models: Cremer, Kearney et al. 1995 [10]; Blumberg and Galyean 1995 [4]; Tu and Terzopoulos 1994 [35]; Gobbetti and Balaguer 1993 [13]
  Languages: Green and Halliday 1996 [15]; Steed and Slater 1996 [32]
  Others: Behavioral library: Stansfield, Shawver et al. 1995 [30]

Interaction
  Frameworks and models: Kessler 1999 [19]; Lewis, Koved et al. 1991 [21]; Gobbetti and Balaguer 1993 [13]; Bowman 1999 [7]
  Languages: Jacob, Deligiannidis et al. 1999 [18]; Smith and Duke 1999 [28]
  Others: Interaction techniques: Bowman and Hodges 1997 [6]; Liang and Green 1993 [22]; Poupyrev, Billinghurst et al. 1996 [25]; Stoakley, Conway et al. 1995 [33]; Tanriverdi and Jacob 2000 [34]; Wloka and Greenfield 1995 [36]

3. COMPONENTS OF A VIRTUAL REALITY SYSTEM
Before introducing the VRID model, it is important to clarify what we mean by a VR interface. As depicted in Figure 1, we conceptualize a VR system in terms of three major components: application, interface, and dialog control, inspired by the Seeheim user interface system architecture [14] and the Model View Controller architecture [20]. The application component is the VR application itself, which contains the features and rules that define the application. The interface component is the front end through which users and other external entities exchange information with, and manipulate, the system.
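The three-way split just introduced (application, dialog control, interface) can be sketched in a few lines of Python. Everything here is a hypothetical illustration (class names, the event dictionary vocabulary); the point is only that the interface and application never reference each other directly, so either side can be redesigned independently, as the Seeheim-style separation intends.

```python
# Illustrative sketch of the application / dialog control / interface split.

class Application:
    """Back end: application rules; no knowledge of interface internals."""
    def handle(self, event: dict) -> dict:
        if event["action"] == "puncture":
            return {"behavior": "bleeding"}
        return {"behavior": "idle"}

class DialogControl:
    """Relay between the interface and application components."""
    def __init__(self, app: Application):
        self._app = app
    def forward(self, event: dict) -> dict:
        return self._app.handle(event)

class Interface:
    """Front end: turns user input into events and displays the response."""
    def __init__(self, dialog: DialogControl):
        self._dialog = dialog
    def on_input(self, action: str) -> str:
        response = self._dialog.forward({"action": action})
        return response["behavior"]

ui = Interface(DialogControl(Application()))
print(ui.on_input("puncture"))  # bleeding
```

Because `Interface` only knows `DialogControl`, swapping in a different application back end requires no interface changes, which mirrors the paper's claim that the two components can be worked on independently.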

Figure 1. Components of a VR system (application, interface, and dialog control)

The interface consists of data and objects. Data refers to inputs received from users (or other external entities), whereas objects refer to entities in the interface that have well defined roles and identities. Dialog control enables communication between the application and the interface. Due to this conceptual separation, the internal details of the application and interface components are transparent to each other. This feature allows designers to work on the two components independently.

We propose the VRID model and methodology only for the design of the interface component. Design of other VR system components is beyond the scope of this paper.

4. THE VRID MODEL
Building on our review and synthesis of the previous work on VR interface design, we identify object graphics, object behaviors, object interactions and object communications as the key constructs that designers should think about in designing VR interfaces. Therefore, we organize the VRID model around the multi-component object architecture depicted in Figure 2. Graphics, behavior, interaction and communicator components are included to conceptually distinguish and address the distinctive characteristics of VR interfaces. The mediator component is included to coordinate communications among the other four components of an object. These five components serve as the key constructs of our design model. Next, we explain each of these components.

Figure 2. Multi-component object architecture used in the VRID model (graphics; behavior, comprising physical, magical, and composite behaviors; interaction; mediator; and communicator components)

The graphics component is for specifying graphical representations of interface objects. It covers specification of all graphical models that are needed for the computer-generated appearance and animations of the objects. We include the graphics component in the model in order to address the distinctive visual characteristics of VR interfaces. Since VR interface objects exhibit more complex behaviors, and these behaviors need to be represented by more complex visual displays, it is important to associate object behaviors with object graphics at a high level of abstraction.

In general, responsibility for the design of the visual aspects of VR interfaces lies with graphics designers rather than VR interface designers. That is why we treat the graphics component as a black box and focus only on its outcomes, i.e., graphical models. We aim to provide guidance to VR interface designers in specifying the graphical models associated with interface objects and their behaviors. By doing so, we seek to facilitate communication between graphics designers and VR interface designers, so that the graphical representations and animations generated by graphics designers are compatible with the behaviors specified by interface designers.

The behavior component is for specifying various types of object behaviors. In order to help designers understand and simplify complex object behaviors, we categorize object behaviors into two groups: physical behaviors and magical behaviors. Physical behavior refers to those changes in an object's state that are observable in the real world. Magical behavior refers to those changes in an object's state which are rarely seen, or not seen at all, in the real world. Consider a virtual basketball exhibiting the following behaviors: it falls; it bounces off; it changes its color when touched by a different player; and it displays player statistics (e.g., ball possession, scores, etc.). Here, the falling and bouncing off behaviors are physical behaviors because they have counterparts in the physical world. However, "changing color" and "displaying player statistics" are magical behaviors because a real-world basketball does not exhibit these behaviors.

Breaking down complex behaviors into simple physical and magical behaviors serves two purposes. First, it allows designers to generate a library of behaviors which can be reused in creating new behaviors. Second, the physical/magical distinction enables designers to assess the level of detail required to communicate design specifications to software developers unambiguously. Physical behaviors are relatively easy to describe and communicate. Since they have counterparts in the real world, software developers can easily relate to physical behaviors; hence, interface designers may not need to describe all of their details. Magical behaviors, on the other hand, are either rarely seen or not seen at all in the real world. Therefore, software developers may have difficulty visualizing magical behaviors, and interface designers may need to specify them in more detail to avoid misunderstandings in the later stages of development.

Objects may also exhibit composite behaviors that consist of a series of simple physical and magical behaviors. For example, the running behavior of an athlete can be considered a composite behavior consisting of simple physical behaviors such as leg and arm movements. By conceptually distinguishing between "simple" and "composite" behaviors, we aim to help designers decompose complex behaviors into smaller, conceptually distinct parts, and to increase the reusability of the resulting software code. Designers can combine simple behaviors in different ways and sequences to generate new composite behaviors.

Figure 3. VRID Methodology (the high-level design phase moves from identifying data elements and objects to high-level specifications of graphics, behaviors, interactions, and internal and external communications for each object; the low-level design phase refines these into low-level specifications)
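The simple/composite and physical/magical categories above can be sketched in code, using the paper's basketball behaviors. The class names are ours, not the paper's; the sketch only shows how tagging simple behaviors and composing them yields a reusable behavior library.

```python
# Sketch (hypothetical class names) of the VRID behavior categories.
from dataclasses import dataclass

@dataclass
class SimpleBehavior:
    name: str
    magical: bool  # True: no real-world counterpart; False: physical behavior

@dataclass
class CompositeBehavior:
    name: str
    parts: list  # simple behaviors combined to produce the composite
    def describe(self) -> str:
        return f"{self.name} = " + " + ".join(p.name for p in self.parts)

# A reusable library of simple behaviors (the basketball example):
fall         = SimpleBehavior("fall", magical=False)          # physical
bounce       = SimpleBehavior("bounce", magical=False)        # physical
change_color = SimpleBehavior("change color", magical=True)   # magical
show_stats   = SimpleBehavior("show player statistics", magical=True)

# Simple behaviors recombine into new composites without new behavior code:
dribble = CompositeBehavior("dribble", [fall, bounce])
print(dribble.describe())  # dribble = fall + bounce
```

Here the `magical` flag is exactly the signal the paper says designers can use to decide how much detail a specification needs: developers can visualize `fall` unaided, but `change color` warrants a fuller description.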
Breaking down a complex The communication component is for external behavior into simpler behaviors also increases clarity of communications of the object with other objects, data elements, or communication between designers and software developers since with the application component. In this component, designers it is easier to visualize a series of simpler behaviors. need to specify sources of communication inflows into the object, destinations of communications outflows from the object, and the We devote specific attention to the design of object behaviors message passing mechanisms between them such as the because defining object behaviors has typically been a challenge synchronous, asynchronous, balking, or timeout mechanisms in VR [15]. Most virtual environments remain visually rich, but discussed by Booch [5]. Previous work on object behaviors behaviorally impoverished [10]. Previous work on behavioral discusses message-passing mechanisms among objects during low characteristics of VR interfaces does not provide high level level specifications of behaviors. The difference in our approach guidance for decomposing complex behaviors into simpler is that we start analyzing the communication needs of objects at a behaviors. By proposing physical-magical and simple-composite higher level of abstraction, and that we make a distinction categories, we aim to help designers to decompose complex between internal and external communications of objects. By behaviors into simpler, easier to design components, which can using two separate components for specification of internal and also be reused in creating new behaviors. external communications mechanisms of objects, we help The interaction component is used to specify where inputs of designers to decompose the complexity associated with design of the VR system come from and how they change object behaviors. 
communications into smaller, conceptually distinct components, The interaction component receives the input, interprets its which are easier to analyze, design, code, and maintain. meaning, decides on the implication of the input for object In summary, the VRID model synthesizes previous work on VR behavior, and communicates with behavioral components to make interface design and proposes a comprehensive of modeling the desired change in the object behavior. VR interfaces need to structures in the form of a multi-component object architecture, support implicit style interactions, which require monitoring and which helps designers to see clearly which issues and decisions interpretation of inputs such as hand, arm, head and eye are involved in VR interface design, and why. movements. Each of these interaction types presents a particular design challenge. Therefore, unlike previous work, which usually 5. THE VRID METHODOLOGY merge the design of interactions and behaviors, we make a conceptual distinction between interactions and their implications To systematically apply the VRID model in VR interface design, for object behaviors. This distinction allows designers to focus on we propose the VRID methodology. We conceptualize design of a the challenges of interactions in the interaction component, and on VR interface as an iterative process in which requirements for the the challenges of behaviors in the behavior component. It also interface are translated into design specifications that can be increases reusability of resulting interactions and behaviors since implemented by software developers. We divide the design interactions and behaviors are de-coupled from each other. process into high-level and low-level design phases as depicted in Figure 3. 
In the high-level design phase, the goal is to specify a The mediator component is for specifying control and design solution, at a high-level of abstraction, using the multi coordination mechanisms for communications among other component object architecture as a conceptual guidance. The components of the object. The goals are to avoid conflicts in input of the high-level design phase is a functional description of object behaviors, and to enable loose coupling among the VR interface. The output is a high-level representation of data components. To achieve these goals, we propose the mediator elements and objects in the interface. Graphical models, component by adapting the of “mediator design pattern” behaviors, interactions, and internal and external communication suggested by Gamma and colleagues [12]. The mediator controls characteristics of interface objects are identified and defined at a and coordinates all communications within the object. When a high level of abstraction. This output becomes the input of the component needs to communicate with another component, it low-level design phase. In the low-level design phase, the goal is sends its message to the mediator rather than sending it directly to to provide fine-grained details of the high-level representations, the destination. This component enables designers to identify, in and to provide procedural details as to how they will be formally advance, which communication requests might lead to conflicts in represented. The outcome of low-level design is a set of design object behaviors, and to specify how the requests can be managed specifications, which are represented in formal, implementation- to avoid the conflicts. Since a component only needs to know oriented terminology, and ready to be implemented by software about itself and the mediator rather than having to know about all developers. 
We take a top -down approach to the design process components with which it might communicate, the mediator by going from high-level abstractions to lower level details. component also ensures loose coupling between components. However, this is not a linear process. It requires iterations between the high-level and low-level design phases, and reciprocal Table 4. Description of the virtual surgery system computer. Physical objects are physical entities that interact with Consider an augmented reality system developed for the VR system. Physical objects may or may not require training surgeons. It includes a virtual patient body and a modeling. If they are capable of coexisting and exchanging data with the VR interface, they do not require modeling. For example, physical biopsy needle. The surgeon wears a head-mounted the biopsy needle in our virtual surgery example is capable of display, and uses the needle to interact with the patient. A Polhemus is attached to the needle to communicate 3D sending data to the VR interface through the Polhemus. Hence, it coordinates of the needle to the VR system. Coordinate data should be identified as a physical object. Physical objects that is used to interpret actions of the surgeon. Abdominal area exhibit magical behaviors need to be identified and modeled as virtual objects. For example, a biopsy needle, which is capable of of the body is open, and organs, nerves, vessels, and muscles melting down and disappearing when the surgeon makes a wrong are visible. When the surgeon punctures an organ with the needle, it starts bleeding. The surgeon can see status of move, is exhibiting a magical behavior. This behavior is only operation by prodding the organ. When prodded, the organ possible through computer generation since no physical biopsy enlarges, and shows the status of surrounding nerves, needle is capable of exhibiting this behavior. 
Therefore, such vessels, and muscles by highlighting each of them with a objects should be modeled as virtual objects. unique color. In the virtual surgery example, potential objects are biopsy needle, patient body, organs, nerves, vessels, and muscles. In parts (a) and refinements at both levels of abstraction until a conceptually (b), the biopsy needle and the patient body can be identified as sound and practically implementable design emerges. legitimate objects using the general guidelines of object-oriented In the following sections, we explain step-by-step details of each and design. The patient body is an aggregate object phase of the VRID methodology. We use an example, which runs comprising of organ, nerve, vessel, and muscle components. In throughout the paper, to illustrate how the VRID model and part (c), the needle can be identified as a physical object because methodology are applied in developing a VR interface design. The it is capable of coexisting and exchanging data with the VR example, which is described in Table 4, is a hypothet ical virtual interface. The patient body should be identified as a virtual object surgery system inspired by and adapted from the descriptions because it exhibits magical behaviors such as highlighting nerves, given in prior studies [29, 31]. vessels, and muscles with unique colors. 5.1 High-level (HL) design phase 5.1.3 HL3: Modeling the objects High-level design phase consists of three major steps: In the reminder of the design, we are no longer concerned with entities that are identified as physical objects because they do not ?? HL1. Identifying data elements require modeling, and their inputs to the VR system had already ?? HL2. Identifying objects been identified as data elements in HL1. Therefore, the goal in ?? HL3. Modeling the objects this step is to model the virtual objects identified in HL2. o HL3.1. Graphics Modeling of virtual objects involves specification of: a) graphical o HL3.2. 
Behaviors models; b) behaviors; c) interactions; d) internal communication o HL3.3. Interactions characteristics; and e) external communication characteristics of o HL3.4. Internal communications (mediator) the objects. Designers should analyze the interface description and o HL3.5. External communications use the VRID model to specify characteristics of each virtual object, as described below. 5.1.1 HL1: Identifying data elements The role of data elements is to enable communication between VR HL3.1: Graphics interface and entities that are external to the VR system. The goal In this step, the goal is to specify a high-level description of of the first step is to identify data inflows coming into the VR graphics needs of virtual objects. Designers should describe what interface. The interface can receive data from three sources: a) kinds of graphical representations are needed for each object, and users, b) physical devices; and c) other VR systems. Designer its parts, if any. Since representing objects graphically is a should analyze the description of the VR interface to identify the creative task, this step aims to provide flexibility to graphical data inflows. In the virtual surgery example, the only data element designers by focusing only on general, high-level descriptions of is the 3D coordinates of the needle communicated to the interface graphical needs. by the Polhemus. Identification of data elements is a relatively In our example, we should describe graphics needs of the patient simple design task, which does not require deliberations at body, organs, nerves, vessels, and muscles in enough detail for different levels of abstraction. We include this task in the high- graphics designers to understand the context of the graphical level design phase in order to enable designers to understand and modeling needs. We should mention that we need graphical model define data inputs of the VR interface early in the design process. 
of an adult human body, which lies on its back on the operation table. Gender is arbitrary. Abdominal part of the body should be 5.1.2 HL2: Identifying objects open, and show the organs in the area, and the nerves, vessels, and In this step, the goal is to identify objects that have well defined muscles that weave the organs. Boundaries of organs, nerves, roles and identities in the interface. This step involves: a) vessels, and muscles must be distinguishable when highlighted. identifying potential objects mentioned in the interface HL3.2: Behaviors description; b) deciding on legitimate objects; and c) The goals of this step are to identify behaviors exhibited by distinguishing between virtual and physical objects. In parts (a) objects; classify them into simple physical, simple magical, or and (b), designers can use the object-oriented analysis and design composite behavior categories; and to describe them in enough guidelines provided for identification of potential objects and detail for designers to visualize the behaviors. This step involves selection of legitimate objects [5, 26]. In part (c), virtual objects the following activities: a) identify behaviors from the description; are those entities that need to be modeled and generated by the b) classify the behaviors into simple and composite categories; c) classify simple behaviors into physical and magical behavior In our example, behaviors of the patient body are: 1) bleeding; 2) categories; and d) for composite behaviors, specify sequences in enlarging; 3) highlighting nerves, vessels, and muscles with which simple behaviors are to be combined for producing the unique colors; and 4) showing status of operation. If composite behaviors. 
communication requests for bleeding and showing status of In our example, behaviors exhibited by the patient body are: 1) operation arrive simultaneously, the patient may enter into an bleeding; 2) enlarging; 3) highlighting nerves, vessels, and unstable state between bleeding and showing status of operation muscles with unique colors; and 4) showing the status of behaviors. This conflict can be avoided by prioritizing, holding, or operation. Bleeding can be specified as a simple behavior or as a denying the communication requests. Here, we give higher composite behavior obtained by combining simple behaviors of priority to showing status of operation to avoid the conflict. increasing the amount, color intensity, and viscosity of blood. HL3.5: External communications This design decision should be based on reusability considerations In this step, the goal is to specify control and coordination needs and providing clear communications with software developers. for external communications of the objects. This involves a) We classify bleeding as composite behavior to reuse its identifying communication inflows into the object, and their components in generating different behaviors of blood such as sources; b) communications outflows from the object, and their coagulating. Similarly, to prevent communication problems with destinations; and c) describing and buffering semantics of software developers and help software developers to visualize external communications of the object. what we mean by “showing the status of operation,” we classify showing the status of operation as composite behavior that In our example, communication inflows into the patient body are consists of enlarging and highlighting behaviors. Enlarging organ 3D coordinates coming from the needle. There are no and highlighting nerves, vessels, and muscles are simple communications outflows. Although it is not specified in the behaviors. 
HL3.3: Interactions
The goal in this step is to specify where the inputs of interface objects come from and how they change object behaviors. This step involves: a) identifying interaction requests to objects; and b) identifying the behavioral changes caused by these requests, and which behavioral components will be notified of these changes.

In our example, the user interacts with the patient body using the needle. The interaction component should be able to process the 3D coordinates of the needle and interpret their meaning to decide whether the surgeon is prodding or puncturing (since this is a hypothetical example, we do not specify in detail how the coordinates are to be processed and interpreted). If the surgeon is prodding, the implication for the behavior of the patient body is to show the status of operation. The interaction component should communicate with the composite behavior component to initiate the "showing the status of operation" behavior. If the surgeon is puncturing, the implication for the behavior of the patient body is bleeding. The interaction component should communicate with the composite behavior component to initiate the bleeding behavior.

HL3.4: Internal communications (mediator)
In this step, the goal is to specify control and coordination needs for internal communications among the components of objects, in order to avoid potential conflicts in object behavior. This involves: a) examining all communication requests and the behavioral changes that are caused by these requests; b) identifying communication requests that may cause potential conflicts; and c) deciding how to prioritize, sequence, hold, or deny the communication requests to avoid the potential conflicts.

In our example, behaviors of the patient body are: 1) bleeding; 2) enlarging; 3) highlighting nerves, vessels, and muscles with unique colors; and 4) showing the status of operation. If communication requests for bleeding and for showing the status of operation arrive simultaneously, the patient body may enter an unstable state between the bleeding and showing-status-of-operation behaviors. This conflict can be avoided by prioritizing, holding, or denying the communication requests. Here, we give higher priority to showing the status of operation to avoid the conflict.

HL3.5: External communications
In this step, the goal is to specify control and coordination needs for external communications of the objects. This involves: a) identifying communication inflows into the object, and their sources; b) identifying communication outflows from the object, and their destinations; and c) describing the buffering semantics of the object's external communications.

In our example, the communication inflows into the patient body are the 3D coordinates coming from the needle. There are no communication outflows. Although it is not specified in the description, for illustrative purposes we assume that a communication between the needle and the patient body starts when the needle initiates an operation (e.g., prodding, puncturing) and the patient body is ready to display the behavior associated with that operation. If the patient body is busy, the needle will wait for a specified amount of time. If the patient body is still not ready after that amount of time, the needle will abort the communication request.

5.2 Low-level (LL) design phase
The output of the high-level design becomes the input to the low-level design, which repeats the five modeling steps at a lower level of abstraction to generate fine-grained details of the high-level design specifications:
- LL1. Graphics
- LL2. Behaviors
- LL3. Interactions
- LL4. Internal communications (mediator)
- LL5. External communications

5.2.1 LL1: Graphics
Low-level design of graphics aims to associate the graphical models and behaviors of objects. The outcome of this step should enable the graphical designer to understand how object behaviors can be animated. This step involves matching the graphical models specified in HL3.1 with the behaviors specified in HL3.2.

In our example, graphical models were specified in HL3.1 for the patient body and its parts (organs, nerves, vessels, and muscles in the abdominal area). The associated behaviors specified in HL3.2 were bleeding, enlarging, highlighting, and showing the status of operation. Using the description, we need to associate the graphical models of the organs with all four behaviors, and the graphical models of the nerves, vessels, and muscles with the bleeding and highlighting behaviors.
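The LL1 association can be captured as a simple mapping from each graphical model to the behaviors that must animate it. The sketch below is our illustration (the Python encoding and function name are assumptions); the assignments follow the example above:

```python
# Map each graphical model named in HL3.1 to the behaviors (HL3.2) that
# a graphical designer must be able to animate for it.
associations = {
    "organs":  ["bleeding", "enlarging", "highlighting",
                "showing the status of operation"],
    "nerves":  ["bleeding", "highlighting"],
    "vessels": ["bleeding", "highlighting"],
    "muscles": ["bleeding", "highlighting"],
}

def behaviors_for(model):
    """Return the behaviors associated with a graphical model (empty if none)."""
    return associations.get(model, [])

print(behaviors_for("organs"))   # the organs participate in all four behaviors
```

A table like this is deliberately declarative: it tells the graphical designer what must be animatable without prescribing how the animation is implemented.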
5.2.2 LL2: Behaviors
In low-level design of behaviors, the goal is to formalize fine-grained procedural details of the behaviors that have been specified in HL3.2. Formal representation of behaviors requires use of the constructs of a selected design language. In our example, we use PMIW, the user interface description language that we had originally developed for specifying interactions [18], but which is also suitable for specifying behaviors. PMIW representation requires: a) identification of the discrete and continuous components of behaviors; b) use of data flow diagrams to represent continuous behaviors; and c) use of statecharts [16] and state transition diagrams to represent discrete behaviors. For illustration purposes, we depict in Figure 4 the formal representation of the continuous and discrete parts of the bleeding behavior using PMIW.

[Figure 4. PMIW representation of bleeding behavior: (a) continuous parts of bleeding behavior (increase blood amount, increase color intensity, increase viscosity); (b) discrete parts of bleeding behavior, with BLEEDON raised on puncture while Polhemus_cursor.inside(organ), and BLEEDOFF raised on not(Polhemus_cursor.inside(organ)) and timeout(bleed).]

[Figure 5. PMIW representation of puncturing interaction.]
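The discrete part of the bleeding behavior in Figure 4(b) is a two-state machine. A minimal sketch of it follows; the event names BLEEDON and BLEEDOFF come from the figure, while the table-driven Python encoding is our own assumption, not PMIW:

```python
# Discrete part of the bleeding behavior as a state transition table.
# While the machine is in "bleeding", the continuous parts of Figure 4(a)
# (increase blood amount, color intensity, viscosity) would be running.
TRANSITIONS = {
    ("idle", "BLEEDON"): "bleeding",      # puncture while cursor inside organ
    ("bleeding", "BLEEDOFF"): "idle",     # cursor left organ and bleed timed out
}

def step(state, event):
    """Advance the machine; unknown (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = "idle"
s = step(s, "BLEEDON")
assert s == "bleeding"
s = step(s, "BLEEDOFF")
assert s == "idle"
```

The split mirrors the PMIW requirement above: discrete structure (states and events) is kept separate from the continuous updates that run while a state is active.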
5.2.3 LL3: Interactions
Like low-level design of behaviors, low-level design of interactions aims to formalize fine-grained aspects of the interactions that have been specified in HL3.3. Formal representation of interactions also requires selection of a design language. PMIW is well suited for this purpose, although designers may choose any other suitable design language. The activities outlined for the formal representation of behaviors are repeated in this step, this time for representing interactions. For illustration purposes, we depict in Figure 5 a formal representation of the puncturing interaction using PMIW.
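To make the interaction step concrete, here is a hypothetical sketch of an interaction component that interprets needle input and asks the behavior component to start the matching composite behavior. The depth-threshold test is invented for illustration only, since the example deliberately leaves coordinate interpretation unspecified; all names are assumptions:

```python
def classify_needle_action(depth_mm, threshold_mm=2.0):
    """Shallow contact counts as prodding; deeper penetration as puncturing.
    The threshold is a made-up placeholder for real coordinate interpretation."""
    return "prodding" if depth_mm < threshold_mm else "puncturing"

def on_needle_input(depth_mm, start_behavior):
    """Interaction component: interpret input, then notify the behavior component."""
    action = classify_needle_action(depth_mm)
    if action == "prodding":
        start_behavior("showing the status of operation")
    else:
        start_behavior("bleeding")
    return action

started = []
on_needle_input(0.5, started.append)   # prodding -> status behavior requested
on_needle_input(5.0, started.append)   # puncturing -> bleeding requested
print(started)
```

Note that the interaction component only issues requests; deciding whether a request may proceed is the mediator's job (HL3.4/LL4), which keeps conflict resolution out of the interaction logic.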
5.2.4 LL4: Internal communications (mediator)
In low-level design of internal communications, the goal is to specify scheduling mechanisms for managing the communication requests identified in HL3.4 as giving rise to conflicting object behaviors. As in the previous steps of the low-level phase, designers are free to choose appropriate scheduling mechanisms. In our example, to resolve conflicts between the bleeding and showing-the-status-of-operation behaviors, we prefer to use priority scheduling mechanisms similar to the ones used in operating systems.

5.2.5 LL5: External communications
Low-level design of external communications aims to specify the message passing mechanisms that control and coordinate the external communications of the objects. Designers can select from the synchronous, asynchronous, timeout, and balking message passing mechanisms discussed by Booch [5]. In our example, the communication needs between the patient body and the biopsy needle, which were specified in HL3.5, can be modeled with the timeout mechanism.

This step concludes a complete pass through the phases of the VRID methodology. In applying the methodology, we started with the English description of the VR system. We then followed the steps of the VRID methodology and applied the conceptual framework of the VRID model to decompose the overall design task into simpler, easier-to-design components. Each component represents a cleanly encapsulated, conceptually distinct part of the interface. Designers should iterate between the steps of the high- and low-level design phases to refine the specifications until they are convinced that conceptually sound and implementable specifications have been produced.

6. DISCUSSIONS AND CONCLUSIONS
In this paper, we identified a gap in the VR literature, namely the lack of high-level design models and methodologies for the design and development of VR interfaces. By proposing the VRID model and methodology as one possible approach, we have taken an initial step towards addressing this gap.

The VRID model allows designers to think comprehensively about the various types of human-computer interactions, objects, behaviors, and communications that need to be supported by VR interfaces. It enables designers to decompose the overall design task into smaller, conceptually distinct, and easier-to-design tasks. It provides a common framework and vocabulary, which can enhance communication and collaboration among the users, designers, and software developers involved in the development of VR interfaces. The model may also be useful in the implementation and maintenance stages of the life cycle of a VR interface, since it isolates the details of components and makes changes in one component transparent to other components.

The VRID methodology contributes to practice by guiding designers in applying the VRID model to the VR interface design process. The methodology formalizes the process of VR interface design into two phases, which represent different levels of abstraction, and breaks the phases down into a discrete number of steps. The high-level design phase helps designers to design the interface conceptually, without having to use implementation-specific terminology. The low-level design phase guides designers in representing the design specifications in formal, implementation-oriented terminology. VRID offers flexibility in the selection of languages, tools, or mechanisms for specifying the fine-grained details of the interface.

We evaluated the VRID model and methodology by applying them in designing VR interfaces of various types and complexities, which we identified from the literature or created for test purposes, including the virtual surgery example presented here. We have also just completed an experimental user study, which assessed the validity, usability, and usefulness of the VRID model and methodology. The findings provide empirical support for the validity of the VRID model and methodology. They are reported in a separate paper, which is currently under review.
7. ACKNOWLEDGMENT
This work is supported by National Science Foundation Grant IRI-9625573 and Office of Naval Research Grant N00014-95-1-1099.

8. REFERENCES
[1] D. Allison, B. Wills, D. Bowman, J. Wineman, and L. F. Hodges, "The Virtual Reality Gorilla Exhibit," IEEE Computer Graphics and Applications, vol. 17, pp. 30-38, 1997.
[2] P. Astheimer, "A Business View of Virtual Reality," IEEE Computer Graphics and Applications, vol. 19, pp. 28-29, 1999.
[3] R. T. Azuma, "A Survey of Augmented Reality," Presence, vol. 6, pp. 355-385, 1997.
[4] B. Blumberg and T. Galyean, "Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments," presented at ACM Conference on Computer Graphics, SIGGRAPH'95, 1995.
[5] G. Booch, Object-Oriented Design with Applications. Redwood City, CA: Benjamin/Cummings Pub. Co., 1991.
[6] D. Bowman and L. Hodges, "An Evaluation of Techniques for Grabbing and Manipulating Remote Objects in Immersive Virtual Environments," presented at Symposium on Interactive 3D Graphics, 1997.
[7] D. A. Bowman, "Interaction Techniques For Common Tasks In Immersive Virtual Environments: Design, Evaluation, and Application," Doctoral dissertation, Georgia Institute of Technology, 1999, http://vtopus.cs.vt.edu/~bowman/thesis/.
[8] F. P. Brooks, "What's Real About Virtual Reality?," IEEE Computer Graphics and Applications, vol. 19, pp. 16-27, 1999.
[9] S. Bryson, "Virtual Reality in Scientific Visualization," Communications of the ACM, pp. 62-71, 1996.
[10] J. Cremer, J. Kearney, and Y. Papelis, "HCSM: A Framework for Behavior and Scenario Control in Virtual Environments," ACM Transactions on Modeling and Computer Simulation, vol. 5, pp. 242-267, 1995.
[11] J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes, Computer Graphics: Principles and Practice, Second Edition. Reading, MA: Addison-Wesley, 1996.
[12] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software. Reading, MA: Addison-Wesley, 1995.
[13] E. Gobbetti and J.-F. Balaguer, "VB2: An Architecture For Interaction in Synthetic Worlds," presented at ACM Symposium on User Interface Software and Technology, UIST'93, 1993.
[14] M. Green, "Report on Dialogue Specification Tools," presented at Workshop on User Interface Management Systems, Seeheim, FRG, 1983.
[15] M. Green and S. Halliday, "A Geometric Modeling and Animation System for Virtual Reality," Communications of the ACM, vol. 39, pp. 46-53, 1996.
[16] D. Harel and A. Naamad, "The Statemate Semantics of Statecharts," ACM Transactions on Software Engineering and Methodology, vol. 5, pp. 293-333, 1996.
[17] L. Hodges, B. Rothbaum, R. Kooper, D. Opdyke, T. Meyer, M. North, J. de Graff, and J. Williford, "Virtual Environments for Treating the Fear of Heights," IEEE Computer, vol. 28, pp. 27-34, 1995.
[18] R. J. K. Jacob, L. Deligiannidis, and S. Morrison, "A Software Model and Specification Language for Non-WIMP User Interfaces," ACM Transactions on Computer-Human Interaction, vol. 6, pp. 1-46, 1999.
[19] D. Kessler, "A Framework for Interactors in Immersive Virtual Environments," presented at IEEE Virtual Reality, 1999.
[20] G. E. Krasner and S. T. Pope, "A Cookbook for Using the Model-View-Controller User Interface Paradigm in Smalltalk-80," Journal of Object-Oriented Programming, vol. 1, pp. 26-49, 1988.
[21] J. B. Lewis, L. Koved, and D. T. Ling, "Dialogue Structures for Virtual Worlds," presented at ACM Conference on Human Factors in Computing Systems, CHI'91, 1991.
[22] J. Liang and M. Green, "Interaction Techniques For A Highly Interactive 3D Geometric Modeling System," presented at ACM Solid Modeling, 1993.
[23] M. Mine, F. Brooks, and C. Sequin, "Moving Objects in Space: Exploiting Proprioception in Virtual Environment Interaction," presented at SIGGRAPH'97, Los Angeles, CA, 1997.
[24] T. Moran, "The Command Language Grammar," International Journal of Man-Machine Studies, vol. 15, pp. 3-50, 1981.
[25] I. Poupyrev, M. Billinghurst, S. Weghorst, and T. Ichikawa, "The Go-Go Interaction Technique: Non-linear Mapping for Direct Manipulation in VR," presented at ACM Symposium on User Interface Software and Technology, UIST'96, 1996.
[26] J. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy, and W. Lorensen, Object-Oriented Modeling and Design. Englewood Cliffs, NJ: Prentice Hall, 1991.
[27] B. Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction, Third Edition. Reading, MA: Addison-Wesley, 1998.
[28] S. Smith and D. J. Duke, "Virtual Environments as Hybrid Systems," presented at Annual Conference of the Eurographics UK Chapter, Cambridge, 1999.
[29] D. Sorid and S. Moore, "The Virtual Surgeon," IEEE Spectrum, pp. 26-31, 2000.
[30] S. Stansfield, D. Shawver, N. Miner, and D. Rogers, "An Application of Shared Virtual Reality to Situational Training," presented at IEEE Virtual Reality Annual International Symposium, 1995.
[31] A. State, M. A. Livingston, G. Hirota, W. F. Garrett, M. C. Whitton, H. Fuchs, and E. D. Pisano, "Technologies for Augmented-Reality Systems: Realizing Ultrasound-Guided Needle Biopsies," presented at ACM Conference on Computer Graphics, SIGGRAPH'96, 1996.
[32] A. Steed and M. Slater, "A Dataflow Representation for Defining Behaviors within Virtual Environments," presented at IEEE Virtual Reality Annual International Symposium, VRAIS'96, 1996.
[33] R. Stoakley, M. Conway, and R. Pausch, "Virtual Reality on a WIM: Interactive Worlds in Miniature," presented at ACM Human Factors in Computing Systems, CHI'95, 1995.
[34] V. Tanriverdi and R. J. K. Jacob, "Interacting with Eye Movements in Virtual Environments," presented at ACM Human Factors in Computing Systems, CHI'00, 2000.
[35] X. Tu and D. Terzopoulos, "Artificial Fishes: Physics, Locomotion, Perception, Behavior," presented at ACM Conference on Computer Graphics, SIGGRAPH'94, 1994.
[36] M. Wloka and E. Greenfield, "The Virtual Tricorder: A Uniform Interface for Virtual Reality," presented at ACM Symposium on User Interface Software and Technology, UIST'95, 1995.