An Introduction to 3-D User Interface Design

Doug A. Bowman, [email protected], Department of Computer Science, Virginia Polytechnic Institute and State University
Ernst Kruijff, [email protected], InfAR (Computer Science in Architecture), Bauhaus Universitaet Weimar
Joseph J. LaViola, Jr., [email protected], Computer Science Department, Brown University
Ivan Poupyrev, [email protected], Sony Computer Science Lab, Tokyo, Japan

Presence, Vol. 10, No. 1, February 2001, 96–108. © 2001 by the Massachusetts Institute of Technology.

Abstract

Three-dimensional user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of 3-D interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3-D tasks and the use of traditional 2-D interaction styles in 3-D environments. We divide most user-interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3-D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3-D interaction design and some example applications with complex 3-D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.

1 Introduction

User interfaces (UIs) for computer applications are becoming more diverse. Mice, keyboards, windows, menus, and icons—the standard parts of traditional WIMP interfaces—are still prevalent, but nontraditional devices and interface components are proliferating rapidly. These include spatial input devices such as trackers, 3-D pointing devices, and whole-hand devices allowing gestural input. Three-dimensional, multisensory output technologies—such as stereoscopic projection displays, head-mounted displays (HMDs), spatial audio systems, and haptic devices—are also becoming more common.

With this new technology, new problems have also been revealed. People often find it inherently difficult to understand 3-D spaces and to perform actions in free space (Herndon, van Dam, & Gleicher, 1994). Although we live and act in a 3-D world, the physical world contains many more cues for understanding and constraints and affordances for action that cannot currently be represented accurately in a computer simulation. Therefore, great care must go into the design of user interfaces and interaction techniques for 3-D applications. It is clear that simply adapting traditional WIMP interaction styles to three dimensions does not provide a complete solution to this problem. Rather, novel 3-D user interfaces, based on real-world interaction or some other metaphor, must be developed.

This paper is a broad overview of the current state of the art in 3-D user interfaces and interaction. It summarizes some of the major components of tutorials and courses given by the authors at various conferences, including the 1999 Symposium on Virtual Reality Software and Technology. Our goals are to describe some of the major interaction techniques and interface components that have been developed for 3-D systems, especially immersive virtual environments (VEs), and to provide the reader with some practical guidelines for the design of 3-D user interfaces in real-world applications.
It is impractical to list even a small fraction of the reference materials from which this overview is taken, but we refer readers to the “20th Century 3DUI Bibliography” (Poupyrev & Kruijff, 2000), a continuously updated and annotated list of references that is available online. This bibliography was created both by the authors and by other members of the 3DUI mailing list, an online community of researchers in this field.

2 Input and Output Devices

Input and output (I/O) devices are an important component in building 3-D user interfaces for VE applications. Interaction designers must have a thorough understanding of the ergonomics, advantages, and limitations of the devices used so they can find natural and intuitive mappings between interaction techniques and hardware.

Literally hundreds of different I/O devices are used in VE applications, and a thorough analysis of them is beyond the scope of this paper. (See Youngblut, Johnson, Nash, Wienclaw, and Will (1996) for a somewhat dated but thorough discussion of many I/O devices.) However, many of these devices can be classified into groups with common characteristics and interaction design criteria. In the following subsections, we examine these groups and discuss interface design issues within them.

2.1 Output Devices

We commonly use the term display to describe output. Although the most common displays are visual, there are other important displays for the auditory, haptic, tactile, and olfactory channels. In the context of 3-D user interfaces for VEs, visual displays can be roughly categorized into fully immersive and semi-immersive devices.

Fully immersive displays—such as head-mounted displays (HMDs), arm-mounted displays, and virtual retinal displays—occlude the real world. (Augmented reality is an exception to this rule.) Because of this, physical objects require a graphical representation in the virtual world. In addition, more-complicated input devices may be difficult to use because they cannot be seen.

Semi-immersive displays—such as stereo monitors, workbenches, and surround-screen virtual reality systems—allow the user to see both the physical and virtual worlds. A number of interface issues arise with semi-immersive displays. For instance, because users can see their own hands in front of the display, they can inadvertently block out virtual objects that should appear to be closer than their hands. This problem occurs frequently during manipulation tasks and can be alleviated if the virtual object is offset from the hand so that it is always visible. Another problem relates to the use of stereo shutter glasses. Because these require emitters, any line-of-sight occlusion will disrupt the stereo effect. Interface designers should try to avoid interaction techniques that cause users to move their hands or physical objects into the line of sight between the glasses and the emitters.
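The hand-offset fix mentioned above can be pictured with a short sketch. This is one plausible reading of "offset from the hand" (a small fixed displacement perpendicular to the eye-hand ray), not an implementation prescribed by the paper; the function name, the 12 cm gap, and the coordinate values are all illustrative.

```python
import numpy as np

def offset_object_from_hand(head_pos, hand_pos, gap=0.12):
    """Offset the manipulated virtual object from the tracked hand,
    perpendicular to the eye-hand ray, so the user's real hand cannot
    cover the object's on-screen projection. Positions are world-space
    meters; the 12 cm gap is an arbitrary illustrative value."""
    view = hand_pos - head_pos
    view = view / np.linalg.norm(view)          # unit eye-to-hand direction
    side = np.cross(view, np.array([0.0, 1.0, 0.0]))
    lift = np.cross(side, view)                 # "up" as seen from the viewpoint
    lift = lift / np.linalg.norm(lift)
    return hand_pos + gap * lift                # object rides just above the hand

# Head at eye height, hand ahead of the body and slightly below:
head = np.array([0.0, 1.7, 0.0])
hand = np.array([0.2, 1.4, -0.5])
print(offset_object_from_hand(head, hand))
```

Because the offset is applied in view space rather than world space, the object stays clear of the hand no matter which way the user is facing.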
In addition to visual displays, auditory, tactile, and haptic displays are becoming more prominent in 3-D user interfaces for VEs. With auditory displays (Begault, 1994), the main interface goals are localization, the generation of 3-D sound, and sonification (transforming certain types of information into sound). Auditory output is especially useful in collaborative interfaces, in which participants can get a sense of where others are in the virtual world. It can also be used to substitute for missing tactile or haptic feedback. For example, a sound could substitute for the feel of pressing a button.

Haptic and tactile displays (Burdea, 1996) are an important component of 3-D user interfaces for VEs, and this is becoming an active area of research. Allowing users to touch and feel in the virtual world can be extremely powerful, especially for object manipulation and collision detection.

2.2 Input Devices

A distinction must be made between input devices and interaction techniques. Input devices are just the physical tools used to implement various interaction techniques. In general, many different interaction techniques can be mapped onto a given input device; the question is how naturally, efficiently, and appropriately a given input device will work with a given technique. Input devices are also governed by the degrees of freedom (DOFs) they have. In general, an input device with a smaller number of DOFs can be used to emulate a device with more DOFs with the addition of buttons or modifier keys.

Input devices can be roughly categorized based on the types of events they generate. Devices that generate one event at a time in response to a user action are considered discrete-input devices. A button press or other action signals an event, which is usually a Boolean (up/down) value. Pinch gloves, developed by Fakespace, are an example of a discrete-input device (the user pinches two or more fingers to signal an event). In contrast to discrete-input devices, continuous-input devices generate a stream of events. Two of the most common continuous-input devices are position/orientation trackers and datagloves, which transmit the bend angles of the fingers. Devices that combine both discrete and continuous events to form single, more-flexible devices are called combination or hybrid-input devices.
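To make the discrete/continuous distinction and the DOF-emulation idea concrete, here is a minimal sketch. The type names, the pinch/tracker sources, and the modifier-key mapping are illustrative assumptions, not part of any particular VE toolkit.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class DiscreteEvent:
    """One-shot Boolean event, e.g. a Pinch glove contact closing."""
    source: str
    pressed: bool

@dataclass
class ContinuousEvent:
    """One sample from a streaming device, e.g. a 6-DOF tracker."""
    source: str
    position: Tuple[float, float, float]
    orientation: Tuple[float, float, float, float]  # quaternion (w, x, y, z)

# A hybrid device emits events of both kinds.
InputEvent = Union[DiscreteEvent, ContinuousEvent]

def mouse_to_3dof(dx: float, dy: float, modifier_down: bool) -> Tuple[float, float, float]:
    """Emulate 3-DOF translation with a 2-DOF mouse: holding a modifier
    key remaps vertical mouse motion onto the z (depth) axis."""
    return (dx, 0.0, -dy) if modifier_down else (dx, dy, 0.0)

# A pinch starts a manipulation; tracker samples then drive it.
events: List[InputEvent] = [
    DiscreteEvent("pinch_glove", pressed=True),
    ContinuousEvent("tracker", (0.1, 1.4, -0.5), (1.0, 0.0, 0.0, 0.0)),
]
print(mouse_to_3dof(4.0, -2.0, modifier_down=True))  # -> (4.0, 0.0, 2.0)
```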
Speech input raises interface issues of its own; having the computer know when to listen to the user's voice is one such issue. A user may issue a speech command unintentionally when talking to another person. One of the best ways to avoid this problem is to use an implicit or invisible push-to-talk scheme (LaViola, 1999). A push-to-talk interface lets users tell the application when they are speaking to it. In order to maintain the naturalism of the speech interface, one can embed the "push" into existing interaction techniques, so the user does not have the burden of remembering to signal the application that a voice command is about to be issued.
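As a sketch of how the "push" can be embedded in an existing technique, the snippet below gates the recognizer on a pointing gesture the user already performs. This is one possible reading of the idea, not LaViola's actual implementation, and `recognizer` stands in for whatever speech-recognition API is available.

```python
class ImplicitPushToTalk:
    """Gate speech recognition on an existing interaction gesture,
    so the user never has to remember an explicit 'talk' signal."""

    def __init__(self, recognizer):
        self.recognizer = recognizer   # stand-in for a real speech API
        self.listening = False

    def on_pointing_started(self, target):
        # The user is already pointing to indicate intent; reuse that
        # gesture as the implicit "push".
        self.listening = True
        self.recognizer.start()

    def on_pointing_stopped(self):
        self.listening = False
        self.recognizer.stop()

    def on_speech(self, command):
        # Utterances made while not pointing are ignored, so chatting
        # with a nearby collaborator cannot trigger a command.
        if self.listening:
            print(f"executing voice command: {command}")

class DummyRecognizer:
    def start(self): pass
    def stop(self): pass

ptt = ImplicitPushToTalk(DummyRecognizer())
ptt.on_pointing_started(target="lamp")
ptt.on_speech("turn on")      # executed: uttered while pointing
ptt.on_pointing_stopped()
ptt.on_speech("hi, Sam")      # ignored: not addressed to the system
```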
3 Interaction Techniques

3.1 Navigation

The task of navigation is the most prevalent user action in most large-scale 3-D environments, and it presents challenges such as supporting spatial awareness, providing efficient and comfortable movement between distant locations, and making navigation lightweight so that users can focus on more-important tasks. We subdivide navigation into the motor component, called travel, and the cognitive component, called wayfinding.

Navigation tasks can generally be classified into three categories. Exploration is navigation with no explicit target: the user is simply investigating the environment. Search tasks involve moving to a particular target location.