Published in Proceedings of the Conference on Human Factors in Computing Systems (CHI '98), ACM, April, 1998.

Coincident Display Using Haptics and Holographic Video

Wendy Plesniak and Ravikanth Pappu
Spatial Imaging Group, MIT Media Laboratory
Cambridge, MA
{wjp,pappu}@media.mit.edu

ABSTRACT
In this paper, we describe the implementation of a novel system which enables a user to “carve” a simple free-standing electronic holographic image using a force-feedback device. The force-feedback (or haptic) device has a stylus which is held by the hand like an ordinary cutting tool. The 3D position of the stylus tip is reported by the device, and appropriate forces can be displayed to the hand as it interacts with 3D objects in the haptic workspace. The haptic workspace is spatially overlapped and registered with the holographic video display volume. Within the resulting coincident visuo-haptic workspace, a 3D synthetic cylinder is presented, spinning about its long axis, which a person can see, feel, and lathe with the stylus. This paper introduces the concept of coincident visuo-haptic display and describes the implementation of the lathe simulation. After situating the work in a research context, we present the details of system design and implementation, including the haptic and holographic modeling. Finally, we discuss the performance of this prototype system and future work.

KEYWORDS
Haptics, holography, electro-holography, autostereoscopic display, offset display, coincident display.

INTRODUCTION
To recognize the intimate dialog between materials and the skilled eyes, hands, and intuition of the craftsperson is to acknowledge the enormity of the technology and interaction design tasks which still lie ahead of us. Ideally, we would rally the full exploratory and manipulative dexterity of the hand, and the rich sensory capabilities of both hand and eye, to the tasks we engineer for.

Consider the domain of traditional craft, in which gaze and touch convene in the same location: vision directs the hand and tool; the hand senses, manipulates tools, and coaxes material to take an envisioned form. Such a tight alliance of eye and hand has traditionally been fundamental to tasks in which material is artfully worked into form, and a similar condition may hold for other domains as well, like surgery, component assembly, or repair and maintenance training.

Yet, in most computer-assisted applications, the hands manipulate a pointing device while the gaze is turned to a screen. Such offset display configurations, which direct eyes displayward and hands to controllers elsewhere, comfortably and naturally facilitate some activities (like driving a car, playing a musical score, or moving a cursor with a mouse). In such familiar manual tasks, vision is useful for transporting the arm/hand to an object, but manipulation can often proceed quite well either in the absence of vision (after physical contact is made) or with the monitoring of visual feedback provided elsewhere. However, this may not be the best paradigm for all tasks—especially those which are harder to control, and which require constant, precise visual and haptic monitoring and near-constant manual response.

In this paper, we describe an early prototype system which spatially reunites the focus of eye and hand and also takes a step toward bringing materials-working pleasure to computer-assisted design. While there are several conventional kinds of visual display hardware suitable for coincident visuo-haptic display of 3D information—head-tracked LCD shutter glasses or head-mounted displays (HMDs) combined with stereo computer graphics, for instance—and while many of these visual display options currently offer adequate image quality and frame rate, they are cumbersome to wear and have attendant viewing problems. Instead, we are using a prototype glasses-free autostereoscopic display which allows untethered movement throughout the view zone.

This prototype display device, MIT’s second-generation holographic video (holovideo) system, is capable of rendering moving, monochromatic, free-standing, three-dimensional holographic images. Currently, this device has its own shortcomings, but many will be addressed by future research and routine advances in technology.

For position tracking and force display, we use the Phantom™ haptic interface, a three degree-of-freedom (d.o.f.) mechanical linkage with a three-d.o.f. passive gimbal that supports a simple thimble or stylus used by the hand. The haptic and visual workspaces are physically co-located so that a single, free-standing multimodal image of a cylinder to be “carved” is presented.

In the coincident workspace, a user can see the stylus interacting with the holographic image while feeling forces that result from contact with its force model (Figure 1). As the user pushes the tool into the simulated cylinder, it deforms in a non-volume-conserving way, and an arbitrary surface of revolution can be fashioned. Ultimately, the finished computer model can be dispatched to a 3D printer, providing an actual hardcopy of the design (Figure 5b). In effect, a user “sculpts light” and produces a physical result.

With these combined apparati and supporting computation, we are beginning to investigate high-quality multimodal display and interaction that is more Newtonian than symbolic, which may be preferable for tasks which have traditionally been practiced in this fashion.
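The interaction loop described above amounts to: read the 3-d.o.f. stylus-tip position from the haptic device, test it against the force model of the cylinder, and, on penetration, display a restoring force to the hand. A minimal penalty-based (spring) force computation in that spirit might look as follows; the function names, the stiffness value, and the cylinder-centered coordinate frame are illustrative assumptions, not the authors' implementation:

```python
import math

# Illustrative spring ("penalty") contact force for a cylinder whose long
# axis lies along z. The stiffness is an assumed, plausible value for a
# stylus-type force-feedback device, not a parameter from the paper.
STIFFNESS = 800.0  # N/m

def contact_force(tip, radius):
    """Return the (fx, fy, fz) force to display at the stylus tip.

    tip    -- (x, y, z) stylus-tip position in meters, cylinder-centered
    radius -- cylinder radius in meters at the tip's axial position
    """
    x, y, _ = tip
    r = math.hypot(x, y)           # radial distance from the cylinder axis
    if r >= radius or r == 0.0:
        return (0.0, 0.0, 0.0)     # tip outside the surface: no force
    depth = radius - r             # penetration depth
    nx, ny = x / r, y / r          # outward surface normal (purely radial)
    # Spring force pushes the tip back out along the surface normal.
    return (STIFFNESS * depth * nx, STIFFNESS * depth * ny, 0.0)
```

In a real haptic servo loop this computation would run at a high, fixed rate (commonly around 1 kHz for such devices) so that the displayed stiffness feels crisp rather than spongy.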
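Because the workpiece spins about its long axis while the stylus removes material, a cut made at one angle removes material at every angle, so the model can be kept as a one-dimensional radius profile sampled along the axis: pushing the tool below the current radius at some axial position simply lowers the profile there. This is what makes the deformation non-volume-conserving and yields an arbitrary surface of revolution. A sketch of that update rule, with the profile representation and all names being our assumptions rather than the paper's code:

```python
import math

def carve(profile, tip, axis_len):
    """Lower the radius profile wherever the stylus tip has cut in.

    profile  -- list of radii (meters), uniformly sampled along the axis
    tip      -- (x, y, z) stylus-tip position; axis along z, z in [0, axis_len]
    axis_len -- workpiece length in meters
    """
    x, y, z = tip
    if not (0.0 <= z <= axis_len):
        return profile                 # tip is off the end of the workpiece
    # Nearest axial sample to the tip's z position.
    i = min(int(z / axis_len * (len(profile) - 1) + 0.5), len(profile) - 1)
    tip_r = math.hypot(x, y)           # radial distance of the tip
    # Spinning workpiece: only the radius at this axial sample matters,
    # and material is removed (never restored), hence min().
    profile[i] = min(profile[i], tip_r)
    return profile

# Example: a 0.1 m workpiece of radius 0.04 m, cut to 0.025 m at mid-length.
profile = [0.04] * 11
carve(profile, (0.025, 0.0, 0.05), 0.1)   # profile[5] is now 0.025
```

Re-running `carve` every servo tick keeps the force model and the displayed hologram derived from the same profile, which is what keeps what is seen and what is felt in registration.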
Figure 1. Coincident display (overlapping holographic and force images).

BACKGROUND
One well-established approach to joining the eyes and hands in a coincident workspace is to employ manipulable “wired” physical objects as controllers for digital objects or processes. Several research efforts are investigating the use of physical handles to virtual objects by attaching interfacing sensors or other electronics to real objects. These tangible objects then act as physical controllers for virtual processes, providing whole-hand interaction and rich visuo-haptic feedback that seems both natural and obvious. In these applications, a participant perceives his or her own body interacting with physical interface objects, but usually also monitors the action-outcome on another, separate display or in the ambient environment.

One such project, called Graspable User Interface: Bricks [12], employed basic physical objects called “bricks,” which were physical instantiations of virtual objects or functions. Once a brick was attached to a virtual object, the computational model became itself functionally graspable. A brick might be used, for instance, to geometrically transform a virtual object to which it was attached, availing direct control through physical handles. Tactile and kinesthetic feedback are also present and exploitable with such an interface; thus the ability to operate quickly and efficiently, using two-handed input, is possible. Extending this work to incorporate a small set of differentiable geometries and material textures among the bricks could increase a person’s ability to iden- […] -tion of active objects (metaDESK) as input.

The Digital Desk project represents an attempt to render the computer desktop onto a real desk surface, and to merge common physical desk-objects with computational desktop functionality. The system employs a video projector situated above the desk for display of information, and a nearly co-located camera to monitor a person’s movements in the workspace. Hand gestures are interpreted by a computational vision algorithm to be requests for various utilities that the system offers.

The metaDESK project attempts to physically instantiate many of the familiar GUI mechanisms (menus, windows, icons, widgets, etc.) in the form of tangible user interfaces (TUIs). The mapping between physical icons and virtual ones can be literally or poetically assigned; for instance, placing a small physical model of MIT’s Great Dome on the desk surface might cause an elaborate map of MIT to be displayed. In addition to summoning the map to the display and indicating its position, the physical Great Dome icon can be moved or rotated to correspondingly transform the map.

The metaDESK system design includes a flat rear-projected desk surface, physical icons, and functional instruments for use on the surface. The state of these physical objects is sensed and used as application input. Not only can the state of virtual objects be changed by manual interaction with physical objects, but part of the display itself can be “hand-held” and likewise manipulated. The metaDESK project underscores the seemingly inexhaustible palette of ideas for instrumenting interactive space, harkening to the rich set of sensibilities and skills people develop from years of experience with real-world objects, tools, and their physics.

A wide variety of virtual reality (VR) and augmented reality (AR) application areas, such as telesurgery, maintenance repair and training, computer modeling, and entertainment, employ haptic interaction and high-quality computer graphics to study, interact with, or modify data. Here, many applications employ instrumented force-feedback, rather than physical objects and whole-hand interaction, and trade off sensory richness for flexibility in physical modeling and visual/force rendering.

Most existing applications offset the visual and manual workspaces, but several diverse efforts to conjoin eye and hand in interactive applications exist. An example themati-