Bringing the Physical to the Digital: a New Model for Tabletop Interaction


Otmar Hilliges

Dissertation at the Faculty of Mathematics, Computer Science and Statistics of the Ludwig-Maximilians-Universität München, submitted by Otmar Hilliges from München. Munich, 19 January 2009.

First examiner: Prof. Dr. Andreas Butz
Second examiner: Prof. Dr. Sheelagh Carpendale
Third examiner: Dr. Shahram Izadi
Fourth examiner: Dr. Andrew D. Wilson
Date of the oral examination: 22 July 2009

Abstract

This dissertation describes an exploration of digital tabletop interaction styles, with the ultimate goal of informing the design of a new model for tabletop interaction. In the context of this thesis, the term digital tabletop refers to an emerging class of devices that afford many novel ways of interacting with the digital by allowing users to directly touch information presented on large, horizontal displays. Being a relatively young field, many developments are in flux; hardware and software change at a fast pace, and many interesting alternative approaches are available at the same time. In our research we are especially interested in systems capable of sensing multiple contacts (e.g., fingers) and richer information such as the outline of whole hands or other physical objects. New sensor hardware enables new ways to interact with the digital. When embarking on the research for this thesis, the question of which interaction styles would be appropriate for this new class of devices was an open one, with many equally promising answers. Many everyday activities rely on our hands' ability to skillfully control and manipulate physical objects. We seek to open up different possibilities to exploit our manual dexterity and to provide users with richer interaction possibilities.
This could be achieved through the use of physical objects as input mediators, or through virtual interfaces that behave in a more realistic fashion. In order to gain a better understanding of the underlying design space, we chose an approach organized into two phases. First, two prototypes, each representing a specific interaction style – namely gesture-based interaction and tangible interaction – were implemented. The flexibility of use afforded by the interface and the level of physicality afforded by the interface elements are introduced as evaluation criteria. Each approach's suitability to support the highly dynamic and often unstructured interactions typical for digital tabletops is analyzed based on these criteria. In a second stage, the lessons learned from these initial explorations inform the design of a novel model for digital tabletop interaction. This model combines rich multi-touch sensing with a three-dimensional environment enriched by a gaming physics simulation. The proposed approach enables users to interact with the virtual through richer quantities such as collision and friction, enabling a variety of fine-grained interactions using multiple fingers, whole hands, and physical objects. Our model makes digital tabletop interaction more "natural". However, because the interaction – the sensed input and the displayed output – is still bound to the surface, there is a fundamental limitation in manipulating objects using the third dimension. To address this issue, we present a technique that allows users to – conceptually – pick objects off the surface and control their position in 3D. Our goal has been to define a technique that completes our model for on-surface interaction and allows for "as-direct-as-possible" interactions. We also present two hardware prototypes capable of sensing the users' interactions beyond the table's surface.
Finally, we present visual feedback mechanisms to give users the sense that they are actually lifting the objects off the surface.

This thesis contributes on various levels. We present several novel prototypes that we built and evaluated. We use these prototypes to systematically explore the design space of digital tabletop interaction. The flexibility of use afforded by the interaction style is introduced as a criterion alongside the physicality of the user interface elements. Each approach's suitability to support the highly dynamic and often unstructured interactions typical for digital tabletops is analyzed. We present a new model for tabletop interaction that increases the fidelity of interaction possible in such settings. Finally, we extend this model to enable as-direct-as-possible interactions with 3D data, interacting from above the table's surface.

Zusammenfassung

The topic of this dissertation is the exploration of interaction styles for digital tabletop computers; the ultimate goal is a new model for tabletop interaction. Within this work, the term 'digital tabletop' stands for a new, emerging class of devices that enable many new ways of interacting with digital content – a class of devices that allow users to interact directly with digital information presented on large, horizontal displays. Since this is a relatively young field of research, many developments are in flux. Hardware and software evolve at high speed, and at present there are many different, partly competing approaches. Of central interest for our research are devices that are capable of sensing multiple contact points (e.g., fingertips) as well as richer information such as the outlines of whole hands or other objects.
When we manipulate objects in the physical world, we benefit from the dexterity of our hands. This work attempts to exploit this manual dexterity in order to open up richer interaction possibilities for the user. This could be achieved, for example, through physical objects as interaction mediators, or through virtual objects whose behavior more closely resembles that of real objects. To gain a better understanding of the subject matter, we chose a research approach divided into two phases. In a first phase, two interaction styles were examined by means of prototypes: gesture-based interaction on the one hand, and interaction with physical objects ('tangible interaction') on the other. The flexibility and physicality of these approaches are defined as criteria for evaluating the interaction styles. The two paradigms are then examined with regard to their suitability as general interaction models. In the second phase, the insights gained serve as the basis for developing a new model for tabletop interaction. This model combines advanced multi-touch sensing with a virtual 3D world whose objects are controlled by a physics simulation from the computer-games domain. The proposed approach allows users to interact with virtual objects through concepts familiar from the real world, such as collisions and friction. This enables a range of complex interactions, for example interactions with several fingers at once, with the whole hand, or through the use of physical objects. The model represents a step toward even more natural and intuitive tabletop interaction.
However, the user actions sensed by the system and the displayed information are still bound to the display surface. This results in a profound limitation when manipulating three-dimensional (3D) objects. To address this problem, a technique is presented that allows objects to be – conceptually – picked up off the table and their position then controlled in 3D space. Our goal here was to complement the original model and to design an interaction technique that allows interaction with virtual objects as directly as possible. To accomplish this, two new hardware prototypes were developed that make it possible to sense user interaction taking place at a greater distance from the display surface. In addition, visual feedback mechanisms are presented that are intended to create the illusion that the user actually lifts the virtual objects off the surface. This dissertation makes scientific contributions on several levels. Several novel prototypes are presented that were built and evaluated during the research. These prototypes are used to systematically explore the design space, with flexibility of use and conveyed physicality as criteria. The interaction styles under consideration are examined as to whether they support the highly dynamic and often unstructured interactions typical of tabletop interaction. In a further step, a new model for tabletop interaction is presented that increases the quality of these interactions. Finally, an extension of this model is presented that makes it possible to interact with virtual 3D objects as directly as possible, with objects displayed on the (two-dimensional) display while the interaction …
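The model summarized above replaces widget-style commands with physical quantities such as collision and friction. As a loose illustration of that idea – not code from the thesis; the 1D physics, class names, and all numeric values are invented for the example – each sensed contact can be treated as a kinematic proxy in a simulation, so that virtual objects are pushed and slowed rather than commanded:

```python
from dataclasses import dataclass

@dataclass
class Contact:            # a sensed touch point, moved by the user
    x: float
    vx: float
    radius: float = 5.0

@dataclass
class VirtualObject:      # a dynamic body controlled by the simulation
    x: float
    vx: float
    radius: float = 10.0
    friction: float = 0.9

def step(obj: VirtualObject, contacts: list[Contact], dt: float) -> None:
    """Advance the object: contacts push it on overlap; friction damps it."""
    for c in contacts:
        overlap = (c.radius + obj.radius) - abs(c.x - obj.x)
        if overlap > 0:                                   # collision: proxy transfers its velocity
            obj.vx = c.vx
            obj.x += overlap if c.x < obj.x else -overlap  # resolve penetration
    obj.x += obj.vx * dt
    obj.vx *= obj.friction                                # friction-like damping

obj = VirtualObject(x=20.0, vx=0.0)
finger = Contact(x=5.0, vx=50.0)
for _ in range(10):       # a finger sweeping right pushes the object along
    step(obj, [finger], dt=0.1)
    finger.x += finger.vx * 0.1
print(round(obj.x, 1))
```

The point of the design is that no "move" command exists anywhere: the object's motion emerges entirely from collision and damping, which is what lets fingers, whole hands, and physical objects all act on virtual content in the same way.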
Recommended publications
  • Supporting Everyday Activities Through Always-Available Mobile Computing
    ©Copyright 2010 Timothy Scott Saponas. Supporting Everyday Activities through Always-Available Mobile Computing. Timothy Scott Saponas. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, University of Washington, 2010. Program Authorized to Offer Degree: Department of Computer Science and Engineering.

    University of Washington Graduate School: This is to certify that I have examined this copy of a doctoral dissertation by Timothy Scott Saponas and have found that it is complete and satisfactory in all respects, and that any and all revisions required by the final examining committee have been made. Co-Chairs of the Supervisory Committee: James A. Landay, Desney S. Tan. Reading Committee: James A. Landay, Desney S. Tan, James A. Fogarty.

    In presenting this dissertation in partial fulfillment of the requirements for the doctoral degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of the dissertation is allowable only for scholarly purposes, consistent with "fair use" as prescribed in the U.S. Copyright Law. Requests for copying or reproduction of this dissertation may be referred to ProQuest Information and Learning, 300 North Zeeb Road, Ann Arbor, MI 48106-1346, 1-800-521-0600, to whom the author has granted "the right to reproduce and sell (a) copies of the manuscript in microform and/or (b) printed copies of the manuscript made from microform."

    University of Washington Abstract: Supporting Everyday Activities through Always-Available Mobile Computing. T. Scott Saponas. Co-Chairs of the Supervisory Committee: Professor James A. …
  • Smart Workstation with a Kinect-Projector Assistance System for Manual Assembly
    Lorenzo Andreoli. Smart Workstation with a Kinect-Projector Assistance System for Manual Assembly. Master's thesis in Global Manufacturing Management. Supervisor: Fabio Sgarbossa. June 2019. Norwegian University of Science and Technology (NTNU), Faculty of Engineering, Department of Mechanical and Industrial Engineering.

    Executive Summary: Although automation has increased more and more in manufacturing companies over the last decades, manual labor is still used in a variety of complex tasks and is currently irreplaceable, especially in assembly operations. The problem of assisting and supporting the human worker during potentially complex assembly tasks is therefore very relevant. Clear and easy-to-read assembly instructions, error-proofing methods, and an intuitive user interface have the potential to reduce the cognitive workload of the operator, increase productivity, improve quality, reduce defects, and consequently reduce costs. Digital technologies such as augmented reality and motion-recognition sensors can help assembly workers in their tasks and have been the subject of research with increasing interest over the years. In this thesis, we develop a prototype of a smart workstation equipped with a Kinect-projector assistance system for manual assembly. Following a V-model systems-development approach, we identify key requirements for assistance systems, both for continuously supporting workers and for teaching assembly steps to workers.
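A Kinect-projector assistance system of this kind must map points detected in the camera image onto projector pixels so that projected instructions land on the right spot of the workbench. For a flat work surface that mapping is a 3x3 homography. The sketch below is illustrative only: the correspondence points are invented, and the thesis may calibrate differently.

```python
import numpy as np

def estimate_homography(src, dst):
    """Solve for H (up to scale) from four point correspondences via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    return vt[-1].reshape(3, 3)       # null-space vector = flattened H

def project(H, pt):
    """Apply homography H to a 2D camera point, returning projector pixels."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Made-up calibration: where the camera image corners appear in projector space.
cam_corners = [(0, 0), (640, 0), (640, 480), (0, 480)]
proj_corners = [(100, 50), (900, 60), (880, 700), (90, 690)]
H = estimate_homography(cam_corners, proj_corners)
u, v = project(H, (320, 240))         # center of the camera image
print(round(u), round(v))
```

In practice the four correspondences would come from projecting known markers and detecting them in the camera image; with a non-flat bench a per-pixel depth-based mapping would replace the single homography.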
  • Using a Depth Camera As a Touch Sensor
    ITS 2010: Devices & Algorithms, November 7-10, 2010, Saarbrücken, Germany. Using a Depth Camera as a Touch Sensor. Andrew D. Wilson, Microsoft Research, Redmond, WA 98052, USA. [email protected]

    ABSTRACT: We explore the application of depth-sensing cameras to detect touch on a tabletop. Limits of depth-estimate resolution and line-of-sight requirements dictate that the determination of the moment of touch will not be as precise as that of more direct sensing techniques such as capacitive touch screens. However, using a depth-sensing camera to detect touch has significant advantages: first, the interactive surface need not be instrumented. Secondly, this approach allows touch sensing on non-flat surfaces. Finally, information about the shape of the users and their arms and hands above the surface may be exploited in useful ways, such as determining hover state, or that multiple touches are from the same hand or from the same user. We present techniques and findings using Microsoft Kinect.

    In comparison with more traditional techniques, such as capacitive sensors, the use of depth cameras to sense touch has the following advantages:
    • The interactive surface need not be instrumented.
    • The interactive surface need not be flat.
    • Information about the shape of the users and users' arms and hands above the surface may be exploited in useful ways, such as determining hover state, or that multiple touches are from the same hand or user.
    However, given the depth-estimate resolution of today's depth-sensing cameras, and the various limitations imposed by viewing the user and table from above, we expect that relying exclusively on the depth camera will not give as …
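The thresholding idea behind this abstract – classify as "touch" the pixels that lie within a narrow band just above a per-pixel model of the (possibly non-flat) surface – can be illustrated as follows. The band limits and the synthetic depth values are invented for the example; they are not the paper's parameters.

```python
import numpy as np

def detect_touch(depth, surface, near=3.0, far=15.0):
    """Return a boolean mask: pixels closer to the camera than the surface
    model by between `near` and `far` millimetres count as touch."""
    height = surface - depth          # how far above the surface each pixel is
    return (height > near) & (height < far)

surface = np.full((4, 4), 1000.0)     # surface model: 1000 mm from the camera
depth = surface.copy()
depth[1, 1] = 992.0                   # fingertip 8 mm above the surface -> touch
depth[2, 2] = 950.0                   # hand hovering 50 mm above -> not touch
mask = detect_touch(depth, surface)
print(mask[1, 1], mask[2, 2])         # -> True False
```

Because `surface` is a per-pixel model rather than a single plane, the same test works on curved or cluttered surfaces, which is exactly the advantage the abstract claims over capacitive sensing.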
  • (MRT): a Low-Cost Teleconferencing Framework for Mixed-Reality Applications
    Mixed Reality Tabletop (MRT): A Low-Cost Teleconferencing Framework for Mixed-Reality Applications. Daniel Bekins, Scott Yost, Matthew Garrett, Jonathan Deutsch, Win Mar Htay, Dongyan Xu, and Daniel Aliaga. Department of Computer Science at Purdue University.

    ABSTRACT: Today's technology enables a rich set of virtual and mixed-reality applications and provides a degree of connectivity and interactivity beyond that of traditional teleconferencing scenarios. In this paper, we present the Mixed-Reality Tabletop (MRT), an example of a teleconferencing framework for networked mixed reality in which real and virtual worlds coexist and users can focus on the task instead of computer interaction. For example, students could use real-world objects to participate in physical simulations such as orbital motion, collisions, and fluid flow in a common virtual environment. Our framework isolates the low-level system details from the developer and provides a simple programming interface for developing novel applications in as …

    There is a wide spectrum of application scenarios made possible by the MRT environment. Given its low-cost infrastructure, schools can install the system to enable children at several different stations to work collaboratively on a virtual puzzle, a painting, or learn a skill like origami from a remotely stationed teacher. The MRT keeps children focused on the learning task instead of computer interaction. Students could also use real-world objects to participate in physical simulations such as orbital motion, collisions, and fluid flow in a common virtual environment. Because there are no physical input devices involved, several students can participate on the same station without restriction. Our general framework provides the essential functionality common to networked mixed-reality environments wrapped in a flexible application programmer interface (API).
  • A Surface Technology with an Electronically Switchable Diffuser
    Going Beyond the Display: A Surface Technology with an Electronically Switchable Diffuser. Shahram Izadi, Steve Hodges, Stuart Taylor, Nicolas Villar, Alex Butler (Microsoft Research Cambridge, 7 JJ Thomson Avenue, Cambridge CB3 0FB, UK) and Dan Rosenfeld, Jonathan Westhues (Microsoft Corporation, 1 Microsoft Way, Redmond WA, USA). {shahrami, shodges, stuart, danr, nvillar, dab, jonawest}@microsoft.com

    Figure 1: We present a new rear projection-vision surface technology that augments the typical interactions afforded by multi-touch and tangible tabletops with the ability to project and sense both through and beyond the display. In this example, an image is projected so it appears on the main surface (far left). A second image is projected through the display onto a sheet of projection film placed on the surface (middle left). This image is maintained on the film as it is lifted off the main surface (middle right). Finally, our technology allows both projections to appear simultaneously, one displayed on the surface and the other on the film above, with neither image contaminating the other (far right).

    Keywords: surface technologies, projection-vision, dual projection, switchable diffusers, optics, hardware.

    ABSTRACT: We introduce a new type of interactive surface technology based on a switchable projection screen which can be made diffuse or clear under electronic control. The screen can be continuously switched between these two states so quickly that the change is imperceptible to the human eye. It is then possible to rear-project what is perceived as a stable image onto the display surface, when the screen is in fact transparent for half the time.

    INTRODUCTION: Interactive surfaces allow us to manipulate digital content in new ways, beyond what is possible with the desktop computer. There are many compelling aspects to such systems – for example the interactions they afford have analogies to real-world interactions, where we manipulate objects.
  • Extended Multitouch: Recovering Touch Posture and Differentiating Users Using a Depth Camera
    Extended Multitouch: Recovering Touch Posture and Differentiating Users using a Depth Camera. Sundar Murugappan, Vinayak, Niklas Elmqvist, Karthik Ramani. School of Mechanical Engineering and School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA. {smurugap, fvinayak, elm, ramani} [email protected]

    Figure 1. Extracting finger and hand posture from a Kinect depth camera (left) and integrating with a pen+touch sketch interface (right).

    ABSTRACT: Multitouch surfaces are becoming prevalent, but most existing technologies are only capable of detecting the user's actual points of contact on the surface and not the identity, posture, and handedness of the user. In this paper, we define the concept of extended multitouch interaction as a richer input modality that includes all of this information. We further present a practical solution to achieve this on tabletop displays based on mounting a single commodity depth camera above a horizontal surface. This will enable us to not only detect when the surface is being touched, but also recover the user's exact finger and hand posture, as well as distinguish between different users and their handedness.

    INTRODUCTION: Multitouch surfaces are quickly becoming ubiquitous: from wristwatch-sized music players and pocket-sized smartphones to tablets, digital tabletops, and wall-sized displays, virtually every surface in our everyday surroundings may soon come to life with digital imagery and natural touch input. However, achieving this vision of ubiquitous [27] surface computing requires overcoming several technological hurdles, key among them being how to augment any physical surface with touch sensing. Conventional multitouch technology relies on either capacitive sensors embedded in the surface itself, or rear-mounted cameras capable of detecting …
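One way to realize the "same hand or same user" distinction described above is to group touch points by connectivity through the arm/hand silhouette that the overhead depth camera sees at a slice above the surface. The toy sketch below illustrates only that grouping step; the grid, touch coordinates, and 4-connectivity are illustrative choices, not the paper's implementation.

```python
def label(mask):
    """4-connected component labelling of a binary grid via flood fill."""
    labels = [[0] * len(row) for row in mask]
    current = 0
    for y in range(len(mask)):
        for x in range(len(mask[0])):
            if mask[y][x] and not labels[y][x]:
                current += 1                    # start a new component
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < len(mask) and 0 <= cx < len(mask[0])
                            and mask[cy][cx] and not labels[cy][cx]):
                        labels[cy][cx] = current
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels

# Hand/arm silhouette above the surface: one blob on the left, one on the right.
mask = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 1, 1],
]
labels = label(mask)
touches = [(0, 0), (2, 1), (2, 3)]              # (row, col) of detected touches
hands = [labels[r][c] for r, c in touches]
print(hands)    # -> [1, 1, 2]: the first two touches share a hand
```

Touches that fall inside the same labelled blob can then be attributed to one hand, and blobs that extend to the table edge can be attributed to distinct users.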
  • Extended Multitouch: Recovering Touch Posture, Handedness, and User Identity Using a Depth Camera
    Extended Multitouch: Recovering Touch Posture, Handedness, and User Identity using a Depth Camera. Sundar Murugappan, Vinayak, Niklas Elmqvist, Karthik Ramani. School of Mechanical Engineering and School of Electrical Engineering, Purdue University, West Lafayette, IN 47907. [email protected]

    Figure 1. Extracting finger and hand posture from a Kinect depth camera (left) and integrating with a pen+touch sketch interface (right).

    Author Keywords: multitouch, tabletop displays, depth cameras, pen + touch, user studies.

    ABSTRACT: Multitouch surfaces are becoming prevalent, but most existing technologies are only capable of detecting the user's actual points of contact on the surface and not the identity, posture, and handedness of the user. In this paper, we define the concept of extended multitouch interaction as a richer input modality that includes all of this information. We further present a practical solution to achieve this on tabletop displays based on mounting a single commodity depth camera above a horizontal surface. This will enable us to not only detect when the surface is being touched, but also recover the user's exact finger and hand posture, as well as distinguish between different users and their handedness. We validate our approach using two user studies, and deploy the technology in a scratchpad application as well as in a sketching tool.

    INTRODUCTION: Multitouch surfaces are quickly becoming ubiquitous: from wristwatch-sized music players and pocket-sized smartphones to tablets, digital tabletops, and wall-sized displays, virtually every surface in our everyday surroundings may soon come to life with digital imagery and natural touch input. However, achieving this vision of ubiquitous [28] surface computing requires overcoming several technological hurdles, key among them being how to augment any physical surface with touch sensing.
  • Beyond Flat Surface Computing
    BEYOND FLAT SURFACE COMPUTING. Hrvoje Benko, Microsoft Research. ACM MultiMedia 2009 – Beijing, China.

    Surface computing: HoloWall '97; Augmented Surfaces '99; Microsoft Surface; Apple iPhone; DiamondTouch '01; PlayAnywhere '05; Perceptive Pixel; TouchWall; Smart Table; FTIR Display '06; TouchLight '06.

    Surface computing is an interface where, instead of through indirect input devices (mice and keyboards), the user interacts directly with the content on the screen's surface – direct un-instrumented interaction. "Surface computing is the term for the use of a specialized computer GUI in which traditional GUI elements are replaced by intuitive, everyday objects." (Wikipedia) Content is the interface.

    Digital vs. real. Beyond flat surface computing: transcend the flat two-dimensional surface and the typical 2D media associated with it, and explore curved, three-dimensional interfaces that cross the boundary between the digital and the physical world. Direct un-instrumented interaction; content is the interface.

    Two approaches: 1. Non-flat interactive surfaces. 2. Depth-aware interactions above the surface.

    Approach #1: enable touch and gesture interactions on non-flat surfaces. Sphere (Benko, Wilson, & Balakrishnan, ACM UIST 2008): reusing the optical path; sensing and projection distortions; unique properties = opportunities – a borderless but finite display, a non-visible hemisphere, no master user position/orientation, and smooth transitions between vertical and horizontal, near and far, shared and private. Sphere interactions; omni-directional projector. Everywhere Displays (Pinhanez et al. '01). Pinch-the-Sky Dome (Benko, Wilson, and Fay, 2009). Omni-directional content.