Extended Multitouch: Recovering Touch Posture and Differentiating Users Using a Depth Camera
Sundar Murugappan1, Vinayak1, Niklas Elmqvist2, Karthik Ramani1,2
1School of Mechanical Engineering and 2School of Electrical and Computer Engineering
Purdue University, West Lafayette, IN 47907, USA
{smurugap, fvinayak, elm, [email protected]

Figure 1. Extracting finger and hand posture from a Kinect depth camera (left) and integrating with a pen+touch sketch interface (right).

ABSTRACT

Multitouch surfaces are becoming prevalent, but most existing technologies are only capable of detecting the user's actual points of contact on the surface and not the identity, posture, and handedness of the user. In this paper, we define the concept of extended multitouch interaction as a richer input modality that includes all of this information. We further present a practical solution to achieve this on tabletop displays based on mounting a single commodity depth camera above a horizontal surface. This will enable us to not only detect when the surface is being touched, but also recover the user's exact finger and hand posture, as well as distinguish between different users and their handedness. We validate our approach using two user studies, and deploy the technique in a scratchpad tool and in a pen + touch sketch tool.

ACM Classification Keywords

H.5.2 Information Interfaces and Presentation: User Interfaces—Interaction styles; I.3.6 Computer Graphics: Methodology and Techniques—Interaction techniques

General Terms

Design, Algorithms, Human Factors

Author Keywords

Multitouch, tabletop, depth camera, pen + touch, evaluation.

INTRODUCTION

Multitouch surfaces are quickly becoming ubiquitous: from wristwatch-sized music players and pocket-sized smartphones to tablets, digital tabletops, and wall-sized displays, virtually every surface in our everyday surroundings may soon come to life with digital imagery and natural touch input. However, achieving this vision of ubiquitous [27] surface computing requires overcoming several technological hurdles, key among them being how to augment any physical surface with touch sensing. Conventional multitouch technology relies on either capacitive sensors embedded in the surface itself, or rear-mounted cameras capable of detecting the touch points of the user's hand on the surface [10]. Both approaches depend on heavily instrumenting the surface, which is not feasible for widespread deployment.

In this context, Wilson's idea [30] of utilizing a front-mounted depth camera to sense touch input on any physical surface is particularly promising for the following reasons:

• Minimal instrumentation: Because they are front-mounted, depth cameras integrate well with both projected imagery as well as conventional screens;
• Surface characteristics: Any surface, digital or inanimate, flat or non-flat, horizontal or vertical, can be instrumented to support touch interaction;
• On and above: Interaction both on as well as above the surface can be detected and utilized for input;
• Low cost: Depth cameras have recently become commodity products with the rise of the Microsoft Kinect, originally developed for full-body interaction with the Xbox game console, but usable with any computer.

In this paper, we define a richer touch interaction modality that we call extended multitouch (EMT), which includes not only sensing multiple points of touch on and above a surface using a depth camera, but also (1) recovering finger, wrist and hand posture of the user, as well as (2) differentiating users and (3) their handedness. We have implemented a Windows utility that interfaces with a Kinect camera, extracts the above data from the depth image in real-time, and publishes the information as a global hook to other applications, or using the TUIO [16] protocol. To validate our approach, we have also conducted two empirical user studies: one for measuring the tracking accuracy of the touch sensing functionality, and the other for measuring the finger and hand posture accuracy of our utility.
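For concreteness, the following is a minimal sketch, not the authors' actual utility, of what publishing one frame of touch points over TUIO might look like. It assumes the third-party python-osc package, the conventional TUIO port 3333, and a simplified reading of the TUIO 1.1 /tuio/2Dcur cursor profile; velocity and acceleration fields are left at zero, though a full implementation would compute them.

# Hypothetical sketch (not the authors' utility): publishing one frame of
# touch points in the spirit of the TUIO 1.1 /tuio/2Dcur cursor profile.
# Requires the third-party python-osc package (pip install python-osc).
from pythonosc.udp_client import SimpleUDPClient

TUIO_PORT = 3333  # conventional TUIO port (assumption)
client = SimpleUDPClient("127.0.0.1", TUIO_PORT)

def publish_touches(frame_id, touches):
    """Send one frame of touches; `touches` is a list of
    (session_id, x, y) tuples with x, y normalized to [0, 1]."""
    # "alive" lists the session ids currently on the surface.
    client.send_message("/tuio/2Dcur", ["alive"] + [s for s, _, _ in touches])
    # One "set" message per touch; velocity and acceleration are
    # left at 0.0 here rather than computed.
    for session_id, x, y in touches:
        client.send_message("/tuio/2Dcur",
                            ["set", session_id, x, y, 0.0, 0.0, 0.0])
    # "fseq" carries the frame sequence number so clients can order frames.
    client.send_message("/tuio/2Dcur", ["fseq", frame_id])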
The ability to not only turn any physical surface into a touch-sensitive one, but also to recover full finger and hand posture of any interaction performed on the surface, opens up an entirely new design space of interaction modalities. We have implemented two separate touch applications for the purpose of exploring this design space. The first is a prototype scratchpad where multiple users can draw simple vector graphics using their fingers, each finger on each user being separately identified and drawing a unique color and stroke. The second application, uSketch, is an advanced sketching application for creating beautified mechanical engineering sketches that can be used for further finite element analysis. uSketch integrates pen-based input, which we use for content creation, with our touch sensing functionality, which we use for navigation and mode changes.

RELATED WORK

Our work is concerned with touch interaction, natural user interfaces, and the integration of several input modalities.

Multitouch Technologies

Several different technologies exist for sensing multiple touches on a surface, including capacitance, RFID, and vision-based approaches. Vision-based approaches are particularly interesting for our purposes since they are suitable for surfaces with large form factors, such as tabletops and walls [10, 19, 29]. These systems rely on the presence of a dedicated surface for interaction.

Recently, depth-sensing cameras have begun to be used in various interactive applications where any surface can be made interactive. Wilson [30] explored the use of depth-sensing cameras to sense touch on any surface. OmniTouch [11] is a wearable depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces. LightSpace [31] is a small room installation that explores interactions on, above, and between surfaces by combining multiple depth cameras and projectors.
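The common core of these depth-camera approaches can be summarized in a few lines: calibrate a per-pixel depth model of the empty surface, then treat pixels lying within a thin band above that model as touch pixels and group them into fingertip blobs. The sketch below is our illustrative rendering of that idea, not code from any of the cited systems; the millimeter thresholds, the blob-size cutoff, and the use of scipy for connected components are all assumptions.

# Illustrative sketch of depth-camera touch sensing in the spirit of
# Wilson [30]: pixels within a thin band above a per-pixel model of the
# empty surface are treated as touch pixels and grouped into blobs.
from scipy import ndimage

TOUCH_MIN_MM = 4    # assumed band: pixel must be this far above the surface
TOUCH_MAX_MM = 20   # ...but no farther, or the finger is hovering, not touching
MIN_BLOB_AREA = 30  # assumed minimum pixel count; rejects sensor noise

def detect_touches(depth_mm, surface_mm):
    """Return (row, col) centroids of touch blobs in one depth frame.

    depth_mm   -- current depth image in millimeters (2D numpy array)
    surface_mm -- per-pixel depth of the empty surface, e.g. averaged
                  over many frames during a calibration step
    """
    height = surface_mm - depth_mm  # elevation of each pixel above the surface
    touch_mask = (height > TOUCH_MIN_MM) & (height < TOUCH_MAX_MM)

    # Connected components: each sufficiently large blob is one contact.
    labels, n = ndimage.label(touch_mask)
    return [ndimage.center_of_mass(touch_mask, labels, i)
            for i in range(1, n + 1)
            if (labels == i).sum() >= MIN_BLOB_AREA]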
Sensing Touch Properties

Going beyond sensing touch and deriving properties of the touches requires more advanced approaches. We classify existing approaches into vision-based techniques, instrumented glove-based tracking, and specialized hardware.

Vision-based Techniques

Malik et al. [19] used two cameras mounted above the surface to distinguish which finger of which hand touched a surface, but their system requires a black background for reliable recognition. Wang et al. [25] detect the directed finger orientation vector from contact information in real time by considering the dynamics of the finger landing process. Dang et al. [4] mapped fingers to their associated hand by making use of constraints applied to the touch position with finger orientation. Freeman et al. [7] captured the user's hand image on a Microsoft Surface to distinguish hand postures through vision techniques. Holz et al. [14] proposed a generalized perceived input point model in which fingerprints were extracted and recognized to provide accurate touch information and user identification. Finally, Zhang et al. [34] used the orientation of the touching finger to discriminate user touches on a tabletop.

Glove-based Techniques

Gloves are widely popular in virtual and augmented reality environments [22]. Wang et al. [26] used a single camera to track a hand wearing an ordinary cloth glove imprinted with a custom pattern. Marquardt et al. [20] used fiduciary-tagged gloves on interactive surfaces to gather input about many parts of a hand and to discriminate between one person's or multiple people's hands. Although simple and robust, these approaches require instrumented gear, which is not always desirable compared to using bare hands.

Specialized Hardware

Specialized hardware is capable of deriving additional touch properties beyond the mere touch points. Benko et al. [1] sensed muscle activity through forearm electromyography to provide additional information about hand and finger movements away from the surface. Lopes et al. [17] integrated touch with sound to expand the expressiveness of touch interfaces by supporting recognition of different body parts, such as fingers and knuckles. TapSense [12] is a similar acoustic sensing system that allows conventional surfaces to identify the type of object being used for input, such as fingertip, pad, and nail. These systems require additional instrumentation of either the user or the surface and take a significant amount of time to set up; background noise is also a deterrent to using acoustic sensing techniques.

Pen and Touch-based Interaction

A few research efforts have explored the combination of pen and touch [2, 8, 33]. Hinckley et al. [13] described techniques for direct pen + touch input in a prototype centered on note-taking and scrapbooking of materials. Lopes et al. [18] assessed the suitability of combining pen and multitouch interactions for 3D sketching. Through experimental evidence, they demonstrated that bimanual interactions simplify workflows and lower task times when compared to a pen-only interface.

OVERVIEW: EXTENDED MULTITOUCH INTERACTION

Traditional multitouch interaction [5] is generally defined as the capability to detect multiple points of touch input on a surface. Here, we enrich this concept into what we call extended multitouch interaction (EMT), defined as follows (a hypothetical event record illustrating the definition appears after the list):

• Detecting multiple touch points on and above a surface;
• Recovering finger, wrist and hand postures as well as the handedness (i.e., whether using the left or right hand); and
• Distinguishing between users interacting with the surface.
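To make this definition concrete, the following hypothetical record shows the kind of per-touch information an EMT system could emit, mirroring the three parts of the definition above; all field names are our own illustrative choices rather than a format specified in this paper.

# Hypothetical record for a single extended multitouch (EMT) event,
# mirroring the three-part definition above; field names are illustrative.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class EMTTouch:
    x: float                      # touch position (normalized surface coords)
    y: float
    height_mm: float              # 0 on contact; > 0 when hovering above
    finger: Optional[str] = None  # e.g. "index" or "thumb" (posture)
    wrist: Optional[Tuple[float, float]] = None  # estimated wrist position
    hand: Optional[str] = None    # "left" or "right" (handedness)
    user_id: Optional[int] = None # which user produced this touch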