Smart Object, not Smart Environment: Cooperative Augmentation of Smart Objects Using Projector-Camera Systems

David Molyneaux

April 2008

Lancaster University

Submitted in part fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science

Abstract

Smart objects research explores embedding sensing and computing into everyday objects, augmenting objects to be a source of information on their identity, state, and context in the physical world. A major challenge for the design of smart objects is to preserve their original appearance, purpose and function. Consequently, many research projects have focussed on adding input capabilities to objects, while neglecting the output capability that would provide a balanced interface.

This thesis presents a new approach that adds output capability through smart objects cooperating with projector-camera systems. The concept of Cooperative Augmentation enables the knowledge required for visual detection, tracking and projection on smart objects to be embedded within the object itself. This allows projector-camera systems to provide generic display services that any smart object can use spontaneously to achieve non-invasive, interactive projected displays on its surfaces. Smart objects cooperate to achieve this by describing their appearance directly to the projector-camera systems and by using embedded sensing to constrain the visual detection process.

We investigate natural-appearance, vision-based detection methods and perform an experimental study specifically analysing the increase in detection performance achieved with movement sensing in the target object. We find that detection performance increases significantly with sensing, indicating that the combination of different sensing modalities is important and that different objects require different appearance representations and detection methods. These studies inform the design and implementation of a system architecture which serves as the basis for three applications demonstrating visual detection, integration of sensing, projection, interaction with displays and knowledge updating.

The displays achieved with Cooperative Augmentation allow any smart object to deliver visual feedback to users from implicit and explicit interaction with information represented or sensed by the physical object, supporting objects as both input and output medium simultaneously. This contributes to the central vision of Ubiquitous Computing by enabling users to address tasks in physical space with direct manipulation and receive feedback on the objects themselves, where it belongs in the real world.

Preface

This dissertation has not been submitted in support of an application for another degree at this or any other university. It is the result of my own work and includes nothing which is the outcome of work done in collaboration except where specifically indicated. Excerpts of this thesis have been published in conference and workshop manuscripts, most notably: [Molyneaux and Gellersen 2006], [Molyneaux, Gellersen et al. 2007], [Molyneaux, Gellersen et al. 2008].

Acknowledgments

This work would not have been possible without the support and facilities offered by my supervisors Prof. Hans Gellersen and Dr. Gerd Kortuem. I thank them both for their never-ending encouragement, inspiration and guidance throughout my time as a PhD student.
I consider myself lucky to have been given the opportunity to work in such an interesting field, and honoured to have been part of the exceptional research group they have created at Lancaster. The unique lab environment, with its large international crowd of people, makes it a special place, and one that I will cherish forever.

I wish to thank all the other colleagues in the Ubicomp group at Lancaster who, either through direct assistance or in discussions, have helped this work in some way: Henoc Agbota, Mohammed Alloulah, Bashar Al Takrouri, Urs Bischoff, Florian Block, Carl Fischer, Roswitha Gostner, Andrew Greaves, Yukang Guo, Robert Hardy, Mike Hazas, Henrik Jernström, Serko Katsikian, Chris Kray, Kristof Van Laerhoven, Rene Mayrhofer, Matthew Oppenheim, Faizul Abdul Ridzab, Enrico Rukzio, Dominik Schmidt, Jennifer Sheridan, Martin Strohbach, Vasughi Sundramoorthy, Nic Villar, Jamie Ward and Martyn Welsh. Thanks also to all the visitors and other alumni: Aras Bilgen, Martin Berchtold, Andreas Bulling, Clara Fernández de Castro, Manuel García-Herranz del Olmo (especially for all the boiled eggs), Alina Hang, Pablo Haya, Paul Holleis, Matthew Jervis, Russell Johnson, Yasue Kishino, Matthias Kranz, Masood Masoodian, Christoph März, Michael Müller, Kavitha Muthukrishnan, Albrecht Schmidt, Sara Streng, Tsutomu Terada and all those other people I have encountered at Lancaster during my time here.

Special thanks go to some of the most important people at Lancaster – the secretaries and support staff who have helped me during my time here: Aimee, Ben, Cath, Chris, Gillian, Helen, Ian, Jess, Liz, Sarah, Steve and Trish. Also thanks to Gordon for being a superb head of department, and to CAKES for interesting presentations and away-days.

Others outside of Lancaster have also helped greatly with this work, most notably Bernt Schiele in Darmstadt – without his support and guidance much of this work would not have been possible. I am greatly indebted to him. Thanks also to everyone from Multimodal Interactive Systems in Darmstadt for the great visits: Krystian Mikolajczyk, Bastian Leibe, Gyuri Dorko, Edgar Seemann, Mario Fritz, Andreas Zinnen, Nicky Kern, Tam Huynh, Ulrich Steinhoff, Niko Majer and Ursula Paeckel.

The FLUIDUM project, and specifically Andreas Butz and Mira Spassova, deserve a big mention, as that is where it all started. Without your initial support, Java code and spark of enthusiasm for the steerable projector work, I would never have travelled as far as I did during my PhD journey. Thanks also to those whose code was incorporated in some form in the demos, most notably Mark Pupilli from Bristol for the basic particle filter and Rob Hess for the SIFT implementation I modified for my work.

Thanks to both my examiners, Mike Hazas and Andrew Calway (University of Bristol), for the interesting discussions and insights during the viva that have enhanced this thesis greatly beyond its initial submission.

Most importantly, I thank my family – Mum, Dad, Jane and my grandparents – for all the love and support (in every sense of the word) they have given me over the years. Rose also deserves a special mention. Without her I would still be stuck in my second year with no demo or paper. She has inspired me, kept me on track, put up with my grumpy mornings, and read probably more about steerable projectors and Cooperative Augmentation in the last two years than she would otherwise want to... Thank you for being there for me, Rose; I hope I can do the same for you.
Finally, I would like to thank the projects and organisations that funded this work, namely the EPSRC, the Ministry of Economic Affairs of the Netherlands through the BSIK project Smart Surroundings under contract no. 03060, and Lancaster University through the e-Campus grant.

David Molyneaux
Lancaster, September 2008

Contents

CHAPTER 1  INTRODUCTION
1.1 SMART OBJECT OUTPUT
1.2 COOPERATIVE AUGMENTATION
1.3 CHALLENGES
1.4 CONTRIBUTIONS
1.5 THESIS STRUCTURE

CHAPTER 2  RELATED WORK
2.1 INTRODUCTION
2.2 UBIQUITOUS COMPUTING
2.2.1 Sensor Nodes
2.2.2 Smart Objects
2.2.3 Tangible User Interfaces
2.2.4 Input-Output Imbalance
2.3 PROJECTOR-BASED AUGMENTED REALITY
2.3.1 Projector-Camera Systems
2.3.2 Mobile, Handheld and Wearable Projector-Camera Systems
2.3.3 Multi-Projector Display Systems
2.3.4 Steerable Projector-Camera Systems
2.3.5 Interaction with