Stereoscopic Label Placement

Linköping Studies in Science and Technology, Dissertations, No. 1293

Stereoscopic Label Placement: Reducing Distraction and Ambiguity in Visually Cluttered Displays

Stephen Daniel Peterson
Department of Science and Technology, Linköping University
SE-601 74 Norrköping, Sweden
Norrköping, 2009

Copyright © 2009 Stephen D. Peterson, [email protected]
Division of Visual Information Technology and Applications (VITA)
Department of Science and Technology, Linköping University
SE-601 74 Norrköping, Sweden

ISBN 978-91-7393-469-5
ISSN 0345-7524

This thesis is available online through Linköping University Electronic Press: www.ep.liu.se
Printed by LiU-Tryck, Linköping, Sweden 2009

Abstract

With increasing information density and complexity, computer displays may become visually cluttered, adversely affecting overall usability. Text labels can significantly add to visual clutter in graphical user interfaces, but are generally kept legible through specific label placement algorithms that seek visual separation of labels and other objects in the 2D view plane. This work studies an alternative approach: can overlapping labels be visually segregated by distributing them in stereoscopic depth?

The fact that we have two forward-looking eyes yields stereoscopic disparity: each eye has a slightly different perspective on objects in the visual field. Disparity is used for depth perception by the human visual system, and is therefore also provided by stereoscopic 3D displays to produce a sense of depth.

This work has shown that a stereoscopic label placement algorithm yields user performance comparable with existing algorithms that separate labels in the view plane. At the same time, such stereoscopic label placement is subjectively rated significantly less disturbing than traditional methods. Furthermore, it does not allow for potentially ambiguous spatial relationships between labels and background objects inherent to labels separated in the view plane. These findings are important for display systems where disturbance, distraction and ambiguity of the overlay can negatively impact safety and efficiency of the system, including the reference application of this work: an augmented vision system for Air Traffic Control towers.

Acknowledgments

I would first and foremost like to thank my collaborator, Dr. Stephen R. Ellis at NASA Ames Research Center in Mountain View, California, for his support and guidance throughout this thesis, and for the opportunity of conducting parts of my work as a guest researcher at Ames. His advice has been instrumental during the course of this work, and I cannot think of a more knowledgeable and inspiring collaborator.

I would like to thank my thesis advisors Professor Anders Ynnerman and Dr. Matthew Cooper at the VITA group, Linköping University, for their continuous support and numerous proof-readings of manuscripts during the thesis.

Many thanks are also directed to staff at the Eurocontrol Experimental Centre in Brétigny-sur-Orge, France: Vu Duong for hosting me during my Master's thesis project and subsequently initializing this PhD thesis; Anna Wennerberg, Raymond Dowdall and Peter Eriksen for interesting discussions and feedback, in particular concerning ATC operations; and Marc Bourgois for valuable advice and for introducing Dr. Ellis to our group.

I extend my gratitude to all participants in my experiments at Eurocontrol, NASA Ames, and the VITA group. This thesis was made possible thanks to your time and effort!

I have had the privilege to meet and interact with many colleagues and collaborators in the various research labs over the past years. I would like to thank my friend and long-time colleague Magnus Axholt for his involvement and feedback on my work. I imagine that we will collaborate on many projects to come. Other colleagues include Ella Pinska, Monica Tavanti, Konrad Hofbauer, Ronish Joyekurun, Claus Gwiggner, Sonja Straussberger, Frank Dowling, Simone Rozzi, Horst Hering and Marco Gibellini.

Finally I would like to thank my friends and family for your deeply appreciated support and interest in my work.

The main part of this thesis was funded through a PhD scholarship from Eurocontrol. Additional funding was provided by Linköping University and the VITA group. The visit and experiments at NASA Ames were also funded in part through the NASA Grant NNA 06 CB28A to the San José State University Research Foundation.

Contents

1 Introduction
  1.1 Background
  1.2 Reference Application
    1.2.1 Augmented Reality
    1.2.2 Augmented Vision for Air Traffic Control
  1.3 Research Challenges
  1.4 Thesis Overview
2 Stereoscopic Imaging
  2.1 Depth Perception
    2.1.1 Pictorial Depth Cues
    2.1.2 Oculomotor Depth Cues
    2.1.3 Binocular Depth Cues
  2.2 Stereopsis
  2.3 Stereoscopic Displays for Augmented Reality
    2.3.1 Multiplexing Techniques
    2.3.2 Non-Suitable Multiplexing for AR
  2.4 Stereoscopic Disparity and Perception
    2.4.1 Image Segregation
    2.4.2 Visual Search
    2.4.3 Motion Detection
3 Text Labels and Visual Clutter
  3.1 Visual Clutter
  3.2 Label Adjustment and Filtering
  3.3 Automatic Label Placement
    3.3.1 Cartography
    3.3.2 Interactive External Labeling
    3.3.3 Text Motion and Perception
4 Stereoscopic Label Placement
  4.1 Overall Goals
  4.2 Method
  4.3 Experimental Platform
    4.3.1 Projection Screen Setup
    4.3.2 HMD Setup
    4.3.3 VR Workbench Setup
  4.4 Summary of Studies
    4.4.1 Paper I
    4.4.2 Paper II
    4.4.3 Paper III
    4.4.4 Paper IV
    4.4.5 Paper V
5 Discussion
  5.1 General Discussion
    5.1.1 Design Aspects of Stereoscopic Label Placement
    5.1.2 Comparisons with Other Approaches
  5.2 Main Conclusions
  5.3 Future Work
Bibliography

List of Publications

I. Objective and Subjective Assessment of Stereoscopically Separated Labels in Augmented Reality
   S. D. Peterson, M. Axholt and S. R. Ellis
   in Computers & Graphics, vol. 33, no. 1, February 2009

II. Label Segregation by Remapping Stereoscopic Depth in Far-Field Augmented Reality
   S. D. Peterson, M. Axholt and S. R. Ellis
   in Proceedings of the IEEE & ACM Int'l Symposium on Mixed and Augmented Reality (ISMAR), Cambridge, UK, September 2008

III. Visual Clutter Management in Augmented Reality: Effects of Three Label Separation Methods on Spatial Judgments
   S. D. Peterson, M. Axholt, M. Cooper and S. R. Ellis
   in Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI), Lafayette (LA), USA, March 2009

IV. Evaluation of Alternative Label Placement Techniques in Dynamic Virtual Environments
   S. D. Peterson, M. Axholt, M. Cooper and S. R. Ellis
   in Proceedings of the International Symposium on Smart Graphics, Salamanca, Spain, May 2009

V. Detection Thresholds for Label Motion in Visually Cluttered Displays
   S. D. Peterson, M. Axholt, M. Cooper and S. R. Ellis
   to appear in Proceedings of the IEEE Virtual Reality Conference, Waltham (MA), USA, March 2010

Related Publications

This section contains publications that relate to the presented work, but are not included in the thesis.

VI. Comparing Disparity Based Label Segregation in Augmented and Virtual Reality
   S. D. Peterson, M. Axholt and S. R. Ellis
   in Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST), Bordeaux, France, October 2008

VII. Managing Visual Clutter: A Generalized Technique for Label Segregation Using Stereoscopic Disparity
   S. D. Peterson, M. Axholt and S. R. Ellis
   in Proceedings of the IEEE Virtual Reality Conference, Reno (NV), USA, March 2008

VIII. Very Large Format Stereoscopic Head-up Display for the Airport Tower
   S. D. Peterson, M. Axholt and S. R. Ellis
   in Proceedings of the 16th Virtual Images Seminar, Paris, France, January 2007

Chapter 1: Introduction

1.1 Background

Labels are, in various forms, ubiquitous in our everyday life. We tend to label persons, things, and events, in order to categorize, understand and structure our surrounding environment. Physical labels also provide understanding and structure: they tell the price of milk in the grocery store, they classify the size of garments, and they identify each city on a map. Textual labels are therefore useful for providing contextual and supplemental information in a wide range of situations, where data is difficult to convey in non-textual form. Labels are also widely used in computer software, providing metadata in textual form
