Osaka University Knowledge Archive (OUKA): https://ir.library.osaka-u.ac.jp/
DOI: https://doi.org/10.18910/55854

Adaptive Display of Virtual Content for Improving Usability and Safety in Mixed and Augmented Reality

Submitted to the Graduate School of Information Science and Technology, Osaka University

January 2016

Jason Edward ORLOSKY

Thesis Committee
Prof. Haruo Takemura (Osaka University)
Prof. Takao Onoye (Osaka University)
Assoc. Prof. Yuichi Itoh (Osaka University)
Assoc. Prof. Kiyoshi Kiyokawa (Osaka University)

List of Publications

Journals
1) Orlosky, J., Toyama, T., Kiyokawa, K., and Sonntag, D. ModulAR: Eye-controlled Vision Augmentations for Head Mounted Displays. In IEEE Transactions on Visualization and Computer Graphics (Proc. ISMAR), Vol. 21, No. 11, pp. 1259–1268, 2015. (Section 5.3)
2) Orlosky, J., Toyama, T., Sonntag, D., and Kiyokawa, K. The Role of Focus in Advanced Visual Interfaces. In KI-Künstliche Intelligenz, pp. 1–10, 2015. (Chapter 4)
3) Orlosky, J., Shigeno, T., Kiyokawa, K., and Takemura, H. Text Input Evaluation with a Torso-mounted QWERTY Keyboard in Wearable Computing. In Transactions of the Virtual Reality Society of Japan, Vol. 19, No. 2, pp. 117–120, 2014. (Section 6.3)
4) Kishishita, N., Orlosky, J., Kiyokawa, K., Mashita, T., and Takemura, H. Investigation on the Peripheral Visual Field for Information Display with Wide-view See-through HMDs. In Transactions of the Virtual Reality Society of Japan, Vol. 19, No. 2, pp. 121–130, 2014.
5) Orlosky, J., Kiyokawa, K., and Takemura, H. Managing Mobile Text in Head Mounted Displays: Studies on Visual Preference and Text Placement. In Mobile Computing and Communications Review, Vol. 18, No. 2, pp. 20–31, 2014. (Section 3.4)

Peer Reviewed Conferences
1) Orlosky, J., Toyama, T., Kiyokawa, K., and Sonntag, D. ModulAR: Eye-controlled Vision Augmentations for Head Mounted Displays. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (same as journal #1), 2015. (Section 5.3)
2) Orlosky, J., Toyama, T., Kiyokawa, K., and Sonntag, D. Halo Content: Context-aware Viewspace Management for Non-invasive Augmented Reality. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI), pp. 369–373, 2015. (Section 3.5)
3) Toyama, T., Orlosky, J., Sonntag, D., and Kiyokawa, K. Attention Engagement and Cognitive State Analysis for Augmented Reality Text Display Functions. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI), pp. 322–332, 2015.
4) Kishishita, N., Kiyokawa, K., Orlosky, J., Mashita, T., Takemura, H., and Kruijff, E. Analysing the effects of a wide field of view augmented reality display on search performance in divided attention tasks. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 177–186, 2014.
5) Orlosky, J., Wu, Q., Kiyokawa, K., Takemura, H., and Nitschke, C. Fisheye vision: peripheral spatial compression for improved field of view in head mounted displays. In Proceedings of the 2nd ACM Symposium on Spatial User Interaction (SUI), pp. 54–61, 2014. (Section 5.2)
6) Toyama, T., Orlosky, J., Sonntag, D., and Kiyokawa, K. A natural interface for multi-focal plane head mounted displays using 3D gaze. In Proceedings of the Working Conference on Advanced Visual Interfaces (AVI), pp. 25–32, 2014.
7) Orlosky, J., Kiyokawa, K., and Takemura, H. Dynamic Text Management for See-through Wearable and Heads-up Display Systems. In Proceedings of the International Conference on Intelligent User Interfaces (IUI), pp. 363–370, 2013. (Section 3.4) Best Paper

Peer Reviewed Posters, Demos, Consortia, and Workshops
1) Orlosky, J., Weber, M., Gu, Y., Sonntag, D., and Sosnovsky, S. An Interactive Pedestrian Environment Simulator for Cognitive Monitoring and Evaluation. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI) Companion, pp. 57–60, 2015. (Section 6.5)
2) Orlosky, J. Depth based interaction and field of view manipulation for augmented reality. In Proceedings of the Adjunct Publication of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST), pp. 5–8, 2014. (Chapter 5)
3) Orlosky, J., Toyama, T., Sonntag, D., Sarkany, A., and Lorincz, A. On-body multi-input indoor localization for dynamic emergency scenarios: fusion of magnetic tracking and optical character recognition with mixed-reality display. In Proceedings of the 2014 IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops), pp. 320–325, 2014. (Section 6.2)
4) Orlosky, J., Kiyokawa, K., and Takemura, H. Towards intelligent view management: A study of manual text placement tendencies in mobile environments using video see-through displays. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 281–282, 2013. (Section 3.4.9)
5) Orlosky, J., Kiyokawa, K., and Takemura, H. Management and Manipulation of Text in Dynamic Mixed Reality Workspaces. In Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1–4, 2013. (Chapter 3)
6) Kishishita, N., Orlosky, J., Mashita, T., Kiyokawa, K., and Takemura, H. Poster: Investigation on the peripheral visual field for information display with real and virtual wide field-of-view see-through HMDs. In Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI), pp. 143–144, 2013.
7) Walker, B., Godfrey, M., Orlosky, J., Bruce, C., and Sanford, J. Aquarium Sonification: Soundscapes for Accessible Dynamic Informal Learning Environments. In Proceedings of the 12th International Conference on Auditory Display (ICAD), pp. 238–241, 2006.

Other Non-peer Reviewed Work
1) Orlosky, J. Adaptive Display for Augmented Reality. The 9th Young Researcher's Retreat, 2015. Best Poster
2) Orlosky, J., Toyama, T., Sonntag, D., and Kiyokawa, K. Using Eye-Gaze and Visualization to Augment Memory. In Distributed, Ambient, and Pervasive Interactions, pp. 282–291, 2014. (Section 6.4)
3) Voros, G., Miksztai Rethey, B., Vero, A., Orlosky, J., Toyama, T., Sonntag, D., and Lorincz, A. Mobile AAC Solutions using Gaze Tracking and Optical Character Recognition. In Proceedings of the 16th Biennial Conference of the International Society for Augmentative and Alternative Communication (ISAAC), 2014.
4) Vero, A., Miksztai Rethey, B., Pinter, B., Voros, G., Orlosky, J., Toyama, T., Sonntag, D., and Lorincz, A. Gaze Tracking and Language Model for Flexible Augmentative and Alternative Communication in Practical Scenarios. In Proceedings of the 16th Biennial Conference of the International Society for Augmentative and Alternative Communication (ISAAC), 2014.
5) Orlosky, J., Kiyokawa, K., and Takemura, H. Automated Text Management for Wearable and See-through Display Systems. In Proceedings of the 6th Korea-Japan Workshop on Mixed Reality (KJMR), 2013. (Section 3.4)
6) Orlosky, J., Kiyokawa, K., and Takemura, H. Scene Analysis for Improving Visibility in Wearable Displays. In Proceedings of the 16th Meeting on Image Recognition and Understanding (MIRU), 2013. (Section 3.4)
7) Orlosky, J., Katzakis, N., Kiyokawa, K., and Takemura, H. Torso Keyboard: A Wearable Text Entry Device That Can Be Used While Sitting, Standing or Walking. In Proceedings of the 10th Asia Pacific Conference on Human Computer Interaction (APCHI), pp. 781–782, 2012. (Section 6.3)

Abstract

In mobile augmented reality, a number of barriers still exist that make head worn devices unsafe and difficult to use. One of these problems is the display of content in or around the user's field of view, which can result in occlusion of physical objects, distractions, interference with conversations, and a limited view of the user's natural environment. This thesis proposes the use of dynamic content display and field of view manipulation techniques as a step towards overcoming these safety and usability issues. More specifically, I introduce novel strategies for dynamic content movement, gaze depth tracking techniques for automated content management, and hands-free spatial manipulation of the user's field of view. In combination with a number of new head mounted display prototypes, these methods can decrease the invasiveness and increase the usability of head worn displays and related mixed and augmented reality applications. In addition to proposing frameworks and strategies for improving usability and safety, new information about the human eye, brain, and perception of virtual content is revealed and discussed.

In order to conduct an initial comparison of standard mobile interfaces to head mounted displays, I first describe pilot experiments that study user tendencies related to viewing and placing text in mobile environments. The experiments studied smartphone and head mounted display use, and tested general environmental awareness and performance between the two devices for concentration-intensive mobile tasks. Results showed that head mounted displays already have some advantages in terms of environmental awareness, but more importantly, users would prefer text that is affixed to visible locations in the background rather than affixed to a single point on the display.