GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation - A Framework
GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation - A Framework

Wai Lun Khoo

A dissertation submitted to the Graduate Faculty in Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy, The City University of New York.

Committee Members:
Professor Zhigang Zhu
Professor Tony Ro
Professor YingLi Tian
Dr. Aries Arditi

August 30, 2016

© 2016 Wai Lun Khoo
All Rights Reserved

This manuscript has been read and accepted for the Graduate Faculty in Computer Science in satisfaction of the dissertation requirement for the degree of Doctor of Philosophy.

Date    Professor Zhigang Zhu, Chair of Examining Committee
Date    Professor Robert Haralick, Executive Officer

Supervisory Committee:
Professor Zhigang Zhu
Professor Tony Ro
Professor YingLi Tian
Dr. Aries Arditi

The City University of New York

Abstract

GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation - A Framework
by Wai Lun Khoo
Advisor: Professor Zhigang Zhu

In the last few decades, a variety of assistive technologies (AT) have been developed to improve the quality of life of visually impaired people. These include providing an independent means of travel and thus better access to education and places of work. There is, however, no metric for comparing and benchmarking these technologies, especially multimodal systems. In this dissertation, we propose GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation, a framework that allows developers and consumers to assess their technologies in a functional and objective manner. The framework is based on three foundations: multimodality, gamification, and virtual reality.
It facilitates fuller and more controlled data collection; rapid prototyping and testing of multimodal ATs; benchmarking of heterogeneous ATs; and conversion of these evaluation tools into simulation or training tools. Our contributions include: (1) a unified evaluation framework: developing an evaluative approach for multimodal visual ATs; (2) a sustainable evaluation: employing virtual environments and gamification techniques to create engaging games for users while collecting experimental data for analysis; (3) a novel psychophysics evaluation: enabling researchers to conduct psychophysics evaluation even when the experiment is a navigational task; and (4) a novel collaborative environment: enabling developers to rapidly prototype and test their ATs with users, an early stakeholder involvement that fosters communication between developers and users.

This dissertation first provides background on assistive technologies and the motivation for the framework. This is followed by a detailed description of the GIVE-ME framework, with particular attention to its user interfaces, foundations, and components. Four applications are then presented that describe how the framework is applied, with results and discussion for each application. Finally, conclusions and a few directions for future work are presented in the last chapter.

Acknowledgements

This work is supported by the U.S. National Science Foundation (NSF) through Award # EFRI-1137172 and the CUNY Graduate Center Science Fellowship (2009-2014), for my Ph.D. research. The work has also been supported in part by other grants, such as the EFRI-REM supplement (for my mentoring and outreach activities), NSF CBET-1160046 and VentureWell (formerly NCIIA) through Award # 10087-12 (for my collaboration with the capstone class students in surveying and evaluation), and NSF IIP-1416396 (for support of my design for evaluation of VISTA Wearable).

I want to thank my advisor, Prof.
Zhigang Zhu, for his constant support, advice, and encouragement. It has been an amazing 10-year journey with Prof. Zhu. I still remember the first piece of advice he gave me when I was in his undergraduate Data Structures class: "You may not be viewed as a good student if you don't know how to solve a problem, but it would be a very good first step if you at least understand why you cannot solve the problem." His research, teaching, mentoring, and collaboration styles have been imprinted on me ever since, and I hope to pass on some of what I've learned to my students as I begin my academic career as a professor.

I also want to thank my committee members, Drs. Tony Ro, YingLi Tian, and Aries Arditi, for their valuable and insightful comments and feedback on my dissertation work, and for helping me pursue my dream job of being a professor.

I also want to thank my colleagues and fellow students/candidates (former and current), Dr. Tao Wang, Dr. Hao Tang, Dr. Edgardo Molina, Ms. Farnaz Abtahi, Mr. Martin Goldberg, Mr. Feng Hu, Mr. Wei Li, and Mr. Greg Olmschenk, for their friendship and collaboration, and for allowing me to brainstorm my ideas with them. I also want to thank other collaborators, both in research and in mentoring students, Dr. Yuying Gosser, Dr. Lei Ai, and Ms. Camille Santistevan, for helping me in more ways than I can count.

Finally, I want to thank my mentees Mr. John Settineri, Mr. August Seiple, Ms. Takami Nishimoto, Mr. Joey Knapp, Mr. Joey Pan, and many other unnamed mentees, for helping me in various research projects and, most important of all, for teaching me how to be a better mentor.

To my parents and siblings, Albert, Doris, Lillian, and Jonathan. To my spouse, Chris, without whom I wouldn't have made it this far.

Contents

Abstract
Acknowledgements
List of Tables
List of Figures

1 Introduction
  1.1 Motivation
  1.2 Assistive Technology
  1.3 Problem Statement
  1.4 Overview

2 Related Works
  2.1 Virtual Reality
  2.2 Gamification
  2.3 Multimodality

3 GIVE-ME: A Multimodal Evaluation Framework
  3.1 Objectives
  3.2 Framework: User Interface
    3.2.1 Controllers
    3.2.2 Multimodal Stimulators
    3.2.3 Measurement Device
  3.3 Framework: Foundations and Components
    3.3.1 Multimodal (Virtual) Sensors
    3.3.2 Game Mechanics
    3.3.3 Virtual Environment Toolbox
    3.3.4 Data Collection
  3.4 Intended Use
  3.5 Software Implementation
    3.5.1 Workflow
    3.5.2 GIVE-ME Package

4 Application 1 - BrainportNav
  4.1 Experiment
  4.2 Controller and Its Setup
  4.3 Stimulator
  4.4 Virtual Sensor and Transducing
  4.5 Game Mechanics
  4.6 Environment Design
  4.7 Measurement Device & Data Collection
  4.8 Results
  4.9 Discussion

5 Application 2 - CrowdSourceNav
  5.1 Experiment
  5.2 Controller and Its Setup
  5.3 Stimulator
  5.4 Virtual Sensor and Transducing
  5.5 Game Mechanics
  5.6 Environment Design
  5.7 Measurement Device & Data Collection
  5.8 Results
  5.9 Discussion

6 Application 3 - VibrotactileNav
  6.1 Experiment
  6.2 Controller and Its Setup
  6.3 Multimodal Stimulators
  6.4 Virtual Sensor and Transducing
  6.5 Game Mechanics
  6.6 Environment Design
  6.7 Measurement Device & Data Collection
  6.8 Results
  6.9 Discussion
    6.9.1 EEG Data Collection

7 Application 4 - VistaNav
  7.1 Experiment
  7.2 Controller and Its Setup
  7.3 Multimodal Stimulators
  7.4 Virtual Sensor and Transducing
  7.5 Game Mechanics
  7.6 Environment Design
  7.7 Measurement Device & Data Collection
  7.8 Results
  7.9 Discussion

8 Conclusion and Discussion
  8.1 GIVE-ME Framework
  8.2 Discussion
  8.3 Future Work

A Visual Function and Disorders
B System Usability Scale (SUS)
C Candidate's Publications

Bibliography

List of Tables

2.1 Multimodal ATs
4.1 Accuracy for four subjects
5.1 Independent and dependent variables
5.2 Crowd vs. ground truth times in first experiment
5.3 Crowd times in second experiment
5.4 Survey Results
6.1 Chicken Finder data: Time and Bumping
7.1 Subjects descriptive statistics for all
7.2 Subjects descriptive statistics for VIP
7.3 Subjects descriptive statistics for sighted (for both experiments)
7.4 Group means for all configurations and subjects
7.5 Group means for all configurations, for VIP only