The Development and Evaluation of a Virtual Intervention for Adults with Autism: A Design-based Research Study

A dissertation submitted to the

Graduate School of the

University of Cincinnati in partial fulfillment of the

requirements for the degree of

Doctor of Philosophy

in the Department of Instructional Design and Technology

of the College of Education, Criminal Justice, and Human Services

by

Noah Glaser

June 2020

Committee Chair Matthew Schmidt, Ph.D.

ABSTRACT

Interest in using technologies to provide therapeutic and educational platforms for individuals with ASD has been growing for decades. The contents of this article-based dissertation relate to the design, development, implementation, and evaluation of a virtual reality intervention called Virtuoso. Virtuoso is a suite of virtual reality technologies designed to promote the acquisition of adaptive skills related to catching public transportation for individuals with autism in an adult day program. This article-based dissertation addresses both pragmatic research-to-practice gaps and the advancement of the theory that underpins the design considerations of the intervention. The first article presents a design and development case study that describes how interdisciplinary processes were utilized to create a multi-user virtual environment for adults with autism. While research points to the difficulty of developing virtual environments, few studies have articulated the process in detail, which leaves a gap in the literature. The second article presents findings from a user-centric evaluation of the first prototype of Virtuoso. The evaluation focused on the acceptability, feasibility, ease-of-use, and relevance of the system, as well as the nature of users' experiences. The third article examines the character of cybersickness symptoms that participants experienced across three virtual reality research sessions that evolved in visual fidelity and task complexity. The nature of learner experiences while using commercial head-mounted displays, including the Oculus Rift and Google Cardboard, is also reported. The fourth article presents findings from a systematic review of the literature conducted in the spring of 2020. This review examines how virtual reality interventions for individuals with autism have been designed, from the early work in the field to the present day. Six scholarly databases were queried to synthesize relevant literature on virtual reality and autism spectrum disorders.
Data extracted includes technological descriptions, hardware and software design, target audience, and system goals.


Dedication:

This work is dedicated to Maggie Center.

Thank you for all of the notes of support you hid in my pockets, for all the time you spent listening to me read the same papers again and again, and for all the cups of coffee. The frog seems alright.


ACKNOWLEDGEMENTS

I would like to thank everyone who made this possible. This includes the members of my committee for all of the time and energy they took to review and provide feedback on my work. I greatly appreciate your advice and expertise. I would like to recognize Dr. Miriam Raider-Roth for sparking my passion for research when she invited me to join the Jewish Court of All Time research project when I was still in the master’s program. I especially want to thank Dr. Matthew Schmidt for all of his care and support as I grew as an academic. It has been an absolute pleasure working with him over the last four years. Without him I would be unable to use a calendar, an organizer, or the reply-all function. I look forward to continuing our collaboration as colleagues in the institution. I want to thank the Association for Educational Communications & Technology and other professional organizations that provide invaluable opportunities for emerging scholars. I would like to extend my gratitude towards the associates of the Impact Innovation program for participating in my study and the staff who helped coordinate it. I also want to thank my good friends and colleagues Tina Riedy and Heath Palmer. You two have been invaluable as designers and developers on many projects I have been part of, but more importantly as friends. I also want to thank all my family for constantly asking “when will you finish.” Without your constant probing I may still be in my first semester. Finally, I want to give a much deserved thank you from the bottom of my heart to my wife, Maggie. I could not have done this without you. Thank you for your patience, support, and invaluable insight.


TABLE OF CONTENTS

CHAPTER 1: Introduction 1

Virtuoso Design Framework 1

My Involvement in the Virtuoso Design-based Research Process 5

Descriptions of the Four Manuscripts 7

Scholarly Publication 1: The centrality of interdisciplinarity for overcoming design and development constraints of a multi-user virtual reality intervention for adults with autism: A design case 8

Scholarly Publication 2: Investigating the Usability and User Experience of a Virtual Reality Adaptive Skills Intervention for Adults with Autism Spectrum Disorder 9

Scholarly Publication 3: Investigating the Experience of Virtual Reality Head-Mounted Displays for Adults with Autism Spectrum Disorder 11

Scholarly Publication 4: Systematic Literature Review of Virtual Reality Intervention Design Patterns for Individuals with Autism Spectrum Disorders 12

References 14

CHAPTER 2: The centrality of interdisciplinarity for overcoming design and development constraints of a multi-user virtual reality intervention for adults with autism: A design case 16

Project Description 18

Methodology 20

Key Participants 21

Data Collection and Analysis 21


The Case 22

Developing a Realistic Terrain 26

Developing Realistic Buildings 27

Developing Realistic Scenarios 31

Discussion 34

References 36

CHAPTER 3: Investigating the Usability and User Experience of a Virtual Reality Adaptive Skills Intervention for Adults with Autism Spectrum Disorder 41

Abstract 42

Importance 44

Literature Review 47

Single-user, desktop-based VR applications 47

Multi-user, desktop-based VR applications 48

Headset-based VR applications 49

Project Description 50

Methods 53

Participants 56

Data Collection

System Usability Scale. 59

Adjectival Ease of Use scale. 59

Structured expert interviews. 59


Screen, webcam, and audio recordings. 60

Unstructured, post-usage testing interviews. 60

Field notes. 60

Analysis 60

Quantitative Analysis 61

Qualitative Analysis 61

Deductive analysis. 61

Coding procedures. 62

Agreement and reliability analyses. 62

Inductive analysis. 63

Findings 65

Expert Testers’ Perceptions of Usability 65

Participant Testers’ Perceptions of Usability 66

Expert Testers’ Perceptions of Feasibility and Relevance 67

Nature of Participant Testers’ User Experience 69

Discussion and Implications 76

References 80

CHAPTER 4: Investigating the Experience of Virtual Reality Head-Mounted Displays for Adults with Autism Spectrum Disorder 89

Abstract 90

1. Introduction 91


1.1 Background 92

1.2 Promises and Challenges of Virtual Reality for Autism 95

2. Methods 100

2.1 Research Design 101

2.2 Informed Consent 106

2.3 Site Description 106

2.4 Study Procedures 108

2.5 Data Sources 109

2.6 Data Analysis 110

3. Findings 114

3.1 RQ1: Character of Symptoms of Cybersickness Across Sessions for ASD Group 115

3.2 Research Question 2: How do MSAQ Scores Between ASD Group and NT Group Differ 119

3.3 Research Question 3: HMD Impact on Learner Experience 122

4. Discussion 127

4.1 Directions for future research and design 131

Appendixes 141

CHAPTER 5: Literature Review for VR and Adults with Autism Spectrum Disorder 152

Rationale 152

Methods 163

Protocol and Registration 164


Eligibility Criteria 164

Information Sources 165

Search 166

Study Selection 168

Data Collection Process 169

Data Items 169

Results 171

Study Selection Results 171

Aim 1: How VR interventions for individuals with ASD are defined and characterized 173

Aim 2: How VR systems can be categorized using Parsons’ (2016) schema 182

Aim 3: How distinguishing characteristics are instantiated in VR interventions designed for individuals with ASD 194

Discussion 198

Limitations 203

Conclusion 205

References 206

CHAPTER 6: Dissertation Conclusion 221

Dissertation Appendices 226

Appendix A: Summer Symposium Proposal Submission 226

Appendix B: Initial Submission to Summer Symposium 229


Appendix C: Email with Dr. Hokanson from the 2019 Summer Symposium 246

Appendix D: Response Letter to Committee Member Addressing Suggested Revisions 247


TABLE OF TABLES

CHAPTER 1: Introduction

CHAPTER 2: The centrality of interdisciplinarity for overcoming design and development constraints of a multi-user virtual reality intervention for adults with autism: A design case

Table 1. Interdisciplinary Processes and Skills Required to Create Virtuoso. 24

CHAPTER 3: Investigating the Usability and User Experience of a Virtual Reality Adaptive Skills Intervention for Adults with Autism Spectrum Disorder

Table 1. Study procedures across phases 55

Table 2. Demographics and Description of Expert Review Participants 56

Table 3. Participant demographics and measures of Peabody Picture Vocabulary Test (PPVT), Social Responsiveness Scale (SRS), and Behavior Rating Inventory of Executive Function (BRIEF) 57

Table 4. Supplemental codes appended to Kushniruk & Borycki’s (2015) video analysis coding scheme 62

Table 5. Interobserver agreement and Kappa for coding and duration in Virtuoso-SVVR and Virtuoso-VR 63

Table 6. Qualitative codes and operationalizations that emerged from inductive analysis 63

Table 7. Usability issues identified during expert testing 65

Table 8. Mean System Usability Scale (SUS) scores across participant testers 66

CHAPTER 4: Investigating the Experience of Virtual Reality Head-Mounted Displays for Adults with Autism Spectrum Disorder

Table 1. Participant demographics and measures of Peabody Picture Vocabulary Test (PPVT), Social Responsiveness Scale (SRS), and Behavior Rating Inventory of Executive Function (BRIEF) 103


Table 2. Neurotypical Participant Demographics 105

Table 3. Research Questions and how the Data Sources Address them 109

Table 4. Qualitative codes from the MSAQ related to the dimensions and symptoms of motion sickness 111

Table 5. Qualitative codes and operationalizations 113

Table 6. Aggregate MSAQ Dimension Scores Across Research Sessions 115

Table 7. Gross MSAQ Scores across NT Participants and Research Sessions 119

CHAPTER 5: Literature Review for VR and Adults with Autism Spectrum Disorder

Table 1. Unstructured search query consisting of search terms and Boolean operators. 167

Table 2. Filters or limits applied to each index used in the literature review 167

Table 3. Definitions of VR across projects/interventions. 174

Table 4. Description of VR projects/interventions for individuals with ASD as categorized using the schema suggested by Parsons (2016). 175

Table 5. Design factors of VR projects/interventions for individuals with ASD as categorized using the elaborated model of learning in 3D virtual learning environments (Dalgarno & Lee, 2010). 187

CHAPTER 6: Dissertation Conclusion


TABLE OF FIGURES

CHAPTER 1: Introduction

Figure 1. Generic DBR Model as outlined by McKenney and Reeves (2018). 2

Figure 2. Virtuoso Design Principles to Promote Transfer and Cognitive Accessibility. 5

Figure 3. Positioning my dissertation manuscripts within Virtuoso’s DBR cycles. 7

CHAPTER 2: The centrality of interdisciplinarity for overcoming design and development constraints of a multi-user virtual reality intervention for adults with autism: A design case

Figure 1. Screenshot of the campus terrain with a Google Map image placed onto a 3D mesh. 26

Figure 2. A photogrammetrically-created model, imported into High Fidelity. 29

Figure 3. Campus model created using a combination of GIS data, image editing, 3D modeling, and photogrammetry. 30

Figure 4. Office model created using architectural modeling software. 31

Figure 5. Single activity as represented in the Virtuoso procedural task analysis highlighting the ABA system of least prompts strategy. 32

Figure 6. Shuttle tracking application embedded in the virtual environment; an invisible player would trigger a shuttle model based on the information provided in this app. 34

CHAPTER 3: Investigating the Usability and User Experience of a Virtual Reality Adaptive Skills Intervention for Adults with Autism Spectrum Disorder

Figure 1. Intervention architecture. 53

CHAPTER 4: Investigating the Experience of Virtual Reality Head-Mounted Displays for Adults with Autism Spectrum Disorder

Figure 1. Structure of research sessions 103

Figure 2. Research Area outline to show where research activities were conducted. 107


Figure 3. Directed approach to data analysis. 111

Figure 4. Gross MSAQ scores across sessions. 116

Figure 5. This figure visualizes MSAQ subscores and their role in gross values. 116

Figure 6. Aggregate MSAQ scores across research sessions. 120

Figure 7. Gross MSAQ scores across Session 0 of the study. 121

Figure 8. Gross MSAQ scores across Session 1 of the study. 121

Figure 9. Gross MSAQ scores across Session 2 of the study. 122

CHAPTER 5: Literature Review for VR and Adults with Autism Spectrum Disorder

Figure 1. Flow diagram illustrating systematic search and selection process. 173

Figure 2. Overview of VR technologies used across 49 identified projects/interventions. 182

Figure 3. Participant age ranges across identified VR projects/interventions. 183

Figure 4. Breakdown of Physical Contexts Used in the Literature. 184

Figure 5. Clinical targets of Training Activities. 186

Figure 6. Design factors instantiated across VR projects/interventions by percent. 198

CHAPTER 6: Dissertation Conclusion


CHAPTER 1: Introduction

This dissertation has been organized and written in what is known as an article-based format. This style carries the same rigor and academic requirements as a traditional dissertation, but consists of manuscripts that have been prepared for publication. Article-based dissertations still require a demonstration of an author’s capacity for independent scholarship and contribution to knowledge, but because each manuscript is prepared for a targeted journal, length and style requirements may differ across articles. The manuscripts that make up this dissertation are related to my research and involvement in a project called Virtuoso. Virtuoso is a suite of virtual reality technologies designed to promote the development of adaptive skills related to catching public transportation for individuals with autism in an adult day program at the University of Cincinnati called Impact Innovation. In the following sections I provide a brief background on (1) the Virtuoso design framework, (2) my involvement in the Virtuoso design-based research process, and (3) the four scholarly publications included in this dissertation.

Virtuoso Design Framework

Virtuoso has been developed using a design-based research (DBR) methodological approach. DBR is an emerging technique that facilitates the development of meaningful and pragmatic solutions to problems that involve the use of educational technologies. As described by McKenney and Reeves (2018), DBR is a form of scientific inquiry with a commitment to developing theoretical and practical solutions in real-world contexts in collaboration with involved parties or stakeholders. Design-based research is rooted in the tradition of design experiments, in which theory informs design and design informs theory, with the experiment taking place in a context where the learning would actually occur (Brown, 1992). A multi-pronged approach is recommended that can take into consideration the various lenses that can influence learning (Brown, 1992). Therefore, DBR relies not on one singular study or line of inquiry, but rather on multiple iterations of both design and evaluation. McKenney and Reeves (2018) state that the insights and interventions of DBR evolve over multiple periods of investigation, development, testing, and refinement. Within the framework of a larger study, there are oftentimes several sub-studies taking place, each with its own iteration of inquiry and reasoning.

This framework is appropriate for the design and development of an intervention because it represents a fluid or flexible approach that allows for the use of diverse methodologies depending on the goals and purpose of any given phase within the overarching project. And while DBR does indeed allow for flexibility, it still prescribes a framework to help guide and shape the process. As seen in Figure 1 below, McKenney and Reeves (2018) outline a generic three-phase research trajectory that includes analysis/exploration, design/construction, and then evaluation/reflection.

Figure 1. Generic DBR Model as outlined by McKenney and Reeves (2018).

As a DBR study progresses, these phases tend to grow in scope and educational impact as the intervention is continuously refined (McKenney & Reeves, 2013). These individual phases of analysis/exploration, design/construction, and evaluation/reflection are referred to as micro-cycles. A full iteration that culminates in an evaluation/reflection that can be used to mature the intervention and theoretical understanding is known as a meso-cycle (McKenney & Reeves, 2018). With this loose but prescriptive framework in mind, I will now provide a brief project narrative that describes where my contributions and manuscripts fall within the DBR process.

In alignment with the framework of a DBR project, this work addresses an educational conundrum in both practical and theoretical terms. In practical terms, this work has been carried out in collaboration with the Impact Innovation program at the University of Cincinnati. Impact Innovation is an adult day program for individuals with autism who experience significant communication and behavioral challenges; it provides its participants with a high level of community integration through a curriculum that focuses on providing vocational opportunities, healthy lifestyles, and lifelong learning. Central to the goals of the Impact Innovation program is its associates’ access to public transportation, which is also a much-cited barrier to individuals with autism maintaining employment and social opportunities (Mechling & O’Brien, 2010; Felce, 1997; Carmien et al., 2005). As a product of the needs analysis conducted with the director of the Impact Innovation program, the project team determined there was a need for an experiential training approach for teaching Impact associates how to use the UC Shuttle system, which provides transportation in and around the campus setting. The needs assessment was part of a larger front-end analysis, which also included development of a project scope. The scope included a 3D digital re-creation of the University of Cincinnati’s terrain, buildings, and shuttle system that would serve as the virtual context of an instructional intervention centered on teaching Impact Innovation associates how to use the University of Cincinnati’s shuttle system.


The Virtuoso project addresses theoretical understanding, one of the outputs of DBR, through the establishment, operationalization, evaluation, and iterative refinement of design principles to promote the generalization of skills learned in the virtual context to the real world. Generalization is a pervasive challenge in educational research in general and in autism interventions specifically, and it is explored in depth in the corpus of manuscripts included in this dissertation. Generalization is considered to be the “outcome of behavior change and therapy programs, resulting in effects extraneous to original targeted changes” (Stokes & Osnes, 2016, p. 338). Generalization heuristics (Schmidt et al., 2020) guided the establishment of three overarching design principles that were embodied in the design of Virtuoso to support generalization: (1) provide learners support by way of instructional scaffolding (with appropriate fading), (2) harness the purported affordances of VR by providing sufficient photographic fidelity and accuracy (veridicality), and (3) leverage the VR properties of immersion to promote generalization heuristics. Collectively, these design principles provide participants a VR training experience that increases in complexity across activity structure, pedagogical strategy, and implementation context as they acquire superordinate and associated subordinate skills. Figure 2 illustrates the three design principles and examples of how they were embodied in the development of Virtuoso.


Figure 2. Virtuoso Design Principles to Promote Transfer and Cognitive Accessibility.

My Involvement in the Virtuoso Design-based Research Process

The research presented in this dissertation was performed across two meso-cycles of DBR. During the first meso-cycle, a prototype of the Virtuoso software suite was created. This prototype included an Android-based spherical video-based virtual reality application (Virtuoso-SVVR) and a fully immersive multi-user virtual environment built in High Fidelity, an open-source VR development kit (Virtuoso-VR). Development of these prototypes took place from the time I entered the doctoral program in the Fall of 2016 until the Spring of 2018. At the end of this first meso-cycle, a usage test was conducted during the Summer 2018 semester. In this usage test, Virtuoso was formatively evaluated through a user-centric lens that examined the usability of the software and the nature of user experiences. Two of the manuscripts presented in this article-based dissertation are based on findings from this first meso-cycle. A second meso-cycle began after the conclusion of the 2018 usage test.

At this point in the DBR process, the Virtuoso team entered a second meso-cycle of research and design. After conducting a preliminary analysis of the data (which led to a publication; Schmidt et al., 2019), I began a micro-cycle of analysis/exploration when I wrote a thematic literature review concerning the general use of virtual reality in the field for my first qualifying examination response. During this meso-cycle, I took part in another micro-cycle of design/construction to improve and refine the intervention and to prepare it for another micro-cycle of evaluation/reflection. In the spring of 2019, I led the development of this second prototype and fully recreated the environment originally built in High Fidelity on a new development platform, Unity. Porting the software to the Unity development platform allowed me to simplify administration procedures, provide greater system stability, and implement system features more efficiently. Footage for the Virtuoso-SVVR application was reshot to increase the visual fidelity of the videos. I also developed a way of delivering these 360-degree scenarios to a multitude of head-mounted displays. Completing this second round of design/construction allowed us to advance our design principles through an improved theoretical understanding and an intervention that became further refined and matured. After finishing this design process, the Virtuoso team ran another usage test in the summer of 2019.

The final two manuscripts in this article-based dissertation are based on findings from this second meso-cycle. During this second usage test, I conducted an independent cybersickness study, which is the subject of my third dissertation manuscript. While analyzing these data, I also returned to the literature to review what gaps exist in the field. I conducted a systematic literature review with the goal of identifying how researchers categorize the nature of their virtual reality applications and how those applications are being designed. The findings from this literature review are presented in my fourth dissertation manuscript. Figure 3 shows where each of my four dissertation manuscripts took place within the DBR cycles of Virtuoso.


Figure 3. Positioning my dissertation manuscripts within Virtuoso’s DBR cycles.

Descriptions of the Four Manuscripts

The following sections provide further detail on the four manuscripts that make up this article-based dissertation. These manuscripts are referred to as (a) Scholarly Publication 1: The centrality of interdisciplinarity for overcoming design and development constraints of a multi-user virtual reality intervention for adults with autism: A design case; (b) Scholarly Publication 2: Investigating the Usability and User Experience of a Virtual Reality Adaptive Skills Intervention for Adults with Autism Spectrum Disorder; (c) Scholarly Publication 3: Investigating the Experience of Virtual Reality Head-Mounted Displays for Adults with Autism Spectrum Disorder; and (d) Scholarly Publication 4: Systematic Literature Review of Virtual Reality Intervention Design Patterns for Individuals with Autism Spectrum Disorders. Brief descriptions, author contribution statements, and plans for submission are provided for each.


Scholarly Publication 1: The centrality of interdisciplinarity for overcoming design and development constraints of a multi-user virtual reality intervention for adults with autism: A design case

Chapter 2, entitled The centrality of interdisciplinarity for overcoming design and development constraints of a multi-user virtual reality intervention for adults with autism: A design case (Glaser, Schmidt, Schmidt, Beck, & Palmer, in press), presents a design and development case study that describes how a small research team utilized interdisciplinary processes to create a multi-user virtual environment for adults with autism. While studies have pointed to the difficulty of developing a multi-user virtual environment (MUVE), few have articulated the process in detail, which leaves a gap in the literature. The purpose of this manuscript is to describe and provide insight into how a small team created a MUVE that required expertise from educators, researchers, content experts, programmers, 3D modelers, designers, developers, and more. By providing a description of this case, others in the field can take the lessons learned from this project and apply similar solutions to their own design challenges. This manuscript is based on the design and development micro-cycle from the first meso-cycle of Virtuoso. This micro-cycle was completed through my graduate positions during the first two years of my doctoral program.

This case study was written as a book chapter that describes my role in the design and development of a virtual reality intervention. In 2019, the AECT Summer Research Symposium put out a call for proposals soliciting chapters related to the interdisciplinary nature of learning design. This call for proposals aligned well with the topic of the paper, as the case study describes salient challenges that emerged during the design and development of Virtuoso and how interdisciplinary methods were instrumental in overcoming those challenges.


Those who were involved with the design and development of Virtuoso are included as co-authors on this manuscript. Co-authors include Dr. Matthew Schmidt, Dr. Carla Schmidt, Dr. Dennis Beck, and Heath Palmer. The following information outlines author contributions to the book chapter:

● Conceptualization: Noah Glaser and Matthew Schmidt
● Methodology: Noah Glaser and Matthew Schmidt
● Software: Noah Glaser, Matthew Schmidt, and Heath Palmer
● Validation: Noah Glaser, Matthew Schmidt, and Heath Palmer
● Formal analysis: Noah Glaser and Matthew Schmidt
● Investigation: Matthew Schmidt, Noah Glaser, Carla Schmidt, and Dennis Beck
● Resources: Matthew Schmidt and Noah Glaser
● Data curation: Noah Glaser and Matthew Schmidt
● Writing: Noah Glaser, Matthew Schmidt, Heath Palmer, Carla Schmidt, and Dennis Beck
● Writing—review and editing: Noah Glaser, Matthew Schmidt, Heath Palmer, and Carla Schmidt
● Supervision: Matthew Schmidt
● Project administration: Noah Glaser and Matthew Schmidt
● Funding acquisition: Matthew Schmidt, Carla Schmidt, and Noah Glaser

Scholarly Publication 2: Investigating the Usability and User Experience of a Virtual Reality Adaptive Skills Intervention for Adults with Autism Spectrum Disorder

Chapter 3, entitled Investigating the Usability and User Experience of a Virtual Reality Adaptive Skills Intervention for Adults with Autism Spectrum Disorder (Schmidt & Glaser, under review), presents the user-centric evaluation findings of the first prototype of the Virtuoso software suite. The evaluation focused on the acceptability, feasibility, ease-of-use, and relevance of the Virtuoso-VR and Virtuoso-SVVR prototypes to the unique needs of the participants, as well as the nature of participants’ user experiences. Findings from this study are reported from the perspectives of both expert testers and participant testers with autism. This study took place at the end of the first meso-cycle, during the evaluation and reflection micro-cycle of Virtuoso, which was led by Dr. Schmidt in his Studio for Advanced Learning Technologies (SALT). Throughout this process, Dr. Schmidt mentored me through a DBR cycle that culminated in a usage study. Through my Graduate Assistant positions and Guided Research experiences I gained the skills needed to conduct an independent analysis and interpretation of the data from this study. Because this project was conducted under his guidance and mentorship, he is the first author on this manuscript and I did the work of a second author. My contributions to the study and this manuscript highlight my capacity for independent scholarship and collaboration within an academic project.

An initial short-report version of this manuscript was submitted to IEEE Transactions on Learning Technologies. This journal was originally targeted because it publishes papers covering advances in learning technologies, including innovative learning systems and educational games. IEEE Transactions on Learning Technologies is both an education and a computer science journal, which aligns with the intersection represented by our usage test of a new virtual reality intervention. However, after receiving feedback, the author team determined that the manuscript was not an ideal fit for the journal; the manuscript was therefore completely reworked and expanded into a full report, which has been submitted to Educational Technology Research and Development (ETR&D). ETR&D is a flagship journal in the field and publishes work related to the research and development of educational technology.

The following information outlines author contributions to the manuscript:

● Conceptualization: Matthew Schmidt and Noah Glaser
● Methodology: Matthew Schmidt and Noah Glaser
● Software: Noah Glaser and Matthew Schmidt
● Validation: Noah Glaser and Matthew Schmidt
● Formal analysis: Noah Glaser and Matthew Schmidt
● Investigation: Matthew Schmidt and Noah Glaser
● Resources: Matthew Schmidt and Noah Glaser
● Data curation: Noah Glaser and Matthew Schmidt
● Writing: Matthew Schmidt and Noah Glaser
● Writing—review and editing: Matthew Schmidt and Noah Glaser
● Supervision: Matthew Schmidt
● Project administration: Matthew Schmidt
● Funding acquisition: Matthew Schmidt and Noah Glaser

Scholarly Publication 3: Investigating the Experience of Virtual Reality Head-Mounted Displays for Adults with Autism Spectrum Disorder

Chapter 4, entitled Investigating the Experience of Virtual Reality Head-Mounted Displays for Adults with Autism Spectrum Disorder (Glaser, Schmidt, & Schmidt, in progress), examines the extent to which adults with ASD felt symptoms of cybersickness while undergoing three different research sessions in which they evaluated prototype virtual reality software. Learner experiences while using head-mounted displays, including the Oculus Rift and Google Cardboard, are also reported. Findings from this study were examined through multi-method procedures that utilized quantitative and qualitative data. This manuscript presents findings from a usage test that took place in the summer of 2019. This research was conducted as part of the evaluation and reflection micro-cycle of the second meso-cycle of Virtuoso. The virtual environments where research activities took place were created in the spring of 2019 during a design and construction micro-cycle that I led.

For this manuscript, I am targeting the journal Computers in Human Behavior. This outlet is being targeted because it publishes research examining the use of computers and technology from a psychological perspective, including the effects of technology on human development, learning, cognition, personality, and social interactions. In addition, this journal has published similar research in the past, and the editor was receptive when we sent a preliminary email to determine interest. Additional authors on this manuscript are Dr. Matthew Schmidt and Dr. Carla Schmidt, as the study was conducted during a second Virtuoso usage test in which they were involved. The following information outlines author contributions to the manuscript:

● Conceptualization: Noah Glaser and Matthew Schmidt
● Methodology: Noah Glaser and Matthew Schmidt
● Software: Noah Glaser and Matthew Schmidt
● Validation: Noah Glaser
● Formal analysis: Noah Glaser and Matthew Schmidt
● Investigation: Noah Glaser, Matthew Schmidt, and Carla Schmidt
● Resources: Matthew Schmidt, Carla Schmidt, and Noah Glaser
● Data curation: Noah Glaser and Matthew Schmidt
● Writing: Noah Glaser and Matthew Schmidt
● Writing—review and editing: Noah Glaser, Matthew Schmidt, and Carla Schmidt
● Supervision: Matthew Schmidt and Carla Schmidt
● Project administration: Matthew Schmidt and Carla Schmidt
● Funding acquisition: Matthew Schmidt, Carla Schmidt, and Noah Glaser.

Scholarly Publication 4: Systematic Literature Review of Virtual Reality Intervention Design Patterns for Individuals with Autism Spectrum Disorders

Chapter 5, entitled Systematic Literature Review of Virtual Reality Intervention Design Patterns for Individuals with Autism Spectrum Disorders (Glaser & Schmidt, in progress), presents the findings from a systematic literature review that was conducted in the spring of 2020. The goal of this literature review was to examine how virtual reality interventions for individuals with ASD have been designed, from the early work in the field to the present day. A systematic search was conducted across six scholarly databases to identify relevant literature on virtual reality and autism spectrum disorders. Data extracted from the included studies comprise technological descriptions (including hardware and software design), target audience, and system goals. This literature review was conducted after the second Virtuoso usage test was completed and represents a new micro-cycle of analysis and exploration.

The Journal of Computer Assisted Learning is being targeted as a potential outlet for this manuscript. This journal covers a range of topics concerning the use of information and communication technologies for supporting learning and knowledge exchange. In particular, this journal is being targeted because it publishes reviews of the literature concerning the use of computer technologies for learning and is especially interested in justifying the use of technology on educational grounds. This scope aligns well with the topic of this literature review, which seeks to provide an analysis of how designers and researchers are creating their virtual reality interventions, for what audiences, and under what conditions. The following information outlines author contributions to the manuscript:

● Conceptualization: Noah Glaser and Matthew Schmidt
● Methodology: Noah Glaser and Matthew Schmidt
● Validation: Matthew Schmidt and Noah Glaser
● Formal analysis: Noah Glaser
● Investigation: Noah Glaser
● Data curation: Noah Glaser and Matthew Schmidt
● Writing: Noah Glaser and Matthew Schmidt
● Writing—review and editing: Noah Glaser and Matthew Schmidt
● Supervision: Matthew Schmidt
● Project administration: Matthew Schmidt


References

Brewer, M. (2000). Research design and issues of validity. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 3-16). Cambridge, UK: Cambridge University Press.

Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2(2), 141-178.

Carmien, S., Dawe, M., Fischer, G., Gorman, A., Kintsch, A., & Sullivan, J. F., Jr. (2005). Socio-technical environments supporting people with cognitive disabilities using public transportation. ACM Transactions on Computer-Human Interaction, 12(2), 233-262.

Felce, D. (1997). Defining and applying the concept of quality of life. Journal of Intellectual Disability Research, 41(2), 126-135.

Lagemann, E. C. (2002). An elusive science: The troubling history of education research. Chicago: University of Chicago Press.

McKenney, S., & Reeves, T. C. (2018). Conducting educational design research. New York, NY: Routledge.

Mechling, L., & O'Brien, E. (2010). Computer-based video instruction to teach students with intellectual disabilities to use public bus transportation. Education and Training in Autism and Developmental Disabilities, 45(2), 230-241.

Schmidt, M., Schmidt, C., Glaser, N., Beck, D., Lim, M., & Palmer, H. (2019). Evaluation of a spherical video-based virtual reality intervention designed to teach adaptive skills for adults with autism: A preliminary report. Interactive Learning Environments, 1-20.

Schmidt, M., Glaser, N., Schmidt, C., Beck, D., Palmer, H., & Lim, M. (2020). Promoting acquisition and generalization of skills for individuals severely impacted by autism using immersive technologies. In B. Hokanson, G. Clinton, A. A. Tawfik, A. Grincewicz, & M. Schmidt (Eds.), Educational technology beyond content (pp. 71-84). Springer International Publishing. https://doi.org/10.1007/978-3-030-37254-5_6

Stokes, T. F., & Osnes, P. G. (2016). An operant pursuit of generalization – Republished article. Behavior Therapy, 47, 720-732.

Wong, C., Odom, S. L., Hume, K. A., Cox, A. W., Fettig, A., Kucharczyk, S., & Schultz, T. R. (2015). Evidence-based practices for children, youth, and young adults with autism spectrum disorder: A comprehensive review. Journal of Autism and Developmental Disorders, 45(7), 1951-1966.


CHAPTER 2: The centrality of interdisciplinarity for overcoming design and development constraints of a multi-user virtual reality intervention for adults with autism: A design case

Authors: Noah Glaser, Matthew Schmidt, Carla Schmidt, Heath Palmer, & Dennis Beck

Abstract: Research suggests that virtual reality interventions have potential to promote the acquisition and development of social, communicative, and adaptive behavior skills for individuals with autism spectrum disorder. However, creating such an intervention is a task with innate challenges that requires the use of interdisciplinary processes and perspectives from those involved in its design. This challenge is further amplified when trying to design a virtual reality application for individuals with autism, who exhibit substantial variability. The purpose of this instrumental case study is to provide insight into the complexities of designing a virtual reality intervention called Virtuoso. The case narrative will address a substantial research-to-practice gap in the field by providing methods that other designers can use in their own development processes and will bring to light the interdisciplinary nature of such research.

Keywords: virtual reality, autism, adaptive skills, intervention, virtual environment

Multi-User Virtual Environments (MUVE) are graphically detailed, three-dimensional (3D) digital environments holding a variety of affordances that make them ideal tools for providing controlled scenarios that can promote learning and assessment (Dalgarno & Lee, 2010). The learning affordances of 3D virtual environments also appear to be well aligned with the learning needs of nontraditional learners, like those with disabilities such as autism (Conway, Vogtle, & Pausch, 1994; Strickland, 1997; Dalgarno & Lee, 2010; Parsons, 2016; Glaser & Schmidt, 2018). However, creating MUVE is a remarkably difficult task for educational technologists (Bricken, 1994; Bartle, 2004; Hirumi, Appelman, Rieber, & Van Eck, 2010), requiring an interdisciplinary team of educators, researchers, content experts, programmers, 3D modelers, designers, developers, and more. This challenge is further amplified when designing MUVE for individuals with autism, who exhibit substantial variability in the unique challenges they face (Parsons, 2016).

Autism is a spectrum disorder (Wing, 1993) characterized by a series of impairments centered around social, communicative, and cognitive abilities (Wing & Gould, 1979). Deficits resulting from an ASD diagnosis can severely impact an individual’s quality of life and ability to function independently. Left untreated, problems can become exacerbated and lead to social isolation, difficulty maintaining relationships, and hardships with finding meaningful employment (Frith & Mira, 1992; Eaves & Ho, 2008). Despite decades of research, social, communicative, vocational, and accommodation-related outcomes for adults with ASD remain poor (Billstedt, Gillberg, & Gillberg, 2005; Eaves & Ho, 2008; Howlin, Goode, Hutton, & Rutter, 2004; Parsons, 2016). To assist in reducing uncertainty of outcomes, the National Professional Development Center on Autism Spectrum Disorder (NPDC) has detailed evidence-based practices that can be implemented in behavioral interventions for individuals with autism (Bogin, 2008) and, importantly, has advocated for technology-aided instruction.

One technology that has been considered as potentially efficacious with this population is virtual reality (VR). Interest in VR technologies for individuals with ASD has been growing for decades (Aresti-Bartolome & Garcia-Zapirain, 2014), as researchers increasingly turn to VR as a means to provide both therapeutic and educational platforms for this population. This trend is due in part to evidence suggesting that VR is intrinsically reinforcing for people with ASD, owing to the technology’s visually stimulating and appealing nature (Schmidt et al., 2019). VR platforms also have a variety of technological affordances which align with the instructional needs of this population (Glaser & Schmidt, 2018). These benefits include predictability of the task, the ability to control system variables and complexities, realism of digital assets, immersion, and automation of feedback, assessment, and reinforcement (Bozgeyikli, Raij, Katkoori, & Alqasemi, 2018; Bozgeyikli et al., 2018). Research suggests that MUVE have potential to promote acquisition of social, communicative, and adaptive competencies for individuals with ASD (Parsons, 2016; Glaser & Schmidt, 2018; Schmidt et al., 2019), thus providing a preliminary basis of support for VR as an intervention modality for individuals with ASD (Mesa-Gresa, Gil-Gómez, Lozano-Quilis, & Gil-Gómez, 2018). Our project, entitled Virtuoso, is a VR-based MUVE, delivered using fully immersive VR headsets (i.e., HTC Vive, Oculus Rift), and developed to promote adaptive behavior skills for adults with ASD.

Project Description

Virtuoso was developed for participants with ASD in an adult day program called Impact Innovation. A goal of Impact Innovation is to provide participants with vocational opportunities that are oftentimes geographically distant from the university campus where the program is housed. With transportation being one of the most frequently cited barriers to accessing community settings for individuals with disabilities (Allen & Moore, 1997; Carmien et al., 2005), the focus of Virtuoso is promoting adaptive behavior skills related to using a university shuttle public transportation system. To this end, we designed a VR curriculum based on a detailed procedural task analysis (Jonassen, Tessmer, & Hannum, 1998). Research suggests that participants’ sense of immersion and embodiment in a collaborative virtual space can lead to the tasks in which they engage taking on a deeper meaning (Mennecke, Triplett, Hassall, & Conde, 2010). In addition, VR technologies may help to convey meanings and symbolic measures of real-world activities (Wang, Laffey, Xing, Ma, & Stichter, 2016; Wallace, Parsons, & Bailey, 2017) that can be enhanced through behavioral and visual realism of in-world assets (Parsons, 2016). A key premise that supports these claims is the “assumption of veridicality,” i.e., if VR experiences are sufficiently authentic and realistic, people will interact and respond similarly in the digital world as they do in the real world, which arguably can promote transfer from the former to the latter (Parsons, 2016; Yee, Bailenson, Urbanek, Chang, & Merget, 2007). However, embodiment of a high level of realism in our VR-MUVE required a formidable assortment of expert knowledge and skills, including graphic design, 3D modeling, visual scripting, object-oriented programming, in-depth knowledge of computer and VR hardware, application of learning theory, photography, computational thinking, and even drone piloting.

A key problem that our team grappled with was creating environments that were realistic to the point of being easily recognizable. This included geometric accuracy, photographic fidelity, lack of distortion, appropriate scale, and so on. While problems of this nature are well understood in commercial contexts, commercial solutions are not always directly applicable in research contexts. That is, limited budgets, capacity, and expertise significantly constrain research teams’ abilities to confront the design and development challenges presented by VR head-on. Importantly, complexities are amplified when working within a small development team with limited resources. Solutions must be devised that are sensitive to these constraints. Although it is generally accepted that developing VR interventions in the field of autism research is fraught with challenges, very little has been written to chronicle those challenges and how they can be approached by researchers. Further elaboration is needed to detail how researchers in the field of VR for individuals with ASD encounter and overcome design and development challenges. To this end, we provide here a narrative of our design and development process so as to signpost salient challenges and the ways in which interdisciplinary methods and processes were instrumental in overcoming them. Although our core team brought expertise in many disciplines (e.g., special education, applied behavior analysis, engineering, geography, instructional design, educational game design, mobile learning, information technology, computer science), further internal skills development as well as external expertise were required to successfully create our proof-of-concept. The following sections detail our experiences in the form of an instrumental case study.

Methodology

Lessons presented here are intended to inform others in the field as they devise solutions to their own design and development challenges. Hence, the purpose of this instrumental case study is to provide insight into the complexities of Virtuoso’s design and development, with a particular focus on the interdisciplinary nature of our research. Instrumental case studies seek to provide insight into an issue, to redraw generalizations, or to build theory (Stake, 1995). Our focus relates to the assumption of veridicality and the premise that the realism provided by VR can potentially promote transfer of skills from a VR environment to the real world. At issue is the substantial research-to-practice gap concerning how designers can embody these principles in practice. We specifically examine the intricacies of leveraging limited resources so as to imbue sufficient realism and authenticity into a virtual public transportation system with the aim of promoting transfer of skills for individuals with autism. Virtuoso required development of an interdisciplinary set of skills as well as external expertise. For the purposes of this chapter, we define interdisciplinarity as designing and implementing methods, processes, and skills from many fields during each step of the project (Klaassen, 2018). Incorporating the knowledge, skills, and traditions of different disciplines allows for consideration of divergent ontologies and epistemologies, thereby engendering ongoing negotiation and integration as project needs shift and project members recognize these disparate methodologies’ relevance to one another and their potential synergies (Modo & Kinchin, 2011).

Key Participants

This case study is bounded by the experiences of three core members of the Virtuoso team who engaged specifically in software development (n=3; all male), including (1) one instructional design and technology professor, (2) one instructional design and technology PhD candidate, and (3) one engineering undergraduate student. Members primarily contributed expertise in instructional design, information technology, software development, development of virtual worlds and assets, and educational game design.

Data Collection and Analysis

Data are presented from the perspective of the first author, a PhD candidate in instructional design and technology, relative to his involvement in development of the intervention. Experiences were compiled using autoethnographic methods following completion of a functional intervention prototype (Bruner, 1993; Freeman, 2004). Project artifacts were consulted to assist with recall (Goodall, 2001), including screenshots, 3D assets, videos, procedural analysis documents, meeting minutes and communications, rapid prototypes, and project documentation. Artifacts were organized based on the principles of task authenticity and environmental realism and how each principle was achieved relative to each asset or virtual element that was reviewed. This process led to four development processes being identified as representative of the principles, specifically, the development of realistic (1) terrain, (2) campus buildings, (3) interiors, and (4) task scenarios. Following this, autoethnographic methods were used to make sense of epiphanies (moments of experience that serve as a turning point in one’s understanding) perceived to have greatly impacted the phenomena of focus in this study (Denzin, 1989). When epiphanies were identified, the lead author engaged in a reflective writing process, detailing recalled challenges of embodying design principles and chronicling how disparate and non-intuitive approaches were required to create the initial prototype. These reflections were then consulted for sense- and meaning-making relative to these transformative moments in our design process, ultimately leading to a holistic representation of MUVE design for individuals with ASD, which we characterize here as transitive, ill-defined, and complex.

The Case Narrative

The assumption of veridicality suggests a need for physical realism (e.g., terrain, buildings, and building interiors) as well as task fidelity between the real world and the virtual world. Bringing together all of these pieces to create a learning environment that could provide sufficient realism to fully instantiate a diverse array of design guidelines presented significant challenges. Table 1 below outlines the interdisciplinary processes involved in realizing the principles across the four development processes outlined in the previous section.

Realistic terrain is imperative because the terrain acts as an underlying unifying element upon which the virtual world is based. This extends not only to the space in which objects and avatars act and interact, but also to the activity space. The assumption of veridicality suggests a need for terrain that simulates the real experience of performing activity on campus. While a flat terrain would have been simpler to create, it likely would have substantially diminished sensations of realism and potentially distorted participants’ sense of presence or “being there.” Buildings would have been off-scale and incorrectly positioned. For example, the campus is hilly and many buildings have multiple entrances on different floors; entering from one side might bring a person to the first floor, while entering from another side might bring them to the fourth. However, creating terrain that accurately represented the contours of the earth was a challenge outside our collective expertise. How does one model real-world terrain in a virtual world? What about other topographical elements, such as building placement and roads? How are all of these elements combined? And, ultimately, how could these combined elements be imported into a virtual reality simulation? These are problems that have long been addressed in the field of video game development; however, the resources available to game development studios differ significantly from those available to academic institutions. Given the constraints of our design and development resources, our options were either to take the simple approach and thereby threaten the validity of our design principles, or to engage in an ill-defined process to determine how to solve the topography problem on our own.

Atop the terrain are the campus buildings themselves. We reasoned that buildings that were accurate representations of their real-world counterparts were imperative to promote the intended outcomes suggested by the assumption of veridicality. However, buildings have complex architecture, and creating accurate 3D models of buildings is incredibly tedious and requires substantial expertise. We reasoned that architectural accuracy was less important than photographic realism and therefore concluded that campus models did not need to be entirely true to their real-world geometry. This is because participants would be engaging in activity around the buildings, but not specifically with the buildings themselves. Hence, we focused on creating photographically realistic architecture. For the interior designs of our MUVE, however, we took the opposite approach. This is because participants would be engaging in activity within the interior models, and close-up, therefore requiring far more precise approximations of real-world counterparts. Instead of focusing on photographic realism, we employed the actual architectural plans of the buildings. Based on the interior elevations and floor plans, we created highly realistic models of the Impact Innovation office space. This space provided users with a highly authentic-looking space in which to interact with an online guide and others. In the real world, Impact participants have their own cubicles and are exposed to a variety of activities in this office space. In the virtual world, this space provides users a connection to the real world and to their everyday lives. Importantly, the central component that connected the elements of terrain, buildings, and interiors was the set of training activities for learning to use the university shuttle. We conducted a meticulous procedural task analysis in order to model how participants performed this task in the real world. The interdisciplinary task analysis process was made possible with the help of special education specialists and an applied behavior analyst, who assisted in the development of a scripted set of routines and behaviors that precisely mirrored the activities that Impact Innovation participants would undertake in the real world. A variety of computer programming and game design skills were required to bring these pieces together.

Table 1.

Interdisciplinary Processes and Skills Required to Create Virtuoso.

Realistic Terrain
● Affordance: Promotes a sense of presence by emulating the lived experience of actually walking on campus.
● Challenge: Generating a geographically realistic contoured mesh with accurate representations of campus topography.
● Interdisciplinary requirements: Geography, 3D design, game design, graphic design, software development.
● Specific examples: Image editing, 3D editing, GIScience, scripting.

Realistic Campus Models
● Affordance: Promotes a sense of presence by emulating the in-situ environment for training; helps support the assumption of veridicality.
● Challenge: Modeling photorealistic campus models.
● Interdisciplinary requirements: Piloting, photography, geography, 3D design, 3D editing, modeling toolchains.
● Specific examples: Photogrammetry, taking photographs, drone piloting, 3D editing, GIS extraction, outlining and extruding models.

Realistic Interiors
● Affordance: Provides a connection to the real world by using a familiar space.
● Challenge: Creating a highly realistic representation of an interior design.
● Interdisciplinary requirements: Architecture, 3D design, photography.
● Specific examples: CAD, converting blueprints into a 3D model, interior photography.

Realistic Tasks
● Affordance: Provides a realistic and accurate way of practicing skills to transfer from the virtual platform to the real world.
● Challenge: Creating a one-to-one representation of behaviors that both human avatars and in-world assets such as a bus would naturally undergo in the real world.
● Interdisciplinary requirements: Game design, applied behavior analysis, programming, 3D design, videography, instructional design, computational thinking, physics.
● Specific examples: System of least prompts, PHP programming, JavaScript programming, 3D editing, game design techniques, shooting ride-along footage, creating a task analysis, velocity calculations.


Developing a Realistic Terrain

An early representation of how we set out to create our model is shown in Figure 1. The terrain, including slopes and scaling, was obtained by extracting geographic information system (GIS) data from Google Earth. After extracting this GIS data, we were able to convert it into a 3D mesh. In this early version of Virtuoso, we placed a screenshot of a map from Google Maps onto that geometric mesh to create a preliminary terrain of the university.

Figure 1. Screenshot of the campus terrain with a Google Map image placed onto a 3D mesh.

Next, we textured the terrain to include photorealistic representations of the roads, pathways, landscape, and topography. Initially, we opened the GIS mesh in Blender (http://blender.org; a free and open source 3D modeling program) to manipulate the millions of polygons that made up the terrain. With the outline of the map placed over the terrain we were able to garner rough approximations of the locations of pathways and grass. Photographs of the roads and topography were taken to enable the creation of realistic textures. While this process resulted in a more photorealistic representation of campus, the resulting product had a litany of issues. Manipulating massive GIS meshes created problems with collisions, which resulted in “holes.” This led to assets and avatars falling through the base of the world. Repeating textures were also too uniform and did not account for variations in coloring, texturing, and consistency across the terrain.

In the next iteration we again returned to the GIS terrain that we extracted from Google Earth. Now knowing that editing the mesh could result in unforeseen errors, we ultimately decided to leave the geometry alone and to place the topography on top of the 3D object. In this prototype we took a large high-resolution image of the university from Google Earth. We then placed it over the terrain as we had done with the Google Maps image before, which resulted in a terrain that included the university’s natural topography while maintaining an outline of the buildings to help with proper positioning.
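The general GIS-to-mesh technique described above can be sketched in code as a simple heightmap triangulation: each elevation sample becomes a vertex, and each grid cell becomes two triangles. The sketch below is illustrative only; our actual pipeline relied on exported Google Earth geometry rather than hand-written conversion code, and the function name and grid format here are assumptions for the example.

```python
# Illustrative sketch: converting a grid of elevation samples (a heightmap,
# as might be derived from GIS data) into vertices and triangles for a 3D
# terrain mesh. This mirrors the general technique, not our exact toolchain.

def heightmap_to_mesh(heights, cell_size=1.0):
    """heights: 2D list of elevations; returns (vertices, triangles)."""
    rows, cols = len(heights), len(heights[0])
    # One vertex per elevation sample: (x, y=elevation, z).
    vertices = [(c * cell_size, heights[r][c], r * cell_size)
                for r in range(rows) for c in range(cols)]
    # Two triangles per grid cell, indexed into the vertex list.
    triangles = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            triangles.append((i, i + cols, i + 1))
            triangles.append((i + 1, i + cols, i + cols + 1))
    return vertices, triangles

# A 2x2 sample grid yields 4 vertices and 2 triangles (one cell).
verts, tris = heightmap_to_mesh([[0.0, 1.0], [0.5, 2.0]])
```

Keeping the grid geometry untouched and draping imagery over it, as we ultimately did, avoids the collision "holes" that hand-editing such a mesh can introduce.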

Developing Realistic Buildings

While developing the terrain, we were concurrently developing 3D models of all campus buildings. This process took approximately 1.5 years of iterations to reach a level of fidelity that we deemed acceptable. By acceptable we mean that the models were (1) largely accurate in terms of geometric representation in relation to their real-world counterparts, (2) textured with photographic images that gave them the appearance of photographic realism, (3) free of distortions to shape, shadow, and overall visuals that could result in sensory issues, and (4) scaled appropriately.

To create the campus buildings we first created approximations of the structures and then mapped images onto them to provide a degree of photorealism. To test this process we created a model of a fountain that exists on campus. Two studio members went onto campus and took photos of the fountain from varying angles and perspectives. These images served as the textures of the resulting model. However, scaling this process proved ineffective, as lighting issues greatly impacted our ability to obtain high-quality photos while on campus. In addition, an inability to reach high vantage points prevented us from fully capturing stills of most of the buildings on campus.

To address these issues we began testing a process called photogrammetry.

Photogrammetry is the process of obtaining information about objects and the environment through recording, measuring, and analyzing photographic images and patterns (Mikhail, Bethel, & McGlone, 2001). We decided to test the procedure using a DJI Phantom drone equipped with a video camera that could continuously shoot footage as it flew around a building from various heights, angles, and perspectives. Unfortunately, this idea proved untenable, as we uncovered safety and regulatory barriers the university had in place that restricted when we could pilot a drone on campus. Such regulations prevented successful implementation. In addition, the technical challenges of video capture and of converting photographs into a 3D mesh proved prohibitive. Despite this setback, we were still interested in using photogrammetry to create 3D models. Instead of capturing images by flying around the campus buildings, we decided to use the same process, but in Google Earth. Capturing images of the campus buildings from different perspectives provided a set of photos that we were able to process using photogrammetry software, Autodesk ReCap (Figure 2).


Figure 2. A photogrammetrically-created model, imported into High Fidelity.

The result of this process was a highly realistic, albeit flawed, model, as multiple distortions resulted from the process. Consequently, we took the high-resolution image of the campus map and placed it onto a flat plane, which allowed us to view the outlines of the campus buildings. We then opened the map in Adobe Illustrator, a vector image editor, and used the Pen tool to trace the outlines of the different campus buildings. After doing so, we exported the image as a Scalable Vector Graphics (SVG) file and imported it into Blender, where we were able to extrude crude 3D shapes of the buildings. This process resulted in building models that were correctly scaled and positioned on the terrain. The question then became how to map textures onto the faces of the buildings. We returned to the photogrammetry process we had tested in Google Earth for creating a mesh; however, instead of using it to create an entire textured mesh, we used it only to extract textures that we could place onto the planes of the building faces (Figure 3).
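The outline-and-extrude step can be expressed as a small geometric operation: a traced 2D building footprint is lifted into a flat-topped prism. We performed this step interactively in Blender rather than in code, so the sketch below is purely illustrative of the technique, with a hypothetical function name and a simplified wall representation.

```python
# Illustrative sketch: extruding a 2D building footprint (a polygon of
# (x, z) points) into a simple prism, analogous to Blender's extrude step.
# We did this interactively in Blender; this code just shows the geometry.

def extrude_footprint(footprint, height):
    """footprint: list of (x, z) tuples; returns (vertices, wall_quads)."""
    n = len(footprint)
    # Bottom ring of vertices (y = 0), then a matching top ring (y = height).
    bottom = [(x, 0.0, z) for x, z in footprint]
    top = [(x, height, z) for x, z in footprint]
    vertices = bottom + top
    # One quad per footprint edge, joining the bottom ring to the top ring.
    walls = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]
    return vertices, walls

# A square footprint extruded into a 10-unit-tall box: 8 vertices, 4 walls.
verts, walls = extrude_footprint([(0, 0), (4, 0), (4, 4), (0, 4)], 10.0)
```

The resulting wall faces are exactly the planes onto which the photogrammetry-derived textures were later mapped.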


Figure 3. Campus model created using a combination of GIS data, image editing, 3D modeling, and photogrammetry.

Developing Realistic Interiors

The next step of our intervention design required that we re-create a model of the Impact Innovation office suite. Given the results of extruding map outlines, we opted to do something similar using architectural blueprints. These were imported into architecture modeling software (Archilogic) and used to create a fully interactive 3D model. We then went to the actual office and took photos to create textures for the walls, floor, and other surfaces. Archilogic allowed us to import furniture, electronics, and other furnishings to better represent the real-world counterpart (Figure 4).


Figure 4. Office model created using architectural modeling software.

Developing Realistic Scenarios

Task analysis was performed at the outset of the Virtuoso project to determine the structure and nature of activities that would take place within the MUVE. This required assistance from an Impact Innovation staff member who was familiar with the day program's day-to-day scheduling and was able to provide details of the behaviors associated with shuttle training. Moreover, we performed a ride-along, recording the staff member and a program associate completing these tasks, to expand upon and improve the task analysis. We also worked with an applied behavior analyst to modify the tasks to include opportunities for interaction and behavioral prompts (Figure 5).

Figure 5. Single activity as represented in the Virtuoso procedural task analysis, highlighting the ABA system of least prompts strategy.
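The system of least prompts embedded in the task analysis follows a fixed escalation logic: the learner is first given the opportunity to respond independently, and increasingly intrusive prompts are delivered only as needed. The sketch below is a hypothetical illustration of that logic; the prompt levels and function names are generic examples, not Virtuoso's actual implementation.

```python
# Prompt hierarchy ordered from least to most intrusive, as in a
# system-of-least-prompts (SLP) procedure.
PROMPT_LEVELS = ["independent", "verbal", "gesture", "model", "physical"]

def run_trial(attempt_step):
    """Deliver prompts from least to most intrusive until the learner
    completes the step. `attempt_step(prompt)` is a callable that
    returns True when the learner succeeds at that prompt level.
    Returns the least-intrusive prompt level that produced success."""
    for prompt in PROMPT_LEVELS:
        if attempt_step(prompt):
            return prompt  # record level at which the step was completed
    return "not_completed"  # most intrusive prompt failed; re-teach step

# Example: a learner who needs a gestural prompt to board the shuttle.
needs_gesture = lambda prompt: PROMPT_LEVELS.index(prompt) >= 2
print(run_trial(needs_gesture))  # -> gesture
```

Recording the level at which each step is completed is what allows prompts to be faded over successive sessions.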

Embodying realistic activity within the learning process required that we simulate real-world tasks in the virtual environment with a high degree of fidelity; therefore, we needed a way to bring those tasks to life in our 3D environment. This included creating a shuttle bus animated to arrive on a set schedule after participants had completed the prerequisite training steps. The virtual world toolkit we used, High Fidelity, was in early development, with immature documentation and standards, and we could not find a reliable, supported way to animate an object. We had little choice but to script the provided physics engine to move objects from point A to point B. While this solution allowed us to animate an object, it offered little flexibility or elegance. Hence, we looked to the gaming industry for an animation solution. In the popular video game Fallout 3, a rideable train was implemented by assigning a train model as a non-playable character's (NPC's) hat; when that character walked, the train appeared to move. Borrowing from this, we attached a model of a university shuttle bus to an NPC as a hat attachment. We then used the built-in High Fidelity avatar recording tool to play a loop of the NPC. This process allowed us to create a functional shuttle route, but with limitations. The recorded NPC loops had no physics or collisions, meaning that playable characters and in-world objects could pass through them. Given that road safety and socially appropriate behaviors related to catching the shuttle bus were pivotal to our intervention, it was not acceptable for a player to be able to walk into and through a shuttle bus driving along its route. Thus, we needed a different solution to simulate the shuttle bus.

Solving the shuttle bus problem required developing multiple scripts to handle the shuttle's movement and timing. We found a 3D elevator script that moved an object from point A to point B and modified it by placing an invisible cube in the environment to serve as a trigger for the shuttle's movement. When a player walked through the cube, it would load a JavaScript function that activated the shuttle's movement. During usage testing, an administrator would control an invisible avatar, walk into the cube at a precise moment, and thereby activate the trigger. This simulated a shuttle bus arriving on time, based on that shuttle's location within a tracking application (Figure 6). In this way we were able to simulate a real-world activity, albeit not without significant complexity.

Figure 6. Shuttle tracking application embedded in the virtual environment; an invisible player would trigger a shuttle model based on the information provided in this app.
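The trigger mechanic described above, an invisible volume that starts the shuttle moving between two points when entered, can be sketched in a platform-agnostic way. The snippet below is an illustrative reconstruction, not the actual High Fidelity JavaScript: it checks whether an avatar's position falls inside an axis-aligned trigger box and, once tripped, interpolates the shuttle linearly from point A to point B. All coordinates and durations are hypothetical.

```python
def inside_box(pos, box_min, box_max):
    """True if a 3D position lies within an axis-aligned trigger volume."""
    return all(lo <= p <= hi for p, lo, hi in zip(pos, box_min, box_max))

def shuttle_position(t, start, end, duration):
    """Linear interpolation of the shuttle from `start` to `end`
    over `duration` seconds; clamps at the destination."""
    u = min(max(t / duration, 0.0), 1.0)
    return tuple(a + (b - a) * u for a, b in zip(start, end))

# Invisible trigger cube near the shuttle stop (hypothetical coordinates).
TRIGGER_MIN, TRIGGER_MAX = (10, 0, 10), (12, 3, 12)
avatar = (11.0, 1.0, 11.5)  # the administrator's invisible avatar walks in
if inside_box(avatar, TRIGGER_MIN, TRIGGER_MAX):
    # Shuttle drives from point A to point B over 30 seconds;
    # its position at t = 15 s is the route midpoint.
    midpoint = shuttle_position(15.0, (0, 0, 0), (100, 0, 0), 30.0)
    print(midpoint)  # -> (50.0, 0.0, 0.0)
```

Timing the administrator's entry into the trigger volume against the real shuttle-tracking application is what made the virtual arrival appear to follow the real schedule.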

Discussion

As demonstrated through this case narrative, designing and developing a MUVE for adults with autism is an iterative, ill-defined, and complex problem-solving process, necessitating an interdisciplinary solution path. Our analysis of project methods and processes illustrates the broad skills and expertise required, spanning multiple disciplines. Clearly, creating a MUVE capable of promoting skills development and transfer is far more complex than merely ensuring some degree of photorealism. The experienced activity itself must reflect substantial similarity to the authenticity and realism of the real-world task upon which it is based. Realizing a single learning scenario in our functional VR prototype required years of development and continual problem-solving. The Virtuoso development team endeavored to use the best available evidence from the field in an attempt to realize the promise of VR for individuals with ASD. In the process of doing so, we were constantly confronted with the reality that a massive amount of effort is required to produce a learning scenario that is sufficiently realistic and authentic to instantiate identified design principles. For any team considering developing a MUVE for individuals with ASD, a reasonable question is whether to use commercial off-the-shelf software or to build something themselves. Unfortunately, no off-the-shelf software currently on the market supports the creation of MUVEs for individuals with ASD. Hence, other developers in the field will likely face similar challenges.

Our hard-earned experiences serve as a poignant example of a tremendous research-to-practice gap in our field. In some ways, this gap is so daunting that it could represent its own digital divide. The audacity, wherewithal, and interdisciplinary spirit required to forge a research agenda in this field are symptomatic of the significant, but not insurmountable, barriers that researchers must address if we are to achieve the purported benefits of VR for individuals with ASD. Researchers tout VR as an exceptionally valuable technological tool that could significantly impact learning for people with autism (e.g., Parsons, 2016); however, the immense challenges of developing VR interventions for this population severely limit its feasibility and scale of impact. We hope that the examples provided here lend insight and guidance for those who wish to advance the current state of the art.


References

Allen, S. M., & Moore, V. (1997). The prevalence and consequences of unmet need: Contrasts between older and younger adults with disability. Medical Care, 35(11), 1132-1148.

Aresti-Bartolome, N., & Garcia-Zapirain, B. (2014). Technologies as support tools for persons with autistic spectrum disorder: A systematic review. International Journal of Environmental Research and Public Health, 11(8), 7767-7802.

Bartle, R. A. (2004). Designing virtual worlds. New Riders.

Billstedt, E., Gillberg, C., & Gillberg, C. (2005). Autism after adolescence: Population-based 13- to 22-year follow-up study of 120 individuals with autism diagnosed in childhood. Journal of Autism and Developmental Disorders, 35(3), 351-360.

Bogin, J. (2008). Overview of discrete trial training. Sacramento, CA: National Professional Development Center on Autism Spectrum Disorders, MIND Institute, The University of California at Davis Medical School.

Bozgeyikli, L., Bozgeyikli, E., Katkoori, S., Raij, A., & Alqasemi, R. (2018). Effects of virtual reality properties on user experience of individuals with autism. ACM Transactions on Accessible Computing (TACCESS), 11(4), 22.

Bozgeyikli, L., Raij, A., Katkoori, S., & Alqasemi, R. (2018). A survey on virtual reality for individuals with autism spectrum disorder: Design considerations. IEEE Transactions on Learning Technologies, 11(2), 133-151.

Bricken, M. (1994). Virtual worlds: No interface to design. Virtual Worlds, 14.

Bruner, J. (1993). The autobiographical process. In R. Folkenflik (Ed.), The culture of autobiography: Constructions of self-representation (pp. 38-56). Stanford, CA: Stanford University Press.

Carmien, S., Dawe, M., Fischer, G., Gorman, A., Kintsch, A., & Sullivan, J. F., Jr. (2005). Socio-technical environments supporting people with cognitive disabilities using public transportation. ACM Transactions on Computer-Human Interaction (TOCHI), 12(2), 233-262.

Christensen, D. L., Braun, K. V. N., Baio, J., Bilder, D., Charles, J., Constantino, J. N., ... & Lee, L. C. (2018). Prevalence and characteristics of autism spectrum disorder among children aged 8 years—Autism and Developmental Disabilities Monitoring Network, 11 sites, United States, 2012. MMWR Surveillance Summaries, 65(13), 1.

Churchill, E. F., & Snowdon, D. (1998). Collaborative virtual environments: An introductory review of issues and systems. Virtual Reality, 3(1), 3-15.

Conway, M., Vogtle, L., & Pausch, R. (1994). One-dimensional motion tailoring for the disabled: A user study. Presence: Teleoperators & Virtual Environments, 3(3), 244-251.

Dalgarno, B., & Lee, M. J. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10-32.

Denzin, N. K. (1989). Interpretive biography. Newbury Park, CA: Sage.

Doyle, P. M., Wolery, M., Ault, M. J., & Gast, D. L. (1988). System of least prompts: A literature review of procedural parameters. Journal of the Association for Persons with Severe Handicaps, 13(1), 28-40.

Eaves, L. C., & Ho, H. H. (2008). Young adult outcome of autism spectrum disorders. Journal of Autism and Developmental Disorders, 38(4), 739-747.

Freeman, M. (2004). Data are everywhere: Narrative criticism in the literature of experience. In C. Daiute & C. Lightfoot (Eds.), Narrative analysis: Studying the development of individuals in society (pp. 63-81). Thousand Oaks, CA: Sage.

Frith, U., & Mira, M. (1992). Autism and Asperger syndrome. Focus on Autistic Behavior, 7(3), 13-15.

Glaser, N. J., & Schmidt, M. (2018). Usage considerations of 3D collaborative virtual learning environments to promote development and transfer of knowledge and skills for individuals with autism. Technology, Knowledge and Learning. https://doi.org/10.1007/s10758-018-9369-9

Goodall, B. (2001). Writing the new ethnography. Walnut Creek, CA: AltaMira.

Hirumi, A., Appelman, B., Rieber, L., & Van Eck, R. (2010). Preparing instructional designers for game-based learning: Part III. Game design as a collaborative process. TechTrends, 54(5), 38-45.

Howlin, P., Goode, S., Hutton, J., & Rutter, M. (2004). Adult outcome for children with autism. Journal of Child Psychology and Psychiatry, 45(2), 212-229.

Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1998). Task analysis methods for instructional design. Routledge.

Kerr, S. J., Neale, H. R., & Cobb, S. V. (2002). Virtual environments for social skills training: The importance of scaffolding in practice. In Proceedings of the fifth international ACM conference on Assistive technologies (pp. 104-110). ACM.

Klaassen, R. G. (2018). Interdisciplinary education: A case study. European Journal of Engineering Education, 43(6), 842-859.

Mennecke, B. E., Triplett, J. L., Hassall, L. M., & Conde, Z. J. (2010). Embodied social presence theory. In 2010 43rd Hawaii International Conference on System Sciences (pp. 1-10). IEEE.

Mesa-Gresa, P., Gil-Gómez, H., Lozano-Quilis, J. A., & Gil-Gómez, J. A. (2018). Effectiveness of virtual reality for children and adolescents with autism spectrum disorder: An evidence-based systematic review. Sensors, 18(8), 2486.

Mikhail, E. M., Bethel, J. S., & McGlone, J. C. (2001). Introduction to modern photogrammetry. New York, NY: Wiley.

Mills, A. J., Durepos, G., & Wiebe, E. (2010). Encyclopedia of case study research. Thousand Oaks, CA: SAGE Publications. https://doi.org/10.4135/9781412957397

Modo, M., & Kinchin, I. (2011). A conceptual framework for interdisciplinary curriculum design: A case study in neuroscience. Journal of Undergraduate Neuroscience Education, 10(1), A71.

Parsons, S. (2016). Authenticity in Virtual Reality for assessment and intervention in autism: A conceptual review. Educational Research Review, 19, 138-157.

Potter, W. J., & Levine-Donnerstein, D. (1999). Rethinking validity and reliability in content analysis. Journal of Applied Communication Research, 27(3), 258-284. https://doi.org/10.1080/00909889909365539

Schmidt, M., Schmidt, C., Glaser, N., Beck, D., Lim, M., & Palmer, H. (2019). Evaluation of a spherical video-based virtual reality intervention designed to teach adaptive skills for adults with autism: A preliminary report. Interactive Learning Environments, 1-20.

Simonoff, E., Pickles, A., Charman, T., Chandler, S., Loucas, T., & Baird, G. (2008). Psychiatric disorders in children with autism spectrum disorders: Prevalence, comorbidity, and associated factors in a population-derived sample. Journal of the American Academy of Child & Adolescent Psychiatry, 47(8), 921-929.

Stake, R. E. (1995). The art of case study research. Sage.

Wallace, S., Parsons, S., & Bailey, A. (2017). Self-reported sense of presence and responses to social stimuli by adolescents with ASD in a collaborative virtual reality environment. Journal of Intellectual & Developmental Disability, 42(2), 131-141.

Wang, X., Laffey, J., Xing, W., Ma, Y., & Stichter, J. (2016). Exploring embodied social presence of youth with autism in 3D collaborative virtual learning environment: A case study. Computers in Human Behavior, 55, 310-321.

Wing, L. (1993). The definition and prevalence of autism: A review. European Child & Adolescent Psychiatry, 2(1), 61-74.

Wing, L., & Gould, J. (1979). Severe impairments of social interaction and associated abnormalities in children: Epidemiology and classification. Journal of Autism and Developmental Disorders, 9(1), 11-29.

Yazan, B. (2015). Three approaches to case study methods in education: Yin, Merriam, and Stake. The Qualitative Report, 20(2), 134-152.

Yee, N., Bailenson, J. N., Urbanek, M., Chang, F., & Merget, D. (2007). The unbearable likeness of being digital: The persistence of nonverbal social norms in online virtual environments. CyberPsychology & Behavior, 10, 115-121.


CHAPTER 3: Investigating the Usability and User Experience of a Virtual Reality Adaptive Skills Intervention for Adults with Autism Spectrum Disorder

Matthew Schmidt, PhD

University of Florida

Noah Glaser

University of Cincinnati


Abstract (164 words)

This paper presents evaluation findings from a proof-of-concept adaptive skills intervention for adults with autism spectrum disorders. Entitled Virtuoso, the intervention is designed for training and skills transfer related to safe and appropriate utilization of public transportation. Technological and pedagogical scaffolds are implemented in a staged manner and gradually faded to promote acquisition and transfer of target skills. A constellation of technologies is employed, including spherical video-based video modeling and fully immersive virtual reality. Details of the intervention and technology architecture are provided. Evaluation focused on the acceptability, feasibility, ease-of-use, and relevance of the prototypes to the unique needs of participants, as well as the nature of participants' user experiences. Findings are presented from the perspectives of expert testers (n=4) and participant testers with autism (n=5). Results suggest a largely positive user experience and that Virtuoso is feasible and relevant to the unique needs of the target population. Anecdotal evidence of skills transfer is discussed from the perspective of future research directions.

Index Terms—adaptive skills, autism spectrum disorder, cybersickness, public transportation, spherical video-based virtual reality, virtual reality, intervention design

Word Count: 9057 including tables; 8021 not including tables



In this paper, a multi-stage virtual reality (VR) intervention named Virtuoso (a play on the words "virtual" and "social") is presented. Virtuoso is designed to provide training for and promote transfer of adaptive behavior skills for adults with autism spectrum disorder (ASD) who are enrolled in an adult day program at a large Midwestern university. Interest in VR interventions for individuals with ASD has been steadily growing over the last 20 years (Aresti-Bartolome & Garcia-Zapirain, 2014). Since Strickland and colleagues' seminal investigation into the acceptability of VR equipment and potential learning effects (Strickland, Marcus, Mesibov, & Hogan, 1996; Strickland, 1997), researchers have continued to explore VR as a means to deliver interventions. This research has contributed to the emergence of a promising, but preliminary, basis of support for the efficacy of VR as an intervention modality (Mesa-Gresa, Gil-Gómez, Lozano-Quilis, & Gil-Gómez, 2018).

ASD is a lifelong condition that manifests as a cluster of neurodevelopmental disorders and is characterized by persistent deficits in social communication and interaction and restricted, repetitive patterns of behavior (American Psychiatric Association, 2013). In the United States, the prevalence of autism is increasing, with recent reports indicating that one in 59 children receives an ASD diagnosis (Baio et al., 2018). Comorbidities include cognitive impairments, epilepsy/seizures, ADHD, general anxiety disorder, and sensory problems (Simonoff et al., 2008). ASD can severely impact an individual's independent functioning and quality of life and, if left untreated, can exacerbate employment problems, difficulties living independently, and social isolation (Eaves & Ho, 2008; Hedley et al., 2017; Müller, Schuler, & Yates, 2008).


VR is believed to be particularly appealing to individuals with ASD, in part due to their affinity for computers and strong visual-spatial skills (Strickland, 1997). VR conveys concepts, meanings, and activities through highly realistic scenarios that mimic the real world, thereby providing rich and meaningful contexts for embodiment and practice of activities, behaviors, and skills (Wallace, Parsons, & Bailey, 2017; Wang, Laffey, Xing, Ma, & Stichter, 2016). The literature points to a number of potential benefits of VR for individuals with ASD, such as predictability; structure; customizable task complexity; control; realism; immersion; and automation of feedback, assessment, and reinforcement (Bozgeyikli, Raij, Katkoori, & Alqasemi, 2018; Bozgeyikli, Bozgeyikli, Katkoori, Raij, & Alqasemi, 2018). In addition to these benefits, VR can increase access to services by overcoming logistical barriers such as distance, cost, and limited access to providers (Zhang, Warren, Swanson, Weitlauf, & Sarkar, 2018).

Importance

This paper reports cutting-edge design and development, focusing specifically on areas that are under-represented and poorly understood in the literature. For example, a litany of barriers hinders adoption of VR for individuals with ASD, such as cost and difficulties associated with development. Given these barriers, excitement and interest are growing around the use of spherical video-based VR (SVVR), also known as 360-degree video (Brown & Green, 2016). SVVR requires fewer resources than traditional VR and is easier to develop and implement. SVVR is an emergent technology; hence, empirical evidence of pedagogical success is limited (Fowler, 2015). However, given the wealth of research around video modeling as an instructional practice for teaching social and life skills to individuals with autism spectrum disorder, the promise of immersive video is clear. One aspect of our work focuses on SVVR.


The emergence of commercially available HMDs and the tendency of these devices to induce adverse effects (e.g., cybersickness) have led to concerns about using these devices with a population with significant sensory processing differences. While an emerging base of evidence suggests that people with ASD find desktop-based VR acceptable, it is unclear whether those findings extend to HMD-based VR (Bozgeyikli et al., 2018). Prior research suggests that the majority of HMD users will experience adverse effects, including nausea, headaches, eye strain, dizziness, and an array of psychosomatic irregularities (Cobb, Nichols, Ramsey, & Wilson, 1999; Dennison, Wisti, & D'Zmura, 2016). Of importance is that the vast majority of research on adverse effects has been conducted with neurotypical individuals. Very little research exists on using HMDs with individuals with ASD, and we have been unable to locate any research on multi-user, HMD-based VR interventions. The lack of research in this area for people with ASD has led to ethical concerns regarding adoption (Newbutt et al., 2016). Our work is particularly sensitive to adverse effects.

Despite increasing interest and research on using VR for individuals with ASD, researchers must be wary of promoting a bias of technological determinism by utilizing a technology before it is fully understood. While VR is widely regarded as a promising tool for learning and instruction for individuals with ASD, significant questions remain largely ignored in the literature regarding appropriate and socially valid interventions that are highly sensitive to the unique needs of this vulnerable population. Given the rapid pace of adoption of new and increasingly more interactive technologies, it is natural to want to explore what works best for helping people with autism. However, as Parsons (2016) argues, this question is misplaced; research must shift from "what works" to more nuanced considerations of what kind of technology works for whom, in what ways, with what supports, and with what objectives. Therefore, an aim of this paper is to explicate our design and evaluation process, which researchers and practitioners can use to derive and extend pragmatic and specific principles for their own work.

How to effectively promote transfer from virtual contexts to the real world is poorly understood and is among the most cited limitations and suggested areas for future research in the literature. Researchers maintain that a key affordance of VR for individuals with ASD relates to the potential for knowledge, skills, and behaviors acquired in the virtual medium to transfer to the real world. Transfer from a known context to a novel context is a pervasive challenge in autism intervention research in general (Neely et al., 2016). Many individuals with ASD tend to be concrete thinkers (Grandin, 1995) and have difficulty generalizing skills learned in one context to another (Yerys et al., 2009), leading to difficulties in establishing intervention effects across settings (Plaisted, 2015). Because VR provides high-fidelity photographic and behavioral realism, this medium is thought to align well with the tendency towards concrete thinking in individuals with ASD. Many researchers hypothesize a relationship between the realism of a VR environment and the probability that skills acquired in that environment will transfer to novel contexts and situations (McComas, Pivik, & Laflamme, 1998; Strickland, 1997; Wang & Reid, 2011). However, others argue that while visual fidelity may play a role in transfer, the extent of its impact is not fully understood due to the significant perceptual, sensory, and cognitive differences among individuals with ASD (Parsons & Cobb, 2011). We maintain that generalization cannot be assumed and must be intentionally designed for. Hence, a central element of Virtuoso's design is a scaffolded and intentional trajectory of technology supports, as well as generalization heuristics derived from the literature, to promote transfer of skills from the virtual medium to the real world.

Literature Review

Recent reductions in hardware costs and the potential for increased access have boosted interest in information and communication technologies (ICT) for teaching adaptive skills to individuals with ASD (Beaumont & Sofronoff, 2008; Durkin, 2010; Grynszpan, Weiss, Perez-Diaz, & Gal, 2014; Knight, McKissick, & Saunders, 2013; Parsons, 2016). People with ASD tend to express a strong affinity for technology (Bozgeyikli et al., 2018) and respond positively to the visual stimuli, visual cues, and instruction that digital interfaces can provide (Reed, Hyman, & Hirst, 2011). ICTs provide opportunities for learning in controlled contexts devoid of the nuanced social customs that individuals with ASD may have challenges navigating (Grynszpan et al., 2014). As such, VR is considered to hold particular promise for individuals with ASD (Bellani, Fornasari, Chittaro, & Brambilla, 2011). In the following sections, we review a variety of VR implementations relevant to our work.

Single-user, desktop-based VR applications

Desktop-based VR interventions for individuals with ASD have been designed around a variety of social and adaptive skills, including (1) riding a bus and engaging within a cafe (Mitchell, Parsons, & Leonard, 2007; Parsons, Mitchell, & Leonard, 2004), (2) safety skills (Self, Scudder, Weheba, & Crumrine, 2007), (3) street-crossing (Josman, Ben-Chaim, & Friedrich, 2011), (4) emotion recognition (Moore, McGrath, & Powell, 2005), (5) engaging in conversation at social gatherings (Ke & Im, 2013), (6) practicing public speaking (Jarrold et al., 2013), (7) interviewing for a job (Kandalaft, Didehbani, Krawczyk, Allen, & Chapman, 2013), (8) learning to think imaginatively (Herrera et al., 2008), (9) preparing for a court appearance (Standen & Brown, 2005), and (10) more (Wang & Anagnostou, 2014).

One implementation of a desktop-based VR environment used a grocery shopping context to teach adaptive behavior skills (Standen & Brown, 2005). Participants with severe intellectual disabilities (n=19) were given the task of locating items and taking them to the checkout at a grocery store. No significant differences were found between the experimental and control groups' ability to repeat and complete the task, although significant differences were found in each group's ability to find items and choose products. Findings suggest the virtual store was not very realistic; a broader range of stimuli and more realistic interactions were recommended to promote transfer. Another study investigated a VR environment designed to teach food preparation (Brooks, Rose, Attree, & Elliot-Square, 2002). The environment was modeled on an actual kitchen. After training, participants showed improvement on tasks for which they had no prior training. Using a street-crossing and road safety VR intervention, Brown and colleagues (2002) found improvements in terms of fewer inquiries and mistakes by participants. While the examples provided here illustrate the promise of desktop-based VR, they also showcase limitations related to skills transfer and the need to model interventions on real-world contexts.

Multi-user, desktop-based VR applications

Most VR interventions for individuals with ASD are single-user experiences (AUTHORS, 2018). Conversely, the iSocial project developed a multi-user 3D virtual learning environment for a 10-week social competence curriculum (AUTHORS, 2011). The intervention followed a curricular structure in which participants would first learn a target skill and then be provided opportunities to rehearse that skill in controlled contexts (AUTHORS, 2008). Another project, the Virtual Reality Social Cognition Training (Didehbani, Allen, Kandalaft, Krawczyk, & Chapman, 2016; Kandalaft et al., 2013), utilized social scenario simulations that allowed participants to practice meeting someone or to confront a bully. Two peers participated in the virtual training along with two trained clinicians. Findings suggest improved measures of social cognition related to theory of mind and emotion recognition. In a more recent study, Parsons (2015) investigated a game-based, collaborative virtual environment called Block Challenge that was designed to promote collaborative and communicative reciprocity between children with ASD (n=6) and their typically developing peers (TD; n=8). Dyads of ASD-ASD or TD-TD pairs collaborated on stacking colored blocks to match a given color pattern while problem-solving how to simultaneously match their partner's pattern. Findings suggest that dyads communicated similarly, although participants with ASD may have communicated less efficiently. This finding suggests that tools such as Block Challenge can potentially promote reciprocal social communication and perspective-taking. Generally speaking, very few multi-user VR interventions for individuals with ASD have been researched; further investigation is needed.

Headset-based VR applications

Very few studies have investigated the use of VR headsets (also known as head-mounted displays, or HMDs). Of those that do, nearly all are single-user. Strickland (Strickland, Marcus, Mesibov, & Hogan, 1996; Strickland, 1997) was one of the first to assess how young children with autism might accept HMDs. Findings indicated that study participants generally accepted the headsets and were able to complete the virtual tasks; however, the research was limited by a small sample size (n=2). HMD-based research in this area remained dormant until recently, when consumer-grade, affordable HMDs became commercially available (Newbutt et al., 2016). Newbutt and colleagues (2016) provide a brief report of perceptions of HMDs among people with autism. Generally speaking, users reported positive perceptions. Most participants consented to return for multiple virtual experiences and found HMDs to be comfortable and enjoyable. Importantly, the researchers concluded that the adoption of HMDs for interventions with this population could introduce ethical concerns. They urge designers to seriously consider the sensory processing disorders that individuals with ASD face when designing VR interventions.

A criticism of VR interventions for individuals with ASD in general, and HMD-based VR interventions specifically, has to do with the substantial investments of time, money, and specialized expertise required for development, as well as the challenges of short technology longevity and prohibitive hardware costs (Grynszpan et al., 2014). AUTHORS (2019) present SVVR as a less complex and less expensive alternative to digitally-modeled VR environments. SVVR uses 360-degree video, created using 360-degree video cameras, to represent the virtual environment through an HMD. These researchers' preliminary work suggests this medium can provide above-average usability and user-friendliness, high participant engagement, and a sense of enjoyment. However, they also note some evidence of adverse effects (i.e., nausea, dizziness) and concerns related to potentially diminished sensations of immersion in comparison to higher-fidelity VR systems. To our knowledge, no research studies exist beyond this work, although anecdotal case studies are emerging, such as a prototype SVVR application for the Google Cardboard designed to teach social stories to individuals with neurodevelopmental disorders (Gelsomini, Garzotto, Matarazzo, Messina, & Occhiuto, 2017), or a system that aims to teach adaptive skills related to grocery shopping (Wickham, 2016).

Project Description

Virtuoso serves an adult day program for individuals with significant communication and behavioral challenges associated with ASD. Over 20 adults participate in this program year-round. Participants follow a daily schedule with the assistance of a peer mentor, during which they take part in vocational internships and work on developing adaptive behavior skills. Adaptive behavior skills are considered a core challenge for individuals with ASD (Gilotty, Kenworthy, Sirian, Black, & Wagner, 2002) and include the practical, everyday skills needed to function and meet the demands of one's environment, such as getting dressed, self-care, and safety-related activities (Council, 2001). Developing these skills is critical for attaining more independent levels of functioning (Ditterline & Oakland, 2009).

Virtuoso provides the day program a constellation of immersive learning technology interventions that promote the skill of using public transportation. Public transportation was identified in consultation with the director of the adult day program (hereafter referred to as the subject matter expert, or SME) as a promising application of immersive training because (1) day program participants need to use public transportation to travel to vocational training sites, (2) if a participant gets lost, they are taught to use public transportation to return safely to the program offices, (3) the activity involves a synthesis of several subordinate tasks that can be used in many different situations (e.g., interpreting a map, interpreting a schedule), (4) real-world public transportation training exposes participants to a variety of risks, and (5) training can be experienced safely and repeatedly in controlled scenarios. Research suggests the ability to access and use public transportation promotes independence through greater access to employment, medical care, community, etc. (Felce, 1997; Mechling & O’Brien, 2010). However, transportation is one of the most frequently cited barriers for individuals with disabilities (Allen & Mor, 1997; Carmien et al., 2005). Research shows that transportation is among the greatest hurdles in obtaining and maintaining a vocation, and that many individuals with disabilities are unable to keep medical appointments due to a lack of access to transportation (Shier, Graham, & Jones, 2009; Veltman, Stewart, Tardif, & Branigan, 2001).

Virtuoso uses a stage-wise approach that progresses from low-tech to high-tech (and then to real-world application) and consists of: (1) skill introduction, (2) 360-degree video modeling of the skill, (3) VR rehearsal of the skill, and (4) real-world practice of the skill (see Figure 1). In the first stage, training is introduced using a low-tech social narrative that breaks down the overarching activity into a series of tasks accompanied by visual supports (e.g., photographs and icons). This low-tech social narrative is presented on a tablet as a paginated, multimedia presentation. In the second stage, an Android application is used to present 360-degree videos of the task in a SVVR environment. This software is called Virtuoso-SVVR and supports both Google Cardboard and Google Daydream head-mounted displays (HMDs). In the third stage, participants engage with an online guide in a multi-user, immersive VR environment to rehearse the skill. This environment, called Virtuoso-VR, was developed using the open-source High Fidelity virtual reality toolkit and supports both HTC Vive and Oculus Rift headsets (see Figure 2). In the fourth stage, participants practice the skills in the real world with a trained staff member. The design of all stages of training was informed by evidence-based practices related to task structure, instructional scaffolding, prompting, transfer, and accessibility (National Autism Center, 2015; Wong et al., 2015).


Figure 1. Intervention architecture.

Figure 2. A participant navigates his avatar to the shuttle stop in Virtuoso-VR.

Methods

The purpose of this user-centered, multi-phase usage study was to explore the following research questions across two stages of Virtuoso (Stage 2: 360-degree video modeling of the skill, and Stage 3: VR rehearsal of the skill) for participants with autism in an adult day program at a large Midwestern university:

RQ1: To what extent do project prototypes (Virtuoso-VR and Virtuoso-SVVR) meet the design goals of being acceptable, feasible, easy to use, and relevant to the unique needs of participants?

RQ2: What is the nature of participants’ user experience relative to Virtuoso-VR and Virtuoso-SVVR?

The focus was only on stages that used VR (Stage 2 and Stage 3), not on all four intervention stages. The usage study was conducted in Summer 2018 and consisted of two phases, beginning with expert testing (Phase 1: n=4) that incorporated semi-structured interviews and survey methods, and concluding with participant testing (Phase 2: n=5) that incorporated observational and survey methods.

Usage testing of Virtuoso took place across two research phases (Table 1). All research performed was approved by our university’s institutional review board. In Phase 1 (expert testing), expert reviewers (n=4) engaged in a structured usage test with the Virtuoso-SVVR application, followed by a semi-structured interview. Phase 1 focused only on research question 1. This phase took place in May of 2018 in the offices of the respective expert reviewers. Experts explored Virtuoso-SVVR, completed the System Usability Scale (SUS; Brooke, 1996), and then responded to questions from a semi-structured interview protocol. Expert responses were audio recorded for later transcription and analysis.

In Phase 2 (participant testing), participants with ASD (n=5) usage tested the Virtuoso-SVVR and the Virtuoso-VR software. Phase 2 focused on both research questions 1 and 2. After informed consent and/or assent was obtained, participants took part in two different sessions of approximately 30 minutes each. In the first session, participants used Virtuoso-SVVR. In the second session, they used Virtuoso-VR. After each session, participants completed the System Usability Scale (SUS) (Brooke, 1996) and one user-friendliness question (Bangor, Kortum, & Miller, 2009). Video, audio, and screen recordings were captured for later transcription and analysis. Detailed field notes were taken during all sessions.

Table 1

Study procedures across phases

Phase 1 (Expert Testing)

Participant: All experts
Virtuoso-SVVR Hardware: Google Cardboard-based Virtuoso Mobile App; Google Daydream-based Virtuoso Mobile App
Data Collected: System Usability Scale; Adjectival Ease of Use Scale; Structured Expert Interviews; Self-report Questionnaire

Phase 2 (Participant Testing)

Participant: Kevin
Virtuoso-SVVR Hardware: Screen-based Virtuoso Mobile App (no head-mounted display)
Virtuoso-VR Hardware: Desktop-based Virtuoso-VR Environment (no head-mounted display); Microsoft Xbox controller

Participant: Travis, Session 1*
Virtuoso-SVVR Hardware: Google Daydream-based Virtuoso Mobile App
Virtuoso-VR Hardware: Desktop-based Virtuoso-VR Environment; keyboard-and-mouse

Participant: Travis, Session 2*
Virtuoso-SVVR Hardware: Screen-based Virtuoso Mobile App (no head-mounted display)
Virtuoso-VR Hardware: HTC Vive-based Virtuoso-VR Environment; Vive hand controllers

Participant: Evan
Virtuoso-SVVR Hardware: Google Cardboard-based Virtuoso Mobile App
Virtuoso-VR Hardware: HTC Vive-based Virtuoso-VR Environment; Vive hand controllers

Participant: Andy
Virtuoso-SVVR Hardware: Google Daydream-based Virtuoso Mobile App
Virtuoso-VR Hardware: HTC Vive-based Virtuoso-VR Environment; Vive hand controllers

Participant: Jermaine
Virtuoso-SVVR Hardware: Google Daydream-based Virtuoso Mobile App

Data Collected (all Phase 2 participants): System Usability Scale; Adjectival Ease of Use Scale; Screen, Webcam, and Audio Recordings; Unstructured, post-usage testing interviews; Field Notes

* Travis participated in two separate sessions

Participants

For Phase 1 (expert testing), study participants were purposively-sampled. Identification of experts was performed in consultation with the SME, who suggested a need for both autism and usability experts (Table 2). Autism experts were identified based on expertise in the field of autism research and prior clinical interactions with the adults enrolled in the adult day program.

The usability expert was identified based on expertise in usability evaluation.

Table 2

Demographics and Description of Expert Review Participants

Expert Participant Name and Description | Age | Gender | Ethnicity

Daria: Director of the adult day program (SME); assistant professor of special education; doctoral-level board certified behavior analyst. | 41 | Female | Caucasian

Jennifer: Associate professor of special education; director of a sister program at the same university. Familiar with and had frequent interactions with the members of the adult day program. | 46 | Female | Caucasian

Barb: Staff member of the day program who worked closely with adult day program participants. | 42 | Female | African-American

Jacob: Assistant professor of educational technology; director of a usability lab with over 10 years of usability evaluation experience. | 38 | Male | Caucasian

For Phase 2 (participant testing), five adults with ASD were purposively sampled from the adult day program. These participants were identified by the SME based on (1) level of independence, (2) acuity scores, (3) scores on standardized assessments, and (4) a clinical diagnosis of ASD. Usage test participants had an average age of 26.2 years, with a range between 22 and 34 years. An overview of participant demographics is provided in Table 3.

Table 3

Participant demographics and measures of Peabody Picture Vocabulary Test (PPVT), Social

Responsiveness Scale (SRS), and Behavior Rating Inventory of Executive Function (BRIEF)

Peabody Picture Vocabulary Test (PPVT)

Participant Name and Description | Raw Score | Standard Score | Age Equivalent (yr:mo)

Travis: 28-year-old male diagnosed with autism spectrum disorder, Smith-Lemli-Opitz syndrome, attention deficit hyperactivity disorder (ADHD), and auditory processing disorder | Data unavailable | Data unavailable | 18:11

Andy: 23-year-old male diagnosed with autism spectrum disorder, anxiety disorder, and attention deficit disorder | 161 | 72 | 10:11

Evan: 34-year-old male diagnosed with autism spectrum disorder and an intellectual disability | 162 | 72 | 11:1

Kevin: 24-year-old male diagnosed with autism spectrum disorder and anxiety disorder | 165 | 73 | 11:6

Jermaine: 24-year-old male diagnosed with autism spectrum disorder, intermittent explosive behavior, moderate intellectual disability, and oppositional defiant disorder | Data unavailable | Data unavailable | 4:3

Social Responsiveness Scale (SRS)

Participant Name | SRS T-Score | T-Score Range

Travis | 69 | Moderate Range

Andy | 71 | Moderate Range

Evan | 66 | Moderate Range

Kevin | 65 | Mild Range

Jermaine | 62 | Mild Range

Behavior Rating Inventory of Executive Function (BRIEF)

Participant Name | Behavior Regulation (T-Score, Percentile) | Metacognition (T-Score, Percentile) | Global Executive (T-Score, Percentile)

Travis | 88, 98 | 90, 99 | 94, 99

Andy | 93, 99 | 72, 97 | 84, 99

Evan | 48, 70 | 52, 68 | 50, 66

Kevin | 68, 90 | 63, 82 | 67, 91

Jermaine | 90, 99 | 92, 99 | 96, 99


Data Collection

Qualitative and quantitative data were collected using a variety of measures and methods.

These are described in the following sections.

System Usability Scale.

The widely used and validated System Usability Scale (SUS) was administered to both expert and participant testers. Given participants’ differing literacy levels, SUS items were read aloud to participants by a master’s-level staff member whom the participants knew. Items were read aloud and explained in concrete terms with examples. Each response option was read aloud, after which participants were asked to choose a response. Participants were prompted continually to confirm their understanding.

Adjectival Ease of Use scale.

The adjectival ease-of-use scale (Bangor et al., 2009), a single-item measure of user-friendliness, was administered to participant testers. This scale rates ease of use using adjectives. The item states, “Overall, I would rate the user-friendliness of this product as: Worst Imaginable, Awful, Poor, Ok, Good, Excellent, Best Imaginable.” This item was administered to participants on the same sheet as the SUS.

Structured expert interviews.

A structured interview was conducted with each expert tester by a trained graduate student. The interview protocol consisted of two prompts focused on user experience and design principles, respectively. The first question sought experts’ opinions on the design of the system, including hardware and software. The second question sought opinions on evidence-based practices noted in the system’s design. Interviews took approximately five minutes and were audio recorded for later transcription and analysis.


Screen, webcam, and audio recordings.

Screen, webcam, and audio recordings were captured for participant testers. Videos were captured for each session. In total, 12 interaction videos were captured for Virtuoso-VR (six from the perspective of the online guide, six from participants) and six interaction videos were captured for Virtuoso-SVVR. All videos were transcribed using a professional transcription service.

Unstructured, post-usage testing interviews.

Unstructured interviews were conducted by the first author following each participant usage testing session. Interview duration was between five and 15 minutes. Interviews began with the interviewer asking participants about their experiences, exploring what participants liked best, what they liked least, and what they might change. Depending on what was observed during each usage test, the interviewer asked follow-up questions. All interviews were recorded and transcribed for later analysis.

Field notes.

During Phase 2 (participant testing), field notes were taken by a trained graduate student, who made observations in handwritten notes related to participants’ preparation for usage testing, actual usage of Virtuoso-SVVR and Virtuoso-VR, and participants’ post-usage testing surveys and interviews. Specific focus areas included the nature of user interaction, participant responses, commentary during participant sessions, and any particularly salient moments. These handwritten notes were scanned, typed into a word processor, and stored for later analysis.

Analysis

Analysis adopted a multi-methods approach. Quantitative data were analyzed using methods appropriate to usability evaluation. Qualitative data were analyzed using inductive and deductive methods. A constant comparative approach was applied across all phases of analysis, with specific attention given to coding reliability.

Quantitative Analysis

Data from the SUS and the User-friendliness Adjectival Rating Scale were analyzed using quantitative methods. Methods outlined in Brooke (1996) were used to calculate the SUS score. Scores above 68 are considered to represent above-average usability. These data were aggregated into tables for analysis (see Table 9).
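As an illustration only (this code was not part of the study materials), the Brooke (1996) scoring procedure can be sketched in a few lines. Item responses range from 1 to 5; odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the summed contributions are multiplied by 2.5 to yield a 0-100 score:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 item responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded and contribute
    (response - 1); even-numbered items are negatively worded and contribute
    (5 - response). The summed contributions are scaled by 2.5 to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# A score above 68 is conventionally interpreted as above-average usability.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible -> 100.0
```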

User-friendliness was measured using the single-item User-friendliness Adjectival Rating Scale (Bangor et al., 2009). The seven possible responses on this scale are: Worst Imaginable, Awful, Poor, Ok, Good, Excellent, Best Imaginable. These categorical data were converted to ordinal data, with 1 representing “Worst Imaginable” and 7 representing “Best Imaginable.” These data were aggregated into spreadsheet tables for analysis.
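The categorical-to-ordinal conversion described above amounts to a simple lookup. A minimal sketch follows; the function and dictionary names are ours, and only the seven labels come from the Bangor et al. (2009) instrument:

```python
# Illustrative mapping of adjectival labels to ordinal values (1-7).
ADJECTIVE_TO_ORDINAL = {
    "Worst Imaginable": 1,
    "Awful": 2,
    "Poor": 3,
    "Ok": 4,
    "Good": 5,
    "Excellent": 6,
    "Best Imaginable": 7,
}

def to_ordinal(ratings):
    """Convert a list of adjectival ratings to their 1-7 ordinal values."""
    return [ADJECTIVE_TO_ORDINAL[r] for r in ratings]

print(to_ordinal(["Good", "Excellent"]))  # [5, 6]
```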

Qualitative Analysis

Two independent, qualitative analyses were performed – one deductive, and one inductive. The deductive analysis focused on exploring acceptability, feasibility, ease-of-use, and relevance of Virtuoso prototypes. The inductive analysis focused on the nature of participants’ experiences while using Virtuoso prototypes. Procedures are described in the following sections.

Deductive analysis.

Deductive analysis was performed by applying an existing coding methodology to further our understanding of the topic (Creswell & Poth, 2017). The coding scheme used was developed by Kushniruk and Borycki (2015) to analyze usability and usefulness within the context of medical interventions. Whereas many usability coding methods focus on general heuristics, this coding scheme was developed specifically for video analysis within intervention contexts. Minor modifications to the coding scheme were made to better align it to our specific context, as the coding scheme was originally developed to evaluate the usability and usefulness of 2D interfaces. In addition, we augmented the coding scheme with four supplemental codes (Table 4) related to technology-induced errors due to instability in our beta-level software.

Table 4

Supplemental codes appended to Kushniruk & Borycki’s (2015) video analysis coding scheme

Code Operationalization

Hardware: Coded when a review of the video indicates the user has problems due to a hardware issue.

Audio: Coded when a review of the video data indicates there are problems with audio.

Bug: Coded when a review of the video data indicates there was a glitch or bug in the software being used, resulting in interruption of procedures.

Crash: Coded when a review of the video data indicates there are problems with the system resulting in crashes or interruptions of the procedures.

Coding procedures.

Coding was performed by two trained graduate students serving as independent observers (a primary observer and an agreement observer), with codes intermittently reviewed by the lead researcher. After multiple training and calibration sessions, the primary observer coded 100% of the videos, and the agreement observer coded 50% of the videos. Coder drift was controlled for by comparing and discussing any discrepancies in coded videos.

Agreement and reliability analyses.

Two separate comparisons between raters were performed based on independent application of codes and estimates of duration: interobserver agreement (IOA) and Cohen’s Kappa. Results from the IOA and Kappa analyses (Table 5) are indicative of high agreement between coders.
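For readers unfamiliar with these measures, both can be sketched as follows. This is a generic, illustrative implementation for item-by-item nominal codes, not the exact windowed procedure used for time-stamped duration coding in this study; the example codes are hypothetical:

```python
from collections import Counter

def simple_agreement(rater_a, rater_b):
    """Proportion of items on which two raters assigned the same code."""
    assert len(rater_a) == len(rater_b)
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance, computed from each
    rater's marginal code frequencies.
    """
    n = len(rater_a)
    p_o = simple_agreement(rater_a, rater_b)
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two raters to four video segments:
a = ["nav", "nav", "crash", "audio"]
b = ["nav", "nav", "crash", "nav"]
print(simple_agreement(a, b))  # 0.75
```

Kappa discounts the agreement two raters would reach by chance alone, which is why it can be lower than simple agreement when one code dominates.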


Table 5

Interobserver agreement and Kappa for coding and duration in Virtuoso-SVVR and Virtuoso-VR

Intervention | Coding Type | Simple Agreement Percentage | Kappa

Virtuoso-SVVR | Time-stamped Duration Coding | 0.875 | 1

Virtuoso-SVVR | Coding Scheme | 1.00 | 1

Virtuoso-VR | Time-stamped Duration Coding | 0.906 | 0.874

Virtuoso-VR | Coding Scheme | 0.955 | 1

Inductive analysis.

Inductive analysis was conducted on screen, webcam, and audio recordings, as well as interview transcripts and field notes (Benaquisto, 2008), to look for themes related to the nature of user experience. Axial coding procedures were used to create a set of preliminary codes and operational definitions. Emergent categories and subcategories were continually refined across three major iterations through a constant comparative method (Denzin & Lincoln, 2011). Two overarching themes emerged as particularly relevant to the nature of user experience in our context: accessibility and user affect. Preliminary evidence of transfer was also identified and coded. A listing of the categories and codes that emerged from this process is found in Table 6.

Table 6

Qualitative codes and operationalizations that emerged from inductive analysis

Code: Operationalization

Affect

Joy, fun, or excitement: Coded when a participant expresses a positive state of affect, including statements of joy (e.g., saying they were having fun) or excitement with their experience in the VR/SVVR intervention.

Willingness to return: Coded when a participant expresses a desire to return to use the VR/SVVR intervention again during their session.

Accessibility

Physical Accessibility: Coded when the VR/SVVR intervention’s content and/or possibilities for action have implications related to physical accessibility (e.g., psychomotor impairments precluding use of hardware).

Cognitive Accessibility: Coded when the VR/SVVR intervention’s content and/or possibilities for action have implications related to the cognitive accessibility of the system (e.g., textual cues impacting pre-literate individuals’ access).

Cybersickness: Coded when a participant verbally states or physically exhibits symptoms of cybersickness (e.g., dizziness, nausea, eye strain, headaches, etc.).

Transfer

Usefulness/Relevance: Coded when a participant verbally comments on the usefulness or applicability of content and/or activities taking place in the VR/SVVR intervention.

Recognizability of Assets/Realism: Coded when a participant verbally confirms recognition of assets in the VR/SVVR intervention with real-world counterparts, or when a participant verbally comments on the overall realism of assets or actions in the VR/SVVR intervention.

Real-world Connections: Coded when a participant verbally describes connections between digitally-mediated tasks/activities in the VR/SVVR intervention and analogous tasks/activities in the real world (e.g., indicating where and how they would go in the real world to complete an activity/action they are performing or have performed in the VR/SVVR intervention).


Findings

Expert Testers’ Perceptions of Usability

Experts evaluated the Virtuoso-SVVR prototype using the Google Cardboard and Google Daydream View. Average SUS scores were 79.38 for the Google Cardboard version of Virtuoso-SVVR and 84.38 for the Google Daydream version. The averaged SUS score across both versions was 81.88, nearly 14 points above the average SUS rating of 68, suggesting good usability. Among the SUS questions, the lowest mean and median scores applied to the Daydream’s ability to be learned quickly. Conversely, findings suggest that the Daydream was not cumbersome.

Experts identified specific issues as impacting the usability of the Virtuoso-SVVR app. These issues were categorized as relating to hardware and software, videos, and task design (Table 7), and suggest important differences between the Cardboard and Daydream headsets. For the Cardboard, expert testers preferred its simple button located on the headset. The Cardboard required less initial assistance to use and was less likely to present issues requiring intervention, such as head strap discomfort or pressing the wrong button. Experts were slightly better able to get started with little instruction using the Cardboard. For the Daydream, some experts had issues navigating with its multi-function remote pointer. Across both devices, expert testers indicated that they experienced some symptoms of cybersickness and anticipated the need for a high degree of support due to insufficient explicit directions. Ultimately, expert responses suggested that substantial support would be needed for participants to use the SVVR system.

Table 7

Usability issues identified during expert testing

Category: Hardware/Software
● Some cybersickness, screen/video blurriness, UI complexity/inconsistency
● With Cardboard: potential fatigue from holding the headset
● With Daydream: increased symptoms of cybersickness, frustration with the head strap, difficulty using the remote

Category: Videos
● Insufficient directions on what to do next when videos fade in/out
● Camera shake, head-turning challenges

Category: Task Design
● Need for more explicit supports, such as arrows or verbal directions
● Use natural voices for voice-overs

Participant Testers’ Perceptions of Usability

Participant testers evaluated both the Virtuoso-SVVR and Virtuoso-VR experiences, with both being rated as above average on the SUS in terms of ease of use (Table 8). Mean computed SUS scores across all participants were 79.58 (SD=0.99) for Virtuoso-SVVR and 73.33 (SD=1.08) for Virtuoso-VR, both above the average rating of 68. In addition, participants completed a one-item adjectival scale (Bangor et al., 2009) to rate user-friendliness. On average, participants rated Virtuoso-SVVR as “good” and Virtuoso-VR as “excellent.” These ratings are in contrast to participants’ SUS ratings, which suggested Virtuoso-SVVR was more favorably received.

Table 8

Mean System Usability Scale (SUS) scores across participant testers

Participant | Virtuoso-SVVR | Virtuoso-VR

Kevin | 90 | 100

Travis (Session 1) | 80 | 80

Travis (Session 2) | 87.5 | 67.5

Jermaine | 55 | 50

Evan | 80 | 62.5

Andy | 85 | 80

Mean Computed SUS Score Across Participants | 79.58 | 73.33

Qualitative evidence from deductive analysis suggests both prototypes were easy to use. Results from inductive analysis suggest that participants encountered fewer usability problems with Virtuoso-SVVR, perceived the usefulness of its content to be greater, and encountered fewer technology-induced errors. Considering the total number of codes assigned across the technologies and sessions, more assigned codes suggest more usability challenges: only 16 codes were applied to the Virtuoso-SVVR application, while the Virtuoso-VR platform had 69 codes.

Assigned codes also varied across categories. For instance, the majority of codes assigned to the Virtuoso-VR software were related to users’ problems with (1) understanding instructions, (2) system crashes, and (3) graphical issues. Users of the Virtuoso-SVVR application tended to have some of these problems, but less frequently. The most frequently applied codes were related to (1) navigation issues, (2) determining the meaning of icons and/or terminology of the system, and (3) understanding instructions.

The inductive analysis of video data uncovered participants stating they were having difficulties. As Evan was using Virtuoso-VR within the HTC Vive, his avatar got stuck and he commented, “it’s kind of hard to use this.” He later stated, “it got difficult though” when asked what he thought of the overall experience. Despite these remarks, Evan still rated the system as “Best Imaginable” for user-friendliness.

Expert Testers’ Perceptions of Feasibility and Relevance

Feasibility and relevance were investigated in expert testing not as binary constructs but as interrelated and interdependent. In unstructured interviews, experts were asked to provide general commentary on the design of the hardware and software. Generally speaking, experts found the design of Virtuoso-SVVR to be inclusive of participants’ unique needs. The task and task sequences were found to be relevant to learning to use the shuttle. Jacob commented that Virtuoso-SVVR was “overall, a good tool.” Jennifer highlighted the relevance of the approach when she noted, “This has interesting potential for immersive modeling for individuals with ASD.” Barb suggested, “It would be worth asking several of our participants [adults in the day program] to try this,” speaking to the potential feasibility of the approach. Experts agreed the design promoted accessibility and reflected universal design principles. For example, Jennifer commented specifically on the use of visual cues that directed the attention of participants, although she felt that more cues were needed. Jennifer, Barb, and Daria all noted the use of visual symbols with which learners would already be familiar.

Many comments pertained specifically to the videos themselves, for example, how realistic the videos looked, video length, and video stability. Experts found the fidelity of the videos to be adequate. For example, Daria indicated the environment was detailed and realistic, although the resolution was sometimes “fuzzy.” This led to minor visual distortion and difficulty reading text. Experts also noted that the viewport shaking or rotating during movement was potentially overwhelming. Jacob commented, “The shaky cam (sic) was disorienting.” Daria indicated that the short length of videos was a strength.

Experts provided a number of suggestions for improvement and recommendations for implementation. In line with our design strategy, Jennifer noted, “This system does not, through a single experience, prepare learners for the real-world task,” and recommended repeated use. Importantly, Daria expressed concerns related to the high degree of variability between members of the learner population due to varied sensory processing sensitivities. She suggested this could be difficult to control for in instances of cybersickness.

Nature of Participant Testers’ User Experience

Participant testers’ user experience was investigated using inductive methods and stratified using the coding categories outlined in Table 6. In usage testing, exposure to the VR technology was short. Mean computed time to completion across all participants was 0:08:12 (SD=0:02:06) for Virtuoso-SVVR and 0:09:18 (SD=0:02:17) for Virtuoso-VR. Participant experiences across both applications fell within our parameters of limiting HMD exposure to less than 10 minutes per session so as to reduce adverse effects. However, Virtuoso-VR had somewhat unpredictable stability due to the immaturity of the underlying beta-level software. Frequent and sometimes unpredictable system instability and crashes were observed. This has implications for feasibility. Error rates were, on average, 0.32 errors per minute for participants using Virtuoso-SVVR, and 1.24 errors per minute for participants using Virtuoso-VR. Across all sessions, no participants asked if they could leave the usage test. No participants expressed dissatisfaction with either of the intervention platforms. All participants expressed a desire to return and use Virtuoso again, suggesting high acceptability.

Affect. Generally speaking, usage test participants’ experiences were enjoyable, although errors and bugs were observed. When Travis came back to complete his second time through the Virtuoso-VR activities, the software presented several process-hindering system crashes that resulted in Travis having time to explore while the online guide restarted the application. However, Travis stated that he was still having fun:

RESEARCHER: So, you also said you had fun. What did you find fun?

Travis: Just trying to mess around with the controls.


It was observed that Travis’ avatar was walking around, waving his hands, and exploring during these crashes. Even though his progress was stalled by crashes, he still found being in the environment enjoyable, suggesting that the technology is perhaps intrinsically reinforcing.

Some participants indicated they were excited to tell their friends about Virtuoso. Evan stated, “I would love to tell my friends about that.” Andy also said he would tell his friends about Virtuoso. Several participants also expressed a desire to return, and Travis did return for a second session. Participants seem to have found their experiences with the system interesting or appealing. Many described Virtuoso as “cool.” Andy asked when we would be releasing our project to the public: “How do the others use the app or get it?...Is this a mobile app that you can get for the iPad?” After we said it was still in development, he followed up by asking when it would be available: “How long will you think it’ll be ready?”

Accessibility. Accessibility was considered from the perspectives of physical accessibility, cognitive accessibility, and cybersickness, which are discussed in the following sections.

Physical accessibility. Physical accessibility issues were observed primarily as participants sought to gain fluency using the controllers for Virtuoso, including the Google Daydream remote and the HTC Vive controllers. While most users were able to gain fluency with the controls quickly, some encountered challenges. For instance, Kevin has challenges with fine-motor skills. He struggled to operate the Vive’s default controllers, which are flat and require some dexterity. Virtuoso supports alternative input devices for such situations; thus, the Vive controllers were replaced with a Microsoft Xbox 360 controller, which resulted in Kevin being able to navigate without further issues.


Another example of a physical accessibility issue was when Evan struggled to navigate his avatar with the HTC Vive default controllers. He repeatedly asked for assistance:

Evan: How do you do that?

RESEARCHER: Alright. You take your thumb… and remember, you push forward. Push down with that thumb.

Evan: Push down with this thumb?

RESEARCHER: Push down… yeah!!! You got it.

Evan: [still struggling with controls] That’s okay. It’s a little kind of… it’s kind of hard.

In contrast to Kevin, who overcame physical accessibility challenges with a different controller, Evan was not able to gain a high degree of fluency. It appears that Evan struggled with mapping controller inputs to avatar control. It is unclear whether Evan forgot the controls, whether he needed more practice, or whether he needed to use an alternative controller.

Cognitive accessibility. In designing Virtuoso, we set out to instantiate design standards that could address challenges related to using technologies for people with cognitive disabilities (Steel & Janeslätt, 2017). In doing so we paid particular attention to supporting executive functioning, language, literacy, and reasoning. Universal Design for Learning principles were applied widely to approach such concerns. Participants benefited from these features, with the exception of Jermaine when using Virtuoso-SVVR. Jermaine is pre-literate and had trouble understanding terms and icons used to select the 360-degree videos, inadvertently skipping one of the tasks when he was unable to correctly identify the button he was asked to select. In this instance Jermaine was asked to watch the third video of the Virtuoso-SVVR application. Although the buttons are labeled both numerically and with graphical representations of the activity, he was unable to make the correct selection and watched the fourth video instead. In the following Virtuoso-VR tests, the online guide read all text and instructions to the participant to help address these accessibility issues.

Cybersickness. Following each usage testing session, we asked our participants if they were feeling any kind of physical discomfort that might suggest cybersickness. Most participants did not report headaches, eyestrain, dizziness, or nausea. Three participants stated they were feeling fine and free of symptoms. Andy and Travis, however, reported feelings of discomfort.

Andy related the following:

RESEARCHER: Do your eyes feel weird?

Andy: Little bit.

RESEARCHER: Do you feel at all dizzy?

Andy: Maybe a tiny bit. Not too bad.

RESEARCHER: Yeah? And does your stomach feel okay? Any nausea?

Andy: It feels a little weird but it’s mostly okay.

RESEARCHER: Yeah? So, you think that might be because you’re hungry because it’s almost lunchtime or…?

Andy: Yeah. It could be because it’s almost lunchtime.

While it is unclear whether Andy was exhibiting symptoms of cybersickness or simply feeling hungry, evidence from Travis suggests a clear connection between his symptoms and cybersickness. After using the Google Daydream, Travis leaned back and seemed to be feeling uncomfortable.

RESEARCHER: So, I see you’re kind of leaning back, why are you leaning back?

Travis: I’m disorientated.

RESEARCHER: You’re disorientated. Can you…

Travis [distressed]: Oh.


RESEARCHER: What do you mean by that when you say that you’re ‘disorientated?’ Can you explain?

Travis: Like that… [inaudible]. Okay. [exasperated tone] Oh boy!

RESEARCHER: So, what do you mean disoriented? Can you tell me how you physically feel?

Travis: Like [tone rising] whooooo boy!

RESEARCHER: Like dizzy or a headache?

Travis: Just it’s interesting to be outside… Oh no, it’s somewhat dizzy and somewhat out of the roof.

RESEARCHER: So, when you took off the headset…

Travis: Yeah. It felt like… give me a minute.

RESEARCHER: Okay. Do you need a drink or water or something?

Travis: I’m fine.

RESEARCHER: Okay...

Travis: So, I don’t think virtual reality is for me.

Despite the claim that “I don’t think virtual reality is for me,” Travis insisted on returning to complete the second part of that day’s session and later asked to return the following day for a second usage test. In subsequent testing sessions, Travis did not demonstrate or communicate any further cybersickness.

Transfer. Virtuoso was designed using Stokes and Baer’s (1977) heuristics for generalization. Specifically, Virtuoso includes elements of the natural environment with the goal of exploiting functional contingencies that exist in the real world, which are thought to promote training and reinforcement of desired skills (Stokes & Osnes, 2016). In a conceptual review of virtual reality interventions for people with autism (Parsons, 2016), high visual fidelity and realistic representation of assets were hypothesized to be conducive to transfer for people with autism. Therefore, one of our design principles was centered around creating an environment that participants could relate to, would find realistic, and would convey a sense of presence.

Analysis suggests users found our environment to be useful and realistic, and that they were making some connections between Virtuoso and the real world. Most participants commented on the recognizability of the assets and were able to associate locations within the virtual environment with corresponding real-world locations. All participants recognized office spaces and, when prompted, were able to walk to their personal workspaces and those of their friends. Using Virtuoso-SVVR, Travis was immediately able to recognize his location in the office space and reacted positively: “Oh. Wait. I’m a little— that’s the plant thing… … before you come in. And that’s the work… how did you formulate this? This is awesome.” Evan was also able to recognize many of the assets: “I know where those seats are... It’s outside by Dyer Hall. That’s right. Where the new café is. That’s right. And it’s University Square straight ahead.”

Not only did participants find assets to be recognizable and realistic, they also gave indications of a virtual sense of presence, that is, that they felt they were “really there.” Andy had the following exchange:

RESEARCHER: Do you feel like you were there with [the online guide]?

Andy: I do.

RESEARCHER: You do? Do you feel like you were really there?

Andy: I do.

RESEARCHER: You do?


Andy: I really do when I’m going out the door.

RESEARCHER: Did it feel like you were in the office?

Andy: It did.

RESEARCHER: Did it feel like you were outside?

Andy: It did.

RESEARCHER: Did it feel like you were going on a bus?

Andy: Yeah.

Realism of assets and environments was noted by nearly all usage test participants. They were readily able to recognize where they were both inside the office space as well as outside on the virtual university campus. After usage testing concluded, some participants were also able to identify their current location relative to the environments and tasks portrayed in the Virtuoso environment, suggesting some degree of transfer. For example, Evan was asked, “Do you think you can find that shuttle stop outside?” He responded positively and was able to look out the window and point to the shuttle stop that he had visited in the virtual environment. He then indicated that the environment was “pretty real,” and that he could tell it was the university campus. In another session, Andy indicated he might be able to identify the location of the bus stop:

RESEARCHER: Do you feel like it was realistic?

Andy: I do.

RESEARCHER: Do you feel like if I asked you to point out where that bus stop was, do you think you could point where it was?

Andy: Probably.


Discussion and Implications

In this paper, a prototype VR adaptive skills intervention for adults with ASD is presented. The intervention aims to help participants learn to use a university shuttle in a safe and appropriate manner. In the current research, participants watched 360-degree videos that modeled the task. This was followed by a rehearsal of the task in a multi-user VR environment. The overarching goal was to promote transfer of skills from the digital contexts to the real world.

We sought to answer two research questions. Firstly, to what extent do project prototypes (Virtuoso-VR and Virtuoso-SVVR) meet the design goals of being acceptable, feasible, easy to use, and relevant to the unique needs of participants? Secondly, what is the nature of participants’ user experience relative to Virtuoso-VR and Virtuoso-SVVR? Although Virtuoso was found to be acceptable, easy to use, and relevant to the needs of participants, the feasibility of the current approach remains in question. This is partially due to the nascent state of the technologies used and associated instability.

Overall, evaluation results of the Virtuoso-SVVR and Virtuoso-VR prototypes are positive. Expert review findings suggest that the intervention is feasible and relevant. Experts’ perceptions of usability suggested that Virtuoso-SVVR was usable, but they cautioned that substantial training would be needed and that the SVVR intervention alone would not be sufficient for skill acquisition. However, participant perceptions of the usability of Virtuoso-SVVR were high, and participants required very little training before they were able to use the intervention.

In addition, all participants were able to complete the Virtuoso-SVVR training, although some support was needed initially to show how the software worked and to remind participants about the sequence in which to watch the videos. While experts indicated a preference for the Cardboard HMD over the Daydream, findings suggest that participants were able to operate both devices with relative ease and few errors.

Findings suggest the prototypes were easy to use, with mean SUS scores consistently above the standard threshold for a system to be considered usable. All participants were able to complete all phases of usage testing, and some came back for multiple trials.
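For readers unfamiliar with how System Usability Scale scores are computed, the standard scoring procedure from Brooke (1996) can be sketched in a few lines. The function below is illustrative only, not the analysis code used in this study; the commonly cited benchmark of roughly 68–70 for a "usable" system follows the adjective-rating work of Bangor, Kortum, and Miller (2009).

```python
def sus_score(responses):
    """Compute a System Usability Scale score (Brooke, 1996).

    responses: ten integer ratings, 1 (strongly disagree) to
    5 (strongly agree), in questionnaire order.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded:
        # contribution is the rating minus 1. Even-numbered items are
        # negatively worded: contribution is 5 minus the rating.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum onto 0-100


# Best-case responses: full agreement with positive items,
# full disagreement with negative ones.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A neutral response to every item (all 3s) yields a score of 50.0, which is below the usability threshold referenced above.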

Qualitative inquiry into the nature of participants’ learning experiences indicates a largely positive experience characterized by positive affect. The intervention attended to both physical and cognitive accessibility, with notable inclusion of UDL principles throughout. Some evidence of transfer was found, but it is anecdotal and inconclusive. Participants did note the connection between real-world objects and objects they experienced in the virtual world, and demonstrated a sense of connection between their location in the virtual world and in the real world. Of particular concern were issues related to cybersickness and physical accessibility.

A number of limitations were noted in the execution of this research. In line with nearly all research that has been performed in this area, our sample size was limited and the research did not incorporate a comparison group. Future research should consider incorporating comparison groups, such as a neurotypical peer comparison group. However, given that individuals with ASD represent a low-incidence disability group, challenges with small sample sizes are likely to remain. Importantly, participants spent a limited amount of time within the treatment conditions.

Questions remain as to whether there is a novelty effect associated with the technology. It is unclear whether further exposure and multiple applications of Virtuoso might change this.

Moreover, we used multiple hardware configurations and chose them in such a way as to reduce potential adverse effects. This, however, could have impacted participants’ acceptance of the technology. Future research should consider how best to implement HMDs with this population, as there are currently no standards or guidelines. Furthermore, while user experience was generally positive, analysis uncovered accessibility challenges which could be exacerbated with prolonged exposure to the system. While most participants were able to quickly learn to use the system, notable concerns arose around physical accessibility, cognitive accessibility, and cybersickness.

Future work should specifically address limitations uncovered during evaluation. Firstly, issues of physical accessibility could be addressed by exploring user experiences with alternative HMD systems. Participants used the HTC Vive, which utilizes a flat, touch-sensitive interface for controlling avatars. The Oculus Rift, however, uses analog joysticks, which could be easier for participants to control (as evidenced by Kevin’s ability to use the Xbox 360 controller). For Virtuoso-SVVR, usability could be improved by removing any use of a wand-type controller and shifting to using only the Google Cardboard. Secondly, issues related to system instability could be alleviated by shifting development to a more stable and robust development platform. Given that the Unity platform (https://unity.com) now provides support for all commercially available HMDs and has garnered a reputation for stability, extensibility, and community support, we have ported our existing infrastructure to that platform, which will be the focus of future research.

Thirdly, in order to attend to expert feedback that participants need more exposure, as well as to better promote transfer, more opportunities for rehearsal in the Virtuoso-VR environment should be developed. To this end, we are exploring the use of multiple “levels” in the VR intervention, each with increasing challenge in terms of task complexity, environmental detail, social encounters, etc. For example, introducing multiple shuttles, traffic, and crowds as a participant progresses through levels could provide more opportunity to address heuristics for promoting transfer, particularly as relates to training diversely and incorporating functional mediators.


Fourth and finally, we must address the issue of cybersickness, and we call on other researchers to do the same. Few studies have been performed with individuals with ASD utilizing HMD, and only one of these studies has specifically investigated cybersickness. Cybersickness symptoms manifest more profoundly in HMD-based VR, yet little is known about how individuals with ASD experience cybersickness, what factors impact their experiences, and how those sensations could potentially be moderated. Given that individuals with ASD have profound differences in sensory perception, continued research on the use of HMD-based VR with this population has ethical ramifications which have not yet been explored. Future research must carefully consider the trade-offs between VR-based training and cybersickness for this vulnerable population.
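Although the present study probed symptoms through interviews, cybersickness research commonly quantifies symptoms with the Simulator Sickness Questionnaire (SSQ; Kennedy, Lane, Berbaum, & Lilienthal, 1993). As one sketch of how future work might score such data: the subscale weights below (9.54, 7.58, 13.92, and 3.74 for the total) are the published SSQ constants, but the symptom list and scale assignments here are abbreviated for illustration; this is not the instrument or code used in the present research.

```python
# Each symptom loads on one or more subscales:
# N = nausea, O = oculomotor, D = disorientation.
# Abbreviated, illustrative subset of the 16 SSQ items.
ITEM_SCALES = {
    "general_discomfort": ("N", "O"),
    "fatigue": ("O",),
    "headache": ("O",),
    "eyestrain": ("O",),
    "nausea": ("N", "D"),
    "dizziness_eyes_open": ("D",),
    "stomach_awareness": ("N",),
}

WEIGHTS = {"N": 9.54, "O": 7.58, "D": 13.92}


def ssq_scores(ratings):
    """ratings: dict mapping item name -> severity (0=none .. 3=severe)."""
    raw = {"N": 0, "O": 0, "D": 0}
    for item, severity in ratings.items():
        for scale in ITEM_SCALES[item]:
            raw[scale] += severity
    scores = {scale: raw[scale] * WEIGHTS[scale] for scale in raw}
    # The total severity score applies its own weight to the summed
    # raw subscale scores.
    scores["total"] = sum(raw.values()) * 3.74
    return scores


scores = ssq_scores({"nausea": 2, "eyestrain": 1, "dizziness_eyes_open": 1})
```

Pairing symptom ratings like these with the qualitative probes used here would let future studies report both the character and the magnitude of cybersickness in this population.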


References

AUTHORS (2018).

Allen, S. M., & Mor, V. (1997). The prevalence and consequences of unmet need: Contrasts between older and younger adults with disability. Medical Care, 35(11), 1132–1148.

Aresti-Bartolome, N., & Garcia-Zapirain, B. (2014). Technologies as support tools for persons with autistic spectrum disorder: A systematic review. International Journal of Environmental Research and Public Health, 11(8), 7767–7802.

Baio, J., Wiggins, L., Christensen, D. L., Maenner, M. J., Daniels, J., Warren, Z., & Dowling, N. F. (2018). Prevalence of Autism Spectrum Disorder Among Children Aged 8 Years - Autism and Developmental Disabilities Monitoring Network, 67(6), 11.

Bangor, A., Kortum, P., & Miller, J. (2009). Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of Usability Studies, 4(3), 114–123.

Beaumont, R., & Sofronoff, K. (2008). A multi-component social skills intervention for children with Asperger syndrome: The Junior Detective Training Program. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 49(7), 743–753.

Bellani, M., Fornasari, L., Chittaro, L., & Brambilla, P. (2011). Virtual reality in autism: State of the art. Epidemiology and Psychiatric Sciences, 20, 235–238.

Bozgeyikli, L., Raij, A., Katkoori, S., & Alqasemi, R. (2018). A survey on virtual reality for individuals with autism spectrum disorder: Design considerations. IEEE Transactions on Learning Technologies, 11(2), 133–151.

Bozgeyikli, L., Bozgeyikli, E., Katkoori, S., Raij, A., & Alqasemi, R. (2018). Effects of virtual reality properties on user experience of individuals with autism. ACM Transactions on Accessible Computing, 11(4), 22:1–22:27.


Brooke, J. (1996). SUS: A quick and dirty usability scale. In Usability Evaluation in Industry. Retrieved from https://books.google.com/books

Brooks, B. M., Rose, F. D., Attree, E. A., & Elliot-Square, A. (2002). An evaluation of the efficacy of training people with learning disabilities in a virtual environment. Disability and Rehabilitation, 24(11–12), 622–626.

Brown, D. J., Shopland, N., & Lewis, J. (2002). Flexible and virtual travel training environments. 4th International Conference On.

Carmien, S., Dawe, M., Fischer, G., Gorman, A., Kintsch, A., & Sullivan, J. F., Jr. (2005). Socio-technical environments supporting people with cognitive disabilities using public transportation. ACM Transactions on Computer-Human Interaction, 12(2), 233–262.

National Research Council. (2001). Educating children with autism. National Academies Press.

Creswell, J., & Poth, C. (2017). Qualitative inquiry and research design: Choosing among five approaches (4th ed.). Thousand Oaks, CA: Sage.

Didehbani, N., Allen, T., Kandalaft, M., Krawczyk, D., & Chapman, S. (2016). Virtual reality social cognition training for children with high functioning autism. Computers in Human Behavior, 62, 703–711.

Ditterline, J., & Oakland, T. (2009). Relationships between adaptive behavior and impairment. In J. Naglieri & S. Goldstein (Eds.), Assessing impairment: From theory to practice (pp. 31–48). https://doi.org/10.1007/978-1-387-87542-2_4


American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.

Durkin, K. (2010). Videogames and young people with developmental disorders. Review of General Psychology: Journal of Division, 1, 122–140.

Eaves, L. C., & Ho, H. H. (2008). Young adult outcome of autism spectrum disorders. Journal of Autism and Developmental Disorders, 38, 739–747.

Felce, D. (1997). Defining and applying the concept of quality of life. Journal of Intellectual Disability Research, 41(2), 126–135.

Gelsomini, M., Garzotto, F., Matarazzo, V., Messina, N., & Occhiuto, D. (2017). Creating social stories as wearable hyper-immersive virtual reality experiences for children with neurodevelopmental disorders. Proceedings of the 2017 Conference on Interaction Design and Children, 431–437.

Gilotty, L., Kenworthy, L., Sirian, L., Black, D. O., & Wagner, A. E. (2002). Adaptive skills and executive function in autism spectrum disorders. Child Neuropsychology, 8(4), 241–248. https://doi.org/10/d3dsqf

Glaser, N. J., & Schmidt, M. (2018). Usage considerations of 3D collaborative virtual learning environments to promote development and transfer of knowledge and skills for individuals with autism. Technology, Knowledge and Learning, 1–8.

Grandin, T. (1995). How people with autism think. In E. Schopler & G. B. Mesibov (Eds.), Learning and Cognition in Autism (pp. 137–156). Boston, MA: Springer US.

Grynszpan, O., Weiss, P. L., Perez-Diaz, F., & Gal, E. (2014). Innovative technology-based interventions for autism spectrum disorders: A meta-analysis. Autism: The International Journal of Research and Practice, 18(4), 346–361.


Hedley, D., Uljarević, M., Cameron, L., Halder, S., Richdale, A., & Dissanayake, C. (2017). Employment programmes and interventions targeting adults with autism spectrum disorder: A systematic review of the literature. Autism: The International Journal of Research and Practice, 21(8), 929–941.

Herrera, G., Alcantud, F., Jordan, R., Blanquer, A., Labajo, G., & de Pablo, C. (2008). Development of symbolic play through the use of virtual reality tools in children with autistic spectrum disorders: Two case studies. Autism: The International Journal of Research and Practice, 12, 143–157.

Jarrold, W., Mundy, P., Gwaltney, M., Bailenson, J., Hatt, N., & McIntyre, N. (2013). Social attention in a virtual public speaking task in higher functioning children with autism. Autism Research: Official Journal of the International Society for Autism Research, 6, 393–410.

Josman, N., Ben-Chaim, H. M., & Friedrich, S. (2011). Effectiveness of virtual reality for teaching street-crossing skills to children and adolescents with autism. International Journal on Disability and Human Development, 7(1), 49–56.

Kandalaft, M. R., Didehbani, N., Krawczyk, D. C., Allen, T. T., & Chapman, S. B. (2013). Virtual reality social cognition training for young adults with high-functioning autism. Journal of Autism and Developmental Disorders, 43(1), 34–44.

Ke, F., & Im, T. (2013). Virtual-reality-based social interaction training for children with high-functioning autism. The Journal of Educational Research, 106(6), 441–461.

Knight, V., McKissick, B. R., & Saunders, A. (2013). A review of technology-based interventions to teach academic skills to students with autism spectrum disorder. Journal of Autism and Developmental Disorders, 43(11), 2628–2648.


Kuziemsky, C., & Nøhr, C. (Eds.). (2015). Development of a video coding scheme for analyzing the usability and usefulness of health information systems. In E. Borycki & A. W. Kushniruk, Context sensitive health informatics: Many places, many users, many contexts, many uses (pp. 68–73). Amsterdam: IOS Press.

McComas, J., Pivik, J., & Laflamme, M. (1998). Current uses of virtual reality for children with disabilities. Studies in Health Technology and Informatics, 58, 161–169.

Mechling, L., & O’Brien, E. (2010). Computer-based video instruction to teach students with intellectual disabilities to use public bus transportation. Education and Training in Autism and Developmental Disabilities, 45(2), 230–241.

Mesa-Gresa, P., Gil-Gómez, H., Lozano-Quilis, J.-A., & Gil-Gómez, J.-A. (2018). Effectiveness of virtual reality for children and adolescents with autism spectrum disorder: An evidence-based systematic review. Sensors, 18(8). https://doi.org/10/gf8rn8

Mitchell, P., Parsons, S., & Leonard, A. (2007). Using virtual environments for teaching social understanding to 6 adolescents with autistic spectrum disorders. Journal of Autism and Developmental Disorders, 37(3), 589–600.

Moore, D., McGrath, P., & Powell, N. J. (2005). Collaborative virtual environment technology for people with autism. Focus on Autism and Other Developmental Disabilities, 20.

Müller, E., Schuler, A., & Yates, G. B. (2008). Social challenges and supports from the perspective of individuals with Asperger syndrome and other autism spectrum disabilities. Autism: The International Journal of Research and Practice, 12(2), 173–190.

National Autism Center. (2015). Findings and conclusions: National Standards Project, Phase 2. Randolph, MA: National Autism Center.


Neely, L. C., Ganz, J. B., Davis, J. L., Boles, M. B., Hong, E. R., Ninci, J., & Gilliland, W. D. (2016). Generalization and maintenance of functional living skills for individuals with autism spectrum disorder: A review and meta-analysis. Review Journal of Autism and Developmental Disorders, 3(1), 37–47.

Newbutt, N., Sung, C., Kuo, H.-J., Leahy, M. J., Lin, C.-C., & Tong, B. (2016). Brief report: A pilot study of the use of a virtual reality headset in autism populations. Journal of Autism and Developmental Disorders, 46(9), 3166–3176.

Parsons, S. (2015). Learning to work together: Designing a multi-user virtual reality game for social collaboration and perspective-taking for children with autism. International Journal of Child-Computer Interaction, 6, 28–38.

Parsons, S. (2016). Authenticity in virtual reality for assessment and intervention in autism: A conceptual review. Educational Research Review, 19, 138–157.

Parsons, S., & Cobb, S. (2011). State-of-the-art of virtual reality technologies for children on the autism spectrum. European Journal of Special Needs Education, 26, 355–366.

Parsons, S., Mitchell, P., & Leonard, A. (2004). The use and understanding of virtual environments by adolescents with autistic spectrum disorders. Journal of Autism and Developmental Disorders, 34(4), 449–466.

Plaisted, K. C. (2015). Reduced generalization in autism: An alternative to weak central coherence. Retrieved from https://www.repository.cam.ac.uk/bitstream/handle/1810/248652/Chapter.pdf

Reed, F. D. D., Hyman, S. R., & Hirst, J. M. (2011). Applications of technology to teach social skills to children with autism. Research in Autism Spectrum Disorders, 5, 1003–1010.


Self, T., Scudder, R. R., Weheba, G., & Crumrine, D. (2007). A virtual approach to teaching safety skills to children with autism spectrum disorder. Topics in Language Disorders, 27, 242–253.

Schmidt, M., Schmidt, C., Glaser, N., Beck, D., Lim, M., & Palmer, H. (2019). Evaluation of a spherical video-based virtual reality intervention designed to teach adaptive skills for adults with autism: A preliminary report. Interactive Learning Environments, 1–20.

Schmidt, M., Laffey, J., Schmidt, C., Wang, X., & Stichter, J. (2011). Developing methods for understanding social behavior in a 3D virtual learning environment. Computers in Human Behavior, 28, 405–413. https://doi.org/10/d3x3cv

Schmidt, M., Laffey, J., Stichter, J., Goggins, S., & Schmidt, C. (2008). The design of iSocial: A three-dimensional, multi-user, virtual learning environment for individuals with autism spectrum disorders to learn social skills. International Journal of Technology, Knowledge and Society, 4(2), 29–38. https://doi.org/10/gf8rp3

Shier, M., Graham, J. R., & Jones, M. E. (2009). Barriers to employment as experienced by disabled people: A qualitative analysis in Calgary and Regina, Canada. Disability & Society, 24(1), 63–75. https://doi.org/10/bsn4xs

Simonoff, E., Pickles, A., Charman, T., Chandler, S., Loucas, T., & Baird, G. (2008). Psychiatric disorders in children with autism spectrum disorders: Prevalence, comorbidity, and associated factors in a population-derived sample. Journal of the American Academy of Child and Adolescent Psychiatry, 47(8), 921–929.

Standen, P. J., & Brown, D. J. (2005). Virtual reality in the rehabilitation of people with intellectual disabilities. CyberPsychology & Behavior: The Impact of the Internet, Multimedia and Virtual Reality on Behavior and Society, 8, 272–282.


Strickland, D. (1997). Virtual reality for the treatment of autism. Studies in Health Technology and Informatics, 81–86.

Strickland, D., Marcus, L. M., Mesibov, G. B., & Hogan, K. (1996). Brief report: Two case studies using virtual reality as a learning tool for autistic children. Journal of Autism and Developmental Disorders, 26(6), 651–659. https://doi.org/10/dxgf7c

Veltman, A., Stewart, D. E., Tardif, G. S., & Branigan, M. (2001). Perceptions of primary healthcare services among people with physical disabilities. Part 1: Access issues. MedGenMed, 3(2).

Wallace, S., Parsons, S., & Bailey, A. (2017). Self-reported sense of presence and responses to social stimuli by adolescents with autism spectrum disorder in a collaborative virtual reality environment. Journal of Intellectual & Developmental Disability, 42(2), 131–141.

Wang, Laffey, J., Xing, W., Ma, Y., & Stichter, J. (2016). Exploring embodied social presence of youth with autism in 3D collaborative virtual learning environment: A case study.

Wang, M., & Anagnostou, E. (2014). Virtual reality as treatment tool for children with autism. In V. B. Patel, V. R. Preedy, & C. R. Martin (Eds.), Comprehensive Guide to Autism (pp. 2125–2141). New York, NY: Springer.

Wang, M., & Reid, D. (2011). Virtual reality in pediatric neurorehabilitation: Attention deficit hyperactivity disorder, autism and cerebral palsy. Neuroepidemiology, 36, 2–18.

Wickham, J. (2016). VR and occupational therapy (Master’s thesis). Retrieved from https://itp.nyu.edu/thesis2016/project/jaclyn-wickham


Spectrum Disorder. A Comprehensive Review. Journal of Autism and Developmental

Disorders, 45(7), 1951–1966.

Yerys, B. E., Wallace, G. L., Harrison, B., Celano, M. J., Giedd, J. N., & Kenworthy, L. E.

(2009). Set-shifting in children with autism spectrum disorders: Reversal shifting deficits

on the Intradimensional/Extradimensional Shift Test correlate with repetitive behaviors.

Autism: The International Journal of Research and Practice, 13(5), 523–538.

Zhang, L., Warren, Z., Swanson, A., Weitlauf, A., & Sarkar, N. (2018). Understanding

Performance and Verbal-Communication of Children with ASD in a Collaborative

Virtual. Environment. Journal of Autism and Developmental, disorders, 1–11.



CHAPTER 4: Investigating the Experience of Virtual Reality Head-Mounted Displays

for Adults with Autism Spectrum Disorder

Noah Glaser

University of Cincinnati

Matthew Schmidt, Ph.D.

University of Florida

Carla Schmidt, Ph.D.

University of Florida

Declarations of interest: None

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Corresponding author: Noah Glaser, [email protected], University of Cincinnati College of Education, Cincinnati, Ohio 45221


Abstract

People with autism spectrum disorders (ASD) exhibit a range of socio-communicative and behavioral deficits, which leads to difficulties holding meaningful relationships and vocational opportunities. Unfortunately, it is oftentimes difficult for this population to transfer learned skills from controlled intervention contexts into the real world. As a result, interest in using virtual reality (VR) to create naturalistic training contexts has grown. Research has provided evidence to support the benefits of using VR-based training for people with ASD. However, the emergence of commercially available head-mounted displays (HMD), and their association with cybersickness, has led many to wonder whether people with ASD would continue to find VR acceptable if they were to be immersed within these devices. Further, people with ASD often have sensory processing disorders, making the continued use of VR a potential ethical concern. This research examined the extent to which adults with ASD from a day program felt symptoms of cybersickness while undergoing sessions of a VR training program. The nature of learner experiences while using HMD was also explored. Research questions were addressed through multi-method procedures that utilized quantitative and qualitative data. Despite the presence of some cybersickness symptoms, participants found the experiences to be positive and acceptable.

Index Terms— autism spectrum disorder, cybersickness, virtual reality, oculus rift, accessibility


1. Introduction

This paper presents a multi-method analysis of how individuals with autism experience, interact with, and perceive virtual experiences through head-mounted displays (HMD). The research presented here represents one aspect of a broader formative evaluation of a virtual reality intervention for adults with autism spectrum disorder enrolled in a day program. This intervention included a gamut of experiences that were delivered across varying virtual reality (VR) technologies and hardware. This suite of VR experiences is called Virtuoso. Virtuoso aims to help adults with ASD acquire adaptive behavior skills related to independently and safely using public transportation. Virtuoso is based on research that suggests VR can be a powerful learning tool and intervention technology for individuals with ASD.

While current research has provided evidence to suggest that people with ASD find the use of VR to be acceptable, those findings are largely confined to desktop-based systems. The recent emergence of commercial HMD has led to a renewed interest in using this technology for people with ASD. However, it is unclear if people with ASD would continue to find the use of VR to be acceptable if HMD were introduced (Bozgeyikli et al., 2018). This concern arises because the use of HMD is often associated with inducing adverse effects such as cybersickness, which has led to concerns around their use and implementation for people with sensory processing disorders (Bian et al., 2013; Newbutt et al., 2016). Further confounding this concern is research suggesting that up to 80% of people who use HMD will experience symptoms of cybersickness, including dizziness, eye strain, queasiness, nausea, and other symptoms related to motion sickness (Cobb et al., 1999; Dennison et al., 2016). Little research exists that has explored how people with cognitive disabilities or sensory processing disorders may experience adverse effects related to HMD use (Bradley & Newbutt, 2018; Newbutt et al., 2016). This lack of research has created a potential ethical concern if the field is to continue looking towards the use of HMD with VR-based systems for people with ASD (Newbutt et al., 2016). The study presented here contributes to the literature by exploring areas that are under-represented in the research, specifically (1) how individuals with autism interact with and perceive virtual experiences through commercial HMD and (2) how they may experience adverse effects of cybersickness (Bradley & Newbutt, 2018; Newbutt et al., 2016; Bian et al., 2013).

The following sections contextualize our study by first providing background on ASD, its characteristics and prevalence, and intervention approaches for promoting adaptive behavior skills. We then consider the potential and alignment of information and communications technologies (ICTs) for delivering interventions to this population, and present VR as a promising intervention modality. This is followed by an overview of VR, how VR has been used in ASD research, and the emergence of HMD-based VR. The section concludes by identifying serious concerns related to potential adverse effects of using HMD-based VR with individuals with ASD.

1.1 Background

Autism Spectrum Disorder (ASD) is a lifelong neurodevelopmental disorder associated with core deficits in interpersonal communication and social interaction, as well as restrictive, stereotyped, and repetitive behaviors (American Psychiatric Association, 2013).

Although autism presents along a spectrum of impairments, difficulties across social, communicative, and behavioral domains appear in all individuals with ASD (American Psychiatric Association, 2013). As a result, individuals with autism often face increased adversities, including social isolation, hardships with maintaining relationships, and barriers to finding and holding meaningful employment (Eaves & Ho, 2008; Frith & Mira, 1992). People with autism also tend to exhibit deficits in the cognitive processes underlying the psychological imagery needed to imagine or simulate objects in the mind (Bartoli et al., 2014; Chen et al., 2019). This deficit manifests as difficulty generalizing skills and mental models between environments, a limited ability to use imagination, and challenges with predicting events from abstract ideas (Bartoli et al., 2014; Chen et al., 2019).

Once considered a rare condition, ASD has become increasingly prominent, in part because diagnostic assessments have improved a great deal in recent years (Matson et al., 2009). Recent studies also suggest that comorbid psychopathology is more frequent with ASD, contributing to higher rates of diagnosis than in the past (Thorson & Matson, 2012).

Current global estimates suggest that 1 in 132 children have ASD (Baxter et al., 2015), but recent reports indicate that 1 in 59 American children have ASD (Baio et al., 2018). With additional psychopathological comorbidities such as cognitive impairments, epilepsy, anxiety disorders, and sensory problems being more commonly associated with ASD and recognized in diagnostic procedures, it has become clear that ASD can severely impact an individual's quality of life and ability to function independently (Simonoff et al., 2008; Thorson & Matson, 2012; Tuchman & Rapin, 2002). If left untreated, these comorbidities can exacerbate problems with maintaining employment, living independently, and social isolation (Eaves & Ho, 2008; Hedley et al., 2017; Müller et al., 2008). Due to the prevalence of these issues, there has been a focus on developing and researching effective, socially valid, and appropriate interventions that can help people with ASD develop the social, communicative, and adaptive behavior skills needed to function and thrive in social situations (Bellani et al., 2011; Rao et al., 2008).


Adaptive behavior skill interventions have been found to be beneficial for individuals with ASD (Kanne et al., 2011). However, access to these supports is often limited by geographical, financial, and staffing barriers (Zhang et al., 2018). For example, financial costs can place a heavy burden on the families of children diagnosed with ASD, leaving them unable to afford support and treatment (Lee et al., 2007). In addition, direct care providers are often unable to reach stakeholders effectively and face systemic support barriers in staffing, training, and funding (Davignon et al., 2014; Geenen et al., 2003). Further confounding these issues are the challenges that face-to-face interactions pose for individuals with ASD given deficits in communication and social reciprocity (Rump et al., 2009; Tager-Flusberg et al., 2003; Carter et al., 2005).

Supporting socio-communicative and behavioral development while accounting for the unique needs of people with ASD is imperative (Mazurek, 2013). Information and communications technologies have long been seen as holding promise for addressing the socio-communicative and behavioral needs of people with ASD (Beaumont & Sofronoff, 2008; Durkin, 2010; Grynszpan et al., 2014; Knight et al., 2013; Parsons, 2016). Individuals with ASD tend to have a natural affinity for computers (Grynszpan et al., 2014; Knight et al., 2013; Parsons, 2016; Reed et al., 2011), and the advantages of using technology with this population have been broadly explored (Beaumont & Sofronoff, 2008; Grynszpan et al., 2014; Knight et al., 2013). For example, ICT interventions can reduce distractions and sensory stimuli, thereby allowing users to better focus on a given learning activity. In addition, ICTs have been shown to reduce social ambiguities, providing particular benefit in an area where individuals with ASD often struggle (Grynszpan et al., 2014). Recently, researchers have given particular attention to VR. Believed to be particularly appealing to individuals with ASD, VR is able to harness the natural affinity that many individuals with ASD have for computers, as well as their strong visual-spatial skills (Bozgeyikli et al., 2018; Strickland, 1997). In the following sections we explore and discuss why VR is considered a good fit for people with ASD, how HMD-based VR systems have been used with this population, and associated challenges and research gaps.

1.2 Promises and Challenges of Virtual Reality for Autism

A virtual reality world refers to a digitally simulated three-dimensional space that promotes users' perceptions of presence and immersion (Miller & Bugnariu, 2016; Steuer, 1992) via interaction between human perception and computer-generated displays (Slater et al., 1995). VR technology facilitates this interaction by providing visual, auditory, and tactile sensory input that is perceived by participants, who take on the role of active participant, continuously moving within and interacting with an all-encompassing environment (Gibson, 2014). This effect is often accomplished through a variety of computer hardware: (1) desktop-based VR, in which the user views the virtual environment through a computer monitor and controls an avatar with a keyboard and mouse; (2) HMD, which provide a sense of immersion; and (3) a series of projectors and sensors that map imagery onto walls (i.e., a CAVE). Virtual reality technologies tend to incorporate auditory and video feedback, but may also provide tactile and olfactory feedback through various haptic devices.

1.2.1 Promises of VR. VR worlds have affordances (cf. Dalgarno & Lee, 2010) that are beneficial for instruction, learning, and assessment for individuals with autism (Glaser & Schmidt, 2018). These affordances allow for the creation of dynamic and individualized VR environments in which users can repetitively practice skills in a predictable and safe context (Strickland, 1997). Designers are able to adjust aspects of the environment as users undergo repeated exposure. For example, designers might gradually increase complexity to promote mastery of nuanced social behaviors and customs (Parsons, 2016). VR platforms also allow for the automated collection of data as participants use the system, which enables researchers to apply data mining to gain insight into user behavior patterns (Schmidt & Laffey, 2012). This is especially pertinent as over half of the ASD population is non-verbal (Strickland, 1997). VR is also capable of increasing access to services because it can address logistical barriers concerning distance, cost, and access to providers (Zhang et al., 2018). The affordances of VR can therefore provide learners with opportunities to take on new perspectives, could promote long-term behavior change through repeated practice (Bailenson, 2018), and allow users to form mental models of the real world as they practice skills within a realistic training context.

The use of VR is especially beneficial for training in situations that would otherwise be dangerous, impossible, counterproductive, or expensive in real-world contexts (DICE; Bailenson, 2018). Because VR allows for training and practice in realistic contexts that might pose risks (Parsons, 2016; Strickland, 1997), it has been used to promote the development of several adaptive behavior skills for people with ASD. For example, Brown and colleagues (2002) developed a virtual environment to train adults with autism on the road safety skills associated with crossing the street. VR allowed participants to practice road safety skills through repeated exposure to evolving scenarios that increased in complexity until they closely resembled the real world, and allowed designers to provide training that would have been dangerous and counterproductive to conduct in the real world. Virtual reality interventions have also been used with this population to provide training on adaptive behavior skills related to (1) riding a bus and interacting within a cafe (Mitchell et al., 2007; Parsons et al., 2004), (2) general safety skills (Self et al., 2007), (3) practicing public speaking (Jarrold et al., 2013), (4) preparing for a court appearance (Standen & Brown, 2005), and (5) more (Parsons, 2016; Wang & Anagnostou, 2014). While the reported outcomes of these studies have been generally positive, empirical evidence to support the effectiveness of these interventions remains largely inconclusive (Parsons, 2016).

1.2.2 Challenges of VR. What has become apparent is that, despite the many affordances and allure that VR holds as a therapeutic tool for this population, its promise has yet to be fully realized in the field (Parsons, 2016). This is in part due to the innate challenges of the technology, including the need for specialized technical abilities to develop custom worlds and restrictive hardware costs (Grynszpan et al., 2014; Schmidt, Schmidt, et al., 2019). A further challenge relates to the problem of generalization, or the "outcome of behavior change and therapy programs, resulting in effects extraneous to original targeted changes" (Stokes & Osnes, 2016, p. 338). In the context of VR, skills developed within an intervention often do not generalize to the real world once supports are removed; this is one of the most widely cited limitations in the field.

This challenge is related to the tendency of people with ASD to be concrete thinkers, which can lead to difficulties in establishing or measuring intervention effectiveness (Neely et al., 2016; Plaisted, 2015; Yerys et al., 2009). Because VR is capable of conveying concepts, meanings, and activities in a highly realistic manner (Blascovich et al., 2002; Dalgarno & Lee, 2010), researchers maintain that the medium holds the potential to promote generalization for individuals with ASD. As Parsons states, "Underpinning these claims of transformation lies the fundamental assumption of veridicality; viz. that the experiences within VR technologies are authentic and realistic such that people will behave and respond in a similar way in virtual worlds as they do in the real world, thereby enabling generalization from the former to the latter" (Parsons, 2016, p. 139). Further, Bailenson and colleagues (2008) state that the psychological ramifications of a virtual environment, and its potential for generalization, are more prominent and engaging if sensory information comes from within the digital environment rather than the outer physical world. In other words, users are better able to make connections between their training contexts and the real world when they feel physically and mentally immersed within the virtual world (Blascovich et al., 2002; Parsons, 2016).

Virtual reality can best promote these feelings if a participant's actions are unobtrusively tracked so that bodily position and head orientation are captured and reflected in the virtual world, and if sensory information from the outside world is kept to a minimum (Bailenson et al., 2008; Shu et al., 2019). This effect has typically been accomplished through the use of head-mounted displays, which are currently seen as a particularly effective way of addressing long-standing difficulties with promoting generalization of skills for people with ASD (Bian et al., 2013; Bradley & Newbutt, 2018; Newbutt et al., 2016; Sharples et al., 2008; Shu et al., 2019). While head-mounted displays can help to promote immersion and a sense of presence, and therefore potentially generalization, there are uncertainties related to the effects of overexposure to this hardware for people with ASD. Further, outcomes have yet to be validated in the field of ASD research (Bian et al., 2013; Bradley & Newbutt, 2018; Newbutt et al., 2016; Shu et al., 2019).

1.2.2.1 Cybersickness Concerns. One reason that HMD-based VR research has yet to be validated for people with ASD is that HMD use is often associated with adverse effects such as cybersickness, which has led some to caution against their use for individuals with cognitive disabilities and sensory processing disorders (Bian et al., 2013; Wang & Reid, 2011). Cybersickness is defined as an aversive behavioral state that impacts several psychophysiological systems (LaViola, 2000) and, while not fully understood, seems to be elicited by the simulated motion of VR displays (Nalivaiko et al., 2015; Rebenitsch & Owen, 2016). Such discomfort is thought to be related to mismatches between the sensations that a user's body feels and what they observe in a HMD (Rebenitsch & Owen, 2016). Adverse effects have also been correlated with various visual display attributes such as a limited field of view, latency issues, and refresh rate (Moss & Muth, 2011). Symptoms of cybersickness vary between individuals but tend to include detrimental bodily sensations along the gastrointestinal, central, peripheral, and sopite-related axes of motion sickness (Gianaros & Stern, 2010; Nalivaiko et al., 2015). Research has found that up to 80% of HMD users might experience some degree of cybersickness, that symptoms can set in within ten minutes of use (Cobb et al., 1999), and that symptoms can last for hours after use (Dennison et al., 2016).

Unfortunately, the vast majority of studies in the area of cybersickness have focused on people who are neurotypical, suggesting a gap in the literature relative to the adoption of VR for individuals with autism (Newbutt et al., 2016). This gap was largely ignored for some time, as available HMD were bulky, expensive, and largely unsuitable for individuals with sensory processing disorders (Bian et al., 2013; Wang & Reid, 2011). However, several commercially viable head-mounted displays have recently been released onto the market (e.g., Oculus Rift, HTC Vive) that offer a high degree of immersion, visually stimulating graphics, and rich display experiences. This has led to renewed interest in exploring the use of HMD-based VR for individuals with ASD. Seeking to explore whether people with ASD would find the use of newer HMD acceptable, Newbutt and colleagues (2016) conducted a study in which several participants with ASD completed a variety of activities using an Oculus Rift. Findings suggest that their participants, independent of their required levels of support, found the HMD acceptable for use (Newbutt et al., 2016). Since then, HMD have been utilized with this population in a limited number of additional studies that largely set out to assess whether VR could support learning for people with ASD within various contexts (Adjorlu et al., 2017; Bozgeyikli et al., 2017; Mundy et al., 2016).

Of concern is that researchers seem eager to implement HMD to investigate learning gains, but often do not report on issues of cybersickness and how people with ASD and sensory processing disorders may experience these symptoms. Serious consideration of adverse effects is needed before HMD are implemented, as their use with at-risk populations presents potential ethical concerns (Bian et al., 2013; Bradley & Newbutt, 2018; Newbutt et al., 2016). More broadly, the use of HMD by individuals with ASD is not well understood. As this technology continues to evolve, the need to understand its impact on users, both positive and negative, is imperative (Bradley & Newbutt, 2018; Newbutt et al., 2016; Schmidt, Beck, et al., 2019).

2. Methods

There is a significant need to examine how individuals with ASD experience VR scenarios in commercially available HMD. Since HMD have the potential to promote a user's sense of presence, and therefore transfer, it is imperative to explore how individuals with ASD experience learning tasks within a HMD and to better understand the character of adverse effects on a population with sensory processing disorders. The purpose of this multi-method study was to explore the character of cybersickness and the experience of using HMD for participants with ASD in an adult day program at a large midwestern university. Quantitative and qualitative data are triangulated from multiple perspectives to provide insight into the nature of learner experiences when using HMD across Virtuoso evaluation sessions. The following research questions guided this inquiry:

RQ1: What is the character of cybersickness as experienced by research participants with ASD when using Virtuoso?

RQ2: How do cybersickness symptoms compare between participants with ASD and the neurotypical comparison group?

RQ3: How does the use of VR headsets influence the learning experience of participants with ASD in Virtuoso?

2.1 Research Design

This research took place within the context of structured usage testing for Virtuoso conducted during the summer of 2019. All research was performed in the university's School of Education (see Site Description for more details). Participants consisted of associates with ASD from a day program (n=6) and neurotypical (NT) staff members from the same day program who acted as a peer comparison group (n=6). Participants with ASD were assigned to separate groups based on ASD severity levels as described in the DSM-5 (American Psychiatric Association, 2013). The purpose of the grouping was to provide appropriate levels of support, as recommended by the director of the program; three of the participants were identified as Level 3, or "requiring very substantial support," and three were identified as Level 2, or "requiring substantial support" (American Psychiatric Association, 2013). Based on this, participants were organized into two groups: Level 2 (L2) and Level 3 (L3). Neurotypical participants were randomly assigned to either the L2 or L3 group.


The study was performed from May 20 to June 7, 2019. Assent and consent procedures were completed with each participant before taking part in the research. Each participant (both ASD and NT) took part in three research sessions, totaling 36 sessions altogether. Sessions lasted approximately one hour each and were completed on non-consecutive days. Each session consisted of two parts. In Part 1, participants experienced a spherical video-based virtual reality (SVVR) application. Participants with ASD then took part in a short, semi-structured interview to gauge cybersickness and willingness to continue. In Part 2, participants completed an activity within a fully immersive VR environment. Participants with ASD took part in another short, semi-structured interview. After completing both Parts 1 and 2, all participants (ASD and NT) completed the Motion Sickness Assessment Questionnaire (Gianaros & Stern, 2010). Detailed field notes and video, audio, and screen recordings were also captured for later analysis (see Figure 1).


Figure 1. Structure of research sessions

2.2 Participants

2.2.1 ASD Participants. Our overarching sampling strategy sought to identify participants representing the broad range of comorbidities across the autism spectrum that associates in the day program experience. A purposive strategy within a convenience sample was implemented. The convenience sample population comprised all day program associates. The purposive strategy was developed in consultation with the program's director, with the following inclusion criteria: (1) a confirmed diagnosis of autism, (2) ability to verbally communicate, (3) level of cognition, and (4) ability to engage in a task up to 30 minutes in duration. Participants were excluded if they had a history of significant behavioral challenges such as physical aggression or were unable to verbally communicate. Based on these criteria and the recommendations of the program's director, a total of six participants were identified for inclusion. All ASD participants were male. A full breakdown of ASD participants is provided in Table 1.

Table 1

Participant demographics and measures of the Peabody Picture Vocabulary Test (PPVT), Social Responsiveness Scale (SRS), and Behavior Rating Inventory of Executive Function (BRIEF)

Peabody Picture Vocabulary Test (PPVT)

Participant and Description | Raw Score | Standard Score | Age Equivalent (yr:mo)
Travis: 29-year-old male diagnosed with autism spectrum disorder, Smith-Lemli-Opitz syndrome, attention deficit hyperactivity disorder (ADHD), and auditory processing disorder | 196 | 93 | 18:11
Andy: 24-year-old male diagnosed with autism spectrum disorder, anxiety disorder, and attention deficit disorder | 161 | 72 | 10:11
Evan: 35-year-old male diagnosed with autism spectrum disorder and an intellectual disability | 162 | 72 | 11:1
Kevin: 25-year-old male diagnosed with autism spectrum disorder and anxiety disorder | 165 | 73 | 11:6
Jonah: 22-year-old male diagnosed with autism spectrum disorder | 65 | 20 | 4:1
Keith: 25-year-old male diagnosed with autism spectrum disorder and Down syndrome | 57 | 20 | 3:9

Social Responsiveness Scale (SRS)

Participant | SRS T-Score | T-Score Range
Travis | 69 | Moderate Range
Andy | 71 | Moderate Range
Evan | 66 | Moderate Range
Kevin | 65 | Mild Range
Jonah | 75 | Mild Range
Keith | 82 | Mild Range

Behavior Rating Inventory of Executive Function (BRIEF)

Participant | Behavior Regulation (T-Score / Percentile) | Metacognition (T-Score / Percentile) | Global Executive (T-Score / Percentile)
Travis | 88 / 98 | 90 / 99 | 94 / 99
Andy | 93 / 99 | 72 / 97 | 84 / 99
Evan | 48 / 70 | 52 / 68 | 50 / 66
Kevin | 68 / 90 | 63 / 82 | 67 / 91
Jonah | 86 / 98 | 82 / 99 | 88 / 99
Keith | 62 / 86 | 98 / 99 | 88 / 99

2.2.2 Neurotypical Participants. Convenience sampling was used to identify six neurotypical participants for inclusion in this study. The sample population consisted of undergraduate and graduate students serving as staff members in the day program. Inclusion criteria included employment as a staff member, availability on the days the study was conducted, and the recommendation of the program's director. Selected participants were evenly divided between male and female, and were aged between 21 and 26 years with an average age of 22.6 years (see Table 2).

Table 2

Neurotypical Participant Demographics

Pseudonym Gender Age

Ralph Male 25

Joel Male 22

Zelda Female 21

Devon Female 21

Max Male 26

Megan Female 21


2.3 Informed Consent

Informed consent was obtained by a trained research team member (the first author of this manuscript), who read the informed consent document to participants and used chunking and simplified language to explain the voluntary nature of the study and its purpose. Concrete examples were provided to help explain what kinds of data would be collected and who would have access to these data. Frequent check-ins were conducted to ensure that participants felt comfortable and were willing to continue. In cases where an individual was under the legal guardianship of another, consent was obtained from the guardian and assent was obtained from the participant. A trained research team member also obtained informed consent from the NT evaluation group.

2.4 Site Description

This study took place one floor above the day program’s location within the special education office suite on the 6th floor of the College of Education. There were two primary points of focus where research activities took place (see Figure 2).


Figure 2. Research Area. This figure demonstrates where research activities were conducted.

The first was the office of the principal investigator of the project. This office space acted as the focal point of research activities with the participants of the study. Informed consent was obtained for all participants within the office. A VR-ready laptop was set up in this office, with sensors for the Oculus Rift placed to provide the necessary room for interaction with the virtual environment. The second area of focus was a cubicle space located directly outside the office of the principal investigator. This cubicle was equipped with a VR-ready desktop that the online guide (a member of the research team who joined users within the environment and facilitated the learning process) could use to control his avatar. A second computer was set up for a researcher to provide assistance and to read the script for the Virtuoso-VR intervention while the online guide controlled the movements and gestures of the avatar. At the end of each activity, all researchers returned from the cubicle to the office to help administer surveys and collect data.


2.5 Study Procedures

In the following section, study procedures are outlined to describe the activities that took place within the three research sessions. The introduction and training session provided participants with an opportunity to try out the different HMD that they would be using during the structured usage test and to acquaint them with research procedures. After informed consent and/or assent were obtained, participants began by watching a spherical 360-degree introduction video within an Oculus Rift that explained the study's purpose. Participants then completed a Virtuoso-VR training activity so they could become familiar with system controls and common iconography. Throughout this session, a trained researcher frequently checked in with participants to ensure that they were feeling comfortable. Once all activities were completed, participants were asked if they would like to return to continue the study and complete the next session.

The next two study sessions were designed to simulate what Virtuoso might look like in an early stage of usage. In each of these sessions, participants began by watching four videos that modeled the skills of the intervention within the Virtuoso-SVVR application. Participants then proceeded to the Virtuoso-VR environment, where they were given an opportunity to practice those skills in a safe, controllable, fully immersive environment. Both of these research sessions provided the same general scaffolding strategy of moving from video modeling to VR practice, but with additional degrees of immersion and complexity built in through variations in environmental details, visual fidelity, headset usage, instructional scaffolds, and other factors.


2.6 Data Sources

A multi-method approach utilizing both quantitative and qualitative data sources (see Table 3) was used to gather evidence and respond to our research questions. These data sources are described in the following sections.

Table 3

Research Questions and the Data Sources that Address Them

Research Question | Data Used to Address the RQ | Type of RQ
RQ1 | Video recordings, field notes, semi-structured interviews, and Motion Sickness Assessment Questionnaire | Multi-method
RQ2 | Motion Sickness Assessment Questionnaire | Quantitative
RQ3 | Video recordings, field notes, and semi-structured interviews | Qualitative

2.6.1 Quantitative Measure: Motion Sickness Assessment Questionnaire (MSAQ). The MSAQ (Appendix A) is a 16-item survey developed to measure symptoms of motion sickness along gastrointestinal, central, peripheral, and sopite-related (fatigue, drowsiness, and mood changes) dimensions (Gianaros et al., 2001). A modified version of the MSAQ (Appendix B) was administered to participants identified as requiring additional levels of support. This measure was modified to simplify the language, to reduce the number of options in the Likert responses, and to translate numerical responses into a colorized photographic representation of facial expressions (Yoder & Lieberman, 2010). Data from the MSAQ were entered into spreadsheet software and organized for later analysis.
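Following the scoring approach described by Gianaros et al. (2001), MSAQ scores can be expressed as percentages of total possible points: the sum of item responses divided by the maximum possible sum (9 points per item), multiplied by 100, computed overall and per dimension. The sketch below illustrates this calculation in Python; the per-dimension item groupings and response values shown are hypothetical, not the study's data.

```python
def msaq_scores(responses):
    """Compute MSAQ percentage scores from 9-point item responses.

    `responses` maps each dimension name to a list of its item
    responses (each 1-9). Scores are percentages of total possible
    points, following Gianaros et al. (2001).
    """
    all_items = [r for items in responses.values() for r in items]
    # Overall score: percentage of maximum possible points (9 per item).
    overall = sum(all_items) / (9 * len(all_items)) * 100
    # Dimension scores: same percentage, restricted to each subscale.
    by_dimension = {
        name: sum(items) / (9 * len(items)) * 100
        for name, items in responses.items()
    }
    return overall, by_dimension

# Hypothetical responses for one participant (16 items total).
example = {
    "gastrointestinal": [1, 1, 2, 1],
    "central": [3, 2, 1, 1, 2],
    "peripheral": [1, 1, 1],
    "sopite": [2, 3, 1, 2],
}
overall, dims = msaq_scores(example)
```

Scores computed this way can be entered into a spreadsheet alongside session and group identifiers, which supports the aggregate descriptive comparisons reported later.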

2.6.2 Qualitative Data Sources


Qualitative data sources consisted of (1) video recordings, (2) field notes, and (3) semi-structured interviews. For video recordings, Open Broadcaster Software (OBS; https://obsproject.com/) was used to capture screen, webcam, and audio recordings of participants and the online guide, for a total of 24 videos (12 from the online guide, 12 from participants). For recording videos (n=12) of Virtuoso-SVVR in Google Cardboard, the mobile screen monitoring software Vysor (https://www.vysor.io/) was used for all participants. For recording videos (n=12) of Virtuoso-SVVR in the Oculus Rift, OBS was used. Videos from the online guide and study participants were merged into one video that presented both perspectives, allowing researchers to observe participants and the online guide simultaneously. Next, field notes documenting observations of user interactions, responses, and comments made during participant sessions were recorded in a notebook for later analysis. Finally, semi-structured interviews were conducted after each part of the research sessions (Appendix C). After each session, field notes and interview data were scanned and digitized into Portable Document Format (PDF) files that were stored on a secure drive and accessed later for analysis.

2.7 Data Analysis

Analysis of these data was conducted through a multi-method approach. Quantitative data were analyzed using the scoring methods provided by the measure. Descriptive statistics were calculated to provide gross aggregate scores across research sessions and evaluation groups. Qualitative data were analyzed using both deductive and inductive approaches.

2.7.1 Deductive Analysis. A directed approach to deductive analysis guided our work (Potter & Levine‐Donnerstein, 1999). The process began by using the MSAQ as a guide for generating an initial coding scheme (Hsieh & Shannon, 2005). An overview of the directed approach is provided in Figure 3.

Figure 3. Directed Approach. This figure outlines the directed approach to data analysis.

With the goal of studying how participants experience and feel symptoms of cybersickness across axes of bodily sensations, the MSAQ was a useful starting place for developing a preliminary coding scheme, largely because other research in the field recognizes the multidimensional nature of cybersickness and its relationship to motion sickness. Examples of these bodily sensations and dimensions were used to develop a system for coding the qualitative data (see Table 4).

Table 4.

Qualitative codes from the MSAQ related to the dimensions and symptoms of motion sickness

Cybersickness

Gastrointestinal
MSAQ provided examples: sick to stomach; queasy; ill; stomach discomfort; vomiting
Coding instructions: Coded when a participant expresses gastrointestinal distress while they are experiencing the VR/SVVR intervention.

Central
MSAQ provided examples: dizziness; feeling like the room is spinning; feeling faint-like; feeling lightheaded; disorientation; blurred vision/eye strain; head rush
Coding instructions: Coded when a participant expresses central system distress while they are experiencing the VR/SVVR intervention.

Peripheral
MSAQ provided examples: feeling sweaty; feeling clammy; feeling hot
Coding instructions: Coded when a participant expresses symptoms of peripheral distress while they are experiencing the VR/SVVR intervention.

Sopite-related
MSAQ provided examples: feeling annoyed; fatigue; feeling of uneasiness
Coding instructions: Coded when a participant expresses symptoms of sopite-related distress while they are experiencing the VR/SVVR intervention.

Coding procedures for this analysis followed the techniques outlined by Hsieh and Shannon (2005) for conducting a directed qualitative content analysis. This procedure guided the steps taken to analyze these data for evidence of cybersickness. To provide additional rigor and validity to the findings, this process was undertaken in a reflexive manner that allowed for the review of data across several iterations of analysis. Figure 3 highlights the directed qualitative content analysis procedures that guided this inquiry.

2.6.1.2 Agreement and reliability. These deductive coding procedures were completed by two of the researchers on this project (a primary observer and a secondary observer included to code for agreement and reliability). The primary observer coded data sources from 100% of the research sessions, and the secondary observer coded data sources from 50% of the research sessions. Discrepancies in codes were compared and discussed to reach agreement.


2.6.1.3 Inductive analysis. In addition to the directed approach discussed above, an inductive qualitative analysis approach was also used. First, a holistic analysis was performed by reviewing all screen recordings, observation notes, and semi-structured interview data. Second, an open coding process (Benaquisto, 2008) was performed to create a preliminary coding scheme and provisional coding definitions. Multiple read-throughs of the data were conducted to identify recurring themes. A constant comparative method was used (Denzin & Lincoln, 2011), in which data were recursively reviewed to validate identified themes, leading to the identification of categories and relationships. These were then explored to identify new codes and discern the relevance of those codes. This process continued until categorical saturation was reached (Lincoln & Guba, 1986); the result was three overarching themes related to how the use of VR headsets influenced the learning experience of participants. Categories, codes, and code definitions are provided in Table 5.

Table 5.

Qualitative codes and operationalizations

Affect

Joy, fun, or excitement
Operationalization: This theme was coded when a participant expressed a positive state of affect, including statements of joy (e.g., saying they were having fun) or excitement with using the head-mounted displays.

Accessibility

Physical Accessibility
Operationalization: Coded when the use of a head-mounted display or its associated hand controllers had ramifications for the VR/SVVR intervention's content and/or possibilities for action related to physical accessibility (e.g., psychomotor impairments precluding use of hardware).

Cognitive Accessibility
Operationalization: Coded when the use of a head-mounted display or its associated hand controllers had ramifications for the VR/SVVR intervention's content and/or possibilities for action related to the cognitive accessibility of the system (e.g., ability to express oneself).

Comfort

Physical Comfort
Operationalization: Coded when a participant verbally comments on the comfort of the head-mounted displays used during the intervention.

2.6.2 Quantitative Analysis. MSAQ data were analyzed using quantitative methods to provide gross motion sickness scores as well as stratified scores across subscales. The MSAQ gross motion sickness score was obtained by calculating the percentage of total points: (sum of all items / 144) × 100. Subscale scores were obtained by calculating the percentage of points within each factor. The simplified MSAQ used a three-point Likert scale (ranging from 0 to 2), requiring a modification to the analysis method: the gross motion sickness score was obtained as (sum of all items / 32) × 100. Subscale scores were likewise obtained by calculating the percentage of points within each factor.
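The percentage formulas just described can be sketched in code. This is an illustrative example, not the authors' analysis scripts; the function names are ours, and the maximum-point values (144 for the standard 16-item MSAQ rated 1-9, 32 for the simplified 0-2 version) follow the description above.

```python
# Illustrative sketch of the MSAQ scoring described above (not the authors' code).
# Standard MSAQ: 16 items rated 1-9, maximum 144 points.
# Simplified MSAQ: the same items on a 0-2 scale, maximum 32 points.

def gross_score(item_responses, max_per_item=9):
    """Gross motion sickness score: (sum of all items / max possible points) * 100."""
    max_points = len(item_responses) * max_per_item
    return sum(item_responses) / max_points * 100


def subscale_score(factor_responses, max_per_item=9):
    """Subscale score: percentage of possible points within one factor's items."""
    return gross_score(factor_responses, max_per_item)


# Standard MSAQ with all 16 items rated at the midpoint of the 1-9 scale:
standard = gross_score([5] * 16)                     # (80 / 144) * 100, about 55.6

# Simplified MSAQ with all 16 items rated 1 on the 0-2 scale:
simplified = gross_score([1] * 16, max_per_item=2)   # (16 / 32) * 100 = 50.0
```

Subscale scores use the same percentage formula restricted to the items of a given factor (e.g., only the gastrointestinal items).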

Descriptive statistics of the gross motion sickness scores across participants, sessions, devices, and dimensions of the questionnaire were calculated to provide gross mean scores. MSAQ scores and gross aggregate values were visualized as line and bar charts.

3. Findings

Findings are presented below. First, conclusions from the qualitative analysis relating to the character of cybersickness are reported, including a broad overview of qualitative findings as triangulated with quantitative data (RQ1). Second, findings are reported related to how cybersickness compared between participants with ASD and the neurotypical peer group (RQ2). Third, findings related to the impact of HMD on learner experience are reported (RQ3). Details are provided in the following sections.

3.1 RQ1: Character of Symptoms of Cybersickness Across Sessions for ASD Group

Generally speaking, evidence of cybersickness was infrequent, with only 12 instances observed. The evidence that was found typically required additional probing to confirm, as participants rarely stated outright that they were feeling any kind of discomfort, and the symptoms were not severe enough to produce visibly observable distress. Evidence of cybersickness was seen in five of the six participants, with the most frequent symptoms relating to feelings of dizziness and disorientation. Nine of the 12 qualitative instances of cybersickness fell under this categorization. There was one reported instance of a participant having gastrointestinal distress and two cases of peripheral distress in which an individual felt clammy or hot while using an HMD. While five of the six participants exhibited some qualitative evidence of cybersickness, gross MSAQ values and subscores indicate that these symptoms were relatively minor to moderate across the three research sessions (see Table 6).

Table 6

Aggregate MSAQ Dimension Scores Across Research Sessions

Dimension          Training   Phase 1   Phase 2
Central              3.52      13.00     24.60
Gastrointestinal     8.33      10.42     14.81
Peripheral          12.96      15.74     32.72
Sopite               9.72      15.28     28.47

Findings indicate that the lowest gross value was seen in the training session (Session 0) and the highest in Session 2, suggesting that participants would likely have reported more severe feelings of cybersickness had we increased their exposure time and added additional interactive and visual stimuli. These findings become more apparent when the aggregate scores across research sessions are visualized. An upward trend is seen as participants progressed through the sessions, but the scores remained low at <25 out of 100 (see Figure 4).

Figure 4. Gross MSAQ Scores. This figure visualizes gross MSAQ scores across sessions.

While gross MSAQ scores were relatively low, there were individual participants whose quantitative results suggest more severe manifestations of cybersickness. Keith, for example, reported MSAQ scores higher than any other research participant across all three sessions of the study.

3.1.1 Peripheral Symptoms. The first observation of a participant feeling clammy or hot took place during Evan's training session. After finishing the Virtuoso-VR training environment and taking off his Oculus Rift headset, he stated, "I just got a headache in there. I was starting to get really hot." Results of Evan's MSAQ helped to triangulate the character of how he was feeling. The calculated peripheral dimension score from Evan's training session was 33.3, which gives insight into the severity with which he was experiencing that symptom of feeling hot. Andy also reported peripheral symptoms of feeling "a little dizzy" and "a little clammy" after finishing the Virtuoso-SVVR application in Session 2. After completing the next part of the research session within Virtuoso-VR Level 2, he said he was no longer feeling clammy, but when asked about his dizziness he said he was feeling it, "A little bit but it's not as bad." The results of Andy's MSAQ helped to characterize the symptoms of cybersickness that he was feeling. Despite the presence of these symptoms, his aggregated MSAQ score remained relatively low, suggesting that these feelings were minor.

While qualitative evidence of peripheral symptoms was infrequent across the research sessions, quantitative data from the MSAQ indicate that participants with ASD on the whole were likely experiencing this dimension of cybersickness more notably than others. This finding is made more apparent when comparing aggregate scores across the dimensions of the MSAQ (see Table 6).

3.1.2 Gastrointestinal Symptoms. Qualitative evidence of gastrointestinal distress was infrequent across research sessions. The only instance found in our analysis took place after Jonah had finished watching the Virtuoso-SVVR videos in the Google Cardboard. He told researchers that he was feeling sick to his stomach. When probed further, he said these symptoms were presenting "in heart." However, quantitative evidence from his MSAQ indicates that this symptom may have been related to another dimension of cybersickness, as the only positive response to the survey was that of being lightheaded, a symptom associated with the central domain. Gross calculated values from the MSAQ suggest that qualitative evidence of gastrointestinal distress was infrequent because these symptoms were not severely felt by usage test participants. Of the four dimensions of cybersickness outlined by the MSAQ, the gastrointestinal domain consistently had some of the lowest scores across all three research sessions (see Table 6).

3.1.3 Central Symptoms. The most commonly observed symptoms of cybersickness were dizziness and disorientation. Qualitative analysis found nine instances of central domain symptoms across four of the six participants from the ASD evaluation group.

For example, Travis told researchers that watching the Virtuoso-SVVR videos in a Google Cardboard was "A little disorienting." An analysis of the MSAQ from this session provides a breakdown of how Travis was feeling and indicates that central domain symptoms were the most severely felt. Another example of central-related discomfort was observed with Jonah after he had completed the Virtuoso-SVVR application within the Oculus Rift during Session 2 of the research study. Jonah said that he was feeling "A little bit" dizzy.

Qualitative evidence suggests that symptoms of dizziness or disorientation seemed to be most prevalent after a participant finished a Virtuoso-SVVR activity, as all but two of the central-related symptoms were observed or reported after the videos were watched in an HMD.

Andy and Keith were the only two participants for whom we were able to observe evidence of central-domain symptoms during or immediately after the Virtuoso-VR sessions. However, it should be noted that both of these participants had previously reported feeling dizzy after completing the Virtuoso-SVVR activity of their respective research sessions. This finding could indicate that the symptom of dizziness after the Virtuoso-VR session was a carry-over effect from the first activity of the session. Although central domain symptoms were the most frequently observed across research sessions, calculated values from the MSAQ suggest that these symptoms were not as severe as those of other dimensions.


3.1.4 Sopite Symptoms. While there was no qualitative evidence of sopite symptoms across research sessions, calculated values from the MSAQ suggest that participants were feeling frustration and fatigue while using HMD. Gross MSAQ scores suggest that sopite symptoms were among the more severe dimensions of cybersickness. The feelings of annoyance or fatigue became more severe as participants were exposed to the more complex and visually stimulating scenarios of the later research sessions (see Table 6).

3.2 Research Question 2: How Do MSAQ Scores Differ Between the ASD Group and the NT Group?

An analysis of MSAQ results for the NT participants indicates that individuals from this evaluation group also felt some degree of cybersickness. Gross calculated values from the MSAQ suggest that these symptoms were minor for most participants across the three research sessions (see Table 7).

Table 7

Gross MSAQ Scores across NT Participants and Research Sessions

            Zelda   Megan   Joel    Ralph   Max     Devon   Average
Training     0.69    3.47    4.17   25.00    0.00   15.63    8.16
Session 1    2.78    6.21    4.86   28.13    6.25   12.50   10.13
Session 2    2.78    6.25    3.47   31.25   15.63   12.50   11.98
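As a quick arithmetic check, the per-session averages can be recomputed from the six participants' scores. This is a minimal sketch using the table's rounded values; for Session 1, averaging those rounded values yields approximately 10.12 rather than the reported 10.13, presumably because the published average was computed from unrounded scores.

```python
# Recomputing the per-session averages from the six NT participants' gross MSAQ scores.
nt_scores = {
    "Training":  [0.69, 3.47, 4.17, 25.00, 0.00, 15.63],
    "Session 1": [2.78, 6.21, 4.86, 28.13, 6.25, 12.50],
    "Session 2": [2.78, 6.25, 3.47, 31.25, 15.63, 12.50],
}

averages = {session: sum(vals) / len(vals) for session, vals in nt_scores.items()}
# Training is about 8.16, Session 1 about 10.12, Session 2 about 11.98.
```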

On average, participants with ASD reported a higher degree of symptom severity than the neurotypical participants. This finding held across all three research sessions but became more pronounced as participants progressed from the training session (Session 0) to the more complex activities in Session 2 (see Figure 6).


Figure 6. Average MSAQ Scores. This figure visualizes aggregate MSAQ scores across research sessions.

Consistent with the higher gross MSAQ scores in the ASD group, their subscale values also tended to be higher. This was the case for all but three subscale values across the research sessions. Two of these three exceptions were noted during the training session (Session 0), when neurotypical participants reported higher severity of gastrointestinal and central domain symptoms (see Figure 7).


Figure 7. This figure visualizes gross MSAQ scores across Session 0 of the study.

The third and last exception, in which the NT evaluation group reported a higher value on an MSAQ subscale, was seen in Session 1, when the central domain values were slightly higher than those of the ASD group (see Figure 8).

Figure 8. This figure visualizes gross MSAQ scores across Session 1 of the study.

By the conclusion of the last research session (Session 2), ASD group scores were higher across all dimensions of the MSAQ (see Figure 9). Differences in these subscores across research sessions indicate that while both participant groups experienced cybersickness in some capacity, and oftentimes in similar severity according to gross MSAQ scores, the groups varied in how they felt the symptoms. An analysis of MSAQ scores for individuals with autism suggests that feelings of cybersickness are prominently expressed through symptoms of feeling hot, clammy, having cold sweats, or through general fatigue and annoyance. On the other hand, neurotypical participants more prominently felt symptoms of dizziness, eye strain, and stomach discomfort.


Figure 9. This figure visualizes gross MSAQ scores across Session 2 of the study.

3.3 Research Question 3: HMD Impact on Learner Experience

Despite some evidence of cybersickness, the nature of usage test participants' learner experience with virtual reality head-mounted displays can be characterized as particularly enjoyable and positive. For example, Andy stated that he felt "badish" after completing Virtuoso-VR Level 2, but not enough to make him want to stop participating, and he said he would play again. On the whole, participants found the HMD across treatment sessions to be comfortable, to have moderate physical and cognitive accessibility, to have controls that were easy to use, and to be acceptable for use in the intervention. There were some problems related to the hardware in terms of comfort and accessibility, but these concerns were largely tolerated by participants, as they found the experience to be overwhelmingly positive and intrinsically reinforcing. This positive affect led to the successful completion of all usage test activities and sessions by every participant in the study. In the following sections, themes are discussed with a specific focus on the nature of user experience in relation to the head-mounted displays used across the treatment sessions of the usage test.


3.3.1 Affect. Across nearly all sessions, there was evidence that participants found their experiences with the associated HMD to be positive. For example, when Jonah first used the Oculus Rift to complete the training session, he expressed a great deal of excitement upon entering the virtual world: "Oh...OHHHHH *laughs*" Another example of positive affect towards an HMD was apparent during Travis's first treatment session. After completing Virtuoso-SVVR in the Google Cardboard, Travis was clearly excited to try out the fully immersive Oculus Rift. While we were trying to explain the procedures that he would be undergoing, he kept trying to reach for the Rift so he could get started. Upon entering the virtual world, he asked if we could wait before starting the intervention because he was admiring the graphics and wanted to take a look around. When asked if he was ready to begin, he said, "Give me a second. I’m admiring the graphics here. It's wonderful. Oh wow!”

This positive affect toward the HMD was also present when discussing issues of cybersickness. When asked how they were feeling after using the Oculus Rift to complete the SVVR scenarios, both Travis and Andy stated that they were feeling some symptoms of cybersickness. Travis had previously reported the same central system symptoms with the Cardboard experience, but Andy had previously reported no symptoms with the Cardboard. Despite the presence of these symptoms, both participants stated that they preferred the Oculus Rift over the Google Cardboard.

3.3.2 Comfort. Over the course of the Virtuoso usage test, every participant stated that they found the different HMD to be comfortable. This finding held regardless of the device and intervention strategy being implemented at the time. The only perceived negative element of comfort related to the Google Cardboard and the strain it could put on users' arms. The Cardboard is strapless and therefore requires its users to hold the device to their faces. Two participants explicitly stated that they felt their arms getting tired while watching the SVVR videos in the Cardboard. Andy said that holding the Cardboard to his face made his arms tired, but overall he found the device to be comfortable. Keith experienced similar discomfort and asked to take a break after watching three of the four videos in the Cardboard. While other participants did not verbally state that their arms were tired while using the device, qualitative observations from the usage tests indicate that other participants would prop their arms on the armrest of the chair to relieve arm strain. Travis reported minor discomfort with the Oculus Rift, but attributed that to the headset being a bit loose on his head.

3.3.3 Accessibility. Accessibility was examined from the points of view of physical and cognitive accessibility. Virtuoso was purposely designed to accommodate challenges that individuals with cognitive disabilities may experience while using this technology. Universal Design for Learning principles were implemented to help alleviate such concerns. For example, the use of the Oculus Rift allowed us to provide Virtuoso participants with multiple means of action and expression throughout the intervention. Keith had difficulties with verbally responding due to deficits related to his autism diagnosis. While completing tasks within the Virtuoso-VR platform, he was able to participate in the activity by using gestural responses instead of verbally replying to prompts. As an example, Keith would use the Oculus Rift's hand controller to point at the map in the office in response to probing questions from the online guide. Not only did the hand controllers facilitate the physical action of moving one's arms in response to questions, but they also allowed for specific gestures such as pointing with individual fingers. Travis and Andy, who were more capable of verbal responses, would also utilize this feature when engaging within the Virtuoso-VR scenarios.


Accessibility issues related to the use of a virtual reality HMD were observed with the hand controllers of the Oculus Rift. Anticipating such problems, we designed the Virtuoso-VR intervention to provide multiple options for physical action through the mapping of various control schemes to the Oculus Rift hand controllers. While participating in the Virtuoso-VR intervention, players could control their avatar with two flexible options: they could move their avatar forward, backward, or sideways with the left hand controller, while their avatar could be rotated with either the right hand controller or by physically rotating their body. These controller options could be seamlessly and interchangeably utilized and allowed for cognitive and physical flexibility while using the intervention.

For the most part, these control options allowed participants to gain a degree of fluency with the VR controllers. However, Evan struggled at times to operate the Oculus Rift's hand controllers due to psychomotor impairments and repeatedly commented on the difficulty of the controls while participating in Session 1 of the intervention. For example, during the Virtuoso-VR Level 1 activity, Evan was asked to turn around and find the door that would take him out of the SPED office suite and down into campus. When he tried to turn around, he struggled with using the right hand controller of the Oculus Rift to rotate his body. Instead of turning, he inadvertently strafed his avatar through a wall within the office model and got stuck within the graphics behind a desk. After noticing that Evan was stuck in the graphics, the following exchange took place:

Researcher: Uh oh, why don’t you walk back towards me?

Evan: Oh jeez...

Researcher: If you get stuck, Noah can always give you a hand with the controls, as they can be a little tricky


Evan remained stuck in the graphics, unable to get out on his own. Eventually a research assistant stepped in to help Evan get out of the wall and to where he needed to be. He continued to struggle with the controls of the Virtuoso-VR environment throughout this activity. Later, when walking towards the shuttle stop, he again faced challenges with moving his avatar. When trying to turn his avatar towards the university fountain, he found the controls cumbersome and was unable to rotate in a way that allowed fluid control. He repeatedly said, "I'm having a hard time...I'm having a hard time." Despite these challenges, Evan was able to complete levels of Virtuoso-VR with intervening help from a research assistant. In fact, four of the six usage test participants were able to complete their sessions without assistance.

For the two participants who required additional assistance, challenges were observed in how they drew connections between their actions and their avatar's responses. This challenge was particularly notable when participants were using both controllers at the same time. For instance, Keith was observed having difficulties controlling his avatar. While using both controllers, he was routinely able to move his avatar forward and backward. However, he had difficulties rotating his avatar and would often end up stuck in the graphics or unable to progress. There were several instances where he would reach out and lurch his entire body forward in his chair to try to get his avatar to move. To help alleviate this problem, we took away his right hand controller, which allowed him to focus on just turning his body to rotate his avatar. A research assistant later had to provide additional assistance by operating the right controller to turn the participant's avatar towards areas of interest in the intervention. Jonah had similar difficulties and struggled with the controls during the training session. It was determined that a research assistant would control his avatar for both sessions of the intervention due to his severe cognitive impairments.


4. Discussion

VR is widely seen as holding promise for providing therapeutic learning opportunities to address social, communicative, and adaptive behavior skill deficits for individuals with ASD. As the field has progressed, and as evidence suggests that users with autism generally find VR to be intrinsically motivating and acceptable for use, researchers have begun to turn their attention to the HMD that have become more readily available over the last several years. The use of these headsets is widely considered an effective way of enhancing one's sense of presence and immersion within a virtual environment, which has implications for promoting transfer of learned skills (Bian et al., 2013). However, the use of these HMD is oftentimes associated with symptoms of cybersickness, which leads researchers to wonder whether people with ASD would continue to find the use of VR acceptable if they were situated in a fully immersive platform (Bozgeyikli et al., 2018), and whether their use is in any way appropriate for a population with sensory processing disorders (Bian et al., 2013; Wang & Reid, 2011).

To better understand whether individuals with ASD find the use of HMD acceptable, this study investigated how participants of an adult day program experienced symptoms of cybersickness and how they interacted with and used the devices. The findings from qualitative usage test observations and quantitative self-reported motion sickness questionnaires indicate that participants in this study largely had a positive experience using the HMD and found them to be acceptable and comfortable. However, there was also evidence of accessibility issues, primarily in relation to the Oculus Rift's hand controllers, and evidence of cybersickness that could have more substantial implications with prolonged exposure in fully immersive VR platforms.


Results from this study provide preliminary support for the position that individuals with ASD find commercial HMD acceptable and comfortable to use within various VR contexts. Findings also indicate that some people with ASD may experience minimal effects of cybersickness when using HMD in VR scenarios, and that these effects may be similar to those of people without cognitive disabilities or sensory processing disorders. However, the high degree of variability associated with an ASD diagnosis means that these findings cannot be generalized to all people with ASD. While overall symptoms of cybersickness were relatively minor, two participants did report higher scores on the MSAQ. These findings indicate that while participants generally did not experience heightened symptoms of cybersickness, individual differences could lead to more severe feelings, and these symptoms could become more problematic if additional stimuli were introduced into the VR scenarios. However, despite the fact that participants did experience symptoms of cybersickness, some more severe than others, no participant asked to leave the research study, and all participants were able to complete all three sessions of usage testing.

While the effects of cybersickness seem to be mild within the context of this usage test, we still urge caution in the use of fully immersive VR in the field, given the sensory processing disorders that people with ASD frequently experience. With this in mind, we encourage the field to establish and implement ethical guidelines and VR-intervention designs that can help to control the onset and severity of cybersickness. While there is currently no known formula to predict the severity of cybersickness in an individual, there is evidence that suggests the duration of virtual simulations may play a role (Bruck & Watters, 2011; W. Chen et al., 2002). Therefore, we suggest implementing a stage-wise or phased approach that limits time spent in an HMD and provides participants with ample time outside of stimulating contexts. In the case of this research, we implemented such an approach based on the structure of prior work in the field (Newbutt et al., 2016). We posit that our multi-phased, scaffolded approach helped to promote acceptance, satisfaction, and comfort with the HMD and that it also minimized adverse effects of cybersickness. While all six of our participants with ASD exhibited some signs of cybersickness, they still seemed to find the overall experience positive and were able to complete all virtual activities. We wonder whether this tolerance and satisfaction with the system would remain if participants were subjected to prolonged exposure within our virtual activities, which could lead to heightened feelings of sickness.

While participants were able to complete every session of the study, some did require assistance. The need for this assistance was largely related to the use of the Oculus Rift’s hand controllers and difficulties with controlling an avatar during the Virtuoso-VR activities.

Comorbid impairments associated with ASD, including psychomotor and cognitive deficits, complicate the design of accessible virtual reality interventions for this population. Many of the observed control problems were noticed as participants tried to rotate their avatar. Virtuoso-VR was designed so that participants used the left hand controller of the Rift to move their avatar forward and backward and the right hand controller to rotate their avatar's body. While this control scheme is often used in commercial video games, it seemed that participants expected their physical body movements to be the means of rotating their avatar. Designers of VR interventions for this population, especially those with more severe impairments, should seek to simplify the control schemata of their platforms. With the advent of sensor-free and wireless HMD, there are now better options for including such considerations. These HMD can allow for a more naturalistic control of avatars based on the movement of the user, which could help promote the physical accessibility of VR platforms.


We report a number of known limitations with this research. True of most research concerned with the application of interventions for individuals with autism, our sample size is limited. Given that individuals with ASD represent a low-incidence population, future researchers would likely encounter similar issues with small sample sizes. However, the findings from this study are not meant to be generalizable to the full range of autism characteristics including levels of cognitive functioning, symptoms, and language abilities. We also note that calculated MSAQ values may be inflated for those receiving the modified measures due to the simplification of the likert-style responses and reduction of possible options. Despite this modification, scores remained relatively low. Lastly, a limitation exists concerning the nature of cybersickness symptoms and how they present in individuals with sensory processing disorders.

Symptoms of sensory processing disorders and cybersickness present in similar ways and can precede and amplify one another (Arcioni et al., 2019). For example, one symptom of a sensory processing disorder is appearing clumsy or feeling dizzy. This symptom relates to postural stability, which is regulated through the integration of signals originating from the somatosensory, vestibular, and visual systems, processed by the cortex and cerebellum to create an appropriate motor output and response (Arcioni et al., 2019). Research suggests that those with postural instability are more likely to experience symptoms of motion sickness (Chardonnet et al., 2017), which has implications for researchers trying to distinguish whether participants are experiencing symptoms of a sensory processing disorder or of cybersickness. However, it may not be critical that these symptoms be teased apart, as both represent concerns for the health and safety of participants using virtual reality.


4.1 Directions for future research and design

Little research has examined how this population experiences commercially available HMD, and the ramifications for learning and adoption remain unclear. What is known, however, is that feelings of cybersickness can detrimentally impact a user's sense of presence, which is strongly correlated with negative training transfer (Maraj et al., 2017). In addition, evidence from the field suggests that those who experience symptoms of cybersickness are more likely to leave VR-based experiences and therefore not complete or participate in educational activities (Brooks et al., 2010). Since the cause of cybersickness is still not fully understood (Bruck & Watters, 2011; W. Chen et al., 2002), we urge the field to conduct future research on the use of HMD and the effects of cybersickness for individuals with ASD (Bradley & Newbutt, 2018; Glaser & Schmidt, 2018; Newbutt et al., 2016).

The findings from this study provide a preliminary direction for future researchers seeking to develop and expand evidence-based practices that promote the acceptance of HMD with this population. This research is critical to the field because, without it, we cannot begin to realize the potential benefits that HMD hold for increasing one's sense of presence and immersion within realistic environments that can promote the transfer of skills.


References

Arcioni, B., Palmisano, S., Apthorp, D., & Kim, J. (2019). Postural stability predicts the likelihood of cybersickness in active HMD-based virtual reality. Displays, 58, 3–11. https://doi.org/10.1016/j.displa.2018.07.001

Bailenson, J. N., Yee, N., Blascovich, J., & Guadagno, R. E. (2008). Transformed social interaction in mediated interpersonal communication. Mediated Interpersonal Communication, 6, 77–99.

Bartoli, L., Garzotto, F., Gelsomini, M., Oliveto, L., & Valoriani, M. (2014). Designing and evaluating touchless playful interaction for ASD children. Proceedings of the 2014 Conference on Interaction Design and Children – IDC ’14, 17–26. https://doi.org/10.1145/2593968.2593976

Baxter, A. J., Brugha, T. S., Erskine, H. E., Scheurer, R. W., Vos, T., & Scott, J. G. (2015). The epidemiology and global burden of autism spectrum disorders. Psychological Medicine, 45(3), 601–613. https://doi.org/10.1017/S003329171400172X

Beaumont, R., & Sofronoff, K. (2008). A multi-component social skills intervention for children with Asperger syndrome: The Junior Detective Training Program. Journal of Child Psychology and Psychiatry, 49(7), 743–753. https://doi.org/10/ccr85t

Bellani, M., Fornasari, L., Chittaro, L., & Brambilla, P. (2011). Virtual reality in autism: State of the art. Epidemiology and Psychiatric Sciences, 20(3), 235–238. https://doi.org/10.1017/S2045796011000448

Bian, D., Wade, J. W., Zhang, L., Bekele, E., Swanson, A., Crittendon, J. A., Sarkar, M., Warren, Z., & Sarkar, N. (2013). A novel virtual reality driving environment for autism intervention. In C. Stephanidis & M. Antona (Eds.), Universal Access in Human-Computer Interaction. User and Context Diversity (Vol. 8010, pp. 474–483). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-39191-0_52

Bozgeyikli, L., Raij, A., Katkoori, S., & Alqasemi, R. (2018). A survey on virtual reality for individuals with autism spectrum disorder: Design considerations. IEEE Transactions on Learning Technologies, 11(2), 133–151. https://doi.org/10.1109/TLT.2017.2739747

Bradley, R., & Newbutt, N. (2018). Autism and virtual reality head-mounted displays: A state of the art systematic review. Journal of Enabling Technologies, 12(3), 101–113. https://doi.org/10.1108/JET-01-2018-0004

Brooks, J. O., Goodenough, R. R., Crisler, M. C., Klein, N. D., Alley, R. L., Koon, B. L., Logan, W. C., Ogle, J. H., Tyrrell, R. A., & Wills, R. F. (2010). Simulator sickness during driving simulation studies. Accident Analysis & Prevention, 42(3), 788–796. https://doi.org/10.1016/j.aap.2009.04.013

Bruck, S., & Watters, P. A. (2011). The factor structure of cybersickness. Displays, 32(4), 153–158. https://doi.org/10.1016/j.displa.2011.07.002

Chardonnet, J.-R., Mirzaei, M. A., & Mérienne, F. (2017). Features of the postural sway signal as indicators to estimate and predict visually induced motion sickness in virtual reality. International Journal of Human–Computer Interaction, 33(10), 771–785. https://doi.org/10.1080/10447318.2017.1286767

Chen, J., Wang, G., Zhang, K., Wang, G., & Liu, L. (2019). A pilot study on evaluating children with autism spectrum disorder using computer games. Computers in Human Behavior, 90, 204–214. https://doi.org/10.1016/j.chb.2018.08.057

Chen, W., Yuen, S. L., & So, R. H. Y. (2002). A progress report on the quest to establish a cybersickness dose value. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 46(26), 2119–2123. https://doi.org/10.1177/154193120204602604

Cobb, S. V., Nichols, S., Ramsey, A., & Wilson, J. R. (1999). Virtual reality-induced symptoms and effects (VRISE). Presence: Teleoperators & Virtual Environments, 8(2), 169–186. https://doi.org/10/fsskk5

Dalgarno, B., & Lee, M. J. W. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10–32. https://doi.org/10.1111/j.1467-8535.2009.01038.x

Davignon, M. N., Friedlaender, E., Cronholm, P. F., Paciotti, B., & Levy, S. E. (2014). Parent and provider perspectives on procedural care for children with autism spectrum disorders. Journal of Developmental & Behavioral Pediatrics, 35(3), 207. https://doi.org/10.1097/DBP.0000000000000036

Dennison, M. S., Wisti, A. Z., & D’Zmura, M. (2016). Use of physiological signals to predict cybersickness. Displays, 44, 42–52. https://doi.org/10.1016/j.displa.2016.07.002

Denzin, N., & Lincoln, Y. (2011). The SAGE handbook of qualitative research. SAGE Publications.

Durkin, K. (2010). Videogames and young people with developmental disorders. Review of General Psychology, 14(2), 122–140. https://doi.org/10/fdtzww

Eaves, L. C., & Ho, H. H. (2008). Young adult outcome of autism spectrum disorders. Journal of Autism and Developmental Disorders, 38, 739–747. https://doi.org/10/ccwdnx

Geenen, S. J., Powers, L. E., & Sells, W. (2003). Understanding the role of health care providers during the transition of adolescents with disabilities and special health care needs. Journal of Adolescent Health, 32(3), 225–233. https://doi.org/10.1016/S1054-139X(02)00396-8

Gianaros, P. J., Muth, E. R., Mordkoff, J. T., Levine, M. E., & Stern, R. M. (2001). A questionnaire for the assessment of the multiple dimensions of motion sickness. Aviation, Space, and Environmental Medicine, 72(2), 115–119.

Gianaros, P. J., & Stern, R. M. (2010). A questionnaire for the assessment of the multiple dimensions of motion sickness.

Gibson, J. J. (2014). The ecological approach to visual perception: Classic edition. Psychology Press.

Glaser, N. J., & Schmidt, M. (2018). Usage considerations of 3D collaborative virtual learning environments to promote development and transfer of knowledge and skills for individuals with autism. Technology, Knowledge and Learning. https://doi.org/10.1007/s10758-018-9369-9

Grynszpan, O., Weiss, P. L., Perez-Diaz, F., & Gal, E. (2014). Innovative technology-based interventions for autism spectrum disorders: A meta-analysis. Autism, 18(4), 346–361.

Hedley, D., Uljarević, M., Cameron, L., Halder, S., Richdale, A., & Dissanayake, C. (2017). Employment programmes and interventions targeting adults with autism spectrum disorder: A systematic review of the literature. Autism, 21(8), 929–941. https://doi.org/10/gb2vzf

Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687

Jarrold, W., Mundy, P., Gwaltney, M., Bailenson, J., Hatt, N., McIntyre, N., Kim, K., Solomon, M., Novotny, S., & Swain, L. (2013). Social attention in a virtual public speaking task in higher functioning children with autism: Social attention, public speaking, and ASD. Autism Research, 6(5), 393–410. https://doi.org/10.1002/aur.1302

Kanne, S. M., Gerber, A. J., Quirmbach, L. M., Sparrow, S. S., Cicchetti, D. V., & Saulnier, C. A. (2011). The role of adaptive behavior in autism spectrum disorders: Implications for functional outcome. Journal of Autism and Developmental Disorders, 41(8), 1007–1018. https://doi.org/10/cqd7rd

Knight, V., McKissick, B. R., & Saunders, A. (2013). A review of technology-based interventions to teach academic skills to students with autism spectrum disorder. Journal of Autism and Developmental Disorders, 43(11), 2628–2648.

LaViola, J. J. (2000). A discussion of cybersickness in virtual environments. ACM SIGCHI Bulletin, 32(1), 47–56. https://doi.org/10.1145/333329.333344

Maraj, C. S., Badillo-Urquiola, K. A., Martinez, S. G., Stevens, J. A., & Maxwell, D. B. (2017). Exploring the impact of simulator sickness on the virtual world experience. In J. I. Kantola, T. Barath, S. Nazir, & T. Andre (Eds.), Advances in Human Factors, Business Management, Training and Education (pp. 635–643). Springer International Publishing. https://doi.org/10.1007/978-3-319-42070-7_59

Matson, J. L., Fodstad, J. C., Mahan, S., & Sevin, J. A. (2009). Cutoffs, norms, and patterns of comorbid difficulties in children with an ASD on the Baby and Infant Screen for Children with aUtIsm Traits (BISCUIT-Part 2). Research in Autism Spectrum Disorders, 3(4), 977–988. https://doi.org/10.1016/j.rasd.2009.06.001

Miller, H. L., & Bugnariu, N. L. (2016). Level of immersion in virtual environments impacts the ability to assess and teach social skills in autism spectrum disorder. Cyberpsychology, Behavior, and Social Networking, 19(4), 246–256. https://doi.org/10/f8h82z

Mitchell, P., Parsons, S., & Leonard, A. (2007). Using virtual environments for teaching social understanding to 6 adolescents with autistic spectrum disorders. Journal of Autism and Developmental Disorders, 37(3), 589–600. https://doi.org/10/djrjrx

Moss, J. D., & Muth, E. R. (2011). Characteristics of head-mounted displays and their effects on simulator sickness. Human Factors, 53(3), 308–319. https://doi.org/10.1177/0018720811405196

Müller, E., Schuler, A., & Yates, G. B. (2008). Social challenges and supports from the perspective of individuals with Asperger syndrome and other autism spectrum disabilities. Autism, 12(2), 173–190. https://doi.org/10/b78xmh

Nalivaiko, E., Davis, S. L., Blackmore, K. L., Vakulin, A., & Nesbitt, K. V. (2015). Cybersickness provoked by head-mounted display affects cutaneous vascular tone, heart rate and reaction time. Physiology & Behavior, 151, 583–590. https://doi.org/10.1016/j.physbeh.2015.08.043

Neely, L. C., Ganz, J. B., Davis, J. L., Boles, M. B., Hong, E. R., Ninci, J., & Gilliland, W. D. (2016). Generalization and maintenance of functional living skills for individuals with autism spectrum disorder: A review and meta-analysis. Review Journal of Autism and Developmental Disorders, 3(1), 37–47. https://doi.org/10/gf8rmg

Newbutt, N., Sung, C., Kuo, H.-J., Leahy, M. J., Lin, C.-C., & Tong, B. (2016). Brief report: A pilot study of the use of a virtual reality headset in autism populations. Journal of Autism and Developmental Disorders, 46(9), 3166–3176. https://doi.org/10.1007/s10803-016-2830-5

Parsons, S. (2016). Authenticity in Virtual Reality for assessment and intervention in autism: A conceptual review. Educational Research Review, 19, 138–157. https://doi.org/10.1016/j.edurev.2016.08.001

Parsons, S., Mitchell, P., & Leonard, A. (2004). The use and understanding of virtual environments by adolescents with autistic spectrum disorders. Journal of Autism and Developmental Disorders, 34(4), 449–466.

Plaisted, K. C. (2015). Reduced generalization in autism: An alternative to weak central coherence.

Rao, P. A., Beidel, D. C., & Murray, M. J. (2008). Social skills interventions for children with Asperger’s syndrome or high-functioning autism: A review and recommendations. Journal of Autism and Developmental Disorders, 38(2), 353–361. https://doi.org/10.1007/s10803-007-0402-4

Rebenitsch, L., & Owen, C. (2016). Review on cybersickness in applications and visual displays. Virtual Reality, 20(2), 101–125. https://doi.org/10.1007/s10055-016-0285-9

Self, T., Scudder, R. R., Weheba, G., & Crumrine, D. (2007). A virtual approach to teaching safety skills to children with autism spectrum disorder. Topics in Language Disorders, 27, 242–253.

Sharples, S., Cobb, S., Moody, A., & Wilson, J. R. (2008). Virtual reality induced symptoms and effects (VRISE): Comparison of head mounted display (HMD), desktop and projection display systems. Displays, 29(2), 58–69. https://doi.org/10.1016/j.displa.2007.09.005

Shu, Y., Huang, Y.-Z., Chang, S.-H., & Chen, M.-Y. (2019). Do virtual reality head-mounted displays make a difference? A comparison of presence and self-efficacy between head-mounted displays and desktop computer-facilitated virtual environments. Virtual Reality, 23(4), 437–446. https://doi.org/10.1007/s10055-018-0376-x

Simonoff, E., Pickles, A., Charman, T., Chandler, S., Loucas, T., & Baird, G. (2008). Psychiatric disorders in children with autism spectrum disorders: Prevalence, comorbidity, and associated factors in a population-derived sample. Journal of the American Academy of Child & Adolescent Psychiatry, 47(8), 921–929. https://doi.org/10/csjddz

Slater, M., Steed, A., & Usoh, M. (1995). The virtual treadmill: A naturalistic metaphor for navigation in immersive virtual environments. In M. Göbel (Ed.), Virtual Environments ’95 (pp. 135–148). Springer Vienna. https://doi.org/10.1007/978-3-7091-9433-1_12

Standen, P. J., & Brown, D. J. (2005). Virtual reality in the rehabilitation of people with intellectual disabilities. CyberPsychology & Behavior, 8(3), 272–282. https://doi.org/10/dvq4n9

Steuer, J. (1992). Defining virtual reality: Dimensions determining telepresence. Journal of Communication, 42, 73–93.

Stokes, T. F., & Osnes, P. G. (2016). An operant pursuit of generalization – Republished article. Behavior Therapy, 47, 720–732. https://doi.org/10/gf8rmq

Strickland, D. (1997). Virtual reality for the treatment of autism. Studies in Health Technology and Informatics, 81–86.

Thorson, R. T., & Matson, J. L. (2012). Cutoff scores for the Autism Spectrum Disorder – Comorbid for Children (ASD-CC). Research in Autism Spectrum Disorders, 6(1), 556–559. https://doi.org/10.1016/j.rasd.2011.07.016

Tuchman, R., & Rapin, I. (2002). Epilepsy in autism. The Lancet Neurology, 1(6), 352–358. https://doi.org/10.1016/S1474-4422(02)00160-6

Wang, M., & Reid, D. (2011). Virtual reality in pediatric neurorehabilitation: Attention deficit hyperactivity disorder, autism and cerebral palsy. Neuroepidemiology, 36, 2–18. https://doi.org/10/dkp6vk

Wang, M., & Anagnostou, E. (2014). Virtual reality as treatment tool for children with autism. In V. B. Patel, V. R. Preedy, & C. R. Martin (Eds.), Comprehensive Guide to Autism (pp. 2125–2141). Springer New York. http://link.springer.com/10.1007/978-1-4614-4788-7_130

Yerys, B. E., Wallace, G. L., Harrison, B., Celano, M. J., Giedd, J. N., & Kenworthy, L. E. (2009). Set-shifting in children with autism spectrum disorders: Reversal shifting deficits on the Intradimensional/Extradimensional Shift Test correlate with repetitive behaviors. Autism, 13(5), 523–538. https://doi.org/10/b963br

Yoder, P. J., & Lieberman, R. G. (2010). Brief report: Randomized test of the efficacy of Picture Exchange Communication System on highly generalized picture exchanges in children with ASD. Journal of Autism and Developmental Disorders, 40(5), 629–632. https://doi.org/10.1007/s10803-009-0897-y

Zhang, L., Warren, Z., Swanson, A., Weitlauf, A., & Sarkar, N. (2018). Understanding performance and verbal-communication of children with ASD in a collaborative virtual environment. Journal of Autism and Developmental Disorders, 1–11.


(2008). SAGE Publications, Inc. https://doi.org/10.4135/9781412963909.n48


Appendixes

Appendix A

Each item is rated on a 10-point scale from Not at all (0) to Severely (9):

0——1——2——3——4——5——6——7——8——9

1. I felt sick to my stomach (G)
2. I felt faint-like (C)
3. I felt annoyed/irritated (S)
4. I felt sweaty (P)
5. I felt queasy (G)
6. I felt lightheaded (C)
7. I felt drowsy (S)
8. I felt clammy/cold sweat (P)
9. I felt disoriented (Q)
10. I felt tired/fatigued (S)
11. I felt nauseated (G)
12. I felt hot/warm (P)
13. I felt dizzy (C)
14. I felt like I was spinning (C)
15. I felt as if I may vomit (G)
16. I felt uneasy (S)


Appendix B

Motion Sickness Assessment Questionnaire

Instructions: Using the scale below, please rate how accurately the following statements describe your experience.

A lot — A little — Not at all

1. How sick to your stomach did you feel?
2. How much did you feel like you might pass out?
3. How much did you feel annoyed?
4. How sweaty did you feel?
5. How queasy did you feel?
6. How much did you feel lightheaded?
7. How much did you feel sleepy?
8. How much did you feel like you got the cold sweats?
9. How disoriented did you feel?
10. How tired did you feel?
11. How much did you feel nauseated?
12. How hot or warm did you feel?
13. How much did you feel dizzy?
14. How much did you feel like you were spinning?
15. How much did you feel like you may vomit?
16. How uncomfortable did you feel?


Appendix C


CHAPTER 5: Literature Review for VR and Adults with Autism Spectrum Disorder

Rationale

Autism Spectrum Disorder (ASD) is a lifelong neurodevelopmental disorder associated with deficits in communicative and social interactions, as well as restrictive and stereotyped behaviors (American Psychiatric Association, 2013), that impacts an estimated 1 in 59 children in the United States (Baio et al., 2018). Autism is a spectrum disorder whose symptoms present differently in every individual (American Psychiatric Association, 2013), and other psychopathological comorbidities tend to be prevalent (Müller et al., 2008). These impairments can severely impact an individual’s quality of life, resulting in social isolation, difficulties maintaining employment, and mental health issues (Eaves & Ho, 2008; Hedley et al., 2017), which has led to a need for effective and appropriate interventions to help develop the skills needed to thrive in social contexts (Bellani et al., 2011; Rao et al., 2008). Technology-aided instruction has been seen as particularly viable for this population and has gained the support of the National Professional Development Center on Autism Spectrum Disorder (Bogin, 2008). One technology of growing interest for this population is virtual reality (VR), as evidence suggests that the visually stimulating nature of the modality is intrinsically reinforcing for people with ASD (Schmidt et al., 2019).

VR is “a model of reality with which a human can interact, getting information from the model by ordinary human senses such as sight, sound, and touch and/or controlling the model using ordinary human actions such as position” (Hale & Stanney, 2014, p. 34). It typically includes a digitally simulated three-dimensional space that can induce sensations of telepresence (Miller & Bugnariu, 2016), encompassing both the physical sensations delivered through computer-generated sensory stimuli and the psychological sense of feeling ‘there’ within a computer-generated virtual environment (Slater, 2018; Slater et al., 2009). A high degree of interaction and immersion is typically provided through VR systems that can translate a user’s actions into a virtual environment (Bozgeyikli et al., 2018). The literature points to a number of such VR systems, including (1) desktop-based systems, (2) projection-based systems, (3) cave automatic virtual environments (CAVE), (4) fully immersive HMD-based systems, and (5) mobile-based systems (Bozgeyikli et al., 2018; Shu et al., 2019). These systems provide users a medium to interact with and within a virtual environment, such as a three-dimensional virtual world, which is seen as beneficial for providing instruction and assessment (cf. Dalgarno & Lee, 2010), and especially so for individuals with ASD (Glaser & Schmidt, 2018). Due to the affordances of this technology, researchers are increasingly looking towards VR as a means to provide interventions for this population (Aresti-Bartolome & Garcia-Zapirain, 2014), following seminal work (Strickland, 1996) that assessed how young children with ASD might accept head-mounted displays (HMD) while engaging in a virtual environment.

Problems of Overgeneralization in the Literature

Generally speaking, the outcomes of research on VR for individuals with ASD are promising. However, questions linger concerning whether there is sufficient empirical evidence to support claims of effectiveness of these interventions (Parsons, 2016). Research summaries and syntheses in this area point to potential benefits of using VR as a therapeutic tool for this population (Karami et al., 2020a; Mesa-Gresa et al., 2018). However, summarizing and synthesizing research findings from disparate approaches to intervention that implement a broad variety of VR technologies for an extraordinarily heterogeneous population requires particular care so as to avoid overgeneralizing findings. We argue here that there is evidence of overgeneralization in the literature in two specific areas: (1) participant demographics, and (2) VR technology.

Firstly, generalizing specific findings from research performed with a subset of individuals with ASD to the entire ASD population is problematic, as recognized limitations of research in this area are small sample sizes and a general lack of control groups (Parsons, 2016). Yet reviews of the literature rarely stratify findings across demographics such as age or symptom severity when reporting on outcomes, implicitly suggesting homogeneity among individuals with ASD. Given the heterogeneity of ASD and a tendency of researchers to underreport the details of study participants, characterizing VR as a promising tool for the entire ASD population is methodologically unsound. ASD is a lifelong spectrum disorder that manifests differently in each affected individual; however, the majority of VR systems have been developed primarily for adolescents or children with ASD (Mesa-Gresa et al., 2018) who are typically described as ‘high functioning’ (Karami et al., 2020a). Adults with ASD, those in need of more substantial support, and those presenting more severe comorbidities have largely been ignored in the VR intervention literature (Bozgeyikli et al., 2018). Given the limitations noted here and elsewhere, more nuance is needed in reporting results. Claims that VR can be an efficacious tool for individuals with ASD need to be re-examined from this perspective and correspondingly tempered (Müller et al., 2008; American Psychiatric Association, 2013).

Secondly, this tendency towards overgeneralization extends to the VR technologies used in the interventions themselves. Broad claims of the promise or efficacy of VR are problematic because VR is not a homogeneous technology. As outlined previously in this article, VR can be delivered using a constellation of different technologies, all of which have differing affordances and constraints. Purported benefits of VR can vary based on the hardware and system architecture being used, which has ramifications for how a user interacts with and makes connections to their experiences within a virtual learning environment. Importantly, most VR platforms that have been developed for individuals with ASD have been desktop-based systems (Miller & Bugnariu, 2016; Bozgeyikli et al., 2018; Parsons, 2016). The handful of studies that utilize different interfaces (e.g., CAVE, HMD) tend to rely on dated technologies that pale in comparison to currently available consumer VR technology such as the Rift and HTC Vive HMD (Bozgeyikli et al., 2018; Newbutt et al., 2016). Therefore, it is unclear whether findings from systematic literature reviews suggesting effectiveness of VR for ASD can be extrapolated to all VR system types.

Miller and Bugnariu’s (2016) work underscores the heterogeneous nature of VR technologies used in interventions for individuals with ASD. In their systematic review, they classify each VR system as providing users with low, moderate, or high immersion. Findings from their review suggest that even a low level of immersion may be sufficient to promote differences in social skill performance; however, given contradictory results in the literature, they caution that more testing and refinement is needed to understand the influence of immersion on effectiveness. The conclusions of this review highlight the tendency to overgeneralize results, as it is unclear how participant demographics and the designs of these systems can impact findings. It is likely that level of immersion is not the single defining factor that determines whether a VR system is effective for individuals with ASD. More probable is that the interaction possibilities afforded to individual users and the system’s underlying design have a greater impact on determining whether something truly works.

Of the literature reviews in this research area to date, only one adopts a more nuanced approach. In her conceptual review of the literature, Sarah Parsons (2016) critiques researchers’ tendency to try to establish “what works,” and suggests that questions of whether or not VR works for individuals with ASD are misplaced. Instead, she posits that the focus of research in this area should be on trying to understand ‘which technologies work for whom, in which contexts, with what kinds of support, and for what kinds of tasks or objectives?’ (Parsons, 2016, p. 153). This suggests there is likely no VR system, technology, or associated affordance (e.g., immersion, sense of presence) that independently influences intervention efficacy for individuals with ASD. This is congruent with research in instructional design, which has long maintained that technology alone is insufficient for promoting learning and that any technology-mediated learning is predicated on careful design that is responsive to identified learner needs and considers the relationship of technology affordances to desired intervention outcomes (Gibson, 2014; Jonassen, 1995).

Problems with Generalization to Novel Contexts

The tendency to overgeneralize results from complex systems is also seen in how the literature refers to the effectiveness of VR outcomes for individuals with ASD. While there is some evidence that some VR interventions can be effective in limited ways (Karami et al., 2020a; Mesa-Gresa et al., 2018), generalization of what is learned in VR training contexts to novel contexts or situations has not been addressed sufficiently in the literature. This raises serious concerns related to the ecological validity of such interventions: if what is learned using VR interventions cannot be generalized to the real world, then of what value are such interventions? Indeed, challenges with generalization are widely recognized as perhaps the most prominent limitation in all autism research (Arnold-Saritepe et al., 2009; Neely et al., 2016). Generalization is seminally defined by Stokes and Osnes (2016) as the “outcome of behavior change and therapy programs, resulting in effects extraneous to original targeted changes” (p. 338). To address concerns of generalization, designers of VR systems for this population have historically looked in one direction: “towards a closer fit with the real world in order to assess cognition and support the generalization of learning” (Parsons, 2016, p. 154). This decision to design towards a closer fit with the real world rests on the “assumption of veridicality”; that is, if experiences within VR worlds are authentic and sufficiently realistic, users of these VR systems will behave in the virtual world in a similar way as they do in the real world (Parsons, 2016; Yee et al., 2007). The premise rests on a further assumption of intuition; that is, that greater fidelity of a VR world will lead to a greater sense of telepresence and therefore generalization (Dalgarno & Lee, 2010). These assumptions have not yet been seriously empirically explored, leading to criticism, as there is not sufficient evidence to support the claim that learners will trust their experiences within a VR world enough to reorganize their mental models of the real world (Dalgarno & Lee, 2010) and thereby apply lessons learned in a VR intervention in naturalistic contexts.

Contributing to the weak empirical evidence of generalization is the fact that VR-based interventions often take place in rigid, controlled environments in which the behaviors of participants are corrected and reinforced through the immediate feedback of the administering therapist (Parsons & Mitchell, 2002). It is unclear whether behavioral outcomes would persist if the intervening platform were removed. Further complicating this problem, while systematic reviews of the literature suggest effectiveness of VR-based approaches for this population, reported outcomes from these studies are highly variable (Mesa-Gresa et al., 2018; Parsons, 2016), and they fail to address important questions concerning generalization. How improvements are measured in one study can be wildly different from another (Karami et al., 2020). Turning to the literature, Mesa-Gresa and colleagues’ (2018) systematic review suggests moderate support for the effectiveness of VR-based treatments. While they found evidence of targeted improvements, these researchers also highlighted the need for standardized ways of validating effectiveness in future studies to better support the claim that VR can effectively complement traditional treatment options. This finding relates to the problem of generalization, highlighting the relative dearth of empirical findings supporting generalization after VR supports have been removed. Further findings from this systematic review indicate that most studies do not include control groups of like-diagnosed participants receiving alternative treatments to act as a comparison, and very few studies use a control group of any kind (Mesa-Gresa et al., 2018). Any purported benefits of VR-based treatments must be considered in this light. Findings from a recent meta-analysis (Karami et al., 2020) suggest that individuals with ASD show improvements across a variety of VR training contexts, i.e., daily living, cognitive, emotion recognition, and socio-communicative skills. Results from their analysis suggested some evidence of improvements; however, significant limitations were noted in how experiments were designed, rendering conclusions regarding generalization all but impossible. These researchers cite a need for standardization in how experiments are designed and how participant demographics are reported, as well as for follow-up evaluations to assess whether skills truly generalize to other contexts once VR-based supports have been removed (Karami et al., 2020).

Inherent Complexity of Designing VR Interventions for Individuals with ASD


We argue that more careful consideration of VR systems design is needed if researchers are to meaningfully confront the unique challenges that individuals with ASD face in general, and difficulties with generalization specifically. If VR is to be used as a social-psychological tool (Blascovich et al., 2002), then the features of the technology being used, and how they influence participant responses, need to be better understood (Parsons, 2016). This represents a significant challenge, as designing VR systems for individuals with ASD is, as characterized by Schmidt (2014), a “wicked problem”:

[Designing VR systems for individuals with ASD] is an ill-structured problem for which there may be no comprehensive solution. The problem appears to be wicked because, on the one hand, our knowledge of the problem in general is incomplete and perhaps in some ways contradictory. On the other hand, the problem is interconnected with a plethora of other problems, and solving one problem may exacerbate another. (p. 68)

To further illustrate the wickedness of this problem, factors impacting generalization from VR environments are difficult to systematically unpack, and, as noted previously, research concerning the effectiveness of VR interventions is plagued by challenges such as high variability in participant demographics, limitations in research methodologies, and substantial variation in VR system designs, all of which make cross-study comparisons difficult (Miller & Bugnariu, 2016).

Designers of VR interventions also face increased challenges in creating an effective and usable product, as they must make many decisions concerning the design space, many of which have no “correct” solution (Glaser et al., in press; Sherman & Craig, 2002). One of the challenges in designing a VR intervention is determining how users will interact with the system so as to exploit the affordances of the technology. Unfortunately, much of what has been published concerning the use of VR technologies is largely ‘show-and-tell,’ with evidence that is anecdotal and cannot be generalized to other system contexts (Dalgarno & Lee, 2010). A systematic effort will be required by researchers to determine how the capabilities of these technologies can be exploited in ways that promote pedagogical outcomes. Part of this effort will require that researchers move forward with a model of technological characteristics that can be used to shape an agenda of empirical studies to assess the validity of assumptions we make about VR.

One such framework is provided by Dalgarno and Lee (2010) in their model of learning in three-dimensional virtual environments. They argue that two distinguishing characteristics of this technology can impact learning: “representational fidelity” and “learner interaction.” The representational fidelity component refers not only to the visual qualities of the display, but also to the realism of different stimuli and system properties. According to Dalgarno and Lee (2010), representational fidelity includes: realistic display of environment, smooth display of view changes and object motion, consistency of object behaviour, user representation, spatial audio, and kinaesthetic and tactile force feedback. The learner interaction component refers to a user’s ability to construct an identity for themselves while their actions and social interactions are embodied within the system (Dalgarno & Lee, 2010). Dalgarno and Lee (2010) describe the following characteristics as promoting learning through these interactions: embodied actions, including view control, navigation, and object manipulation; embodied verbal and non-verbal communication; control of environment attributes and behaviour; and construction of objects and scripting of object behaviours.


Some have interpreted Dalgarno and Lee’s model as implying that greater representational fidelity and richer learner interactions will innately lead to better learning outcomes (Fowler, 2015). However, this suggestion has not been empirically validated, and there are currently no standards in place to determine how these characteristics can be brought together to promote learning (Dalgarno & Lee, 2010; Parsons, 2016). In fact, making these design decisions begins with an understanding of the underlying VR architecture, which in turn impacts what interaction possibilities are afforded to users of the system (i.e., affordances; Gibson, 2014). If researchers are to confront this conundrum, a better understanding is needed of how the affordances of VR technologies have historically been exploited to take advantage of their purported learning benefits (Dalgarno & Lee, 2010). However, the ability to make evidence-based design decisions is confounded by the issue of variance across VR system architectures (Miller & Bugnariu, 2016). Various VR system architectures present their own sets of unique affordances and constraints, thereby impacting design considerations of how best to harness potentially useful characteristics of the technology (Dalgarno & Lee, 2010). This problem is further obscured by divergence and disparity in how researchers characterize “virtual reality.” For example, some research studies characterize desktop-based gaze-contingent display systems as VR, whereas other research presents fully immersive digital worlds with full body tracking and high-fidelity avatar embodiment and interaction (Mesa-Gresa et al., 2018; Miller & Bugnariu, 2016; Parsons, 2016). While both of these can be characterized as subtypes of VR (in an academic sense), they differ so fundamentally that they defy comparison.

Importantly, the benefits and affordances of VR for individuals with ASD are often alluded to in the literature; however, we have been unable to locate any research that explicitly considers how these benefits and affordances are incorporated into system designs to advance intended outcomes. Rather, design considerations often go unreported, and those that are reported tend to derive from small case observations rather than from strong evidence or comparative studies (Bozgeyikli et al., 2018). If researchers are to approach the problem of how to design effective VR interventions for individuals with ASD in an intentional way, then an operational definition of VR for this field is needed. To inform this operationalization, an understanding of how VR systems have been designed to exploit the characteristics of the technology is needed. This again points to the need to explore which technologies work, for whom, under which contexts, with what kinds of support, and for what kinds of objectives (Parsons, 2016).

To summarize, VR is seen as a promising modality for delivering instruction to individuals with ASD, and reviews of the literature in this research area point to potential benefits (Karami et al., 2020a; Mesa-Gresa et al., 2018). However, while there is evidence to suggest these systems may be effective, there are problems with overgeneralization in the field, as researchers tend to implement a broad variety of VR technologies for an extraordinarily heterogeneous population. Therefore, the aims of this systematic literature review are to uncover, analyze, and present the design characteristics of VR systems that have been designed as intervention or training tools for individuals with ASD. Specifically, this review seeks to: (1) assess points of convergence and divergence in how researchers conceive of and define VR for their projects; (2) extrapolate individual components of VR interventions along the dimensions of (a) which technologies were used, (b) for whom, (c) in which contexts, (d) with what kinds of support, and (e) for what kinds of tasks/objectives; and (3) systematically extract which of the design factors outlined in Dalgarno and Lee’s (2010) model are instantiated in VR-based interventions for individuals with ASD. To these ends, the following three aims guide this inquiry:


Aim 1: To identify how designers of VR interventions for individuals with ASD characterize/define VR;

Aim 2: To categorize the VR systems reported in the literature using the schema suggested by Parsons (2016), namely: (a) which technologies were used, (b) for whom, (c) in which contexts, (d) with what kinds of support, and (e) for what kinds of tasks/objectives; and

Aim 3: To determine how the distinguishing characteristics of virtual learning environments, as outlined by Dalgarno and Lee (2010), are instantiated in VR interventions designed for individuals with ASD.

Methods

A systematic review was conducted to approach these overarching aims (Davis et al., 2014), with the goal of providing reliable conclusions to better inform the field regarding identified issues (Moher et al., 2009). The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) standards were followed to ensure methodological and reporting quality (Moher et al., 2009). The Problem, Interest, COntext (PICo) approach was used to inform the focus of this systematic review (Stern et al., 2014); PICo is a qualitative recasting of the established PICO framework for meta-analyses (Stern et al., 2014; Butler et al., 2016). Keyword and controlled vocabulary search strategies and search filters were established using the PICo framework for qualitative studies (Butler et al., 2016):

1. Population (P): Researchers of VR interventions for individuals with ASD.

2. Interest (I): Designs and definitions/operationalizations of VR training applications and interventions for the population.

3. Context (Co): Research projects that use VR as a tool to deliver interventions to people with ASD.

To identify articles to be included for this review, the electronic databases Web of Science, PubMed, Scopus, IEEE Xplore, and ERIC were searched. Google Scholar was also searched as a secondary tool to seek literature that may have been missed in the full systematic review of the electronic databases (Gusenbauer & Haddaway, 2020; Haddaway et al., 2015).

Protocol and Registration

The protocol for this systematic literature review was developed following PRISMA guidelines (Moher et al., 2009). The protocol was registered per PRISMA guidelines and can be found on the Open Science Framework at osf.io/5asyg.

Eligibility Criteria

Manuscripts were considered eligible for inclusion based on the following criteria. First, manuscripts had to be peer-reviewed articles, published in academic journals, and in English. The decision to include only peer-reviewed journal articles (as opposed to “grey” literature such as conference proceedings, theses, and dissertations) was made so as to identify work of the highest quality. Conference proceedings, “grey” literature, literature reviews, conceptual papers, patents, and citations were excluded. Only articles published after 1995 were included, as the first work in this area was published in 1996 (Strickland, 1996b). Next, articles had to provide a description of a VR system, virtual world, or virtual environment specifically designed for individuals with ASD (i.e., a bespoke system). Manuscripts that did not describe a VR system, or that described VR systems not specifically designed for individuals with ASD (e.g., general, off-the-shelf systems such as console video games, driving simulators, etc.), were excluded, as were manuscripts that miscategorized their intervention as VR when it was something else (e.g., text-based virtual environments, etc.). Further, the VR system, virtual world, or virtual environment described had to have been used to deliver an intervention or training. Manuscripts that described VR systems used for diagnosis of ASD or for enjoyment or entertainment were excluded, as were those that evaluated feasibility and acceptance of technology but lacked an intervention/training component. In addition, evaluation or empirical research data on the VR system, virtual world, or virtual environment had to be presented. Manuscripts that only described design or provided proof-of-concept descriptions were excluded. Finally, the training or intervention had to include individuals with ASD as users. Studies that, for example, only reported on expert review or parent perceptions of a VR system were excluded.

Information Sources

All searches were conducted in March, 2020. Databases searched for this systematic review were Web of Science, PubMed, Scopus, IEEE Xplore, ERIC, and Google Scholar. Web of Science is a subscription-based scientific indexing platform that provides comprehensive citation data across many disciplines; the Web of Science Core Collection is made up of six databases. PubMed is a database maintained by the National Center for Biotechnology Information and includes more than 30 million citations primarily related to life sciences and biomedical topics. Scopus is an abstract and citation database launched by Elsevier. Scopus includes book series, trade journals, and journals related to life sciences, social sciences, health sciences, and the physical sciences. IEEE Xplore is a digital library that provides access to over five million publications related to electrical engineering, computer science, and electronics; more than 20,000 new documents are added to the IEEE Xplore library every month. The ERIC collection is an online library that was established in 1966. ERIC is currently sponsored by the Institute of Education Sciences of the United States Department of Education. It includes various types of publications, including journal articles, books, conference papers, technical reports, dissertations, and more. The ERIC dataset includes over 1.5 million records, which are largely available in Adobe PDF format. Google Scholar is a free search engine that indexes full text and metadata of literature including peer-reviewed manuscripts, conference papers, dissertations, preprints, technical reports, patents, and other scholarly works; it is estimated to include approximately 400 million documents in its index. Google Scholar was also searched because it has emerged as the go-to academic search tool for many in the field due to its ease-of-use and convenience (Gusenbauer & Haddaway, 2020). However, while Google Scholar’s immense dataset serves as a multidisciplinary collection of knowledge, it lacks many of the features required for conducting a systematic search, such as the ability to specify tailored queries with high recall and precision. Therefore, this search engine was used only as a secondary tool to seek literature that may have been missed in the full systematic review of the electronic databases (Gusenbauer & Haddaway, 2020; Haddaway et al., 2015).

Search

An iterative approach was used to develop the search strategy. Cursory searches were conducted and initial results were reviewed to examine the nature of the returned literature. This strategy was refined over several iterations. Ultimately, the search terms used in this systematic review included variations of autism AND virtual reality and/or environments. In addition to using the PICo strategy for developing search queries, additional variations in terminology were adapted from other reviews of the literature in the field (Alcañiz et al., 2019; Brattan, 2019; Karami, 2020; Mesa-Gresa et al., 2018). An example query is provided in Table 1.


Table 1

Unstructured database search query consisting of search terms and Boolean operators.

Term One: (virtual reality) OR (virtual realit*) OR (virtual learning environment) OR (virtual learning environment*) OR (virtual-reality) OR (virtual-realit*) OR (VR) OR (virtual environment) OR (virtual environment*) OR (virtual world) OR (virtual world*) OR (virtual-world) OR (virtual-world*) OR (collaborative virtual learning environment*) OR (3d virtual worlds) OR (3d virtual world*) OR (MUVE) OR (CAVE) OR (head-mounted display)

Condition: AND

Term Two: (autism) OR (autism*) OR (autistic) OR (autis*) OR (asperger) OR (autism spectrum disorder) OR (asd) OR (Asperger Syndrome) OR (asperger’s) OR (Asperger*) OR (Autistic Disorder) OR (Autistic)
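The Boolean structure in Table 1 can be illustrated programmatically. The following Python sketch is not the tooling used in this review; it simply shows how each term list is joined with OR and the two lists are combined with AND (term lists abbreviated here; the full lists appear in Table 1):

```python
def build_query(terms_one, terms_two):
    """Build a Boolean query of the form (t1 OR t2 ...) AND (u1 OR u2 ...)."""
    left = " OR ".join(f"({t})" for t in terms_one)
    right = " OR ".join(f"({t})" for t in terms_two)
    return f"({left}) AND ({right})"

# Abbreviated term lists, for illustration only
vr_terms = ["virtual reality", "virtual realit*", "VR", "virtual environment*"]
asd_terms = ["autism", "autis*", "asperger*", "asd"]
query = build_query(vr_terms, asd_terms)
```

In practice, each database's query syntax differs slightly, which is why the filters and limits in Table 2 were tailored per index.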

A search was conducted across all databases and search indexes stated above. A comprehensive list of search queries across databases is provided in Appendix A. Filters and limits were applied based on the nature of the index and provided functionality (e.g., IEEE Xplore only goes back to 2005; see Table 2).

Table 2

Filters or limits applied to each index used in the literature review

Database: Filters or Limits Applied

PubMed: Published between 1995-2020, English language, human subjects
Web of Science: Published between 1995-2020, English language, TOPIC
Scopus: Published between 1995-2020, English language, article format, published
IEEE Xplore: Published between 2005-2020, journals and magazines
ERIC: Published between 1995-2020, peer-reviewed only
Google Scholar: Published between 1995-2020, exclude patents, exclude citations, not signed-in/incognito


Search Results Reliability. An analysis was performed to assess the reliability of search results. The second author performed database searches using the same search strategy (queries, keywords, filters, etc.) across four databases to validate the number of returned results. Results suggested high agreement: in total, there was a difference of only five records in comparison to the first author. Upon further analysis, these differences were found to be due to (1) new articles becoming available and (2) nuances across computing environments (e.g., cookies, browser version, other personalizations).

Study Selection

First, a search was conducted and results were imported into RefWorks (http://refworks.proquest.com/). Results were compared to identify duplicates. A combination of automated searches (Exact Match, Close Match, and Legacy Close Match) and manual review was used. Identified duplicates were removed. Second, titles and abstracts of the remaining corpus were reviewed. Inclusion and exclusion criteria were applied. Third and finally, full text was reviewed and inclusion and exclusion criteria were applied. The final result was a corpus of 82 articles. The study selection process is illustrated in Figure 1.
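The exact-match stage of deduplication can be sketched as follows. This is an illustration only; RefWorks' own matching (Exact Match, Close Match, Legacy Close Match) is more sophisticated than keying records on a normalized title:

```python
def normalize_title(title: str) -> str:
    """Lowercase and strip non-alphanumeric characters for exact-match comparison."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def remove_exact_duplicates(records):
    """Keep the first record seen for each normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

Near-duplicate ("close match") detection additionally tolerates small differences in punctuation, author order, or pagination, which is why manual review was still required.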

Study Selection Inter-rater Reliability. Screening was performed by the first author, with a subset of articles screened by a trained graduate student to establish inter-rater reliability. The trained graduate student applied inclusion and exclusion criteria to a 25% sample of the corpus of manuscripts after duplicates had been removed (n=765; see Figure 1). Reliability was calculated as the number of agreements between the observers divided by the number of agreements plus disagreements, multiplied by 100 to compute the percentage of agreement. This resulted in an agreement estimate of 87.8%, suggesting high agreement.
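The agreement computation described above reduces to a one-line proportion. A minimal sketch follows; the specific counts below are hypothetical, chosen only to illustrate the arithmetic, not the actual per-item tallies from the screening:

```python
def percent_agreement(agreements: int, disagreements: int) -> float:
    """Inter-rater reliability as percent agreement:
    agreements / (agreements + disagreements) * 100."""
    return 100 * agreements / (agreements + disagreements)

# Hypothetical example: 36 agreements and 5 disagreements -> ~87.8%
example = percent_agreement(36, 5)
```

Percent agreement is the simplest reliability index; it does not correct for chance agreement the way coefficients such as Cohen's kappa do.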

Data Collection Process

We reviewed all publications associated with a given VR intervention/project so as to extract all relevant details across articles. Although Kitchenham (2004) warns against including multiple publications from the same dataset or research project to avoid bias, this guideline refers to reviews that focus specifically on empirical outcomes; our research questions focused not on empirical findings but on design factors. Grouping articles that reported on the same project was therefore necessary, as not all design details were always reported in a single article and sometimes had to be extracted across multiple articles on the same project. Project IDs were created and corresponding articles were grouped under those IDs. IDs combined the project name (if given) and the principal investigator’s last name. In cases where a project did not have a stated name, a descriptive title was created (e.g., Lorenzo et al. IVRT). In cases where we were unable to determine whether an article was reporting on the same VR system as another article by a similar author group, we followed Kitchenham’s (2004) guidelines and contacted the authors to seek clarification, keeping projects separate if no response was received. The process of extracting relevant information on each system’s design, hardware, software, participants, training goals, supports, etc. was facilitated using customized spreadsheets. A total of 49 projects were identified from the corpus of 82 included manuscripts (see Tables 3 and 4).
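The project-grouping step can be sketched as follows. The function and record fields are illustrative only; the actual extraction was performed with customized spreadsheets, not code:

```python
def make_project_id(project_name, pi_last_name):
    """Combine the stated project name (if any) with the PI's last name;
    fall back to the PI's last name alone when no project name was given."""
    return f"{project_name} ({pi_last_name})" if project_name else pi_last_name

def group_by_project(articles):
    """Group article records under a shared project ID."""
    projects = {}
    for article in articles:
        pid = make_project_id(article.get("project"), article["pi"])
        projects.setdefault(pid, []).append(article)
    return projects
```

Grouping in this way lets design details scattered across several publications on the same system be merged into a single project record.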

Data Items

Data extracted from each manuscript included:

● Source and full reference: An APA reference was generated and stored in a reference manager.

● Definition of virtual reality: A summary of how authors describe/define VR; only included if authors explicitly defined VR. A description of proposed benefits was not considered a definition and was not included.

● Description of technologies: The “which technologies” of Parsons’ (2016) framework. A summary of the technologies used in the VR system or intervention. This description included both hardware and software configurations, if provided, and information about the number of users the system supported at once (e.g., single-user or multi-user).

● Target audience: The “for whom” of Parsons’ (2016) framework. A summary of participant demographics and ASD diagnosis. Ages are grouped into three categories: (1) children (0-9 years old), (2) adolescents (10-19 years old), and (3) adults (20+ years old). If the authors described participants in terms of necessary levels of support, that information was also included (e.g., “low functioning”, Level 2).

● Context of the study: The “in which contexts” of Parsons’ (2016) framework. A description of both the physical context and the virtual context. The physical context is defined as where the VR intervention was administered (e.g., in a controlled university setting). The virtual context is defined as where the virtual activities took place (e.g., in a virtual replication of a shopping mall).

● Supports: The “with what kinds of support” of Parsons’ (2016) framework. Supports included the instructional, pedagogical, intervention, etc. supports and scaffolds of the VR intervention and training environment.

● Tasks and/or objectives: The “for what kinds of tasks or objectives” of Parsons’ (2016) framework. A description of the overarching task that was the instructional or learning focus of the VR system. A clinical target was also identified. Clinical targets were characterized using the same categories as used in Mesa-Gresa and colleagues’ (2018) work.

● System description: A summary of the overall VR system, including a description of user interactions, goals, and supports. Used to provide context for identified design factors from the Dalgarno and Lee (2010) framework.

● Design factors: Design factors evident in the VR system, characterized using the characteristics of Dalgarno and Lee’s (2010) elaborated model of learning in 3D virtual learning environments. These characteristics relate to representational fidelity (realistic display of environment, smooth display of view changes and object motion, consistency of object behavior, user representation, spatial audio, kinesthetic and tactile force feedback) and learner interaction (embodied actions, embodied verbal and non-verbal communication, control of environment attributes and behavior, construction/scripting of objects and behaviors).

● Design factors description: A description of how the design factors (Dalgarno & Lee, 2010) are instantiated in the design of a VR system.

Results

The following section summarizes the results of this literature review.

Study Selection Results

The initial search across five indexes resulted in a total of 1,090 results. This pool was culled to 766 articles after duplicates were removed. A total of 192 articles remained after a review of titles and abstracts was conducted for relevance. A full text review was performed on these 192 articles by the first author, and inclusion and exclusion criteria were applied. The outcome of this process was a corpus of 84 articles that met the inclusion criteria. A secondary analysis was performed by the second author on these 84 articles, with a further two articles excluded because they did not have an evaluation component or had been published as conference proceedings. The remaining 82 articles were then categorized by project (as described above), resulting in a total of 49 records representing different interventions/projects. A diagram illustrating the search and selection process is provided in Figure 1.


Figure 1. Flow diagram illustrating systematic search and selection process.

Aim 1: How VR interventions for individuals with ASD are defined and characterized

Analysis of articles associated with the 49 projects found that 22 projects (44.9%) provided explicit definitions of VR, as reported in Table 3.


Table 3

Definitions of VR across projects/interventions.

VR System: VR Definition

VR Adaptive Driving System (Bian et al., 2013; Wade et al., 2016, 2017; Zhang et al., 2017): Engaging visual medium, often in the form of video games, that can be used to create immersive, interactive, and realistic environments.

iSocial (Laffey et al., 2012; Laffey et al., 2014; Schmidt et al., 2014; Schmidt, 2014; Schmidt et al., 2012; Stichter et al., 2014; Wang et al., 2016, 2017): Auditory and visual interactive technology which may represent a reduction of information from a real-world setting but also represents a full description of a setting without the need for imagined components.

Immersive VRET and Cognitive Behavioral Therapy System (Maskey et al., 2014, 2019): Computer-generated virtual images/scenes.

Fire and tornado safety system (Self et al., 2007): Computer-generated, interactive, three-dimensional environment.

Immersive VR (Herrero & Lorenzo, 2019): Where a user is immersed in a computer-generated world so that the point of view updates according to the position of the user, to provide greater realism and interaction.

IVRT research (Lorenzo et al., 2016): A computer generated world, where users are entirely immersed and have the impression that they have “stepped inside” a synthetic world.

E-VISP (Babu et al., 2018): A 3D computer generated virtual world that is capable of providing real-life imagery of the physical world.

Pronunciation VR platform (Chen et al., 2019): Simulation of the real world based on computer graphics.

VR4VR (Bozgeyikli et al., 2017): A model of reality where one can interact with and get information from ordinary human senses and can control the model using ordinary human actions.

VR Intervention (Ip et al., 2018; Yuan & Ip, 2018): Immersive, computer generated world that can be created to simulate real-life situations.

Virtuoso-SVVR (M. Schmidt et al., 2019): Three-dimensional, computer simulated environments that can be experienced by users with specialized electronic equipment. Promotes concepts of presence, telepresence, and spatial immersion.

Street Crossing Platform (Dixon et al., 2019): Computer-based, multisensory, simulated environments that are navigated using different technologies.

Virtual Travel Training (Simões et al., 2018): Artificial, 3D, computer-generated environments which the user can explore and interact with.

JobTIPS (Strickland et al., 2013): A technology where users can practice context-based social and adaptive skills, facilitated in real time by an instructor, within computer generated environments.

Blood draw exposure therapy (Meindl et al., 2019): Technology that can realistically simulate a three-dimensional environment.

3D-SU system (Cheng et al., 2015): A realistic simulated 3D world.

Virtual Reality Social Cognition Training (Didehbani et al., 2016; Kandalaft et al., 2013): Computer-based simulation of reality in which visual representations, based in everyday life settings, are presented on a screen.

Interaction Training (Ke et al., 2015; Ke & Im, 2013): Computer generated, three-dimensional representation of a real-life environment.

Hand-in-Hand (Zhao et al., 2018): Interactive and immersive simulated situations.

VR-CR (M. Wang & Reid, 2013): Simulation of the real world using computer graphics.

VLSS (Volioti et al., 2016): Three-dimensional computing environment in which users can be immersed and interact.

Of the 49 projects, ten (20.4%) defined VR as some kind of computer generated environment, scene, or world. Eleven projects (22.4%) defined VR as providing an environment based on the real world or providing a model of reality that is realistic and ecologically sound. Six projects (12.2%) stated that VR provides sensations of tele-presence, including physical immersion or a psychological sense of feeling ‘there’ within a synthetic world. Nine projects (18.4%) stated that VR allows users to interact with the environment being presented, including other users and objects within the virtual world. One project (2%) defined VR as being game-like.
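The percentages above are simple proportions over the 49 projects, rounded to one decimal place. A brief sketch of the arithmetic (category counts taken from the text; the dictionary keys are shorthand labels, not the exact category names):

```python
def project_share(count: int, total: int = 49) -> float:
    """Percentage of projects in a category, rounded to one decimal place."""
    return round(100 * count / total, 1)

# Counts reported above for how projects defined VR
shares = {
    "computer-generated environment": project_share(10),  # 20.4
    "realistic/ecological model": project_share(11),      # 22.4
    "telepresence/immersion": project_share(6),           # 12.2
    "interaction": project_share(9),                      # 18.4
    "game-like": project_share(1),                        # 2.0
}
```

Note that projects could fall into more than one definitional category, so the counts need not sum to the 22 projects that provided explicit definitions.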

Aim 2: How VR systems can be categorized using Parsons’ (2016) schema

Articles associated with the 49 projects were reviewed, and data associated with Parsons’ (2016) framework were extracted along the dimensions of (a) which technologies, (b) for whom, (c) in which contexts, (d) with what kinds of support, and (e) for what kinds of tasks/objectives. The results of this data extraction process are reported in Table 4.

Table 4

Description of VR projects/interventions for individuals with ASD as categorized using the schema suggested by Parsons (2016).

Project and Which technologies? For whom? In which contexts? What kinds of For what kinds of References support? tasks/objectives?

VR Adaptive Driving Single-User, desktop-based driving Adolescents between 13- Physical context: ● Contextualized Program goals: System (Wade et al., interface that used Logitech G27 18; Not described corrective feedback Driving skills 2016; Wade et al., (wheel) controller and various addons ● Just-in-time corrective 2017; Zhang et al., (eye tracking, eeg, etc) to provide Autism diagnosis based Virtual context: feedback Clinical target: 2017; Bian et al., adaptive responses by the system. on SRS, SCQ, IQ Non-fictional. A replication of ● Token economy Daily living skills 2019) a city in the U.S.A.


iSocial (Wang et al., 2018; Schmidt, 2014; Laffey et al., 2014; Laffey et al., 2012; Stichter et al., 2014; Wang et al., 2016; Wang et al., 2017; Schmidt et al., 2014; Schmidt et al., 2012)
Which technologies? Multi-user, desktop-based VR built using the Java-based OpenWonderland virtual worlds toolkit to deliver online training at-a-distance.
For whom? Youth between 11-14; high-functioning autism or Asperger's based on ADI-R, ADOS.
In which contexts? Physical context: School setting. Virtual context: Fictional. Five fantasy environments, including a boat, restaurant, castle, etc.
What kinds of support?
● Token economy
● “Social orthotics”
● Visual schedules
● Online guide
For what kinds of tasks/objectives? Program goals: Deliver SCI-A curriculum. Clinical target: Social skills.

Gaze sensitive Adaptive Response Technology (Lahiri et al., 2015; Lahiri et al., 2011; Lahiri et al., 2011)
Which technologies? Single-user, gaze-contingent desktop-based VR built in Vizard from WorldViz LLC.
For whom? Youth between 13-18; autism diagnosis based on PPVT, SRS, SCQ, ADOS-G, ADI-R.
In which contexts? Physical context: Not described. Virtual context: Fictional. Scenarios were based on diverse topics and locations of interest to teenagers. For example, one scenario involved a scene at the beach.
What kinds of support?
● Adaptive difficulty
For what kinds of tasks/objectives? Program goals: Social interaction training. Clinical target: Social skills.

Immersive VRET and Cognitive Behavioral Therapy System (Maskey et al., 2019; Maskey et al., 2019; Maskey et al., 2014; Maskey et al., 2019)
Which technologies? Single-user, CAVE-based system called the 'Blue Room' where scenes are projected onto a 360-degree screened room. Scenes are controlled by a therapist with an iPad.
For whom? Youth between 8-14; autism diagnosis based on DSM-IV or ICD-10 criteria from NHS; no comorbid learning disabilities.
In which contexts? Physical context: University setting. Virtual context: Non-fictional. Various scenes based on individual users' phobias, including: a street scene with open land where dogs of different sizes would appear and run around; inside a virtual car that would drive through the streets; the entrance to a school from a participant's life; a dimly lit corridor; a replica of a local sports center; fire drill scenarios; and inside a house during a storm.
What kinds of support?
● Individual treatments
● Gradual exposure
● Cognitive behavioral techniques
For what kinds of tasks/objectives? Program goals: Treatment of phobias. Clinical target: Phobia or fear.

Fire and tornado safety system (Self et al., 2007)
Which technologies? Single-user, desktop-based VR system made with EON Professional 5.0 and 3DS Max 6.0 software.
For whom? Youth between 6 to 12 years.
In which contexts? Physical context: School setting. Virtual context: Fictional. Virtual buildings unfamiliar to participants.
What kinds of support?
● Navigational scaffolds
● Visual cues and prompting
● Evolving complexities
For what kinds of tasks/objectives? Program goals: Teaching safety skills related to fire and tornado. Clinical target: Daily living skills.

AViSSS (Ehrlich & Miller, 2009)
Which technologies? Single-user, desktop-based VR system where users click on multiple choice options to progress through scenarios. Created in OGRE 3D rendering software.
For whom? Adolescents.
In which contexts? Physical context: Not described. Virtual context: Fictional. Hallways, restrooms, buses, and cafeterias unfamiliar to the participants.
What kinds of support?
● Ability to fail and replay scenarios
● Corrective system
For what kinds of tasks/objectives? Program goals: Teaching social skills. Clinical target: Social skills.

Bob’s Fish Shop (Rosenfield et al., 2019)
Which technologies? Single-user, HMD-based VR in Oculus Rift made in Unity.
For whom? One 6 year old adolescent with ASD.
In which contexts? Physical context: Not described. Virtual context: Fictional. Cartoon pet store.
What kinds of support?
● User progress tracking
● Gaze tracking and analysis
For what kinds of tasks/objectives? Program goals: Social skills development. Clinical target: Social skills.

Virtual Joystick (Kim et al., 2015)
Which technologies? Single-user, desktop-based VR with a joystick made in Game Studio A6 rendering engine.
For whom? Youth between 8-16; high-functioning diagnosis based on IQ, SCQ, ASSQ, SRS, MASC, BASC, RME.
In which contexts? Physical context: Not described. Virtual context: Fictional. Living room unfamiliar to the participants.
What kinds of support?
● Increasing emotional intensity levels on avatars
For what kinds of tasks/objectives? Program goals: Examine social motivation and emotion perception. Clinical target: Emotional skills.


CicerOn VR: Virtual Speech Coach (Rojo et al., 2019)
Which technologies? Single-user, mobile-based VR in Samsung Gear HMD with a controller, made in Unity 3D.
For whom? Individuals with Asperger's Syndrome.
In which contexts? Physical context: Not described. Virtual context: Fictional. Locations from around the world.
What kinds of support?
● Gamified
● Gradual exposure
● Formative assessment through speech recognition
For what kinds of tasks/objectives? Program goals: Development of public speaking skills and addressing phobia related to talking in front of others. Clinical target: Phobia or fear.

Eye gaze VR (Grynszpan et al., 2019)
Which technologies? Single-user, gaze-contingent desktop-based VR.
For whom? Young adults with an average age of 24.06; psychiatrist-confirmed diagnosis using the DSM-IV-R criteria.
In which contexts? Physical context: Not described. Virtual context: Fictional. Two twins would appear on a screen.
What kinds of support? Not reported.
For what kinds of tasks/objectives? Program goals: Investigate attentional focus abilities. Clinical target: Attention.

IVR (Herrero & Lorenzo, 2019)
Which technologies? Single-user, HMD-based VR in Oculus Rift made in Unity 3D.
For whom? Youth aged between 8 to 15; autism diagnosis; Level 1 and Level 2 of DSM-V.
In which contexts? Physical context: Not described. Virtual context: Fictional. Generic virtual school and garden.
What kinds of support?
● Reduction of stimuli that could be distracting
● Simplified control system
For what kinds of tasks/objectives? Program goals: Train the emotional and social skills of students. Clinical target: Emotional skills.

CRETA (Zhang et al., 2020)
Which technologies? Multi-user, desktop-based VR.
For whom? Youth with an average age of 13.39; diagnosis based on SRS, SCQ, ADOS, IQ.
In which contexts? Physical context: Not described. Virtual context: Fictional. Problem solving games.
What kinds of support?
● Adaptive system that responds to how a user is communicating
For what kinds of tasks/objectives? Program goals: Assessing social communication and collaboration. Clinical target: Communication ability.

Facial Recognition VR (Bekele et al., 2014; Bekele et al., 2012; Bekele et al., 2013)
Which technologies? Single-user, gaze-contingent desktop-based VR made in Unity 3D.
For whom? Youth aged between 13-17; high-functioning diagnosis based on SRS, SCQ, ADOS-6, ADOS-CSS, IQ; below risk range.
In which contexts? Physical context: Not described. Virtual context: Fictional. Virtual characters displayed on a screen.
What kinds of support? Not reported.
For what kinds of tasks/objectives? Program goals: Enhancing facial affect recognition. Clinical target: Emotional skills.

VR-JIT (Smith et al., 2015; Smith et al., 2014; Smith et al., 2020)
Which technologies? Single-user, desktop-based VR.
For whom? Teenagers and adults between 16 to 31; high-functioning diagnosis based on SRS.
In which contexts? Physical context: Not described. Virtual context: Fictional. Virtual character representing a human resources manager in a large department store.
What kinds of support?
● Repetitive simulated job interviews based on hierarchical learning
● Algorithm based on customizable features that adapts to learner needs
● Non-branching design that has over 2000 different simulations
● Displaying scores based on performance
● Difficulty levels where the interviewer becomes more agitated, hostile, or will even ask illegal questions
For what kinds of tasks/objectives? Program goals: Interviewing skills. Clinical target: Daily living skills.

IVRT research (Lorenzo et al., 2016)
Which technologies? Single-user, L-shaped CAVE-based and desktop-based VR designed in Vizard from WorldViz LLC.
For whom? Youth aged between 7-12.
In which contexts? Physical context: Not described. Virtual context: Fictional. Party and a classroom.
What kinds of support?
● Adaptive system that responds to the actions of users in the environment
For what kinds of tasks/objectives? Program goals: Improve emotional skills. Clinical target: Emotional skills.

Virtual Dolphinarium (Cai et al., 2013; Lu et al., 2018)
Which technologies? Single-user, projector-based with Microsoft Kinect.
For whom? Youth aged between 6-17; mild to severe diagnosis based on TONI3, NIQ, GARS.
In which contexts? Physical context: Not described. Virtual context: Fictional. Next to a pool/virtual dolphin lagoon where users are able to interact with pink dolphins. The environment also includes underwater scenes where the dolphins can swim and behave naturalistically.
What kinds of support?
● Mirroring of psychomotor skills from in-game avatars
● Gamified
● Experiential learning methods
For what kinds of tasks/objectives? Program goals: To teach nonverbal communication through gesturing. Clinical target: Communication ability.


Haptic-Gripper Virtual Reality System (Zhao et al., 2018)
Which technologies? Single-user, desktop-based system that uses a haptic grip attachment made by augmenting a commercial haptic device (Geomagic Touch Haptic Device) with a 3D-printed gripper embedded with force-sensing resistors.
For whom? Youth aged between 8-12; diagnosis based on ADOS, SB-5, SRS.
In which contexts? Physical context: Not described. Virtual context: Fictional. Virtual tasks and activities including letter tasks and curved path tasks.
What kinds of support?
● Adaptive haptic, audible, and visual feedback
For what kinds of tasks/objectives? Program goals: Address motor skill deficits. Clinical target: Physical activity.

AS System (Kuriakose & Lahiri, 2017; Kuriakose & Lahiri, 2015)
Which technologies? Single-user, desktop-based VR designed in Vizard from WorldViz LLC.
For whom? Youth aged between 10-16; above-average IQ diagnosis based on SRS, SCQ, SCAC.
In which contexts? Physical context: Not described. Virtual context: Fictional. Social stories in relevant social situations such as a classroom, park, hotel, etc.
What kinds of support?
● Tracks anxiety measures of individual users
For what kinds of tasks/objectives? Program goals: Social communication skills. Clinical target: Social skills.

E-VISP (Babu et al., 2018)
Which technologies? Single-user, gaze-contingent desktop-based VR designed in Vizard from WorldViz LLC.
For whom? Youth aged between 10-19; clinical-range diagnosis based on SCAS, SRS, SCQ.
In which contexts? Physical context: Not described. Virtual context: Fictional. Social contexts taking place in scenarios including: birthday party and marriage party, dinner, classroom, restaurant, movie theatre, and sporting events.
What kinds of support?
● Eye-tracking technology that can use gaze-based biomarkers to provide quantitative estimates of one’s anxiety
For what kinds of tasks/objectives? Program goals: Social communication. Clinical target: Social skills.

Immersive VR System (Halabi et al., 2017; Halabi et al., 2017)
Which technologies? Single-user, four-walled CAVE-based, HMD-based in Oculus Rift, and desktop-based VR.
For whom? Youth aged between 4-6.
In which contexts? Physical context: Not described. Virtual context: Fictional. On the grounds of a school and inside of a classroom.
What kinds of support?
● Roleplay pedagogy to embody users in the task
● Auto-navigation through the environment
For what kinds of tasks/objectives? Program goals: Improve communication skills. Clinical target: Communication ability.

Pronunciation VR platform (Chen et al., 2019)
Which technologies? Single-user, gaze-contingent desktop-based VR.
For whom? Youth with an average age of 6.63; low-functioning diagnosis based on GARS-2, CARS.
In which contexts? Physical context: Not described. Virtual context: Fictional. A 3D virtual tutor presented as various models of the face, lips, tongue, jaw, and nasopharyngeal wall. The model would animate and change in order to generate realistic pronunciation models.
What kinds of support?
● Modeling of skills
For what kinds of tasks/objectives? Program goals: Word pronunciation training. Clinical target: Communication ability.

AS Interactive (Parsons et al., 2005; Parsons et al., 2006; Rutten et al., 2003; Mitchell et al., 2007; Parsons et al., 2004; Parsons, 2005)
Which technologies? Single-user, desktop-based VR system controlled with a joystick and a mouse, designed in Superscape Virtual Reality Toolkit.
For whom? Youth aged between 13-18; diagnosis based on FSIQ, WASI.
In which contexts? Physical context: Not described. Virtual context: Fictional. Within a social café and on a bus.
What kinds of support?
● Scaffolding through various levels with different social complexities
● Teacher-controlled pause feature to provide opportunities for communication with the users of the system
● Visual and verbal feedback provided to users on their performance
For what kinds of tasks/objectives? Program goals: Social skills and social conventions development. Clinical target: Social skills.

VR4VR (Bozgeyikli et al., 2017)
Which technologies? Single-user, VR220 HMD-based VR system made in Unity.
For whom? College aged; high functioning.
In which contexts? Physical context: Not described. Virtual context: Fictional. Several virtual environments in which the skill modules take place, such as a warehouse, grocery store, outdoor parking lot, office space, and street.
What kinds of support?
● Tutorial level as part of each virtual scenario
● System prompts provided to the user throughout
● Evolving complexities such as the addition of system distractors
For what kinds of tasks/objectives? Program goals: Vocational rehabilitation. Clinical target: Daily living skills.

Street-crossing environment (Josman et al., 2008)
Which technologies? Single-user, desktop-based VR made in Superscape's 3D Webmaster.
For whom? Youth aged between 8-16; moderate to severe diagnosis.
In which contexts? Physical context: Not described. Virtual context: Fictional. Four-lane divided street with crosswalks.
What kinds of support?
● Nine levels of evolving complexities and challenge
● Safe repeatable levels
For what kinds of tasks/objectives? Program goals: Street crossing skills. Clinical target: Daily living skills.


VR-Tangible Interaction System (Jung et al., 2006; Jung et al., 2006)
Which technologies? Single-user, projector-based system with physical devices (e.g. a stick, a rotation board, a trampoline).
For whom? Youth aged between 5-6; diagnosis based on DSM-IV criteria.
In which contexts? Physical context: Not described. Virtual context: Fictional. Breaking virtual balloons with a real stick, food appearing on a screen, facial expressions appearing on the screen.
What kinds of support?
● Therapist-controlled levels
● Visual and auditory responses are provided to give feedback
● Five phases that participants gradually go through
For what kinds of tasks/objectives? Program goals: Integrate sensory and motor experiences. Clinical target: Physical activity.

Emotional and social adaptation Intervention (Ip et al., 2018; Yuan & Ip, 2018)
Which technologies? Single-user, half CAVE-based VR made in MiddleVR for Unity3D.
For whom? Youth with an average age of 9.03.
In which contexts? Physical context: Not described. Virtual context: Fictional. Four seasons simulation, a home scene to practice morning routines, taking a bus, engaging in a classroom, a store, and a playground.
What kinds of support?
● Game-based learning
For what kinds of tasks/objectives? Program goals: Enhance emotional and social adaptation. Clinical target: Social skills.

Block Challenge (Parsons, 2015)
Which technologies? Multi-user, desktop-based VR made in DEMON.
For whom? Youth aged between 10-13; high functioning; diagnosis based on SCQ.
In which contexts? Physical context: School setting. Virtual context: Fictional. A virtual interface with a digital facilitator and different colored blocks that are used to solve puzzles.
What kinds of support?
● Support agent
● In-person facilitator
● Various levels of difficulty
For what kinds of tasks/objectives? Program goals: Supporting communicative perspective-taking skills. Clinical target: Communication ability.

Virtuoso-SVVR (Schmidt et al., 2019)
Which technologies? Single-user, mobile-based spherical video-based virtual reality in Google Cardboard and Daydream HMD, made in Unity.
For whom? Adults aged between 22-34; diagnosis based on PPVT, SRS, BRIEF.
In which contexts? Physical context: University setting. Virtual context: Non-fictional. Video scenarios taking place on a university campus and shuttle bus.
What kinds of support?
● Video modeling
● Chunked content that is short and digestible
● Narration that explains the instruction
For what kinds of tasks/objectives? Program goals: Public transportation training. Clinical target: Daily living skills.

Street Crossing Platform (Dixon et al., 2019)
Which technologies? Single-user, spherical video-based virtual reality in Oculus Rift run in SteamVR.
For whom? Youth aged between 4-10; average to high functioning diagnosis based on PDDBI.
In which contexts? Physical context: Autism center setting. Virtual context: Non-fictional. 360-degree videos of traffic patterns taking place in the participants' local community.
What kinds of support? Not reported.
For what kinds of tasks/objectives? Program goals: Street crossing skills. Clinical target: Daily living skills.

Modified Virtual Errands Task (Rajendran et al., 2011)
Which technologies? Single-user, desktop-based VR made in Superscape 3D Webmaster and run using Superscape Visualiser.
For whom? Youth aged between 11-17; high-functioning diagnosis based on IQ, WASI, VSIQ, VIQ, PIQ, BADS.
In which contexts? Physical context: Not described. Virtual context: Fictional. An actual university building consisting of three floors that are connected by stairwells.
What kinds of support?
● Tasks varied in complexities and required more steps to complete as participants progressed through the intervention
For what kinds of tasks/objectives? Program goals: Multi-tasking evaluation while conducting various errands. Clinical target: Daily living skills.

Virtual Mall (Trepagnier et al., 2005)
Which technologies? Single-user, desktop-based VR controlled with a joystick.
For whom? Adults aged between 19-27; ability to think out loud.
In which contexts? Physical context: Not described. Virtual context: Fictional. Virtual mall.
What kinds of support? Not reported.
For what kinds of tasks/objectives? Program goals: Navigating a mall and performing socially appropriate behaviors. Clinical target: Social skills.

Virtual Conversation Partner (Trepagnier et al., 2005; Trepagnier et al., 2011)
Which technologies? Single-user, desktop-based VR.
For whom? Aged between 16-30; diagnosis based on WASI.
In which contexts? Physical context: Not described. Virtual context: Fictional. Pre-recorded virtual character that responds to dialog options.
What kinds of support?
● Point system to provide a score and insight into the performance of users
● A help button which includes assistance features
For what kinds of tasks/objectives? Program goals: Conversational skills development. Clinical target: Communication ability.

Virtual Travel Training (Simões et al., 2018)
Which technologies? Single-user, HMD-based VR in Oculus Rift with gamepad controller.
For whom? Young adults with average age of 18.9; low to mild intellectual disability.
In which contexts? Physical context: Autism center setting. Virtual context: Fictional. In a virtual city that adapts to the user's biofeedback and within public buses.
What kinds of support?
● Difficulty levels of varying complexities
● Scoring system
● Adaptive biofeedback system
For what kinds of tasks/objectives? Program goals: Taking a bus to reach specified destinations. Clinical target: Daily living skills.


JobTIPS (Strickland et al., 2013)
Which technologies? Multi-user, desktop-based VR through the VenuGen4 virtual reality platform.
For whom? Young adults with an average age of 18.21; high-functioning diagnosis based on SRS.
In which contexts? Physical context: Not described. Virtual context: Fictional. In a realistic office space.
What kinds of support?
● Video models
● Visual supports
● Clinician feedback in-system
● Concrete explanations
For what kinds of tasks/objectives? Program goals: Interviewing skills. Clinical target: Daily living skills.

Crossing the Street (Strickland, 1997)
Which technologies? Single-user, HMD-based VR that uses a ProVision 100 fully integrated VR system by Division.
For whom? Youth aged between 7-9; minimal vocabulary.
In which contexts? Physical context: Not described. Virtual context: Fictional. Simplified street scene with a sidewalk and textured buildings.
What kinds of support?
● Continually modified for each individual between sessions
● Reduction of in-world distractions
For what kinds of tasks/objectives? Program goals: Street safety. Clinical target: Daily living skills.

Floreo PSM (Parish-Morris et al., 2018)
Which technologies? Single-user, mobile-based VR in a lightweight HMD.
For whom? Users aged between 12-37; diagnosis based on WASI, SCQ.
In which contexts? Physical context: Autism center setting. Virtual context: Fictional. Throughout a virtual community.
What kinds of support?
● Therapist can control officer responses to be adaptive
● System is monitored by a therapist who provides instant feedback
For what kinds of tasks/objectives? Program goals: Police interaction skills. Clinical target: Daily living skills.

Blood draw exposure therapy (Meindl et al., 2019)
Which technologies? Single-user, mobile-based spherical video-based virtual reality system with Tzumi Dream Vision HMD and an Apple pencil used to simulate a needle.
For whom? One 26 year old participant with ASD.
In which contexts? Physical context: Home setting and doctor's office setting. Virtual context: Fictional. Blood draw video.
What kinds of support?
● Gradual exposure
● Safe comfortable administration settings
For what kinds of tasks/objectives? Program goals: Reducing phobia of having blood drawn. Clinical target: Phobia or fear.

3D Empathy System (Cheng et al., 2010)
Which technologies? Single-user, desktop-based VR made in 3D Max, Virtools, and Poser.
For whom? Youth aged between 8-10; diagnosis based on FSIQ, PIQ, VIQ, WASI.
In which contexts? Physical context: Not described. Virtual context: Fictional. A restaurant.
What kinds of support?
● Simplified language
● Virtual teacher or online guide would prompt questions
For what kinds of tasks/objectives? Program goals: Promote empathy. Clinical target: Emotional skills.

3D-SU system (Cheng et al., 2015)
Which technologies? Single-user, HMD-based VR in Model: I-Glasses PC 3D Pro.
For whom? Youth aged between 10-13; diagnosis based on WASI, VIQ, PIQ, FSIQ.
In which contexts? Physical context: Not described. Virtual context: Fictional. A virtual bus stop and a classroom.
What kinds of support?
● Social modeling
● Awards system
● Auditory feedback system
For what kinds of tasks/objectives? Program goals: Social understanding and skills development. Clinical target: Social skills.

Public Speaking Intervention (Jarrold et al., 2013)
Which technologies? Single-user, HMD-based VR with eMagin Z800 3DVisor made in Vizard from WorldViz LLC.
For whom? Youth aged between 8-16; diagnosis based on WASI, SCQ, ASSQ, SRS; high-functioning and low-functioning comparison groups.
In which contexts? Physical context: Not described. Virtual context: Fictional. Classroom.
What kinds of support?
● In-person guide who acted as a teacher in the virtual classroom
● Visual cues
● Difficulty levels
For what kinds of tasks/objectives? Program goals: Public speaking skills. Clinical target: Daily living skills.

Virtual Reality Social Cognition Training (Kandalaft et al., 2013; Didehbani et al., 2016)
Which technologies? Multi-user, desktop-based VR in Second Life.
For whom? Youth aged between 7–26; high-functioning diagnosis based on ADOS.
In which contexts? Physical context: Not described. Virtual context: Fictional. An office building, a pool hall, a fast food restaurant, a technology store, an apartment, a coffee house, an outlet store, a school, a campground, and a central park.
What kinds of support?
● Online guide that facilitated instruction
For what kinds of tasks/objectives? Program goals: Social cognition skills. Clinical target: Social skills.

Social Interaction Training (Ke & Im, 2013; Ke et al., 2015)
Which technologies? Multi-user, desktop-based VR in Second Life.
For whom? Youth aged between 9–10; HFASD or Asperger Syndrome diagnosis based on existing medical or educational records.
In which contexts? Physical context: Home setting, school setting, and parent's office setting. Virtual context: Fictional. Birthday party and a cafe.
What kinds of support?
● Online facilitator
For what kinds of tasks/objectives? Program goals: Social interaction training. Clinical target: Social skills.

Hand-in-Hand (Zhao et al., 2018)
Which technologies? Multi-user, desktop-based VR with LeapMotion controller made in Unity.
For whom? Youth with average ages of 12.38 and 12.60; diagnosis based on SRS, SCQ.
In which contexts? Physical context: Not described. Virtual context: Fictional. Various collaborative games.
What kinds of support?
● Score system
● System feedback on performance
For what kinds of tasks/objectives? Program goals: Promote communication and collaboration skills. Clinical target: Social skills.


Eye-gaze system (Grynszpan et al., 2012)
Which technologies? Single-user, eye-contingent desktop-based VR.
For whom? Adults with average age 20.19; high-functioning diagnosis based on WAIS, DSM-IV criteria.
In which contexts? Physical context: Not described. Virtual context: Fictional. Virtual characters expressing emotion while talking.
What kinds of support?
● Intonation was reduced to the minimum by using synthesized speech
● Real-time feedback was provided about the gaze of participants as they used the system
For what kinds of tasks/objectives? Program goals: Self-monitoring of gaze. Clinical target: Social skills.

Pico's Adventure (Crowell et al., 2019)
Which technologies? Multi-user, projector-based VR with Microsoft Kinect.
For whom? Youth with average age of 5.69; diagnosis based on ADOS, ADI-R, WISC-IV >70.
In which contexts? Physical context: Not described. Virtual context: Fictional. Games.
What kinds of support?
● Introduction phase
● Game elements and reward system
For what kinds of tasks/objectives? Program goals: Collaboration skills. Clinical target: Social skills.

Decoding Social Interactions (Jacques et al., 2018)
Which technologies? Single-user, 6-wall CAVE-based VR.
For whom? Adults with typical intelligence.
In which contexts? Physical context: Not described. Virtual context: Fictional. Various social contexts such as a party, restaurant, bus stop, and a bar.
What kinds of support?
● Virtual coach
For what kinds of tasks/objectives? Program goals: Decoding social interaction. Clinical target: Social skills.

VR-CR (Wang & Reid, 2013)
Which technologies? Single-user, desktop-based VR; motion-capture technology was incorporated using a tracking webcam.
For whom? Youth aged between 6-8; diagnosis based on CARS, PDD-NOS.
In which contexts? Physical context: Home setting. Virtual context: Fictional. Within a variety of locations relevant to the objects being assessed for contextual processing, including a kitchen and bathroom.
What kinds of support? Not reported.
For what kinds of tasks/objectives? Program goals: Improve contextual processing of objects. Clinical target: Daily living skills.

VLSS (Volioti et al., 2016)
Which technologies? Single-user, desktop-based VR made in Open Simulator.
For whom? Designed for youth with ASD between ages of 9–17 years.
In which contexts? Physical context: Not described. Virtual context: Fictional. School.
What kinds of support?
● Wide, open, comfortable VR spaces to prevent issues with control and functionality which could impact cognitive load
● Distractors from the real-world have been removed
● Audible and visual feedback
● Stable voice from virtual instructor
For what kinds of tasks/objectives? Program goals: Social communication skills. Clinical target: Communication ability.

VESIP (Russo-Ponsaran et al., 2018)
Which technologies? Single-user, desktop-based VR.
For whom? Youth aged between 8-12; verbal and diagnosed based on SCQ, IQ above 80.
In which contexts? Physical context: University setting and school setting. Virtual context: Fictional. Watching avatars behave in a school setting.
What kinds of support?
● Virtual helper agent
● Customized to the user
For what kinds of tasks/objectives? Program goals: Social information processing. Clinical target: Social skills.

Which technologies? Clearly, a wide variety of VR technologies and interfaces are used to deliver interventions (see Figure 2). Thirty-two projects administered interventions on desktop-based interfaces that present virtual environments on a computer monitor and implement a combination of controller options. Five projects used cave automatic virtual environments (CAVE), where scenarios were projected onto a combination of screens that users could interact with through motion trackers, cameras, or other wearables. Eight projects used fully immersive head-mounted displays (HMD), where users perceive virtual experiences through wearable helmets and their gestures are captured through various trackers and controllers. Five projects delivered their interventions through mobile-based systems that present scenarios through videos and interactive worlds that can be perceived through lightweight HMD and controllers. Three projects used a projector-based system that allowed users to interact with the system through a Microsoft Kinect or a motion tracking device.

Figure 2. Overview of VR technologies used across 49 identified projects/interventions.

Varying human-computer interfaces are used to provide possibilities for user interaction and display of information with the different VR technologies illustrated in Figure 2. Desktop-based systems included the use of keyboards, computer mice, joysticks, haptic devices (e.g., a pressure-sensitive Geomagic Touch Haptic Device), video game controllers, eye-gaze contingent devices, motion trackers, and driving interfaces. Gaze-contingent systems (18.75% of desktop-based VR) and systems with some combination of keyboard/mouse or controller options (71.9% of desktop-based VR) were the most common. CAVE-based interfaces utilized a variety of configurations, including a 6-walled CAVE, a full 360-degree CAVE, a 4-walled CAVE, a half CAVE, and an L-shaped CAVE. All five mobile-based VR systems were presented using lightweight mobile HMD, including Google Cardboard, Google Daydream, and the Tzumi Dream Vision. Three of the mobile-based VR projects (60%) presented digital worlds that users could interact with using various input methods. Two of the mobile-based VR projects (40%) provided users with 360-degree video-based scenarios. Of the eight fully immersive HMD-based VR systems, five were presented in an Oculus Rift (62.5%). The other three fully immersive HMD used the ProVision, I-Glasses, and the Z800 3DVisor, respectively.

For whom? The projects and interventions reviewed were designed for a wide range of participants with ASD, with demographics that vary considerably in age and diagnostic measures (see Table 4). Reporting of necessary levels of support for participants is inconsistent, and few articles identify participant comorbidities. Age ranges of participants are the most commonly reported demographic item. The majority of research in this field has been conducted with children (aged 0-9) and adolescents (aged 10-19). Interventions designed for adults (20+) are less common (Figure 3). These findings are in agreement with other published reviews (Lal Bozgeyikli et al., 2018; Mesa-Gresa et al., 2018; Parsons, 2016).

Figure 3. Participant age ranges across identified VR projects/interventions.


In which contexts? Reporting of the physical contexts in which projects/interventions were implemented was inconsistent, with the majority of projects (76.5%) failing to include this information. From the 12 projects that did report this information, it is evident that physical contexts varied considerably, as shown in Figure 4, and that few projects were situated outside of controlled settings.

Figure 4. Breakdown of Physical Contexts Used in the Literature.

Of the virtual contexts identified, the majority were based on fantasy or fictional environments (94.1%), with a minority situated in real-world settings (5.9%). Fictional settings were defined as virtual environments that, while sometimes realistic, were not based on settings from the real world. Examples include birthday parties, schools, shopping centers, cafeterias, and the inside of a bus. Non-fictional settings were defined as virtual environments that were based on real-world settings, often from the lives of the target participants. For example, Schmidt et al. (2019) created a spherical video-based virtual reality training application that included high-definition footage from actual settings that adults with ASD typically encountered in their day program.


With what kinds of support? Reporting of instructional supports and scaffolds in the literature is inconsistent. Many projects do not explicitly report, but rather imply, the supports that are provided by the VR system. When supports are reported, they tend to be closely aligned with the system design and are therefore highly contextualized. Summarizing and synthesizing supports is therefore challenging, as there is little consistency across reporting or implementation.

However, broad descriptions of supports are implied. For example, several VR systems are reported to provide users with adaptive system responses to individualize the task to the unique needs of the individual (Parish-Morris et al., 2018; Simões et al., 2018; Zhao et al., 2018; Lorenzo et al., 2016). How this support is provided is not always detailed, and system configurations impact how supports are provided to users. In some cases, biofeedback and user metrics are collected (Babu et al., 2018; Simões et al., 2018) through external peripherals that tie into the software of the VR system to provide dynamic experiences. In other cases, the actions of users, such as their repeated failures, impact how the system responds and adapts (Lorenzo et al., 2016). These findings align with other reviews stating that researchers of VR experiences need to do a better job of reporting on pedagogical decisions (Fowler, 2015).

For what kinds of tasks/objectives? Tasks and objectives were characterized using the same categories as reported in Mesa-Gresa and colleagues (2018): social skills, emotional skills, daily living skills, communication ability, attention, physical activity, and phobia or fear. The majority of projects/interventions identified target the development of social skills (34.7%) and daily living skills (28.6%). Seven studies (14.3%) were designed to target communication skills development. Five studies (10.2%) focused on emotional skills development. Three studies (6.1%) focused on the creation of systems to help with the treatment of phobias or fear. Two studies (4.1%) focused on the training of physical skills. One study (2%) focused on attention skills. Figure 5 shows the full breakdown of studies that focused on each of the clinical targets. These findings are largely in agreement with those reported in Mesa-Gresa and colleagues' work, with some key differences; fewer projects in our work focused on emotional skills, for example.

Figure 5. Clinical targets of training activities.

Aim 3: How distinguishing characteristics are instantiated in VR interventions designed for individuals with ASD

Articles associated with the 49 projects were reviewed and data extracted according to Dalgarno and Lee's (2010) elaborated model of learning in 3D virtual learning environments. Specifically, projects were categorized along the dimensions of unique characteristics related to representational fidelity (realistic display of environment, smooth display of view changes and object motion, consistency of object behavior, user representation, spatial audio, kinesthetic and tactile force feedback) and learner interaction (embodied actions, embodied verbal and non-verbal communication, control of environment attributes and behavior, construction/scripting of objects and behaviors). Findings suggest designers of VR environments sought to exploit a number of distinguishing characteristics of virtual learning environments to promote learning within their training programs. The manner in which these design factors were instantiated varied across projects and system architecture. Findings are reported in Table 5.

Table 5

Design factors of VR projects/interventions for individuals with ASD as categorized using the elaborated model of learning in 3D virtual learning environments (Dalgarno & Lee, 2010).

In the entries that follow, each project is listed with its associated references and a description of the system, followed by the identified design factors (Dalgarno & Lee, 2010) and a description of how each factor was instantiated.

VR Adaptive Driving System (Wade et al., 2016; Wade et al., 2017; Zhang et al., 2017; Bian et al., 2019)
System description: The virtual platform supports driving tasks with variable complexity and difficulty. Drivers/learners can interact with traffic lights, pedestrians, and other drivers, all designed to behave naturalistically. A physics engine provides further realism, and virtual currency rewards users for their performance.
Identified design factors:
- Consistency of object behaviour: A realistic physics engine exerts force on in-world objects. Other in-world assets such as pedestrians, traffic lights, and other vehicles behave in a way that mirrors the real world.
- Realistic display of environment: A realistic replication of Philadelphia was created, including a full model of its streets and buildings.
- Embodied actions: Users engage with the system through a variety of hardware that affords the embodiment of actions emulating those from the real world, such as pressing down a brake and gas pedal to control the virtual vehicle.

iSocial (Wang et al., 2018; Schmidt, 2014; Laffey et al., 2014; Laffey et al., 2012; Stichter et al., 2014; Wang et al., 2016; Wang et al., 2017; Schmidt et al., 2014; Schmidt et al., 2012)
System description: iSocial implements the 31-lesson social competence intervention (SCI-A) with high fidelity within a virtual environment that includes learning scaffolds and supports. Learners are represented as avatars, communication is verbal and nonverbal, and users manipulate objects in the 3D space as they engage in goal-oriented curricular tasks.
Identified design factors:
- User representation: In iSocial each user is represented by their own avatar.
- Realistic display of environment: A variety of realistic environments were created where users could collaborate together, including a boat, restaurant, castle, and other fantasy worlds.
- Embodied actions: Users can control their avatar and manipulate and select objects within the virtual environment to complete curricular tasks.
- Embodied verbal and non-verbal communication: Users can communicate with other people within the environment through multiple modalities, including verbal communication through a microphone and gestural communication through control of their avatar.

Gaze-sensitive Adaptive Response Technology (Lahiri et al., 2015; Lahiri et al., 2011; Lahiri et al., 2011)
System description: This virtual reality system was designed to administer and alter social interactions through bi-directional conversational turn-taking and feedback. The system measures physiological metrics of gaze to make predictions regarding engagement and adapts communication to the behavior of its users.
Identified design factors:
- Consistency of object behaviour: Virtual characters behave realistically.
- Realistic display of environment: Photorealistic backgrounds were used behind the virtual characters to provide naturalistic social contexts.
- Smooth display of view changes and object motion: Virtual scenes were designed to change smoothly to display new situations.


Immersive VRET and Cognitive Behavioral Therapy System (Maskey et al., 2019; Maskey et al., 2019; Maskey et al., 2014; Maskey et al., 2019)
System description: This intervention was administered in a space called 'The Blue Room', where audio-visual images were projected onto the walls and ceilings of a 360-degree seamless screened room. Participants could move around the space to freely interact with and navigate through the scenario, while a therapist controlled the scene being administered through an iPad. Participants underwent several treatments that evolved in exposure, and cognitive and behavioral techniques were used throughout.
Identified design factors:
- Spatial audio: Visual and audio media are presented to users in an immersive CAVE environment.
- Realistic display of environment: Photorealistic environments are created, often from real-life social contexts of participants, so that they can be used to provide gradual exposure therapy to stimuli that invoke individual phobias.

Fire and tornado safety system (Self et al., 2007)
System description: Virtual simulations were created that provided different levels of cueing, including the sound of alarms going off, navigational wayfinding, redirected attention to relevant cues, and sensory cues of a fire including olfactory stimuli from ScentPalate®. Prompts and scaffolds were faded as participants went through the simulation repeatedly; by the end of the intervention, participants had to assume control for most of the actions in the environment. The training simulation was based on buildings that participants were not familiar with.
Identified design factors:
- Consistency of object behaviour: Objects within the virtual environment maintain properties from the real world. For example, the fire object behaves realistically and provides sensory stimuli, including olfactory stimuli.
- Realistic display of environment: A realistic building was created for users to navigate through as they practiced procedures like tornado and fire drills. Realistic visual cues and stimuli from the environment are included.
- Embodied actions: Users control a character through a virtual environment that evolves in complexity and gradually provides more control and embodiment to the user.

AViSSS (Ehrlich & Miller, 2009)
System description: AViSSS was designed to simulate everyday real-world contexts. Each environment has multiple scenarios where the user must solve a problem. Scenarios are designed around decision trees, and participants interact with the environment by clicking on responses that appear at decision points. If an incorrect decision is made, the software informs the participant about why that choice was poor; the last scene then replays so the user can make a better decision, with the prior selection grayed out.
Identified design factors:
- Realistic display of environment: Realistic social environments including hallways, restrooms, and cafeterias were created.


Bob's Fish Shop (Rosenfield et al., 2019)
System description: This system was designed to allow users to successfully interact with the proprietor of a pet shop. Participants used the Oculus Rift to navigate through the world as if they were actually inside the virtual environment. The script was designed in consultation with a BCBA to map out example conversations with variations, and a professional voice actor recorded the audio. Users begin in their home, go to a fish shop, examine the shelves, and then interact with the proprietor, whose mouth and body movements were animated to be naturalistic. The participant's verbal communications are processed and referenced against a response library to determine what actions should come next.
Identified design factors:
- Embodied verbal and non-verbal communication: Users can communicate verbally through the use of a microphone; the system uses voice recognition to interpret what the user is saying and has an in-world character respond. The user is also able to gesture through the use of the Oculus Rift's wand controllers.

Virtual Joystick (Kim et al., 2015)
System description: Virtual characters were created as a way of measuring how participants would use a joystick to position themselves closer to or further from virtual avatars while attempting to identify six emotions expressed by the avatars (happiness, fear, anger, disgust, sadness, and surprise) at different levels of intensity.
Identified design factors:
- Realistic display of environment: Users interact within a living room environment with virtual characters that present a range of realistic emotions and intensities.
- Embodied actions: Users control an avatar with a joystick to position themselves where they would in social scenarios based on the emotions that they perceive in the virtual characters.

CicerOn VR: Virtual Speech Coach (Rojo et al., 2019)
System description: CicerOn is a serious game where participants can talk to different characters within virtual environments. It includes six gamified levels based on Egyptian mythology. Users must solve riddles as they travel through different countries to find lost objects; after completing each level, they must read aloud a final piece of text that is used to assess their speaking ability as well as advance the narrative of the game. The levels provide users with gradual exposure to the fear of public speaking. A speech recognition system evaluates the responses of users, which allows for formative assessment and improvement of their abilities. An HMD is implemented with the goal of helping users transfer what they learn.
Identified design factors:
- Embodied verbal and non-verbal communication: Users read a series of artifacts out loud, which the system interprets and provides feedback on.
- Embodied actions: Users can control their viewport and interact within the environments to find clues and progress through the game.
- Realistic display of environment: Environments were created based on real-world locations.

Eye gaze VR (Grynszpan et al., 2019)
System description: This system utilized an eye-tracking device capable of monitoring the gaze of participants. A bust of a male avatar was created and programmed to follow the gaze of the participant; some of the virtual characters would follow the gaze of participants and others would not.
Identified design factors:
- Embodied actions: Eye gaze within the system is controlled by the user, which directs the attention of virtual characters.

IVR (Herrero & Lorenzo, 2019)
System description: This system was designed to provide users with a familiar environment. A generic garden and school context were created to facilitate participants' adaptation of social and emotional skills, as these settings are familiar to participants and are filled with social interactions. Avatars were created with different personalities and appearances to provide realistic social spaces for practice. Avatar behaviors were animated to be realistic, and avatar responses and behaviors were controlled by the researcher.
Identified design factors:
- Realistic display of environment: Realistic environments of a garden and school were created to provide a training context familiar to the participants. The spatial environment was designed to be realistic.
- Embodied verbal and non-verbal communication: Users are able to engage and communicate through voice and gestures with in-world avatars. Those interactions and responses are triggered by predefined options that include realistic gestures, facial expressions, and lip synchronization with the corresponding audio, recorded by a real human being.
- Embodied actions: Through the Oculus Rift and its wand controllers, users take on an avatar that is used to manipulate and interact with assets within the virtual world.

CRETA (Zhang et al., 2020)
System description: CRETA is a virtual environment where two users play games either with each other or with the system's intelligent agent. These games were designed to withhold information or require simultaneous movement in order to promote collaboration and communication between players.
Identified design factors:
- Embodied verbal and non-verbal communication: Users can verbally communicate through a microphone. If they are playing with an agent-avatar, the system uses voice recognition to interpret the speech and respond accordingly. Users are able to communicate and collaborate through consistent, controlled, and replicable interactions.
- Embodied actions: Users are able to manipulate objects within the environment in collaboration with others or a virtual agent to solve problems.

Facial Recognition VR (Bekele et al., 2012; Bekele et al., 2013; Bekele et al., 2014)
System description: Virtual characters were created to present 28 trials of 7 emotional expressions at four levels of intensity. After realistic facial expression animations were presented, a menu appeared where participants were given choices and asked to identify the emotion.
Identified design factors:
- Consistency of object behaviour: Virtual characters react and behave in a realistic manner.
- Realistic display of environment: Each avatar was rigged with a skeletal structure consisting of 94 bones, twenty of which were involved with the face structure used for facial emotional expressions. Since the main focus of this project was displaying facial emotional expressions, greater emphasis was given to the face structure.

VR-JIT (Smith et al., 2014; Smith et al., 2015; Smith et al., 2020)
System description: VR-JIT presents users with a simulated job interview conducted by a virtual character.
Identified design factors:
- Embodied verbal and non-verbal communication: Users are able to respond to a virtual character that is conducting a simulated job interview. Individuals are provided with multiple methods of responding, including speech, which is parsed through voice recognition software.

IVRT research (Lorenzo et al., 2016)
System description: This immersive virtual reality platform allows users to improve and train emotional skills by interacting with different in-world avatars and performing emotional recognition tasks while engaging in social contexts such as a party and a classroom. The CAVE platform presents a frontal view of the environment, while the other projector is placed on a platform which allows it to project from below. The assets and avatars of the represented scenes adapt to the behavior of users of the system; a camera tracks where users are in the environment and adapts to their behavior.
Identified design factors:
- Realistic display of environment: A classroom and party scene were created that included realistic avatars and emotional expressions.
- Embodied actions: A camera system on a robotic arm tracks the user's facial expressions to detect their mood and update the system accordingly. The camera also determines the pose of users and allows them to interact with the environment.


Virtual Dolphinarium (Cai et al., 2013; Lu et al., 2018)
System description: This system is a virtual dolphin interaction program that allows children with autism to act as dolphin trainers. The goal of the program is to allow users to interact at a realistic poolside and to learn (nonverbal) communication through gesturing and commands with the virtual dolphins. Immersive visualization and gesture-based interactions are implemented through a curved screen spanning 320 degrees, a projection system, and a Microsoft Kinect.
Identified design factors:
- Realistic display of environment: A virtual pool was created based on a pink dolphin lagoon experience for individuals with autism. The virtual pool includes levels of realism such as high-fidelity water ripples and animated dolphins with real-time collision avoidance and collision detection algorithms.
- Embodied verbal and non-verbal communication: Gesture-based non-verbal communication is used as a way for the participant to communicate with dolphins in the virtual environment.
- Embodied actions: A Microsoft Kinect is calibrated to the movement of a user's actions so that their gestures and actions are able to interact with objects within the virtual world.

Haptic-Gripper Virtual Reality System (Zhao et al., 2018)
System description: The Haptic-Gripper Virtual Reality System was designed to provide analysis and opportunities for practice of fine motor skills in an adaptive system with immediate auditory, haptic, and visual feedback. It is capable of detecting the grip and hand location of users so that it can provide feedback and guide them through completing several engaging virtual fine motor tasks. The system was made in Unity.
Identified design factors:
- Kinaesthetic and tactile force feedback: A haptic system measures the user's tactile and force feedback so that they correspond to the virtual activities. Haptic feedback such as friction and spring force is included.
- Embodied actions: The user's fine motor hand movements are translated to the virtual system through the use of a haptic device.

AS System (Kuriakose & Lahiri, 2015; Kuriakose & Lahiri, 2017)
System description: The anxiety system (AS) was designed to allow real-time measurement of performance and associated physiological indexes that can be mapped to one's anxiety while interacting within social tasks in a virtual environment. Within this desktop-based platform, 24 different social stories are presented to users to expose them to social contexts.
Identified design factors:
- Consistency of object behaviour: Realistic non-playable characters are presented in the virtual environment that behave consistently with their real-world counterparts, including expressions and mannerisms.

E-VISP (Babu et al., 2018)
System description: This social communication system is composed of 3D environments that emulate real-life social contexts. The system is designed to present an interactive task environment that begins with a social narrative and then moves to a questionnaire phase. Nine social stories were created and modeled to include behaviors that would attempt to direct the gaze pattern of participants.
Identified design factors:
- Consistency of object behaviour: Realistic non-playable characters are presented in the virtual environment that behave consistently with their real-world counterparts, including expressions and mannerisms. The avatars were programmed to demonstrate a mixture of gaze patterns.
- Realistic display of environment: Social contexts were made in Google Sketchup to emulate the real world.

Immersive VR System (Halabi et al., 2017; Halabi et al., 2017)
System description: This VR system consisted of an interactive scenario-based task that utilized speech recognition and turn-taking role play to improve the communication skills of autistic children. The subject is automatically navigated through the virtual environment; after arriving inside the virtual classroom, a virtual teacher introduces themselves and goes through a series of tasks. Voice and action monitoring is tracked to record a response.
Identified design factors:
- Realistic display of environment: A realistic classroom and school setting were created, as school is seen as a social space where individuals with autism have many interactions.
- Consistency of object behaviour: Realistic non-playable characters are presented in the virtual environment that behave consistently with their real-world counterparts, including expressions and mannerisms. The virtual teacher and other students interacted through realistic gestures.
- Embodied verbal and non-verbal communication: A voice recognition system was implemented that would react when a participant began talking, so as to measure their social-communicative response times. Gestural movements were also tracked as a way of giving users opportunities to use non-verbal communicative skills.
- Embodied actions: A device was used to map the user's actions into the virtual system.

Pronunciation VR platform (Chen et al., 2019)
System description: This 3D virtual tutor was designed to act as a multimodal, real-data-driven speech production tutor that provides internal and external models of realistic pronunciation.
Identified design factors:
- Consistency of object behaviour: The 3D model of the character's body would animate and change in order to generate realistic pronunciation models for users of the system to observe. Animations and behaviors were modeled on real-world people to provide object behavior fidelity.

AS Interactive (Parsons et al., 2005; Parsons et al., 2006; Rutten et al., 2003; Mitchell et al., 2007; Parsons et al., 2004; Parsons, 2005)
System description: Two social contexts were created with the main task of navigating social spaces and trying to find a place to sit down. Different versions of each environment were created that evolved in complexity and forced users to ask questions and to reflect upon why they could not sit down with people they did not know.
Identified design factors:
- Realistic display of environment: The virtual scenes were designed to provide a realistic display of the physical aspects of the café and bus environments.
- Embodied actions: Participants used a joystick to navigate and a mouse to activate objects (e.g., sitting down by clicking on the chair) or perform interactive tasks in the environment.

VR4VR (Bozgeyikli et al., 2017)
System description: The VR4VR system was designed to provide vocational training opportunities around the transferable skills of cleaning, loading the back of a truck, money management, shelving, environmental awareness, and social skills. Each skill task is structured across three sessions that evolve in fidelity and complexity.
Identified design factors:
- Realistic display of environment: Physically realistic representations of the training environments were created.
- Consistency of object behaviour: Real-world complexities and distractors are added into the system and behave in a consistent manner.
- Embodied actions: Users emulate the real motor functions needed to learn the vocational tasks being trained in the system. Real-life items are scanned into the system and act as mediators between the virtual and real world; for example, users hold a pipe in the cart scenario to give them a physical connection to the action they are embodying.

Street-crossing environment (Josman et al., 2008)
System description: In this system, a user is represented by an avatar that faces a zebra crossing at the start of each of the nine levels. The levels differ in the number of cars, the traffic patterns, and their speed. Users who successfully cross the street automatically advance to the next level; if a user is hit by a car, a crashing animation is played and they replay the level until completion.
Identified design factors:
- Realistic display of environment: A realistic four-lane street was created for users to practice street-crossing skills in.
- Consistency of object behaviour: Traffic lights, patterns, and vehicles behaved in a consistent and realistic manner.
- Embodied actions: Users control an avatar as they look out for traffic patterns and cross the street. They can change which way they are looking and can control the avatar as they cross the street.


VR-Tangible Interaction System (Jung et al., 2006; Jung et al., 2006)
System description: This system has three components that are designed to combine coordination and motor ability, social skill training, and sensory integration. Users complete a variety of game-like activities to provide baseline measurements of their abilities, and their actions are projected onto a screen-based system.
Identified design factors:
- Embodied actions: Participants' actions on physical in-world devices are reflected on the screen as they perform tasks.

Emotional and Social Adaptation VR Intervention (Ip et al., 2018; Yuan & Ip, 2018)
System description: This VR system provides users with content in the form of learning scenarios to practice emotional and social adaptation skills, facilitated by a trainer. Scenes are projected in a CAVE environment, and user actions are mapped into the environment through motion tracking.
Identified design factors:
- Realistic display of environment: Realistic training contexts are presented across a variety of social spheres, including classrooms, on a bus, at home, and more.
- Embodied actions: User actions are tracked and projected into the social scenes in the CAVE environment.

Block Challenge (Parsons, 2015)
System description: Block Challenge is a two-player collaborative game where users engage with each other to collaborate and solve challenges around manipulating blocks. The goal of the system is to support communicative perspective-taking skills.
Identified design factors:
- Embodied verbal and non-verbal communication: Users communicate with each other to complete puzzles that are solved with blocks; they can communicate through a microphone or through their non-verbal actions.
- Embodied actions: Users are able to manipulate and control blocks within the environment to solve challenges.

Virtuoso-SVVR (Schmidt et al., 2019)
System description: Virtuoso-SVVR is part of an instructional strategy to deliver training on catching public transportation in a university setting for members of an adult day program. In this platform, the skills of catching a shuttle bus are modeled to users as they watch 360-degree scenarios.
Identified design factors:
- Embodied actions: Users are able to control their viewpoint as their perspective is automatically guided through the virtual 360-degree campus.
- Realistic display of environment: 360-degree footage of the training context was shot with high-definition cameras to provide a realistic display of the exact environment the skills are being trained for.

Street Crossing Platform (Dixon et al., 2019)
System description: The VR environment was made up of 360-degree videos of real streets from participants' community. Users watched the videos and responded to questions from a facilitator.
Identified design factors:
- Embodied actions: Participants could control their view by turning their head within the HMD.
- Realistic display of environment: 360-degree videos were captured with a high-definition Samsung Gear camera to provide a realistic training environment from the community that participants were familiar with.

Modified Virtual Errands Task (Rajendran et al., 2011)
System description: In this platform, users are placed in a virtual school in the role of a pupil who completes errands that their teacher sends them on. The tasks are designed to be plausible in a real school setting, such as checking a bulletin to see when an exam is taking place. Users navigated through the building through the use of a mouse.
Identified design factors:
- Embodied actions: Users are able to control their character to navigate through a school building to complete tasks.
- Realistic display of environment: The school model is based on a real university building.

Virtual Mall (Trepagnier et al., 2005)
System description: The Virtual Mall is a joystick-navigable, first-person mall that is presented to users on a monitor. The goal of the platform is for users to navigate through a mall that is filled with social obstacles.
Identified design factors:
- Realistic display of environment: A mall environment was made to provide a social context that is familiar to participants.
- Consistency of object behaviour: Non-playable characters fill the virtual space and take on consistent object behavior similar to the real world. Placement and orientation of the virtual humans creates situations that are identical in terms of spatial properties (e.g., space between humans and walls, or between obstacles and walls) but differ importantly in social implications (e.g., passing between a person and a store window to which her back is turned, versus passing between two characters facing each other at a conversational distance, versus passing between two advertising signs).
- Embodied actions: Participants use a joystick to control a character through the virtual mall while judging socially appropriate responses to objects and other people around them.

Virtual Conversation Partner (Trepagnier et al., 2005; Trepagnier et al., 2011)
System description: This virtual environment is a simulated conversation platform for teaching occupationally important social communication skills such as interviewing techniques. A real-life actor was used to record and provide scripted responses to actions that the user inputs through a dialog system.
Identified design factors:
- Consistency of object behaviour: Virtual characters react to responses by users with realistic mannerisms and expressions. If things go too poorly, they will get up, leave the room, and end the session.

Virtual Travel Training (Simões et al., 2018)
System description: This serious game places users into a 3D city and provides them with a series of tasks that involve taking buses to reach specific destinations. Buses in this environment follow 4 predefined routes within the city. The player can enter any of the buses, must validate their ticket, pick a seat, and then request a stop when they have reached their destination. Players can choose between 7 tasks of varying difficulty and complexity.
Identified design factors:
- Realistic display of environment: A realistic replication of a city and bus were created to provide different levels of training.
- Consistency of object behaviour: Objects within the environment behave like those from the real world, including elements such as people, traffic, and dogs.
- Embodied actions: Users are able to control their view by turning their head in the Oculus Rift HMD. A USB game controller was used to navigate their character through the environment.

JobTIPS (Strickland et al., 2013)
System description: In the virtual world training platform, users connect to a digital interview location where simulations are remotely led by a clinician at a different physical location. The clinician assumed the role of the interviewer and the participant took on the role of the individual being interviewed. Users sat across from one another in a virtual office space to practice job interviewing skills.
Identified design factors:
- Embodied verbal and non-verbal communication: Users are able to practice job interview skills through the use of both verbal and non-verbal communication, supported by a microphone and avatar gestures.

Crossing the Street (Strickland, 1997)
System description: This virtual street-crossing environment provided users with a simplified, safe, and controlled context to practice pedestrian skills.
Identified design factors:
- Realistic display of environment: Textured buildings and streets were created. Environments were simplified, with many details left out to reduce distractions.
- Embodied actions: Users' actions are embodied through the use of an HMD and motion-tracking controllers.


Floreo PSM (Parish-Morris et al., 2018)
System description: The PSM includes multiple scenarios, including an officer walking by without direct interaction, officers approaching participants to ask questions, being asked to provide identifying information, and more. Scenes are varied to take place during both the daytime and nighttime. Scripts from the officers are pre-recorded but can be controlled by the administering therapist.
Identified design factors:
- Realistic display of environment: Realistic community settings were provided that users can interact within.
- Embodied actions: Users can change their viewport through the use of the lightweight mobile HMD.

Blood draw exposure therapy (Meindl et al., 2019)
System description: In this 360-degree video environment, users watch a video of blood being drawn to help them gradually become used to the procedure through exposure therapy. The video is presented in an HMD, with an Apple Pencil used to simulate the needle prick. Exposure is presented first in comfortable and safe settings at home and then in the doctor's office.
Identified design factors:
- Realistic display of environment: A 360-degree video of a blood draw was developed using the Insta360 One VR camera to provide a high-fidelity environment.
- Kinaesthetic and tactile force feedback: An Apple Pencil stylus is used to simulate the needle insertion.
- Embodied actions: Users are able to turn their head to look around the 360-degree video environment.

3D Empathy System (Cheng et al., 2010)
System description: This system was designed to provide learners with social events where empathetic contexts can be emphasized. It allows users to select expressive avatars to represent themselves, express their emotions to others, and to use and improve empathy via interaction with others. Scripted questions are designed to elicit empathy from participants. Training takes place in a virtual restaurant, as restaurants are public places in which people are in contact with others.
Identified design factors:
- Realistic display of environment: The real world is simulated through 3D animation and virtual scenes.
- User representation: Users can select expressive avatars to represent themselves.
- Embodied verbal and non-verbal communication: Users are able to communicate via text or speech and can control their avatar's facial expressions.

3D-SU system (Cheng et al., 2015)
System description: This system provides users with simulated environments in which they can be immersed to promote social understanding skills. Users are able to interact with objects in the environment and are immersed with an HMD. Social events are presented to users within 3D virtual bus stop and classroom environments, which were selected because subjects frequently encounter these social settings.
Identified design factors:
- Realistic display of environment: Simulated social spaces were created to provide relevant places for the practicing of skills.
- Embodied actions: Users can interact with objects in the environment through mouse and keyboard clicks and can control their view with the HMD.

Public Speaking Intervention (Jarrold et al., 2013). The virtual social attention and public speaking task was delivered via a HMD that tracked the movement of users. Participants practiced public speaking skills in an environment that provided a virtual audience of peers. Cues were included as a way of prompting fixation on objects and analyzing task difficulty.
- Realistic display of environment: A realistic 360-degree virtual classroom was rendered.
- Consistency of object behaviour: Virtual student avatars were programmed to exhibit subtle eyeblink and head motions typical of an audience of peers.
- Embodied action: Head orientation and rotational motion along three rotational axes were dynamically tracked and embodied in the environment.

Virtual Reality Social Cognition Training (Kandalaft et al., 2013; Didehbani et al., 2016). The VR-SCT provided realistic opportunities to engage in, practice, and attain feedback on activities within social spaces. After logging into the multi-user environment, an online guide directed users to social situations taking place at one of the virtual locations. Scenarios were devised to emphasize learning objectives around various social skills such as resolving conflict, meeting new people, and interviewing for a job.
- Realistic display of environment: A variety of realistic settings were created to provide dynamic opportunities for social training.
- User representation: Avatars representing the user in the virtual world were modeled to resemble each participant.
- Embodied verbal and non-verbal communication: Users could communicate through a microphone and through arm and body gestures of their avatar.
- Embodied actions: Users could interact with the environment through their avatars, which were driven by a standard QWERTY keyboard and mouse.

Interaction Training (Ke & Im, 2013; Ke et al., 2015). The system was composed of three social interaction tasks: one concerning the recognition of gestures and expressions, one concerning responding to and maintaining an interaction, and one concerning initiating and maintaining an interaction. During each session an adult facilitator logged in and actively interacted with other users in the environment, providing support and guidance.
- Realistic display of environment: A variety of realistic settings were created to provide dynamic opportunities for social training.
- Embodied actions: Users controlled their avatar as they manipulated the environment, moved throughout it, and maintained social interactions.
- Embodied verbal and non-verbal communication: Users could interact via a microphone or a gestural system.

Hand-in-Hand (Zhao et al., 2018). This system supports naturalistic social interactions, promotes communication within game play, and gathers user performance and communication in real time. The goal of Hand-in-Hand is to collaborate with another user to solve a puzzle game.
- Embodied verbal and non-verbal communication: Supports gaze, gestural, and voice communication.
- Embodied actions: The Leap Motion device recognizes the players’ hand locations and gestures as the control signals to manipulate virtual objects.

Eye-gaze system (Grynszpan et al., 2012). In this system, participants would look at the face of a virtual character while addressing them. The virtual character would then describe a situation and in the process would say a sentence that could be interpreted in two distinct ways according to the context. That context was provided by the facial expressions of the avatar and was left up to the user to recognize and determine. Five basic emotions (disgust, joy, fear, anger and sadness) were utilized in the system.
- Consistency of object behaviour: Virtual characters made realistic facial expressions and mannerisms.

Pico's Adventure (Crowell et al., 2019). Pico's Adventure was a full-body interaction collaborative system developed to help children with ASD learn and practice social abilities needed in collaboration such as reciprocity, imitation, joint attention, and cooperation. Children played games together that required cooperation, such as having to employ joint attention to direct a laser to free their parents from spaceships.
- Embodied verbal and non-verbal communication: Users can communicate through the built-in microphone in the Kinect or through their gestures that are displayed on screen through the Kinect.
- Embodied actions: The Microsoft Kinect reflects the behavior of users into the system.

Decoding Social Interactions (Jacques et al., 2018). In this system users were coached through five social scenarios by a virtual coach that helped them decode social interactions and their contexts, including emotions, possible actions, and behaviors that could be applied and practiced. These activities took place within Psyche, a fully immersive, stereoscopic, and wireless six-wall CAVE-like system where participants can move freely with a wand tracker.
- Embodied actions: Participant actions were recorded with a wand tracker as they moved freely throughout the CAVE.
- Realistic display of environment: Social scenarios representing common places of gathering were presented to users so they could practice decoding and responding to interactions.

VR-CR (Wang & Reid, 2013). In this VR platform the user has to make a similarity judgment between a movable target object and a multi-object context that is displayed on the screen. They are able to manipulate and drag virtual objects to different locations in the environment.
- Embodied actions: Motion-capture technology was incorporated using a tracking webcam, which projected users’ movements into the virtual environment.

VLSS (Volioti et al., 2016). Social stories are presented to students within a virtual environment. These include contexts that involve solving problems at home, handling dilemmas, and handling a situation where they feel anger from their classmates. Users are presented with text and images on an interactive whiteboard to explain these social stories and are then given multiple-choice options on how they should respond.
- Embodied actions: Students control an avatar as they navigate through the environment. They use a keyboard and follow paths through the environment to complete the intervention. The user is able to choose to interact with objects such as sitting at a desk.
- Realistic display of environment: The authors describe the system design as being the best possible realistic representation of school premises.

VESIP (Russo-Ponsaran et al., 2018). VESIP is an automated, interactive, computer-delivered assessment that uses 3D game technology in which respondents play the role of a customized avatar and engage in social situations. In each scenario, the respondent’s avatar engages in a challenging social situation alongside a friendly helper character. Questions are posed to the user, which can be responded to through multiple-choice or slide-based systems.
- User representation: Users are able to design their own character, which then becomes part of the scenes.
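The coding scheme underlying the entries above can be illustrated with a small sketch. The snippet below is not the review's actual extraction instrument; it simply shows how projects coded against the Dalgarno and Lee (2010) factor categories could be recorded and tallied. The project names and factor assignments are taken from the entries above, but this subset and all identifiers are illustrative:

```python
from collections import Counter

# Illustrative subset of coded projects (factor labels from Dalgarno & Lee, 2010).
# Each project maps to the set of design factors it instantiates.
coded_projects = {
    "Blood drawn exposure therapy (Meindl et al., 2019)": {
        "Realistic display of environment",
        "Kinaesthetic and tactile force feedback",
        "Embodied actions",
    },
    "3D Empathy System (Cheng et al., 2010)": {
        "Realistic display of environment",
        "User representation",
        "Embodied verbal and non-verbal communication",
    },
    "Hand-in-Hand (Zhao et al., 2018)": {
        "Embodied verbal and non-verbal communication",
        "Embodied actions",
    },
}

def tally_factors(projects):
    """Count how many projects instantiate each design factor."""
    counts = Counter()
    for factors in projects.values():
        counts.update(factors)
    return counts

counts = tally_factors(coded_projects)
print(counts["Embodied actions"])  # 2 of the 3 sketched projects
```

Tallies like these, computed over the full corpus, are what yield the factor frequencies reported below.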

Designing for a realistic environment was the most commonly cited consideration in the literature and was used in 33 of the projects (67.3%). The second most frequent design factor, embodying action through a virtual avatar, was seen in 31 of the projects (63.3%).

Designing for consistent object behaviors was seen in 15 of the projects (30.6%). The ability for users to embody verbal and non‐verbal communication through an avatar was seen in 15 of the projects (30.6%). A realistic user representation was seen in four (8.2%) of the projects.

Kinaesthetic and tactile force feedback was utilized in only two (4.1%) projects. The use of spatial audio and a smooth display of view changes and object motion were each seen in only one project (2%). A full breakdown of these Dalgarno and Lee design factors and how they are instantiated is provided in Figure 6.

Figure 6. Design factors instantiated across VR projects/interventions by percent.
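As a consistency check on the percentages above: the reported figures imply a corpus of 49 unique projects (e.g., 33/49 ≈ 67.3%); that denominator is an inference from the percentages, not stated directly in the text. A minimal sketch of the arithmetic, assuming n = 49:

```python
# Design-factor counts as reported in the text above; N_PROJECTS = 49 is
# inferred from the reported percentages, not stated explicitly.
N_PROJECTS = 49

factor_counts = {
    "Realistic display of environment": 33,              # reported 67.3%
    "Embodied actions": 31,                              # reported 63.3%
    "Consistency of object behaviours": 15,              # reported 30.6%
    "Embodied verbal and non-verbal communication": 15,  # reported 30.6%
    "User representation": 4,                            # reported 8.2%
    "Kinaesthetic and tactile force feedback": 2,        # reported 4.1%
    "Spatial audio": 1,                                  # reported 2%
    "Smooth display of view changes and object motion": 1,  # reported 2%
}

for factor, count in factor_counts.items():
    pct = round(count / N_PROJECTS * 100, 1)
    print(f"{factor}: {count} projects ({pct}%)")
```

Each computed percentage matches the value reported in the text to one decimal place.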

Discussion

Three aims guided this systematic review: (1) to identify how designers of VR interventions for individuals with ASD characterize/define VR; (2) to categorize the VR systems reported in the literature using the schema suggested by Parsons (2016); and (3) to determine how the distinguishing characteristics of virtual learning environments, as outlined by Dalgarno and Lee (2010), are instantiated in VR interventions designed for individuals with ASD. To this end, 84 manuscripts were selected and analyzed.

Lack of consensus around what constitutes VR

A review of VR definitions suggests there is no established and field-recognized operationalization of what defines VR. Many characterizations are presented, with some authors claiming VR is a representation of the real world, others stating VR is any form of computer-generated imagery, others comparing VR to video games, and some stating that VR requires sensations of telepresence. This finding is problematic because any discussion of the affordances of VR is necessarily predicated on a shared understanding of the technology itself. Further confusion stems from the fact that 56.9% of the projects returned in this literature review do not define the term at all, although many cite literature supporting the benefits of VR. It has become clear that designers of VR interventions often cite benefits of using VR technology, but use these systems in ways that may contradict and hinder their full potential.

VR System Characteristics

Just as there is no clear consensus on what defines VR, there is also no consensus on which system architectures can be used to deliver VR. Underlying system architectures range from desktop-based systems to fully immersive VR platforms that utilize HMDs. The most frequent system type is a desktop-based VR platform that presents a virtual environment on a computer monitor with which users can interact through a variety of input device configurations, such as a mouse and keyboard. This finding might be due to the fact that more immersive VR systems often require expensive hardware and can be difficult to design for. In one case, Wade and colleagues (Wade et al., 2016; Wade et al., 2017; Zhang et al., 2017; Bian et al., 2019) utilized a USB G27 electronic steering wheel designed for racing video games to train individuals with ASD how to drive in an adaptive VR environment. The issue is that these systems are referred to collectively as “VR.” This is particularly problematic when considering those systems that appear to be similar to a video game, for example projection-based systems that utilized the Microsoft Kinect to project the movements of users onto a screen to complete game-like activities (Crowell et al., 2019; Cai et al., 2013; Lu et al., 2018). In Pico’s Adventure, for example, participants interacted with the system by waving their arms to direct lasers to shoot down alien spaceships, with the goal of training joint attention skills (Crowell et al., 2019). While such systems have some commonalities with VR, they do not afford users the same level of interaction and immersion as a VR system. This raises the question of whether these are actually VR systems at all.

CAVE-based VR configurations exhibit similar issues. In these systems, virtual environments are projected onto walls or screens, but in a more encompassing manner. Instead of being presented on just one screen, CAVE-based systems present the virtual environment on multiple screens that surround the user. The level of interaction provided by the system varies. For example, in the Immersive VRET and Cognitive Behavioral Therapy System (Maskey et al., 2014; Maskey et al., 2019), users are placed within a 360-degree CAVE where scenes are controlled by an administering therapist. Users are passive observers to scenarios that evolve in complexity to help treat phobias. In another example, the Decoding Social Interactions system (Jacques et al., 2018) allows users to interact within an L-shaped CAVE and have their motions tracked through a wand controller. This device allows for greater embodiment of user actions and enables participants to take a more active role in the training of social skills. Both systems are characterized as VR platforms but clearly offer users vastly different experiences.

In a similar vein, mobile-based VR systems are emerging, in which users are placed within a lightweight HMD and engage within a spherical video-based virtual reality (SVVR) training context (Meindl et al., 2019; Schmidt et al., 2019). In these systems, users are presented 360-degree videos as they wear a headset. Users typically have limited control and are passive observers to video-based multimedia or scripted events. These findings highlight the variability of VR systems that are being used to provide interventions for individuals with ASD, and they also align with the conclusions of other work in the field noting that few fully immersive HMD-based VR systems have been used (Bozgeyikli et al., 2018; Newbutt et al., 2016).

However, the lack of any standard, agreed-upon characterization or definition in this field of what constitutes VR in general, and what constitutes different VR architectures specifically (including whether some architectures should be considered VR at all), raises concerns about efficacy claims of VR for individuals with ASD. In the future, researchers will need to confront this issue. Until then, researchers are urged to provide precise descriptions of how VR is operationalized in their particular contexts.

Distinguishing characteristics of VR in systems designed for individuals with ASD

Results from our third research aim indicate that the most commonly designed characteristic of a virtual learning environment that has been proposed as having potential learning benefits (Dalgarno & Lee, 2010) is that of creating a realistic display of the environment. This finding confirms empirically what others have suggested (e.g., Parsons, 2016): that researchers tend to design their VR environments “towards a closer fit with the real world” (p. 154). The second most frequent design factor was that of embodied action within the virtual environment.

Dalgarno and Lee (2010) highlight the centrality of embodied action to identity construction. This is often achieved through a user’s control of an avatar, which acts as a nexus of interaction within the virtual world, allowing for a degree of psychological immersion. This consideration of embodiment raises an important issue concerning the design of VR systems and their underlying hardware and modalities for interaction: the interactions between the learner and the system are often being considered, but it is unclear whether these interactions are being intentionally designed in a way that can fully exploit the purported benefits of the technology. The findings presented in Table 5 highlight this issue, showing the many ways that this design challenge of embodiment is being considered and how it can be achieved through a variety of configurations.

For example, designing to allow for the embodiment of a user’s actions can be achieved in many ways. In the Immersive VR System (Halabi et al., 2017), a Leap Motion device is used to capture and translate a user’s movements into the virtual environment. In the Virtual Dolphinarium (Cai et al., 2013; Lu et al., 2018), a Microsoft Kinect picks up simple movements such as arm and hand gestures, which are projected into a game-like system. In the Virtual Joystick system (Kim et al., 2015), users control an avatar with a joystick to position their character within social scenarios. In the VR Adaptive Driving System (Wade et al., 2016; Wade et al., 2017; Zhang et al., 2017; Bian et al., 2019), users interact with a system made up of realistic vehicular input devices to control a virtual car that can drive around a full-scale city. In the Public Speaking Intervention (Jarrold et al., 2013), a user’s head orientation and rotational movement along three axes were dynamically tracked and embodied in the environment. All of these examples convey possibilities for learner interaction that can promote the embodiment of a user’s actions within a virtual environment. However, it is unclear whether these interactions afford users the possibility to engage in a meaningful way that can promote learning outcomes. As seen in Figure 5, many of these VR systems were designed to promote the development of social skills, which are innately complex interactions composed of a multitude of verbal, non-verbal, and emotional sub-skills. Yet the vast majority of VR systems are desktop-based interfaces where users interact, gesture, and have their actions embodied into the system through rigid motions mediated by unnaturalistic peripherals and interfaces (e.g., clicking a button on a keyboard or mouse to move an arm). This finding presents what seems to be a disconnect between intended learning outcomes and how VR systems tend to be designed for this population. Also of concern is that, because of the disparate state of VR systems and their varied configurations, characteristics, and design instantiations, researchers will have difficulty systematically unpacking how VR systems can be developed to promote generalization.

What has become clear is that design factors have been instantiated across the literature in a wide variety of ways (see Table 5), and that the way a VR intervention is designed, beginning with the VR architecture and system type, greatly impacts the affordances and possibilities for interaction with the system. Given the rapid pace of public adoption of VR (Bagheri, 2016) and the emergence of commercially affordable HMDs (Newbutt et al., 2016), interest in using this technology for individuals with ASD continues to grow (Parsons, 2016). However, difficulties with designing to promote generalization remain. The field is replete with researchers developing VR solutions that vary in their target audiences and target outcomes. If VR systems are to be designed in a way that can meaningfully promote targeted outcomes and, potentially, generalization, then VR developers need to take a more systematic approach to defining the technology, using a model like Dalgarno and Lee’s (2010), and begin to consider how the interactions between the learner and the system can facilitate the learning process.

Limitations

Findings and implications of this systematic review should be considered in light of the following limitations. Procedures used in this manuscript deviate somewhat from suggested guidelines for conducting systematic literature reviews (Davis et al., 2014; Kitchenham, 2004; Moher et al., 2009), including: (1) data were extracted by a single researcher, (2) multiple publications were included from the same dataset or research project, and (3) some projects did not name their interventions, which led to difficulties determining whether a VR platform was unique.


The first point means that some of the data extracted may be erroneous. This limitation is in line with other research in the field that highlights the difficulty of conducting large-scale qualitative reviews of the literature (Belur et al., 2018; Campbell et al., 2013). Other research has shown that difficulties arise with validating data when a large number of studies is involved or when the data are complex (Belur et al., 2018; Kitchenham et al., 2009), even if data definitions and extraction guidelines are provided in a protocol (Brereton et al., 2007).

The second point refers to guidelines proposed by Kitchenham (2004), which stress the importance of excluding multiple publications that use the same dataset, as including them can bias results, and which suggest that only the most recent report should be included. With respect to this guideline, we included all studies from a dataset because of the nature of our research questions. We were not reporting study outcomes, but were instead interested in the qualitatively described characteristics of VR systems and designs. Therefore, it was important to include all manuscripts from the same dataset, as individual reports often did not include all of the relevant data that we were trying to extract, or the information was ambiguous and required triangulation across multiple manuscripts to confirm.

Lastly, we found that many designers of VR interventions did not give their system a name that would help identify the project. There were cases where multiple manuscripts presented data that appeared to be related but did not explicitly state those connections. In some cases, we were able to determine that a manuscript was part of an existing project because common assets were shared across manuscripts. For example, one project used the same screenshot of the system in multiple papers, so we were able to group them together. Due to this limitation, it is possible that there are errors in how projects were grouped.


Conclusion

In conclusion, the findings from this systematic literature review suggest that VR interventions for individuals with ASD vary in their conceptualizations of the term, which has implications for how the platforms themselves are designed. While researchers tend to cite purported benefits of VR technologies and virtual worlds, the designs of the systems greatly impact the possibilities for learner interaction and the degree to which these learning benefits can be realized. Most research in the field has been conducted with desktop-based VR systems that use a variety of input devices and configurations. This finding suggests that designers of VR systems may be creating solutions that do not take full advantage of the purported benefits of virtual environments, such as increased immersion, fidelity, and active learner participation (Dalgarno & Lee, 2010).

Creating VR spaces is fraught with challenges, and it is unclear how the nature of learner interactions within a virtual context can impact learning outcomes for individuals with ASD. That is, great uncertainty remains concerning how the ideal properties of VR can be brought together to promote the development of different skills. A review of the literature indicates that researchers are exploring a range of VR technologies to promote a range of skills and treatments, for a wide range of audiences, and with an even greater range of design implementations. This makes it hard for future researchers to make their own design decisions, as little guidance exists on how to intentionally design for the generalization of specific skills within VR spaces. Echoing the conclusions of other work in the field, perhaps it is time for researchers to stop asking whether VR, or other technologies, works for individuals with ASD, and instead focus on understanding which technologies work for whom, in which contexts, with what kinds of support, and for what kinds of tasks or objectives (Parsons, 2016).


References Glaser, N., Schmidt, M., Schmidt, C., Beck, D., & Palmer, H. (in press). The centrality of

interdisciplinarity for overcoming design and development constraints of a multi-user

virtual reality intervention for adults with autism: A design case.

Alcañiz, M. L., Olmos-Raya, E., & Abad, L. (2019). Use of virtual reality for

neurodevelopmental disorders: A review of the state of the art and future agenda;

30776285. Medicina (Argentina), 79(1), 77–81.

Aresti-Bartolome, N., & Garcia-Zapirain, B. (2014). Technologies as support tools for persons

with autistic spectrum disorder: A systematic review. Int. J. Environ. Res. Public Health,

11(8), 7767–7802. https://doi.org/10/f6fzs2

Arnold-Saritepe, A. M., Phillips, K. J., Mudford, O. C., De Rozario, K. A., & Taylor, S. A.

(2009). Generalization and Maintenance. In J. L. Matson (Ed.), Applied Behavior

Analysis for Children with Autism Spectrum Disorders (pp. 207–224). Springer.

https://doi.org/10.1007/978-1-4419-0088-3_12

Babu, P. R. K., Oza, P., & Lahiri, U. (2018). Gaze-Sensitive Virtual Reality Based Social

Communication Platform for Individuals with Autism. Ieee Transactions on Affective

Computing, 9(4), 450–462. https://doi.org/10.1109/TAFFC.2016.2641422

Bagheri, R. (2016). Virtual Reality: The Consequences. UC Davis Business Law

Journal, 17, 101.

Baio, J., Wiggins, L., Christensen, D. L., Maenner, M. J., Daniels, J., Warren, Z., Kurzius-

Spencer, M., Zahorodny, W., Robinson Rosenberg, C., White, T., Durkin, M. S., Imm, P.,

Nikolaou, L., Yeargin-Allsopp, M., Lee, L.-C., Harrington, R., Lopez, M., Fitzgerald, R.

T., Hewitt, A., … Dowling, N. F. (2018). Prevalence of Autism Spectrum Disorder

Among Children Aged 8 Years—Autism and Developmental Disabilities Monitoring

206

Network, 11 Sites, United States, 2014. MMWR Surveill. Summ., 67(6), 1–23.

https://doi.org/10/gfx5sc

Bellani, M., Fornasari, L., Chittaro, L., & Brambilla, P. (2011). Virtual reality in autism: State of

the art. Epidemiology and Psychiatric Sciences, 20(03), 235–238.

https://doi.org/10.1017/S2045796011000448

Belur, J., Tompson, L., Thornton, A., & Simon, M. (2018). Interrater reliability in systematic

review methodology: Exploring variation in coder decision-making. Sociological

Methods & Research, 0049124118799372. https://doi.org/10.1177/0049124118799372

Bian, D., Wade, J. W., Zhang, L., Bekele, E., Swanson, A., Crittendon, J. A., Sarkar, M.,

Warren, Z., & Sarkar, N. (2013). A Novel Virtual Reality Driving Environment for

Autism Intervention. In C. Stephanidis & M. Antona (Eds.), Universal Access in Human-

Computer Interaction. User and Context Diversity (Vol. 8010, pp. 474–483). Springer

Berlin Heidelberg. https://doi.org/10.1007/978-3-642-39191-0_52

Bozgeyikli, L., Bozgeyikli, E., Raij, A., Alqasemi, R., Katkoori, S., & Dubey, R. (2017).

Vocational rehabilitation of individuals with autism spectrum disorder with virtual

reality. ACM Transactions on Accessible Computing, 10(2).

https://doi.org/10.1145/3046786

Bozgeyikli, L., Raij, A., Katkoori, S., & Alqasemi, R. (2018). A survey on virtual reality for

individuals with autism spectrum disorder: Design considerations. IEEE Transactions on

Learning Technologies, 11(2), 133–151. https://doi.org/10.1109/TLT.2017.2739747

Brattan, V. C. (2019). The utility of virtual reality in interventions for autism spectrum

disorder: A systematic review. Generic.

Brereton, P., Kitchenham, B. A., Budgen, D., Turner, M., & Khalil, M. (2007). Lessons from

207

applying the systematic literature review process within the software engineering domain.

Journal of Systems and Software, 80(4), 571–583.

https://doi.org/10.1016/j.jss.2006.07.009

Butler, A., Hall, H., & Copnell, B. (2016). A Guide to Writing a Qualitative Systematic Review

Protocol to Enhance Evidence-Based Practice in Nursing and Health Care. Worldviews

on Evidence-Based Nursing, 13(3), 241–249. https://doi.org/10.1111/wvn.12134

Campbell, J. L., Quincy, C., Osserman, J., & Pedersen, O. K. (2013). Coding In-depth

Semistructured Interviews: Problems of Unitization and Intercoder Reliability and

Agreement. Sociological Methods & Research, 42(3), 294–320.

https://doi.org/10.1177/0049124113500475

Chen, J., Wang, G., Zhang, K., Wang, G., & Liu, L. (2019). A pilot study on evaluating children

with autism spectrum disorder using computer games. Computers in Human Behavior,

90, 204–214. https://doi.org/10.1016/j.chb.2018.08.057

Cheng, Y., Huang, C.-L., & Yang, C.-S. (2015). Using a 3D immersive virtual environment

system to enhance social understanding and social skills for children with autism

spectrum disorders. Focus on Autism and Other Developmental Disabilities, 30(4), 222–

236. Scopus. https://doi.org/10.1177/1088357615583473

Dalgarno, B., & Lee, M. J. W. (2010). What are the learning affordances of 3-D virtual

environments? British Journal of Educational Technology, 41(1), 10–32.

https://doi.org/10.1111/j.1467-8535.2009.01038.x

Davis, J., Mengersen, K., Bennett, S., & Mazerolle, L. (2014). Viewing systematic reviews and

meta-analysis in social research through different lenses. SpringerPlus, 3(1), 511.

https://doi.org/10.1186/2193-1801-3-511

208

Didehbani, N., Allen, T., Kandalaft, M., Krawczyk, D., & Chapman, S. (2016). Virtual Reality

Social Cognition Training for children with high functioning autism. Computers in

Human Behavior, 62, 703–711. https://doi.org/10.1016/j.chb.2016.04.033

Dixon, D. R., Miyake, C. J., Nohelty, K., Novack, M. N., & Granpeesheh, D. (2019). Evaluation

of an Immersive Virtual Reality Safety Training Used to Teach Pedestrian Skills to

Children With Autism Spectrum Disorder. Behavior Analysis in Practice, 1–10.

DSM-5 American Psychiatric Association. Diagnostic and statistical manual of mental disorders.

(2013). American Psychiatric Publishing, Arlington.

Eaves, L. C., & Ho, H. H. (2008). Young adult outcome of autism spectrum disorders.

Journal Autism Developmental Disorders, 38, 739–747. https://doi.org/10/ccwdnx

Fowler, C. (2015). Virtual reality and learning: Where is the pedagogy? British Journal of

Educational Technology, 46(2), 412–422. https://doi.org/10/f665s6

Gibson, J. J. (2014). The Ecological Approach to Visual Perception: Classic Edition. Psychology

Press.

Glaser, N. J., & Schmidt, M. (2018). Usage considerations of 3D collaborative virtual learning

environments to promote development and transfer of knowledge and skills for

individuals with autism. Technology, Knowledge and Learning.

https://doi.org/10.1007/s10758-018-9369-9

Gusenbauer, M., & Haddaway, N. R. (2020). Which academic search systems are suitable for

systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar,

PubMed, and 26 other resources. Research Synthesis Methods, 11(2), 181–217.

https://doi.org/10.1002/jrsm.1378

Haddaway, N. R., Collins, A. M., Coughlin, D., & Kirk, S. (2015). The Role of Google Scholar

209

in Evidence Reviews and Its Applicability to Grey Literature Searching. PLOS ONE,

10(9), e0138237. https://doi.org/10.1371/journal.pone.0138237

Hale, K. S., & Stanney, K. M. (2014). Handbook of Virtual Environments: Design,

Implementation, and Applications, Second Edition. CRC Press.

Hedley, D., Uljarević, M., Cameron, L., Halder, S., Richdale, A., & Dissanayake, C. (2017).

Employment programmes and interventions targeting adults with autism spectrum

disorder: A systematic review of the literature. Autism, 21(8), 929–941.

https://doi.org/10/gb2vzf

Herrero, J. F., & Lorenzo, G. (2019). An immersive virtual reality educational intervention on

people with autism spectrum disorders (ASD) for the development of communication

skills and problem solving. Education and Information Technologies, 1–34.

Ip, H. H. S., Wong, S. W. L., Chan, D. F. Y., Byrne, J., Li, C., Yuan, V. S. N., Lau, K. S. Y., &

Wong, J. Y. W. (2018). Enhance emotional and social adaptation skills for children with

autism spectrum disorder: A virtual reality enabled approach. Computers & Education,

117, 1–15. https://doi.org/10.1016/j.compedu.2017.09.010

Jonassen, D. H. (1995). Computers as cognitive tools: Learning with technology, not from

technology. Journal of Computing in Higher Education, 6(2), 40.

https://doi.org/10.1007/BF02941038

Kandalaft, M. R., Didehbani, N., Krawczyk, D. C., Allen, T. T., & Chapman, S. B. (2013).

Virtual Reality Social Cognition Training for Young Adults with High-Functioning

Autism. Journal of Autism and Developmental Disorders, 43(1), 34–44.

https://doi.org/10.1007/s10803-012-1544-6

Karami, B. (2020). VR on ASD supplemental data. https://doi.org/None

210

Karami, B., Koushki, R., Arabgol, F., Rahmani, M., & Vahabie, A. (2020a). Effectiveness of Virtual Reality-based therapeutic interventions on individuals with autism spectrum disorder: A comprehensive meta-analysis [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/s2jvy

Ke, F., Im, T., Xue, X., Xu, X., Kim, N., & Lee, S. (2015). Experience of Adult Facilitators in a Virtual-Reality-Based Social Interaction Program for Children With Autism. Journal of Special Education, 48(4), 290–300. https://doi.org/10.1177/0022466913498773

Ke, F., & Im, T. (2013). Virtual-Reality-Based Social Interaction Training for Children with High-Functioning Autism. The Journal of Educational Research, 106(6), 441–461. https://doi.org/10.1080/00220671.2013.832999

Kitchenham, B. (2004). Procedures for Performing Systematic Reviews. Keele University Technical Report, 33.

Kitchenham, B., Pearl Brereton, O., Budgen, D., Turner, M., Bailey, J., & Linkman, S. (2009). Systematic literature reviews in software engineering – A systematic literature review. Information and Software Technology, 51(1), 7–15. https://doi.org/10.1016/j.infsof.2008.09.009

Laffey, J. M., Stichter, J., & Galyen, K. (2014). Distance learning for students with special needs through 3D virtual learning. International Journal of Virtual and Personal Learning Environments, 5(2), 15–27. https://doi.org/10.4018/ijvple.2014040102

Laffey, J., Schmidt, M., Galyen, K., & Stichter, J. (2012). Smart 3D collaborative virtual learning environments: A preliminary framework. Journal of Ambient Intelligence and Smart Environments, 4(1), 49–66. https://doi.org/10.3233/AIS-2011-0128

Lorenzo, G., Lledó, A., Pomares, J., & Roig, R. (2016). Design and application of an immersive virtual reality system to enhance emotional skills for children with autism spectrum disorders. Computers & Education, 98, 192–205. https://doi.org/10.1016/j.compedu.2016.03.018

Maskey, M., Lowry, J., Rodgers, J., McConachie, H., & Parr, J. R. (2014). Reducing specific phobia/fear in young people with autism spectrum disorders (ASDs) through a virtual reality environment intervention. PLoS ONE, 9(7). https://doi.org/10.1371/journal.pone.0100374

Maskey, M., Rodgers, J., Ingham, B., Freeston, M., Evans, G., Labus, M., & Parr, J. R. (2019). Using Virtual Reality Environments to Augment Cognitive Behavioral Therapy for Fears and Phobias in Autistic Adults. Autism in Adulthood, 1(2), 134–145. https://doi.org/10.1089/aut.2018.0019

Maskey, M., Rodgers, J., Grahame, V., Glod, M., Honey, E., Kinnear, J., Labus, M., Milne, J., Minos, D., McConachie, H., & Parr, J. R. (2019). A Randomised Controlled Feasibility Trial of Immersive Virtual Reality Treatment with Cognitive Behaviour Therapy for Specific Phobias in Young People with Autism Spectrum Disorder. Journal of Autism and Developmental Disorders, 49(5), 1912–1927. https://doi.org/10.1007/s10803-018-3861-x

Meindl, J. N., Saba, S., Gray, M., Stuebing, L., & Jarvis, A. (2019). Reducing blood draw phobia in an adult with autism spectrum disorder using low-cost virtual reality exposure therapy. Journal of Applied Research in Intellectual Disabilities, 32(6), 1446–1452. https://doi.org/10.1111/jar.12637

Mesa-Gresa, P., Gil-Gómez, H., Lozano-Quilis, J.-A., & Gil-Gómez, J.-A. (2018). Effectiveness of Virtual Reality for Children and Adolescents with Autism Spectrum Disorder: An Evidence-Based Systematic Review. Sensors, 18(8), 2486. https://doi.org/10.3390/s18082486

Miller, H. L., & Bugnariu, N. L. (2016). Level of immersion in virtual environments impacts the ability to assess and teach social skills in autism spectrum disorder. Cyberpsychology, Behavior, and Social Networking, 19(4), 246–256. https://doi.org/10/f8h82z

Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. Journal of Clinical Epidemiology, 62(10), 1006–1012. https://doi.org/10.1016/j.jclinepi.2009.06.005

Müller, E., Schuler, A., & Yates, G. B. (2008). Social challenges and supports from the perspective of individuals with Asperger syndrome and other autism spectrum disabilities. Autism, 12(2), 173–190. https://doi.org/10/b78xmh

Neely, L. C., Ganz, J. B., Davis, J. L., Boles, M. B., Hong, E. R., Ninci, J., & Gilliland, W. D. (2016). Generalization and Maintenance of Functional Living Skills for Individuals with Autism Spectrum Disorder: A Review and Meta-Analysis. Review Journal of Autism and Developmental Disorders, 3(1), 37–47. https://doi.org/10/gf8rmg

Newbutt, N., Sung, C., Kuo, H.-J., Leahy, M. J., Lin, C.-C., & Tong, B. (2016). Brief report: A pilot study of the use of a virtual reality headset in autism populations. Journal of Autism and Developmental Disorders, 46(9), 3166–3176. https://doi.org/10.1007/s10803-016-2830-5

Parsons, S. (2016). Authenticity in Virtual Reality for assessment and intervention in autism: A conceptual review. Educational Research Review, 19, 138–157. https://doi.org/10.1016/j.edurev.2016.08.001

Parsons, S., & Mitchell, P. (2002). The potential of virtual reality in social skills training for people with autistic spectrum disorders. Journal of Intellectual Disability Research, 46(5), 430–443.

Rao, P. A., Beidel, D. C., & Murray, M. J. (2008). Social Skills Interventions for Children with Asperger’s Syndrome or High-Functioning Autism: A Review and Recommendations. Journal of Autism and Developmental Disorders, 38(2), 353–361. https://doi.org/10.1007/s10803-007-0402-4

Schmidt, M., Galyen, K., Laffey, J., Babiuch, R., & Schmidt, C. (2014). Open source software and design-based research symbiosis in developing 3D virtual learning environments: Examples from the iSocial project. Journal of Interactive Learning Research, 25(1), 65–99.

Schmidt, M. M. (2014). Designing for learning in a three-dimensional virtual learning environment: A design-based research approach. Journal of Special Education Technology, 29(4), 59–71.

Schmidt, M., Laffey, J. M., Schmidt, C. T., Wang, X., & Stichter, J. (2012). Developing methods for understanding social behavior in a 3D virtual learning environment. Computers in Human Behavior, 28(2), 405–413. https://doi.org/10.1016/j.chb.2011.10.011

Schmidt, M., Schmidt, C., Glaser, N., Beck, D., Lim, M., & Palmer, H. (2019). Evaluation of a spherical video-based virtual reality intervention designed to teach adaptive skills for adults with autism: A preliminary report. Interactive Learning Environments, 1–20. https://doi.org/10.1080/10494820.2019.1579236

Self, T., Scudder, R. R., Weheba, G., & Crumrine, D. (2007). A virtual approach to teaching safety skills to children with autism spectrum disorder. Topics in Language Disorders, 27, 242–253.

Sherman, W. R., & Craig, A. B. (2002). Understanding Virtual Reality: Interface, Application, and Design. Elsevier.

Shu, Y., Huang, Y.-Z., Chang, S.-H., & Chen, M.-Y. (2019). Do virtual reality head-mounted displays make a difference? A comparison of presence and self-efficacy between head-mounted displays and desktop computer-facilitated virtual environments. Virtual Reality, 23(4), 437–446. https://doi.org/10.1007/s10055-018-0376-x

Simões, M., Bernardes, M., Barros, F., & Castelo-Branco, M. (2018). Virtual travel training for autism spectrum disorder: Proof-of-concept interventional study. Journal of Medical Internet Research, 20(3). https://doi.org/10.2196/games.8428

Slater, M. (2018). Immersion and the illusion of presence in virtual reality. British Journal of Psychology, 109(3), 431–433. https://doi.org/10.1111/bjop.12305

Slater, M., Lotto, B., Arnold, M. M., & Sanchez-Vives, M. V. (2009). How we experience immersive virtual environments: The concept of presence and its measurement. Anuario de Psicología, 40, 18.

Steuer, J. (1992). Defining Virtual Reality: Dimensions Determining Telepresence. Journal of Communication, 42, 73–93.

Stichter, J. P., Laffey, J., Galyen, K., & Herzog, M. (2014). iSocial: Delivering the Social Competence Intervention for Adolescents (SCI-A) in a 3D Virtual Learning Environment for Youth with High Functioning Autism. Journal of Autism and Developmental Disorders, 44(2), 417–430. https://doi.org/10.1007/s10803-013-1881-0

Stokes, T. F., & Osnes, P. G. (2016). An operant pursuit of generalization – Republished article. Behavior Therapy, 47, 720–732. https://doi.org/10/gf8rmq


Strickland, D. (1996a). A virtual reality application with autistic children. Presence: Teleoperators and Virtual Environments, 5(3), 319–329. https://doi.org/10.1162/pres.1996.5.3.319

Strickland, D. C., Coles, C. D., & Southern, L. B. (2013). JobTIPS: A transition to employment program for individuals with autism spectrum disorders. Journal of Autism and Developmental Disorders, 43(10), 2472–2483. https://doi.org/10.1007/s10803-013-1800-4

Volioti, C., Tsiatsos, T., Mavropoulou, S., & Karagiannidis, C. (2016). VLEs, social stories and children with autism: A prototype implementation and evaluation. Education and Information Technologies, 21(6), 1679–1697. https://doi.org/10.1007/s10639-015-9409-1

Wade, J., Weitlauf, A., Broderick, N., Swanson, A., Zhang, L., Bian, D., Sarkar, M., Warren, Z., & Sarkar, N. (2017). A Pilot Study Assessing Performance and Visual Attention of Teenagers with ASD in a Novel Adaptive Driving Simulator. Journal of Autism and Developmental Disorders, 47(11), 3405–3417. https://doi.org/10/gb4vpb

Wade, J., Zhang, L., Bian, D., Fan, J., Swanson, A., Weitlauf, A., Sarkar, M., Warren, Z., & Sarkar, N. (2016). A gaze-contingent adaptive virtual reality driving environment for intervention in individuals with autism spectrum disorders. ACM Transactions on Interactive Intelligent Systems, 6(1). https://doi.org/10.1145/2892636

Wang, M., & Reid, D. (2013). Using the virtual reality-cognitive rehabilitation approach to improve contextual processing in children with autism. The Scientific World Journal.

Wang, X., Laffey, J., Xing, W., Galyen, K., & Stichter, J. (2017). Fostering verbal and non-verbal social interactions in a 3D collaborative virtual learning environment: A case study of youth with Autism Spectrum Disorders learning social competence in iSocial. Educational Technology Research and Development, 65(4), 1015–1039. https://doi.org/10.1007/s11423-017-9512-7

Wang, X., Laffey, J., Xing, W., Ma, Y., & Stichter, J. (2016). Exploring embodied social presence of youth with autism in 3D collaborative virtual learning environment: A case study. Computers in Human Behavior, 55, 310–321. https://doi.org/10/f75xmr

Yee, N., Bailenson, J. N., Urbanek, M., Chang, F., & Merget, D. (2007). The unbearable likeness of being digital: The persistence of nonverbal social norms in online virtual environments. Cyberpsychology & Behavior, 10(1), 115–121. https://doi.org/10.1089/cpb.2006.9984

Yuan, S. N. V., & Ip, H. H. S. (2018). Using virtual reality to train emotional and social skills in children with autism spectrum disorder. London Journal of Primary Care, 10(4), 110–112. https://doi.org/10.1080/17571472.2018.1483000

Zhang, L., Wade, J., Bian, D., Fan, J., Swanson, A., Weitlauf, A., Warren, Z., & Sarkar, N. (2017). Cognitive load measurement in a virtual reality-based driving system for autism intervention. IEEE Transactions on Affective Computing, 8(2), 176–189. https://doi.org/10.1109/TAFFC.2016.2582490

Zhao, H., Swanson, A. R., Weitlauf, A. S., Warren, Z. E., & Sarkar, N. (2018). Hand-in-Hand: A communication-enhancement collaborative virtual reality system for promoting social interaction in children with autism spectrum disorders. IEEE Transactions on Human-Machine Systems, 48(2), 136–148. https://doi.org/10.1109/THMS.2018.2791562


Appendix A

Database Search Queries

PubMed ((((((((((((((((((((((("virtual reality"[MeSH Terms] OR ("virtual"[All Fields] AND "reality"[All Fields]) OR "virtual reality"[All Fields]) OR (virtual [All Fields] OR virtual reality[All Fields] OR virtual reality,[All Fields])) OR (virtual[All Fields] AND ("learning"[MeSH Terms] OR "learning"[All Fields]) AND ("environment"[MeSH Terms] OR "environment"[All Fields]))) OR (virtual learning environment[All Fields] OR virtual learning environments[All Fields])) OR ("virtual reality"[MeSH Terms] OR ("virtual"[All Fields] AND "reality"[All Fields]) OR "virtual reality"[All Fields])) OR (virtual realities[All Fields] OR virtual reality[All Fields] OR virtual reality,[All Fields])) OR ("Proc IEEE Virtual Real Conf"[Journal] OR "vr"[All Fields])) OR (virtual[All Fields] AND ("environment"[MeSH Terms] OR "environment"[All Fields]))) OR (virtual environment[All Fields] OR virtual environments[All Fields])) OR (virtual[All Fields] AND ("WORLD"[Journal] OR "world"[All Fields]))) OR (virtual world[All Fields] OR virtual worlds[All Fields])) OR virtual-world[All Fields]) OR (virtual world[All Fields] OR virtual worlds[All Fields])) OR (collaborative[All Fields] AND (virtual learning environment[All Fields] OR virtual learning environments[All Fields]))) OR (3D[All Fields] AND (virtual learning environment[All Fields] OR virtual learning environments[All Fields]))) OR (three-dimensional[All Fields] AND (virtual learning environment[All Fields] OR virtual learning environments[All Fields]))) OR ((three[All Fields] AND dimensional[All Fields]) AND (virtual learning environment[All Fields] OR virtual learning environments[All Fields]))) OR (3d[All Fields] AND virtual[All Fields] AND worlds[All Fields])) OR (3d virtual world[All Fields] OR 3d virtual worlds[All Fields])) OR MUVE[All Fields]) OR ("caves"[MeSH Terms] OR "caves"[All Fields] OR "cave"[All Fields])) OR ("smart glasses"[MeSH Terms] OR ("smart"[All Fields] AND "glasses"[All Fields]) OR "smart glasses"[All Fields] OR 
("head"[All Fields] AND "mounted"[All Fields] AND "display"[All Fields]) OR "head mounted display"[All Fields])) AND ("1995/01/01"[PDAT] : "2020/12/31"[PDAT]) AND "humans"[MeSH Terms] AND English[lang]) AND ((((((((((((("autistic disorder"[MeSH Terms] OR ("autistic"[All Fields] AND "disorder"[All Fields]) OR "autistic disorder"[All Fields] OR "autism"[All Fields]) OR (autism[All Fields] OR autism'[All Fields] OR autism's[All Fields] OR autism1[All Fields] OR autism360[All Fields] OR autismaid1[All Fields] OR autismand[All Fields] OR autismate[All Fields] OR autismcare[All Fields] OR autismcenter[All Fields] OR autismcrc[All Fields] OR autismdata[All Fields] OR autisme[All Fields] OR autisme'[All Fields] OR autismeexpertise[All Fields] OR autismeforeningen[All Fields] OR autismen[All Fields] OR autismes[All Fields] OR autismespectrum[All Fields] OR autismespectrumstoornis[All Fields] OR autismespectrumstoornissen[All Fields] OR autismespekterforstyrrelser[All Fields] OR autismeteam[All Fields] OR autismeteamet[All Fields] OR autismevriendelijke[All Fields] OR autismi[All Fields] OR autismikirjon[All Fields] OR autismiliitto[All Fields] OR autismkb[All Fields] OR autismlike[All Fields] OR autismmatch[All Fields] OR autismmedicalhome[All Fields] OR autismo[All Fields] OR autismor[All Fields] OR autismos[All Fields] OR autismpro[All Fields] OR autismqld[All Fields] OR autismresearch[All Fields] OR autisms[All Fields] OR autisms'[All Fields] OR autismselfadvocacy[All Fields] OR autismspeaks[All Fields] OR autismspectrum[All Fields] OR autismspektrumbegreppet[All Fields] OR autismspektrumstorning[All Fields] OR autismspektrumsyndrom[All Fields] OR autismspektrumtillstand[All Fields] OR autismstudie[All Fields] OR autismstudies1[All Fields] OR autismstudiesnet[All Fields] OR autismstudynurseasu[All Fields] OR autismtherapies[All Fields] OR autismu[All Fields] OR autismul[All Fields] OR autismului[All Fields] OR autismus[All Fields] OR autismusbehandlung[All Fields] OR 
autismusforschung[All Fields] OR autismusspektrumstorungen[All Fields] OR autismussyndrom[All Fields] OR autismustherapiezentrum[All Fields] OR autismuszentrum[All Fields])) OR ("autistic disorder"[MeSH Terms] OR ("autistic"[All Fields] AND "disorder"[All Fields]) OR "autistic disorder"[All Fields] OR "autistic"[All Fields])) OR (autis[All Fields] OR autischen[All Fields] OR autischer[All Fields] OR autisense[All Fields] OR autisic[All Fields] OR autisiclike[All Fields] OR autisim[All Fields] OR autisitc[All Fields] OR autisitcsubjects[All Fields] OR autism[All Fields] OR


autism'[All Fields] OR autism's[All Fields] OR autism1[All Fields] OR autism360[All Fields] OR autismaid1[All Fields] OR autismand[All Fields] OR autismate[All Fields] OR autismcare[All Fields] OR autismcenter[All Fields] OR autismcrc[All Fields] OR autismdata[All Fields] OR autisme[All Fields] OR autisme'[All Fields] OR autismeexpertise[All Fields] OR autismeforeningen[All Fields] OR autismen[All Fields] OR autismes[All Fields] OR autismespectrum[All Fields] OR autismespectrumstoornis[All Fields] OR autismespectrumstoornissen[All Fields] OR autismespekterforstyrrelser[All Fields] OR autismeteam[All Fields] OR autismeteamet[All Fields] OR autismevriendelijke[All Fields] OR autismi[All Fields] OR autismikirjon[All Fields] OR autismiliitto[All Fields] OR autismkb[All Fields] OR autismlike[All Fields] OR autismmatch[All Fields] OR autismmedicalhome[All Fields] OR autismo[All Fields] OR autismor[All Fields] OR autismos[All Fields] OR autismpro[All Fields] OR autismqld[All Fields] OR autismresearch[All Fields] OR autisms[All Fields] OR autisms'[All Fields] OR autismselfadvocacy[All Fields] OR autismspeaks[All Fields] OR autismspectrum[All Fields] OR autismspektrumbegreppet[All Fields] OR autismspektrumstorning[All Fields] OR autismspektrumsyndrom[All Fields] OR autismspektrumtillstand[All Fields] OR autismstudie[All Fields] OR autismstudies1[All Fields] OR autismstudiesnet[All Fields] OR autismstudynurseasu[All Fields] OR autismtherapies[All Fields] OR autismu[All Fields] OR autismul[All Fields] OR autismului[All Fields] OR autismus[All Fields] OR autismusbehandlung[All Fields] OR autismusforschung[All Fields] OR autismusspektrumstorungen[All Fields] OR autismussyndrom[All Fields] OR autismustherapiezentrum[All Fields] OR autismuszentrum[All Fields] OR autisn[All Fields] OR autisru[All Fields] OR autissier[All Fields] OR autissiodorensis[All Fields] OR autist[All Fields] OR autista[All Fields] OR autistacon[All Fields] OR autistas[All Fields] OR autiste[All Fields] OR 
autisten[All Fields] OR autistes[All Fields] OR autisti[All Fields] OR autistic[All Fields] OR autistic'[All Fields] OR autistic's[All Fields] OR autistica[All Fields] OR autistical[All Fields] OR autistically[All Fields] OR autisticand[All Fields] OR autisticas[All Fields] OR autisticheskii[All Fields] OR autisticheskikh[All Fields] OR autisticheskogo[All Fields] OR autisticheskoi[All Fields] OR autisticheskom[All Fields] OR autistici[All Fields] OR autistickeho[All Fields] OR autistickych[All Fields] OR autisticlike[All Fields] OR autisticne[All Fields] OR autisticnog[All Fields] OR autisticnom[All Fields] OR autistico[All Fields] OR autisticos[All Fields] OR autistics[All Fields] OR autistics'[All Fields] OR autistiform[All Fields] OR autistique[All Fields] OR autistiques[All Fields] OR autistisch[All Fields] OR autistische[All Fields] OR autistischem[All Fields] OR autistischen[All Fields] OR autistischer[All Fields] OR autistisches[All Fields] OR autistischundisziplinierten[All Fields] OR autistisk[All Fields] OR autistiska[All Fields] OR autistiske[All Fields] OR autistism[All Fields] OR autististic[All Fields] OR autistm[All Fields] OR autistoid[All Fields] OR autists[All Fields] OR autisyn[All Fields])) OR asperger[All Fields]) OR ("autism spectrum disorder"[MeSH Terms] OR ("autism"[All Fields] AND "spectrum"[All Fields] AND "disorder"[All Fields]) OR "autism spectrum disorder"[All Fields])) OR ("Arthropod Struct Dev"[Journal] OR "Agron Sustain Dev"[Journal] OR "asd"[All Fields])) OR ("asperger syndrome"[MeSH Terms] OR ("asperger"[All Fields] AND "syndrome"[All Fields]) OR "asperger syndrome"[All Fields])) OR asperger's[All Fields]) OR (asperger[All Fields] OR asperger'[All Fields] OR asperger's[All Fields] OR asperger's'[All Fields] OR asperger`s[All Fields] OR aspergera[All Fields] OR aspergere[All Fields] OR aspergerfoundation[All Fields] OR aspergerian[All Fields] OR aspergerillus[All Fields] OR aspergerin[All Fields] OR aspergerov[All Fields] OR 
aspergers[All Fields] OR aspergers's[All Fields] OR aspergersyndroom[All Fields])) OR ("autistic disorder"[MeSH Terms] OR ("autistic"[All Fields] AND "disorder"[All Fields]) OR "autistic disorder"[All Fields])) OR ("autistic disorder"[MeSH Terms] OR ("autistic"[All Fields] AND "disorder"[All Fields]) OR "autistic disorder"[All Fields] OR "autistic"[All Fields])) AND ("1995/01/01"[PDAT] : "2020/12/31"[PDAT]) AND "humans"[MeSH Terms] AND English[lang]) AND (("1995/01/01"[PDAT] : "2020/12/31"[PDAT]) AND "humans"[MeSH Terms] AND English[lang])

Web of Science TOPIC: ((virtual reality) OR (virtual realit*) OR (virtual learning environment) OR (virtual learning environment*) OR (virtual-reality) OR (virtual-realit*) OR (VR) OR (virtual environment) OR (virtual environment*) OR (virtual world) OR (virtual world*) OR (virtual-world) OR (virtual-world*) OR (collaborative virtual learning environment*) OR (3d virtual worlds) OR (3d virtual world*) OR (MUVE) OR (CAVE) OR (head-mounted display)) AND TOPIC: ((autism) OR (autism*) OR (autistic) OR (autis*) OR (asperger) OR (autism spectrum disorder) OR (asd) OR (Asperger Syndrome) OR (aspergers) OR (Asperger*) OR (Autistic Disorder) OR (Autistic))

Scopus ( TITLE-ABS-KEY ( virtual AND reality ) OR TITLE-ABS-KEY ( virtual AND realit* ) OR TITLE-ABS-KEY ( virtual AND learning AND environment ) OR TITLE-ABS-KEY ( virtual AND learning AND environment* ) OR TITLE-ABS-KEY ( virtual-reality ) OR TITLE-ABS-KEY ( virtual-reality* ) OR TITLE-ABS-KEY ( vr ) OR TITLE-ABS-KEY ( virtual AND environment ) OR TITLE-ABS-KEY ( virtual AND environment* ) OR TITLE-ABS-KEY ( virtual AND world ) OR TITLE-ABS-KEY ( virtual AND world* ) OR TITLE-ABS-KEY ( collaborative AND virtual AND learning AND environment* ) OR TITLE-ABS-KEY ( 3d AND virtual AND worlds ) OR TITLE-ABS-KEY ( 3d AND virtual AND world* ) OR TITLE-ABS-KEY ( muve ) OR TITLE-ABS-KEY ( cave ) OR TITLE-ABS-KEY ( head-mounted AND display ) AND TITLE-ABS-KEY ( autism ) OR TITLE-ABS-KEY ( autism* ) OR TITLE-ABS-KEY ( autistic ) OR TITLE-ABS-KEY ( autisti* ) OR TITLE-ABS-KEY ( asperger ) OR TITLE-ABS-KEY ( autism AND spectrum AND disorder ) OR TITLE-ABS-KEY ( asd ) OR TITLE-ABS-KEY ( asperger AND syndrome ) OR TITLE-ABS-KEY ( asperger* ) OR TITLE-ABS-KEY ( autistic AND disorder ) OR TITLE-ABS-KEY ( autistic ) ) AND PUBYEAR > 1994 AND ( LIMIT-TO ( LANGUAGE , "English" ) ) AND ( LIMIT-TO ( PUBSTAGE , "final" ) ) AND ( LIMIT-TO ( DOCTYPE , "ar" ) )

IEEE Xplore ((((("All Metadata":virtual reality) OR "All Metadata":virtual*) AND "All Metadata":autism) OR "All Metadata":asd) AND "All Metadata":autism*)

ERIC ((virtual reality) OR (virtual realit*) OR (virtual learning environment) OR (virtual learning environment*) OR (virtual-reality) OR (virtual-realit*) OR (VR) OR (virtual environment) OR (virtual environment*) OR (virtual world) OR (virtual world*) OR (virtual-world) OR (virtual-world*) OR (collaborative virtual learning environment*) OR (3d virtual worlds) OR (3d virtual world*) OR (MUVE) OR (CAVE) OR (head-mounted display)) AND ((autism) OR (autism*) OR (autistic) OR (autis*) OR (asperger) OR (autism spectrum disorder) OR (asd) OR (Asperger Syndrome) OR (asperger’s) OR (Asperger*) OR (Autistic Disorder) OR (Autistic))

Google Scholar allintitle: virtual reality autism
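Queries such as the PubMed string above can also be executed programmatically. The sketch below builds an ESearch request URL against NCBI's public E-utilities endpoint; the drastically simplified search term, and the decision to only construct the URL rather than send the request, are illustrative assumptions and not part of the review protocol:

```python
from urllib.parse import urlencode

# NCBI E-utilities ESearch endpoint (public API).
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_search(term, mindate="1995/01/01", maxdate="2020/12/31", retmax=200):
    """Return an ESearch URL using the documented db, term, datetype,
    mindate, maxdate, and retmax parameters."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # filter on publication date, as in the query above
        "mindate": mindate,
        "maxdate": maxdate,
        "retmax": retmax,
    }
    return f"{BASE}?{urlencode(params)}"

# A drastically simplified stand-in for the full query in this appendix:
term = ('("virtual reality"[All Fields] OR "virtual environment"[All Fields]) '
        'AND ("autism"[All Fields] OR "autism spectrum disorder"[MeSH Terms]) '
        'AND English[lang] AND "humans"[MeSH Terms]')
url = build_pubmed_search(term)
```

The URL can then be fetched with any HTTP client; ESearch returns the matching PubMed IDs, which can be passed to EFetch for record retrieval.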


CHAPTER 6: Dissertation Conclusion

The contents of this dissertation describe outcomes of a DBR process concerning the creation and formative evaluation of Virtuoso. In these four manuscripts, I present findings from two DBR meso-cycles related to the design and development of a VR intervention for individuals with ASD. These manuscripts explore practical considerations in creating a maturing intervention through a systematic review of the literature and a design and development case study, and advance the theoretical understanding of design principles through the empirical results of two user-centric formative evaluations.

In the first article, The centrality of interdisciplinarity for overcoming design and development constraints of a multi-user virtual reality intervention for adults with autism: A design case, lessons from my experience creating the first prototype of Virtuoso are presented with the goal of informing others in the field how they might devise solutions to their own development challenges. Data for this instrumental case study were presented from my perspective and were compiled through autoethnographic methods. Project artifacts, including screenshots, 3D assets, videos, procedural analysis documents, meeting minutes and communications, rapid prototypes, and project documentation, were used to assist with recall during the narrative process. This process led to the identification of four development processes that were representative of our desire to create a realistic training environment: the design of realistic (1) terrain, (2) campus buildings, (3) interiors, and (4) task scenarios. The narrative from this case study highlights the immense challenges of developing VR interventions for people with ASD and suggests that concerns about feasibility can obscure the technology's potential.


In the second article, Investigating the Usability and User Experience of a Virtual Reality Adaptive Skills Intervention for Adults with Autism Spectrum Disorder, findings from a formative evaluation of the Virtuoso technology suite are presented. This study was conducted during the first meso-cycle, and the findings concern the evaluation of the first prototypes of Virtuoso. Evaluation focused on the acceptability, feasibility, ease-of-use, nature of user experience, and relevance of the prototypes to the unique needs of participants. Findings are reported from the perspectives of expert users and participants with ASD from the Impact Innovation day program. Results from this usage test suggest that participants had a largely positive experience and that the Virtuoso prototype was relevant to the unique needs of its target population.

In the third article, Investigating the Experience of Virtual Reality Head-Mounted Displays for Adults with Autism Spectrum Disorder, I report findings from a study that examined the extent to which adults with ASD felt symptoms of cybersickness while using Virtuoso. Additional findings are reported through the lens of learner experience as participants used different HMDs, including the Oculus Rift and Google Cardboard. This study was conducted during the second meso-cycle, and the findings concern the evaluation of the second prototypes of Virtuoso. Findings were examined through multi-method procedures that utilized quantitative and qualitative data to answer the research questions. Evaluation was conducted with six participants with ASD and six neurotypical participants who acted as a comparison group for the quantitative measures. Relatively minor evidence of cybersickness was seen in both evaluation groups, with the ASD participants reporting slightly higher feelings of discomfort. However, despite the presence of some cybersickness symptoms, participants found the experiences to be positive and acceptable.


In the fourth article, Systematic Literature Review of Virtual Reality Intervention Design Patterns for Individuals with Autism Spectrum Disorders, a systematic review of the literature was conducted to explore how other creators of VR interventions for people with ASD characterize virtual reality, the distinguishing characteristics of the system designs being instantiated in projects, and the characteristics of VR as an intervention modality. Searches related to autism spectrum disorders and virtual reality were conducted on Web of Science, PubMed, Scopus, IEEE Xplore, ERIC, and Google Scholar. A total of 82 manuscripts, representing 49 different projects, were included in the final analysis. An analysis of the data extracted from this literature highlights the rampant overgeneralizations that exist in the field of VR intervention design for individuals with ASD. Researchers hold varied perspectives of what constitutes a VR technology, which impacts how systems are designed and how affordances are exploited to provide learners with opportunities to engage with and take away meaning from the instructional content within the environment.
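The screening step implied by these counts — collapsing the records returned by six databases into a set of unique manuscripts — can be sketched as follows. This is an illustrative sketch only; the field names and the DOI-or-title matching rule are assumptions, not the review's actual screening procedure:

```python
def normalize_title(title):
    """Crude normalization so the same title matches across databases."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    """Collapse records exported from several databases into unique manuscripts.

    A record is treated as a duplicate if it shares a DOI with an earlier
    record or, when no DOI is present, a normalized title.
    """
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical exports from three databases; the second is a duplicate by DOI.
records = [
    {"title": "Virtual Reality Social Cognition Training",
     "doi": "10.1007/s10803-012-1544-6", "source": "Scopus"},
    {"title": "Virtual reality social cognition training.",
     "doi": "10.1007/s10803-012-1544-6", "source": "PubMed"},
    {"title": "A virtual reality application with autistic children",
     "doi": None, "source": "ERIC"},
]
unique = deduplicate(records)  # two unique manuscripts remain
```

In practice, reference managers perform this merge, after which unique records are screened against inclusion criteria and grouped by project.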

In writing this article-based dissertation, my intended audience is design researchers who are interested in using VR as a tool for delivering interventions for individuals with ASD. The four journal manuscripts cover a range of topics that must be carefully considered when creating VR software for this population. The first manuscript, The centrality of interdisciplinarity for overcoming design and development constraints of a multi-user virtual reality intervention for adults with autism: A design case, would be especially useful for designers and developers of three-dimensional learning environments, as it provides practical insight into an important research-to-practice gap in the field. The second manuscript, Investigating the Usability and User Experience of a Virtual Reality Adaptive Skills Intervention for Adults with Autism Spectrum Disorder, highlights the importance of taking a user-centric approach to the design and implementation of assistive software for a heterogeneous population. The third manuscript, Investigating the Experience of Virtual Reality Head-Mounted Displays for Adults with Autism Spectrum Disorder, provides insight into how health and safety concerns can be measured and controlled for when using an emerging technology that is not fully understood. Lastly, the fourth manuscript, Systematic Literature Review of Virtual Reality Intervention Design Patterns for Individuals with Autism Spectrum Disorders, provides readers with a systematic look into the nature of VR technologies and into how, and in what contexts, virtual environments are designed in the field.

My plan for the future is to further analyze data from the 2019 usage test to evaluate our instantiation of the second prototype of Virtuoso from other perspectives (e.g., usability, users' sense of presence, social validity). As a design researcher, it is my goal to continue exploring the practical and theoretical outcomes of this research, including whether the findings can be applied to non-localized contexts. Ideally, I would like to take the lessons learned from designing Virtuoso and apply them in other contexts to study whether the design principles hold in a wider domain. While this further analysis is outside the scope of my dissertation, I will continue to explore the generalizability of this project’s outcomes and will make efforts to disseminate these results.

Being involved in a DBR project has been exceptionally valuable to my growth as a scholar. Design-based research is messy, and there have been many times along the way that I felt entirely out of my element. Conducting this kind of research is innately interdisciplinary and requires insight from many fields. For example, when I started my doctorate I never envisioned that I would spend many late nights learning how to extract geographic information system data in an attempt to create a realistic topography for a virtual model, and I never expected to be studying up on applied behavior analysis techniques and single-subject design research.

Despite the messiness involved in DBR, I have found these experiences invaluable and take pride in my work, because I know that the research presented in this dissertation will be of interest to practitioners in many fields, including computer science, instructional design, and special education. I hope that disseminating these four manuscripts will help to advance the field and prove useful to their intended audiences. Lastly, working on this DBR project has given me the opportunity to work with an especially remarkable group of people at the Impact Innovation program. I have learned so much through my interactions with Virtuoso’s participants and will continue on a trajectory that puts the voices of the end-users first.


Dissertation Appendices

Appendix A: Summer Symposium Proposal Submission

Designing Virtuoso: A Case Study on the Interdisciplinary Development of a Multi-User Virtual Reality Intervention for Individuals with Autism

Authors: Glaser, Schmidt, Schmidt, Palmer & Beck

Multi-user virtual environments (MUVEs) are graphically detailed three-dimensional digital environments that are designed to promote collaborative, user-centric learning opportunities (Churchill & Snowdon, 1998). Participants in MUVEs are represented through humanoid avatars that they control to interact with and manipulate the digital world around them. While MUVEs hold a variety of affordances that make them ideal tools for providing controlled scenarios that can promote learning and assessment (Dalgarno & Lee, 2010), creating one requires a vast and interdisciplinary team of educators, researchers, content experts, programmers, 3D modelers, designers, developers, and more. Current research suggests that MUVEs have potential to promote the acquisition of social, communicative, and adaptive competencies for individuals with ASD (Parsons, 2016; Glaser & Schmidt, 2018; Schmidt et al., 2019). While studies have pointed to the difficulty of developing a MUVE, few have articulated the process in detail, which leaves a gap in the literature. It is therefore the purpose of this case study to provide insight into the design and development of a MUVE for individuals with autism, called Virtuoso, that took place at a large midwestern university. This case study will bring attention to the interdisciplinary nature of the design process and how a small development team was able to leverage its skills and expertise to create a MUVE for this population. By providing a description of this case, others in the field will be able to extrapolate lessons learned from our project and apply solutions to their own design and development challenges that have been identified in the literature.

Project Description

Virtuoso is a MUVE that was developed for participants of an autism adult day program called Impact Innovation. The focus of Virtuoso is to promote adaptive skills development related to taking public transportation. When Impact Innovation participants engage with the MUVE, they connect to a privately hosted virtual world. Users of this environment take control of an avatar, which acts as a digital representation of self and facilitates their ability to interact with others and the virtual world around them. Users can communicate with one another through avatar gestures and microphone-equipped headsets. Virtuoso was developed using an open-source platform called High Fidelity, which allowed a small design team to leverage a vibrant development community that provided opportunities to collaborate and engage with individuals working to solve similar problems in their own projects. Using open-source software like High Fidelity also afforded us the ability to modify any component of the platform through custom scripting and community-provided plugins. We were therefore able to extend the functionality of the system to meet the needs of our project.

With the help of Impact Innovation staff and an applied behavior analyst, a front-end analysis was conducted to determine the needs and scope of Virtuoso. One of the goals of the Impact Innovation program is to provide participants with vocational opportunities that are oftentimes geographically distant from campus. With transportation being one of the most frequently cited barriers to accessing community settings for individuals with disabilities (Allen & Moore, 1997; Carmien et al., 2005), we determined that Virtuoso should first focus on promoting adaptive skills training related to taking public transportation in and around campus. The content of the curriculum was then designed around a detailed task analysis based on procedural analysis techniques (Jonassen, Tessmer, & Hannum, 1998). This process provided guidance on the activity structure that the VR environment needed to provide.

Case Study

This case study considers how a small design studio balanced a wide range of expertise, tools, and methodologies to address the innate challenges of designing a VR intervention for adults with ASD, along with the additional complexities introduced by interdisciplinary collaboration. We present our work as an instrumental case study, providing insight into our lab’s design process and exploring this phenomenon in depth. According to Stake (1995), an instrumental case study is the examination of a specific case (person, group, department, or organization) with the purpose of providing insight into an issue, redrawing generalizations, or building theory.

Site and Key Participants

Since 2015, an interdisciplinary team of researchers has designed and developed Virtuoso in collaboration with Impact Innovation. The individuals of this case study were members of a small design studio (n = 3) engaged in the process of developing a MUVE. The design studio consisted of one instructional design and technology professor, one instructional design and technology PhD student, and one engineering undergraduate student, who were committed to developing this piece of educational software over a 2.5-year period. Two of the participants had extensive backgrounds in information technology and software development, one participant had a background in virtual worlds creation, and one participant was recruited from an educational games course.

The Case

A full description of our case falls outside the scope of this chapter prospectus. For this reason, what we have provided here is a brief summary that can provide insight into the context of the case study. The complete case will be reported in the full chapter.


To achieve the development of a MUVE, we had to look to the literature to provide guidance on design principles that could help promote the transfer of adaptive skills for individuals with autism. These considerations include: (1) providing tasks that are capable of embodying users within the learning process (Mennecke, Triplett, Hassall, & Conde, 2010), (2) providing flexibility to account for cognitive accessibility (Parsons, 2016), and (3) providing physical, technological, and pedagogical scaffolds (Kerr, Neale, & Cobb, 2002).

Research suggests that when users are able to feel immersed and experience a sense of embodiment of self and others in a collaborative virtual space, the tasks they engage with can take on a deeper meaning (Mennecke, Triplett, Hassall, & Conde, 2010). In addition, VR technologies may help to convey meanings and symbolic measures of real-world activities (Wang, Laffey, Xing, Ma, & Stichter, 2016; Wallace, Parsons, & Bailey, 2017) that can be enhanced through behavioral and visual realism of in-world assets (Parsons, 2016). While the literature has provided these considerations as a jumping-off point, the complexities involved in bringing them into practice have not been well articulated. The high variability in how individuals with autism exhibit their symptoms also creates a need for individualization and universal design across a continuum of immersion, multimedia, and scaffolding. As such, an interdisciplinary approach was required to tackle the problems that arose when trying to bring these principles to fruition.

Creating Virtuoso required a combination of skills such as graphical design, 3D modeling, visual scripting, object-oriented programming, computer hardware, learning theory application, photography, computational thinking, and even drone piloting. To bring these skills together in one cohesive project, we drew on processes from, and formed a team with expertise in, many disciplines, including special education, applied behavior analysis, engineering, geography, instructional design, and information technology. In the full chapter, we will provide insight into how educational technologists can attempt to bridge the gap between research and practice in this field. More specifically, designers of MUVEs will be able to apply lessons learned from Virtuoso to their own projects, as we will give a detailed narrative of how an interdisciplinary approach can be used to bring conceptual design considerations from the literature to life.


Appendix B: Initial Submission to Summer Symposium

Designing Virtuoso: A Case Study on the Interdisciplinary Development of a Multi-User Virtual Reality Intervention for Individuals with Autism

Authors: Glaser, Schmidt, Schmidt, Palmer, & Beck

Multi-User Virtual Environments (MUVEs) are graphically detailed three-dimensional digital environments that are designed to promote collaborative, user-centric learning opportunities (Churchill & Snowdon, 1998). In MUVEs, users take on humanoid avatars that act as digital representations of self as they interact with the virtual world and participate in programmatically scripted events. MUVEs vary in sophistication along dimensions of world design, embodied representation of users, interactivity, sensory perception, and controls (Ibáñez et al., 2013). They also hold a variety of affordances that make them ideal tools for providing controlled scenarios that can promote learning and assessment (Dalgarno & Lee, 2010). In particular, the affordances of virtual reality appear to be well aligned with the learning needs of nontraditional learners, such as those with disabilities like autism (Conway, Vogtle, & Pausch, 1994; Strickland, 1997; Dalgarno & Lee, 2010; Parsons, 2016; Glaser & Schmidt, 2018). However, creating a MUVE requires a vast and interdisciplinary team of educators, researchers, content experts, programmers, 3D modelers, designers, developers, and more. Thus, creating a MUVE is a remarkably difficult task for educational technologists (Bricken, 1994; Bartle, 2004; Hirumi, Appelman, Rieber, & Van Eck, 2010). This challenge is further amplified when designing MUVEs for individuals with autism, who exhibit substantial variability in the unique challenges that they face (Parsons, 2016).

Autism spectrum disorder (ASD) is a lifelong diagnosis that manifests as pervasive social and communicative difficulties coupled with inhibiting, stereotyped, and repetitive behaviors. Recent studies indicate that one in 59 individuals in the United States has an ASD diagnosis (Christensen et al., 2018). Autism is a spectrum disorder (Wing, 1996), with its diagnosis being composed of a series of impairments around social, communicative, and cognitive abilities (Wing & Gould, 1979). In addition to persistent socio-communicative and behavioral deficits, individuals with autism tend to present a variety of comorbid disorders, including additional cognitive impairments, anxiety disorders, sensory processing problems, and more (Simonoff et al., 2008). Deficits resulting from an ASD diagnosis can severely impact an individual’s quality of life and ability to function in an independent manner. If left untreated, these problems can become exacerbated and can lead to social isolation, difficulty maintaining relationships, and hardships with finding meaningful employment (Frith & Mira, 1992; Eaves & Ho, 2008).

Despite decades of research, social, communicative, vocational, and accommodation-related outcomes for adults with ASD remain poor (Billstedt, Gillberg, & Gillberg, 2005; Eaves & Ho, 2008; Howlin, Goode, Hutton, & Rutter, 2004; Parsons, 2016). To assist in reducing uncertainty of outcomes, the National Standards Project outlined intervention guidelines and pushed for evidence-based practices in the field (Wilczynski, 2010). The National Professional Development Center on Autism Spectrum Disorder (NPDC) has also detailed evidence-based practices that can be implemented in behavioral interventions for individuals with autism (Bogin, 2008) and has advocated for technology-aided instruction.

One technology that has been considered as potentially efficacious with this population is virtual reality (VR). Interest in virtual reality technologies for individuals with ASD has been growing for decades (Aresti-Bartolome & Garcia-Zapirain, 2014) as researchers are increasingly turning to VR as a means to provide both therapeutic and educational platforms for individuals with ASD. This trend is due in part to evidence that suggests VR is intrinsically reinforcing for people with ASD, who find the technically and visually stimulating nature of the technology to be appealing (Schmidt et al., 2019). VR platforms also have a variety of technological affordances which align with the instructional needs of this population (Glaser & Schmidt, 2018). These benefits include the predictability of the task, ability to control system variables and complexities, realism of digital assets, immersion, automation of feedback, assessment, reinforcement, and more (Bozgeyikli, Raij, Katkoori, & Alqasemi, 2018; Bozgeyikli et al., 2018).

For instance, VR is capable of conveying concepts, meanings, and activities through high fidelity digital worlds that can emulate the real world. Therefore, authentic virtual scenarios can provide a meaningful context to embody and practice behaviors and skills (Wang, Laffey, Xing, Ma, & Stichter, 2016; Wallace, Parsons, & Bailey, 2017). Current research suggests that MUVEs have potential to promote the acquisition of social, communicative, and adaptive competencies for individuals with ASD (Parsons, 2016; Glaser & Schmidt, 2018; Schmidt et al., 2019). This research has provided a preliminary basis of support for VR as an intervention modality for individuals with ASD (Mesa-Gresa, Gil-Gómez, Lozano-Quilis, & Gil-Gómez, 2018). Our project, entitled Virtuoso (a play on the words “virtual” and “social”), is a MUVE that was developed to promote adaptive competencies for individuals with ASD. We discuss Virtuoso in the following section.

Project Description

Virtuoso is a MUVE that was developed for participants in an autism adult day program called Impact Innovations. One of the goals of the Impact Innovations program is to provide participants with vocational opportunities that are oftentimes geographically distant from campus. With transportation being one of the most cited barriers to accessing community settings for individuals with disabilities (Allen & Moore, 1997; Carmien et al., 2005), the focus of Virtuoso is promoting adaptive skills related to using a university shuttle public transportation system. To this end, we designed a virtual reality curriculum based on a detailed task analysis using procedural analysis techniques (Jonassen, Tessmer, & Hannum, 1998). This process provided guidance on the activity structure that the VR environment needed to provide.

Creating a MUVE that could fulfill the requirements and needs of our curriculum required vast interdisciplinary expertise. For instance, skills that were required included graphical design, 3D modeling, visual scripting, object-oriented programming, computer hardware, learning theory application, photography, computational thinking, and even drone piloting. These skills were brought to the fore both proactively and in an emergent manner. For instance, we formed our core team with individuals who had expertise in many disciplines including special education, applied behavior analysis, engineering, geography, instructional design, educational game design, mobile learning, information technology, computer science, etc. Further, Virtuoso was developed using an open-source platform (High Fidelity: https://www.highfidelity.com/), which we chose because it allowed a small design team to leverage a vibrant development community, thus providing opportunities to collaborate and engage with individuals working to solve similar problems in their own projects. Using open-source software also made possible deep modification of the platform through custom scripting and community-provided plugins, thereby allowing us to extend the functionality of the system to meet the needs of our project.

The Virtuoso MUVE was informed by the literature, which provided guidance in the form of design principles. Specifically, we sought principles that could help promote the transfer of adaptive skills from the virtual environment to the real world for individuals with autism. In order to promote transfer, the literature suggests providing tasks that are capable of embodying users within the learning process (Mennecke, Triplett, Hassall, & Conde, 2010). Research suggests that when users are able to feel immersed and a sense of embodiment of self and others in a collaborative virtual space, then the tasks they engage with can take on a deeper meaning (Mennecke, Triplett, Hassall, & Conde, 2010). In addition, VR technologies may help to convey meanings and symbolic measures of real-world activities (Wang, Laffey, Xing, Ma, & Stichter, 2016; Wallace, Parsons, & Bailey, 2017) that can be enhanced through behavioral and visual realism of in-world assets (Parsons, 2016). A key premise that supports these claims is the assumption of veridicality, which maintains that if VR experiences are sufficiently authentic and realistic, people will interact and respond similarly in the digital world as they do in the real world, which arguably can promote transfer from the former to the latter (Parsons, 2016; Yee, Bailenson, Urbanek, Chang, & Merget, 2007).

Methodology


The current research is presented here as an instrumental case study (Stake, 1994), that is, the examination of a specific case (person, group, department, or organization) with the purpose of providing insight into an issue, to redraw generalizations, or to build theory (Stake, 1995). The issue we focus on here relates to the assumption of veridicality and the premise that the realism provided by VR can potentially promote transfer of skills from a VR environment to the real world for individuals with autism. At issue is a substantial research-to-practice gap concerning how designers can bring this principle to life. It is therefore the purpose of this case study to provide insight into the complexities of the design and development of a MUVE for individuals with autism, called Virtuoso, that took place at a large midwestern university, with a particular focus on the interdisciplinary nature of the research and development. This case study specifically examines the intricacies of imbuing realism and authenticity into a virtual public transportation simulation with the purpose of promoting transfer of skills for individuals with autism.

Stake’s (1995) characterization of case study calls for flexibility in design so that researchers can make changes even after they proceed from design to research (Yazan, 2015). Due to this flexibility, the exact point when data collection begins can be hard to determine, often leading to sampling being re-strategized (Stake, 1995; Yazan, 2015). Instrumental case study is an appropriate method for the current inquiry, as it allows researchers to “use issues as conceptual structure in order to force attention to complexity and contextuality, [and] because issues draw us toward observing, even teasing out, the problems of the case, the conflictual outpourings, the complex backgrounds of human concern” (Stake, 1995, pp. 16-17). The complexities of developing Virtuoso required our small development team to develop its own interdisciplinary set of skills and expertise to solve this design problem, as well as to seek interdisciplinary expertise externally. This instrumental case study attempts to chronicle our design processes by exploring our particular research and development trajectory in depth. Although it is generally accepted that developing VR interventions in the field of autism research is fraught with challenges, very little has been written to chronicle those challenges and how they were approached. It is therefore our hope that, with this case study, others in the field will be able to extrapolate lessons learned and apply our solutions to their own design and development challenges.

Key Participants

Since 2015, an interdisciplinary team of researchers has designed and developed Virtuoso in collaboration with Impact Innovation. This case study focuses specifically on those members of the team who were part of the instructional design studio engaged in the process of developing the intervention. All three individuals were male: (1) one instructional design and technology professor, (2) one instructional design and technology PhD student, and (3) one engineering undergraduate student. All team members were directly involved in the design and development of Virtuoso over a 2.5-year period. Our team had expertise in information technology, software development, development of virtual worlds, and educational game design.

Data Collection

Case study data are characterized as the lived experiences of the first author, an instructional design and technology PhD student, relative to his deep involvement in the design and development of the intervention. These experiences were assembled after a fully functional prototype of the intervention had been created, using autoethnographic methods (Bruner, 1993; Freeman, 2004) in which project artifacts were consulted to assist with recall (Goodall, 2001). Artifacts included screenshots, 3D digital assets, video recordings, procedural analysis documents, team meeting minutes, team communications (e.g., emails, instant message transcripts), rapid prototype versions, and project documentation. Project artifacts were organized based on the overarching theme of imbuing authenticity and realism into the MUVE and how this was achieved relative to each asset or virtual element that was reviewed. This process led to four categories of artifacts being identified: (1) developing a realistic terrain, (2) developing realistic campus buildings, (3) developing realistic interiors, and (4) developing realistic scenarios. After organizing project artifacts into these four categories, their development was then mapped onto individual timelines to provide a linear representation of the research and development process.

Next, autoethnographic methods were used to make sense of moments or “epiphanies” perceived to have greatly impacted the phenomena of focus in this study (Denzin, 1989). When epiphanies of the development process were identified by studying project artifacts, the lead author would engage in a reflective writing process, detailing recalled challenges of realizing design principles and chronicling how disparate and non-intuitive approaches were required to create the initial prototype. These reflective pieces were then consulted for sense- and meaning-making relative to the transformative moments of our design process, ultimately leading to a holistic representation of MUVE design for individuals with ASD. This gestalt is characterized here as transitive, ill-defined, and complex.

The Case Narrative

The assumption of veridicality suggested that we focus on creating realistic representations of the terrain, buildings, and building interiors that participants would experience when training to use the university shuttle. This also suggested that authenticity was needed in terms of the actual task of using the shuttle. At the outset of the project, we had questions concerning how we could bring together all of these pieces to create a learning environment that could provide sufficient realism to fully instantiate a diverse array of design guidelines.

A realistic terrain was central to the design because the terrain acts as an underlying, unifying element upon which the virtual world is based. This extends not only to the space in which objects and avatars act and interact, but also to the space in which meaningful activities occur. The assumption of veridicality suggested a need for a terrain that would simulate the experience of an activity performed on campus. While a simplistic, flat terrain certainly would have been simpler by orders of magnitude, it would substantially diminish sensations of reality and potentially distort a user's sense of presence or “being there.” Campus buildings would be off-scale and incorrectly positioned. For example, the campus is hilly and many buildings have multiple entrances on different floors. Entering from one side might bring you to the first floor, while entering from another side might bring you to the fourth floor. Creating a terrain that accurately represents the contours of the earth is a challenge. This challenge was outside our sphere of expertise, requiring that we seek out interdisciplinary knowledge for a solution. How does one model real-world terrain in a virtual world? Could geographic information system (GIS) data be employed? What about other topographical elements, such as building placement and roads? How are all of these elements combined? And, ultimately, how could these combined elements be imported into a virtual reality simulation?

Atop the terrain are the campus buildings themselves. We reasoned that buildings that were accurate representations of their real-world counterparts would help support the assumption of veridicality. However, buildings have complex architecture, and creating accurate 3D models of buildings is incredibly tedious and requires substantial expertise. Hence, we reasoned that architectural accuracy was perhaps less important than photographic realism, and that building models did not necessarily need to be entirely true to their real-world geometry. Instead, to create a sense of photographic realism, we turned to a variety of interdisciplinary approaches to capture high-resolution photographs that could be used as textures for our models.

For the interior designs of our MUVE, however, we took the opposite approach to the one we used for the building exteriors. Instead of focusing on photographic realism, we employed the actual architectural plans of the buildings. Based on the interior elevations and floor plans, we created highly realistic models of the Impact Innovation office space. To increase the fidelity of this model, we also went to the office space and took photographs so that we could align the textures to the real world. We also consulted our pictures to add additional 3D models to the space, such as interior furnishings, rugs, computers, and more. This provided users with a highly authentic-looking space in which to interact with an online guide and others. In the real world, Impact participants have their own cubicles and are exposed to a variety of activities in this office space. In the virtual world, this space provided users a connection to the real world and to their everyday lives.

Central to connecting the above elements to one another was the public transportation training that participants needed to engage in. We developed the virtual version of this task by carefully analyzing exactly how participants performed this task in the real world. An interdisciplinary process was conducted with the help of special education specialists and an applied behavioral analyst to develop a scripted set of routines and behaviors that exactly mirrored what Impact Innovations participants would actually undertake in the real world. A variety of computer programming and game design skills also were required to bring these pieces together.


In the next section, we will provide a case narrative that outlines the process behind bringing together these four elements to create Virtuoso. Table 1, as seen below, outlines the interdisciplinary processes involved in realizing this design principle across the four design problems that we identified while creating this MUVE. The purpose of this table is to outline these complexities and to highlight that an interdisciplinary approach is a requirement.

Table 1. Interdisciplinary Processes Used to Create Virtuoso.

Creating the Terrain

Affordance: Promotes a sense of presence by emulating the lived experience of actually walking on campus.
Challenge: Generating a geographically realistic contoured mesh with accurate representations of topography.
Interdisciplinary Requirements: Geography; 3D Design; Game Design; Graphical Design.
Specific Examples: Image Editing; 3D Editing; GIScience.

Creating Campus Models

Affordance: Promotes a sense of presence by emulating the in situ environment for training. Helps to promote the assumption of veridicality.
Challenge: Modeling a campus model that was photo-realistic.
Interdisciplinary Requirements: Piloting; Photography; Geography; 3D Design.
Specific Examples: Photogrammetry; Taking photographs; Drone Flying; 3D Editing; GIS Extraction; Outlining and Extruding models.

Creating the Interior Model

Affordance: Provides connection to the real world through a space that participants would be familiar with.
Challenge: Creating a realistic representation of an interior design.
Interdisciplinary Requirements: Architecture; 3D Design; Photography.
Specific Examples: Converting blueprints into a 3D Model; Taking photographs of the space.

Creating the Virtual Task

Affordance: Provides a realistic and accurate way of practicing skills to transfer from the virtual platform to the real world.
Challenge: Creating a one-to-one representation of behaviors that both human avatars and in-world assets, such as a bus, would naturally undergo in the real world.
Interdisciplinary Requirements: Game Design; Applied Behavioral Analysis; Programming; 3D Design; Videography; Instructional Design; Computational Thinking.
Specific Examples: System of Least Prompts; PHP Programming; JavaScript Programming; 3D Editing; Game Design Techniques; Shot ride-along footage; Created a task analysis.
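As a concrete illustration of the "System of Least Prompts" entry above, the escalation logic behind such scripted prompting can be sketched as follows. This is an illustrative sketch only, not Virtuoso's actual implementation (which was written as in-world JavaScript); the prompt levels and the simulated learner check are hypothetical.

```python
# Illustrative sketch of a system-of-least-prompts controller for one task
# step (e.g., boarding the virtual shuttle). The prompt levels below are
# hypothetical examples, not Virtuoso's actual prompt hierarchy.

PROMPT_HIERARCHY = [
    "independent",        # no prompt: wait for the learner to act on their own
    "verbal_hint",        # e.g., a guide avatar says "The shuttle is here."
    "visual_highlight",   # e.g., highlight the shuttle door in-world
    "full_model",         # e.g., a guide avatar demonstrates the step
]

def prompts_needed(step_completed):
    """Escalate from least to most intrusive prompt until the step succeeds.

    `step_completed` reports whether the learner finished the step at a
    given prompt level (here simulated by a callable). Returns the list of
    prompt levels that were delivered, which an analyst could later review.
    """
    delivered = []
    for level in PROMPT_HIERARCHY:
        delivered.append(level)
        if step_completed(level):
            return delivered      # success: stop escalating
    return delivered              # hierarchy exhausted; flag for follow-up

# Simulated learner who succeeds once the shuttle door is highlighted.
learner = lambda level: level == "visual_highlight"
print(prompts_needed(learner))
# prints: ['independent', 'verbal_hint', 'visual_highlight']
```

Recording which prompt level each participant required per step is what allows the least-intrusive-prompt principle from applied behavior analysis to be operationalized as data inside the virtual task.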

Developing a Realistic Terrain

Figure 1, as seen below, shows an early representation of how we set out to create our model. The terrain, including slopes and scaling, was obtained by extracting geographic information system (GIS) data from Google Earth. After extracting this GIS data, we were able to convert the data into a 3D mesh. In this early version of Virtuoso, we placed a screenshot of a campus map from Google Maps onto that geometric mesh to create a preliminary terrain of the university. This snapshot allowed us to have a blueprint of campus with outlines of buildings and structures for future placement.


Fig. 1. Screenshot of the campus terrain with a Google Map image placed onto a 3D mesh.
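The elevation-data-to-mesh step can be sketched in code. The following is a minimal, illustrative example only, not our actual pipeline (which relied on exported Google Earth data and Blender): it assumes elevation samples are already available as a regular grid and emits a Wavefront OBJ terrain mesh that a 3D tool could import.

```python
# Illustrative sketch: turn a regular grid of elevation samples (e.g.,
# exported from a GIS source) into a Wavefront OBJ terrain mesh.
# Assumes the grid is already extracted; cell_size is meters per sample.

def heightmap_to_obj(heights, cell_size=1.0):
    """Return OBJ text for a rows x cols elevation grid."""
    rows, cols = len(heights), len(heights[0])
    lines = []
    # One vertex per grid sample: x/z from grid position, y from elevation.
    for r in range(rows):
        for c in range(cols):
            lines.append(f"v {c * cell_size} {heights[r][c]} {r * cell_size}")
    # Two triangles per grid cell; OBJ vertex indices are 1-based.
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c + 1
            lines.append(f"f {i} {i + 1} {i + cols}")
            lines.append(f"f {i + 1} {i + cols + 1} {i + cols}")
    return "\n".join(lines)

# A tiny 3x3 grid with a raised center, standing in for campus topography.
obj_text = heightmap_to_obj([[0, 0, 0], [0, 5, 0], [0, 0, 0]], cell_size=10.0)
verts = sum(l.startswith("v ") for l in obj_text.splitlines())
faces = sum(l.startswith("f ") for l in obj_text.splitlines())
print(verts, "vertices,", faces, "faces")  # prints: 9 vertices, 8 faces
```

Texturing then amounts to draping an aerial image across this grid via UV mapping, analogous to placing the Google Map screenshot over the extracted mesh described above.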

Next, we needed to texturize the terrain to include photorealistic representations of the roads, pathways, landscape, and topography. To accomplish this task we again had to iterate through many versions that required varying skills and expertise. In an initial trial, we opened the GIS mesh in the Blender (http://blender.org) 3D modeling program so that we could manipulate the millions of polygons that made up the terrain. With the outline of the map placed over the terrain, we were able to garner rough approximations of where concrete pathways existed and where patches of grass were. Over the course of several days, one studio member individually selected the edges of each polygon that would represent concrete pathways. He then applied a texture that closely resembled the concrete used on campus. This process was repeated for the grassy portions of the terrain. Photographs of the roads and topography were taken by a lab member to allow for the selection of a texture that would best match the real world. While this process did allow Virtuoso to iterate closer to a photorealistic representation of campus, the resulting product had a number of issues. Manipulating massive GIS meshes created problems with collisions, which resulted in “holes.” This led to assets and avatars falling through the base of the world. Repeating textures were also too uniform and did not account for variations in coloring, texturing, and consistency across the terrain.

In the next iteration we again returned to the GIS terrain that we had extracted from Google Earth. Knowing now that editing the mesh could result in unforeseen errors, we ultimately decided to leave the geometry alone and to place the topography on top of the 3D object. In this prototype we took a large, high-resolution image of the university’s campus from Google Earth, which allowed us to capture the region’s topography in an uninterrupted view. After capturing this high-resolution image we placed it over the terrain, as we had done with the Google Map image before. The result was a terrain that included the university’s natural topography while maintaining an outline of the buildings to help with proper positioning. Figure 3 shows a version of this model in our High Fidelity prototype before any of the buildings had been created or placed.

Fig. 3. Screenshot of the campus terrain with GIS data image stills mapped onto a 3D mesh.

Developing Realistic Buildings

After we had finalized a version of our terrain that was sufficiently realistic, we set out to create 3D models of every building on campus. This process took approximately 1.5 years to complete and went through many iterations until we reached a level of photographic fidelity that we deemed sufficient. One design studio member had some background creating 3D assets, having worked on a virtual reality project in the past. This background allowed him to create assets made up of basic 3D models and shapes. To create the campus buildings, we decided to create approximations of the structures and then map images onto them to provide a degree of photorealism. To test this process we set out to re-create a fountain that existed on campus. Two studio members went onto campus and took photos of the fountain from varying angles and perspectives. These images, as seen in Figure 4, served as the textures of the model that was made. However, scaling this process proved ineffective, as lighting issues greatly impacted our ability to obtain high-quality photos while on campus. In addition, an inability to reach high angles prevented us from fully capturing stills of most of the buildings on campus.


Fig. 4. A collection of photos used to create textures for a 3D model in Virtuoso.

To address these issues we began testing a process called photogrammetry. Photogrammetry is the process of obtaining information about objects and the environment by recording, measuring, and analyzing photographic images and patterns. Knowing that a model could be made by capturing images of a structure, we decided to test the procedure at a large scale. The idea was to equip a DJI Phantom drone with a video camera that would continuously shoot footage as we flew it around a building from various heights, angles, and perspectives. We would then use software to convert the video into frame-by-frame screenshots that could be analyzed to create a usable and highly realistic 3D model. Unfortunately, this idea proved to be flawed, as we discovered a variety of technical and safety regulatory barriers. Capturing the amount of footage required to convert the photographic stills into a 3D mesh would have demanded prolonged time in the air from many vantage points, which our team simply did not have the piloting ability to perform.
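The planned video-to-stills step can be sketched as a simple sampling calculation. The function name and sampling rate below are hypothetical, meant only to show how continuous footage would be thinned into overlapping stills for a photogrammetry solver:

```python
def photogrammetry_frames(total_frames, fps, shots_per_second=2):
    """Select evenly spaced frame indices from flight footage so the
    photogrammetry solver receives overlapping but non-redundant views."""
    # Keep every Nth frame, where N is derived from the footage frame rate
    # and the desired number of stills per second of flight.
    stride = max(1, round(fps / shots_per_second))
    return list(range(0, total_frames, stride))

# 10 seconds of 30 fps footage sampled at 2 stills/second -> 20 frames kept.
indices = photogrammetry_frames(total_frames=300, fps=30)
```

In practice a tool such as a video utility would export the selected frames as image files; this sketch only shows the selection logic.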

Though we were unable to use drones to capture the images needed to create our 3D mesh, we were still interested in using photogrammetry to obtain the models that we needed. Instead of capturing images by flying around the campus buildings, we decided to use the same process within Google Earth. This approach allowed us to “fly” through different perspectives and capture images of the buildings on campus. We were able to use the built-in tools of Google Earth to zoom in and rotate the models around the x, y, and z axes. We then took individual screenshots within the tool and imported them into photogrammetry software (Autodesk ReCap). Figure 5 demonstrates what this model looked like after we exported it from Autodesk ReMake and brought it into High Fidelity.


Fig. 5. A photogrammetrically-created model, imported into High Fidelity.

While this process certainly provided us with a model that was more realistic and lively than anything we had accomplished to this point, it was not without its flaws. As seen in Figure 5, there were multiple distortions that resulted from the process. Textures and objects appeared to melt together, leading to a grotesque, even nightmarish appearance. Knowing that individuals with ASD oftentimes have problems with sensory processing, we were concerned that this model could result in adverse outcomes. This finding did, however, give us insight into how we could combine multiple processes from our design iterations to create a model of our campus in which buildings could maintain their shapes while still having textures with a resolution capable of providing photographic fidelity. The next step of our process involved a return to the campus terrain that we had created prior. We took the high-resolution image of the campus map and placed it onto a flat plane, which allowed us to view the outlines of the campus buildings. We then opened the map in Adobe Illustrator, a vector image editor, and used the Pen tool to trace the outlines of the different campus buildings. After doing so, we exported the image as an SVG and imported it into Blender. Blender has a tool called a Curve Bevel, which allowed us to convert the 2D tracings into an extruded 3D rendering of the buildings. Now that we had a model that was scaled and positioned onto the terrain, we had to find a way to provide and map textures onto the faces of each campus building. Knowing that we could not take photos of the individual buildings, we once again returned to the photogrammetry process we had tested in Google Earth for creating a mesh. However, instead of using it to create the entire mesh with textures, we used it only to extract textures that we could place onto the planes of the building faces.
Figure 6 shows the final campus model that we used, the result of a multi-year endeavor that required expertise across many disciplines.
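The extrusion step is conceptually simple. As a hedged illustration (the actual work used Blender's Curve Bevel on an imported SVG, not custom code), a traced 2D footprint can be turned into a 3D prism like this, with the function name and footprint values chosen purely for demonstration:

```python
def extrude_footprint(outline, height):
    """Extrude a traced 2D building outline into a 3D prism, similar in
    spirit to Blender's curve-bevel extrusion of SVG tracings.

    outline: list of (x, y) points in order around the footprint.
    Returns the prism's vertices and the quad faces forming the walls."""
    n = len(outline)
    bottom = [(x, y, 0.0) for x, y in outline]        # footprint at ground level
    top = [(x, y, float(height)) for x, y in outline]  # copy lifted to roof height
    vertices = bottom + top
    # One wall quad per outline edge: a bottom edge plus its matching top edge.
    walls = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return vertices, walls

# A square footprint extruded to height 10 gives 8 vertices and 4 wall quads.
verts, walls = extrude_footprint([(0, 0), (20, 0), (20, 20), (0, 20)], 10)
```

Textures extracted via photogrammetry would then be mapped onto each wall quad, which is the step the narrative above describes.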


Fig. 6. Campus model created through a combination of GIS data, image editing, 3D modeling, and photogrammetry.

Developing Realistic Interiors

After we had finished our campus exterior, the next step of our intervention design required that we re-create an office from the Impact Innovations suite located in the Teacher’s College on campus. Taking lessons from our photogrammetry experiments, we decided to skip those procedures and make the model through other means. Given the results we had achieved by extruding the map outlines, we thought we could do something similar with architectural blueprints. We reached out to the building planner for the School of Education and were able to obtain the blueprints from the recent reconstruction of this building and office suite. We then imported them into architecture modeling software called Archilogic, which allowed us to create a fully interactive 3D model of this space. We were then able to go down into the actual office and take photos that could be used to create textures for the walls, flooring, and decor. Archilogic also allowed us to bring in furniture, electronics, and other furnishings to populate our office space and make it better simulate its real-world counterpart.


Fig. 7. Office model created through architecture software.

Developing Realistic Scenarios

Early in the development of Virtuoso, we completed a detailed task analysis to determine the structure and nature of the activities that should take place within the MUVE. This task analysis required assistance from an Impact Innovations staff member who was familiar with the day-to-day scheduling of the day program. This staff member was able to provide us with a step-by-step series of behaviors necessary for an Impact Innovations associate to complete the task of getting onto a shuttle bus. We then went with the staff member and a program associate and recorded the two of them completing these tasks in order to expand upon and improve the task analysis that we had created. Next, we worked with an applied behavior analyst to modify the tasks to include opportunities for interaction and behavioral prompts. Figure 8 illustrates a portion of our task analysis that includes an ABA technique called the System of Least Prompts (Doyle, Wolery, Ault, & Gast, 1988).


Fig. 8. A portion of a Virtuoso task analysis with ABA strategies incorporated.
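The System of Least Prompts escalates assistance only when the learner does not respond at the current level. A minimal sketch of that escalation logic follows; the level names reflect a commonly cited SLP hierarchy, and the function is purely illustrative rather than code from Virtuoso:

```python
# Prompt hierarchy ordered from least to most intrusive, following the
# System of Least Prompts (Doyle, Wolery, Ault, & Gast, 1988).
PROMPT_LEVELS = ["independent", "verbal", "gesture", "model", "full_physical"]

def next_prompt(current_level, responded_correctly):
    """Escalate to the next prompt level only when the learner does not
    respond correctly within the wait interval; otherwise stay at the
    current level so independence is preserved."""
    if responded_correctly:
        return current_level
    index = PROMPT_LEVELS.index(current_level)
    # Escalate one step, capped at the most intrusive prompt level.
    return PROMPT_LEVELS[min(index + 1, len(PROMPT_LEVELS) - 1)]

# No correct response at the independent level escalates to a verbal prompt.
level = next_prompt("independent", responded_correctly=False)
```

In the intervention this logic was carried out by human facilitators following the task analysis, not by software; the sketch only makes the decision rule explicit.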

Embodying realistic activity within the learning process required that we simulate real-world tasks in the virtual environment with a high degree of fidelity. Therefore, after we had mapped out the behaviors, activities, and structure of the intervention, we needed to develop a solution to bring them to life in our 3D environment. Part of this process included creating a shuttle bus that could arrive on a set schedule once participants had completed the required steps to get to the shuttle stop and checked their app to see where the bus was along the route.

High Fidelity was a beta VR toolkit, which meant that its documentation and API were constantly evolving and oftentimes absent. This limitation prevented us from hooking into the system to create a solution that could readily be implemented. In addition, due to the relative immaturity of the software, there were no existing plugins that could provide us guidance on even animating an object in our project. We instead looked toward the gaming industry for a way to animate a bus along a route. In the popular video game Fallout 3, a rideable train was actually just a non-playable character wearing a hat that looked like a giant train. To borrow this idea for our project, we created a model of a university shuttle bus that we then equipped onto an avatar as a hat attachment. High Fidelity also had a recorder tool that allowed us to record avatar movements that could be activated and played on a loop when a player loaded into the world. This process allowed us to create a functional shuttle route, but it was not without limitations.

These recorded loops did not maintain any of their asset’s physics or collisions, which meant that playable avatars and other in-world items could pass through them. With road safety and socially appropriate behaviors related to catching the bus being a pivotal point of our intervention, it was not suitable for a player to be able to walk into and through a bus that was driving along a route. We had to come up with another solution to emulate a bus along its route. Solving this problem would ultimately require the development of multiple scripts that could handle different components of the shuttle’s movement and timing. We were able to create a script that was based on a 3D elevator that moved from point A to point B. This script assigned the shuttle model to an object that was then translated across 3D coordinates. We were then able to assign a set speed for this animation with a variable in the script. We took the shuttle model and hid it out of sight from where participants would be at the time the script would initialize. To control the timing of the shuttle, we placed an invisible cube in the environment that acted as a trigger to initiate the shuttle’s movement. When a player walked through the cube, it would load JavaScript code that controlled the activation of the shuttle’s movement script. During our intervention’s usage, an invisible player would walk into the cube during a sequence that took place in the task analysis. By walking into it at this exact moment, we were able to emulate a bus that would arrive based upon the shuttle’s location within a tracking application, and could therefore replicate the real-world process in this virtual activity.
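The core of the shuttle's movement and trigger logic can be sketched in a few lines. This is a simplified illustration written in Python (the actual implementation used High Fidelity's scripting tools), with all names hypothetical:

```python
def lerp_position(start, end, speed, elapsed):
    """Move an object from start toward end at a fixed speed (units/sec),
    clamping at the destination -- a simplified version of translating
    the shuttle model across 3D coordinates."""
    distance = sum((e - s) ** 2 for s, e in zip(start, end)) ** 0.5
    t = min(1.0, (speed * elapsed) / distance) if distance else 1.0
    return tuple(s + (e - s) * t for s, e in zip(start, end))

def inside_trigger(position, cube_min, cube_max):
    """Axis-aligned bounding-box test: does the (invisible) player's
    position fall inside the hidden trigger cube?"""
    return all(lo <= p <= hi for p, lo, hi in zip(position, cube_min, cube_max))

# Halfway along a 100-unit route after 10 seconds at 5 units/second.
pos = lerp_position((0, 0, 0), (100, 0, 0), speed=5, elapsed=10)
```

An update loop would call `lerp_position` each frame once `inside_trigger` fired, which mirrors the hidden-cube activation described above.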

Fig. 9. An invisible player in the world would activate the bus script based upon information from the shuttle tracking application that was embedded within the virtual environment.

Lessons Learned

As demonstrated through this case narrative, creating a MUVE for individuals with autism is, by its very nature, an interdisciplinary process that requires an interdisciplinary team. Problem solving along each dimension required interdisciplinary solutions, and it is likely that others working in this field will encounter similar issues and will benefit from developing a sensitivity to when seeking interdisciplinary perspectives could be effective. Our analysis of project artifacts has shown that designing a MUVE to embody users within the learning process required a broad application of skills and expertise spanning many disciplines.

Creating a MUVE that is capable of promoting skills development and transfer is far more complex than simply ensuring a degree of photorealism. The experience itself also has to be realistic. Users need to feel embodied within the environment and the task itself. Bringing all of these pieces together required years of development by a small design studio to create one version of a single learning scenario. This revelation in itself illustrates the considerable research-to-practice gap that still exists in the field. While virtual reality is touted as a valuable technological solution to promote the development and transfer of skills for people with autism (Parsons, 2016), that promise simply has not panned out. Creating a MUVE that instantiates the design considerations from the literature is a vastly difficult and multifaceted undertaking.

The team behind the development of Virtuoso took the best available evidence from the field in an attempt to realize the promise of VR for this population, but in the process of doing so realized that it takes a massive amount of work to provide an environment that is photographically realistic and authentic in design. The task of creating a single scenario of limited variability required a massive amount of labor, thinking, and problem solving to ultimately create an environment with graphics of Google Earth-level quality. Any team developing a MUVE has to decide whether to use commercial off-the-shelf software or to build its own. Because nothing on the market allows designers to create a photographically realistic environment out of the box, other developers in the field will likely face many of the same challenges that we did in creating Virtuoso.


Appendix C: Email with Dr. Hokanson from the 2019 Summer Symposium


Appendix D: Response Letter to Committee Member Addressing Suggested Revisions

04/07/2020

Dear Dissertation Committee:

With this letter, please find my revised dissertation manuscript entitled Investigating the Experience of Virtual Reality Head-Mounted Displays for Adults with Autism Spectrum Disorder, focusing on issues of cybersickness and the nature of the learner experience of adults with autism as they use head-mounted displays. I have reviewed the feedback from my committee members and appreciate the insight and suggested revisions. I have revised the manuscript to align with many of the recommendations and believe these revisions address the identified issues. Below is a table in which I describe how each of the suggested revisions was addressed.

Thank you, Noah Glaser 513-630-5654 [email protected]

Line-by-Line Summary of Reviewer Comments and Author Responses

Revisions from Chris

Comment: On page 12 it was suggested to change the wording of the location to be more concise.
Response: “All research was performed in the university’s School of Education in a large midwestern university” was changed to “All research was performed in the university’s School of Education.”

Comment: On page 13 it was pointed out that naming the day program could potentially identify the participants.
Response: References to Impact Innovations were removed throughout the manuscript. Any reference was changed to “day program” or something similar based on the context of the surrounding text.

Comment: On page 17 it was advised to explain how the person obtaining informed consent was associated with the project.
Response: Provided a brief explanation of who this person is in the context of the manuscript.

Comment: On page 21 it was suggested to change the transition words from firstly, secondly, thirdly to something else.
Response: Transition words were changed to “To begin,” “next,” and “finally.”

Comment: On page 20 it was suggested that data sources were being used to do more than just answering research questions: “to provide evidence to respond to your research question, perhaps. To me, there is so much more involved than just simply answering them.”
Response: Text changed to “A multi-method approach that utilized both quantitative and qualitative data sources (see Table 3), were used to gather evidence and to respond to our research questions.”

Comment: On page 25 an APA problem with the table was noted.
Response: A bottom border was added to Table 5.

Comment: On page 26 it was suggested that full MSAQ calculations be removed as it was too much information.
Response: Subscore calculations for both the original and the modified versions of the MSAQ were removed from the manuscript. Calculations were broadly described.

Comment: On page 43 it was suggested that I add a Limitations section.
Response: On pages 43 and 44 I added a page of limitations including new citations, information, and implications of those limitations. This information was added under the discussion header.

Revisions from Carla

Comment: Asked about anonymizing Impact Innovations.
Response: Removed any references to the program’s name and replaced them with “day program” or some variation depending on the context.

Revisions from Miriam

Comment: On page 3 I am asked if this manuscript will be co-authored.
Response: There will be two co-authors. An author contribution statement is included for each of the manuscripts in my final dissertation document.

Comment: On page 17, “So I’m wondering why we have these detailed scores for the ASD participants and not for the neurotypical. Do these scores matter in a study of cybersickness? Feels imbalanced (but then I’m no expert on the design of these sorts of studies).”
Response: This information is included because the participants with ASD are the core focus of the study. The neurotypical participants are included as a baseline of comparison for the MSAQ. ASD is also quite diverse, and it is important to include this information to provide context as to what was developed for whom and under what conditions. A lot of the research in the field is conducted on individuals with ASD requiring lower levels of support. This study reports on individuals requiring more substantial supports, and these data provide that context.

Comment: On page 17 it is asked if parental consent was also obtained for participants who assented.
Response: I have added additional details stating that we obtained consent from guardians and assent from participants.

Comment: On page 18 it was asked, “Is this different than the consenting you referred to in the previous section?”
Response: It is not.

Comment: On page 18 it was asked who the online guide was.
Response: Added clarifying information: “(a research participant that joined users within the environment and facilitated the learning process).”

Comment: On page 19, “Still not sure I’ve followed all of this.”
Response: I am wondering if the revisions I have made have addressed this concern. Is there something more specific that is unclear?

Comment: On page 23 it was noted that the last sentence ended abruptly.
Response: I have added the missing words to finish the sentence.

Comment: On page 24 it was asked what an agreement observer is and if this is a term we made up.
Response: I removed that term, rephrased it, and added more information about the role of the secondary coder.

Comment: On page 38 it is asked how Evan was able to resolve controller issues with minimal assistance if he also struggled earlier.
Response: I have changed the language to 1) remove the suggestion that it required minimal assistance, and 2) clarify that a research assistant provided intervening help.
