PREVENT–A pipeline approach to prototype realistic virtual environments via the reuse of expert domain knowledge

Mingze Xi

B.Eng

A Thesis presented for the degree of Doctor of Philosophy

School of Electrical Engineering and Computing
The University of Newcastle, Australia
March 2017

Statement of Originality

This thesis contains no material which has been accepted for the award of any other degree or diploma in any university or other tertiary institution and, to the best of my knowledge and belief, contains no material previously published or written by another person, except where due reference has been made in the text. I give consent to the final version of my thesis being made available worldwide when deposited in the University’s Digital Repository, subject to the provisions of the Copyright Act 1968. **Unless an Embargo has been approved for a determined period.

Signed: Mingze Xi
Date:

Dedicated to

My dear wife, Qi.

It is your sacrifices, patience and encouragement that have made me who I am today.

My dear parents.

For your endless support in all things, great and small.

Acknowledgements

The completion of this thesis has been a long and challenging journey. I wish to express my sincere gratitude to the many people who have contributed to making this thesis and this PhD project possible.

First and foremost, I wish to express my deepest appreciation to my supervisors, Dr Shamus Smith and Associate Professor Yuqing Lin, for helping develop this project and for providing insight, support and guidance as it progressed. To my primary supervisor, Shamus, the deepest thank you for your unswerving patience and willingness to share your expertise and experience on study design, research practice, project management, teaching skills, and the English language throughout my PhD, from the UK to Australia. I would also like to extend my gratitude to my co-supervisor, Yuqing, for your inspiring guidance on algorithm design and constructive feedback on my academic career development.

Secondly, I want to say thank you to my friends and colleagues: Dan Bell, Xin Gu, Geoff Martin, Hao Qin, Youwei Qin, Jiahe Shen, Mujiangshan Wang and many more. There were so many unforgettable moments working, studying and playing with you. I wish to express my special thanks to Dan Bell, for being the best listener even when I was grumbling and frustrated. Another special thank you to Yanping Lu, for your generous help that allowed me to settle down smoothly in Newcastle.

Finally, I would like to acknowledge the financial support I received during this PhD. I was awarded the University of Newcastle International Postgraduate Research Scholarship (UNIPRS) and the University of Newcastle Postgraduate Research Scholarship Central (UNRSC). Additionally, I would like to thank the faculty and school for supporting my conference travel.

Contents

Statement of Originality
Dedication
Acknowledgements
Abstract
Publications

1 Introduction
  1.1 Introduction
  1.2 Statement of the Research Problem
  1.3 Significance of Research
  1.4 Structure of the Thesis

2 Literature Review
  2.1 Human Behaviour in Fires
  2.2 Virtual Fire Evacuation Training Systems
  2.3 Reusing Gaming Technology
    2.3.1 Unity
    2.3.2 CryENGINE Series
    2.3.3 Unreal Engine Series
    2.3.4 Source Engine
    2.3.5 Summary of Game Engines
  2.4 Evacuation Simulators


    2.4.1 FDS+Evac
    2.4.2 BuildingEXODUS
    2.4.3 Simulex
    2.4.4 VISSIM
    2.4.5 STEPS
    2.4.6 Pathfinder
    2.4.7 Summary of Evacuation Simulators
  2.5 3D Modelling Tools
    2.5.1 AutoCAD
    2.5.2 SketchUp
    2.5.3 Blender
    2.5.4 Revit
    2.5.5 Summary of 3D Modelling Tools
  2.6 Summary of Chapter 2

3 PREVENT - A Pipeline to Reuse Domain Knowledge in Static Environments
  3.1 Overview of the Pipeline Approach
    3.1.1 Modelling Virtual Environments
    3.1.2 Generating Human Evacuation Behaviour as Paths
    3.1.3 Presenting Virtual Target Environment in a Game Engine
    3.1.4 Summary of Pipeline
  3.2 An Example Implementation of the Pipeline
    3.2.1 Exporting the 3D Building Model from SketchUp
    3.2.2 Evacuation Path Generation in FDS+Evac
    3.2.3 Fire Evacuation Simulation in Unity
    3.2.4 Summary of Pipeline Implementation
  3.3 Validating the Consistency of Evacuation Time
  3.4 Validating the Scalability
    3.4.1 Scalability Test I: Simple Building
    3.4.2 Scalability Test II: Large Building
  3.5 Validating the Accuracy under Fire Conditions

  3.6 Summary of Chapter 3

4 Dynamic Path Switching in Virtual Environments
  4.1 Introduction
  4.2 Path-Switching Framework
    4.2.1 Fire Warden as Interaction Initiator
    4.2.2 Situation 1: Check Current Path
    4.2.3 Situation 2: Search Its Own Path Library
    4.2.4 Situation 3: Search the Global Path Library
    4.2.5 Situation 4: Generate New Path
    4.2.6 Summary of Path Switching
  4.3 Case Study: Validating the Path-Switching Framework
    4.3.1 Test I: Situation 1
    4.3.2 Test II: Situation 2
    4.3.3 Test III: Situation 3
    4.3.4 Test IV: Situation 4
    4.3.5 Discussion
  4.4 Summary of Chapter 4

5 Evaluating Virtual Human Behaviour - a New Turing Test
  5.1 The Conventional Turing Test
  5.2 The Turing Test in Virtual Environments
  5.3 Design of VHBTT - A Virtual Human Behaviour Turing Test
    5.3.1 Phase I: Data Collection
    5.3.2 Phase II: Data Review
  5.4 Data Report
  5.5 Summary for Chapter 5

6 Evaluating the Realism of Reused Evacuation Behaviours
  6.1 Data Collection: Generating Gameplay Data
    6.1.1 System Implementation
    6.1.2 Apparatus

    6.1.3 Participants
    6.1.4 Procedure
    6.1.5 Data Collected
  6.2 Data Review: Reviewing the Fire Evacuation Replays
    6.2.1 System Implementation
    6.2.2 Participants
    6.2.3 Procedure
    6.2.4 Data Collected
  6.3 Results
    6.3.1 Metrics
    6.3.2 Overview of the Judgement Results
    6.3.3 Judgements Results of Three H/BGC
    6.3.4 Judgements Results of Three Fire Evacuation Scenarios
    6.3.5 Evacuation Time With Bots
  6.4 Summary of Chapter 6

7 Conclusions and Future Work
  7.1 Summary
  7.2 Overview of Contributions
  7.3 Future Work

Bibliography

Appendix

A Table List
  A.1 An Example Fragment of GPL
  A.2 Types of Participants in User Study
  A.3 Fire Warden Evacuation Instructions
  A.4 The Configurations of Evacuation Sessions
  A.5 The Judgement Groups

B User Study Materials
  B.1 Information Sheet for the Group of Players
  B.2 Information Sheet for the Group of Judges
  B.3 Consent Form
  B.4 Pre-trial Questionnaire
  B.5 Data Review Form
  B.6 System Usability Scale (SUS)
  B.7 Recruitment Poster

C Floor Plans
  C.1 Floor Plan A
  C.2 Floor Plan C

List of Figures

1.1 Technology components needed for creating a realistic fire evacuation drill system (No. 7)

2.1 The user interface of the Unity Editor [237]
2.2 The user interface of the CryENGINE Sandbox [42]
2.3 The user interface of the Unreal Editor [51]
2.4 The user interface of the Unreal Blueprint editor
2.5 The user interface of the Valve Hammer Editor [244]
2.6 Projects' classification on four design elements across game engines used in the literature
2.7 Example fire simulation presented using Smokeview [61]
2.8 The user interface of PyroSim [225]
2.9 An example buildingEXODUS application [238]
2.10 A screenshot of Simulex [95]
2.11 The user interface of Pathfinder [228]

3.1 The main procedures of PREVENT
3.2 A summary of the PREVENT approach
3.3 The floor plan of an example building for demonstrating the concept of PREVENT
3.4 The completed 3D building model in SketchUp
3.5 The imported building model in PyroSim
3.6 The structure of path libraries incorporating one global path library (GPL) and multiple position-based path libraries (PPL) across n start locations


3.7 The imported building model in Unity. Red cylinders are simulated virtual human evacuees
3.8 The procedure of translating raw simulation data into behaviour scripts
3.9 Four benchmark paths
3.10 Comparison of virtual human evacuee movement between Unity and FDS+Evac
3.11 The increase in overall evacuation time against the total number of evacuees in Unity and FDS+Evac
3.12 The floor plan of the Engineering S (ES) building
3.13 Evacuation time comparison between Unity and FDS+Evac
3.14 An example scenario with an explicit fire included in the path generating simulation
3.15 Actual evacuation paths in Unity and the optimal evacuation paths on the floor plan
3.16 Evacuation paths in Unity with EXIT TOP or EXIT LEFT being blocked by fire

4.1 The floor plan of the ES building marked with the location of the fire warden
4.2 The graph representation of the ES building
4.3 Graphs of PPL and GPL of the ES building
4.4 The GPL after deleting node P7
4.5 Floor plan of the ES building marked with test configurations
4.6 Screenshot of an example evacuation in situation 1
4.7 Screenshot of an example evacuation in situation 2
4.8 Screenshot of an example evacuation in situation 3

5.1 A two-phase Turing Test for virtual environments
5.2 An example set of data that should be collected in Phase I of the VHBTT

6.1 The floor plan of the ES building
6.2 The completed 3D model of the Engineering S building in AutoCAD
6.3 A screenshot of the Engineering S building as rendered in Unity

6.4 The 3D models of virtual human evacuees and fire wardens
6.5 A screenshot of the view from the perspective of an evacuee avatar in the virtual environment
6.6 A photo of the view from the perspective of an evacuee in the real world
6.7 Three fire evacuation scenarios used in the user study
6.8 The locations of virtual surveillance cameras in the ES building
6.9 The system diagram for the completed fire evacuation drill system
6.10 The overall procedure of the data collection phase
6.11 The web-based judging system - home page
6.12 The web-based judging system - interactive floor plan
6.13 The web-based judging system - view of a CCTV camera
6.14 The overall procedure of the data review phase
6.15 A snippet of the evaluation form
6.16 Distributions of judgement and confidence scores
6.17 The overall judgements results
6.18 The judgements results over the three H/BGC
6.19 The distribution of judgements scores of fire scenario S1
6.20 Mean total evacuation time comparisons

List of Tables

2.1 A list of advantages and disadvantages of Unity
2.2 A list of advantages and disadvantages of the CryENGINE series
2.3 A list of advantages and disadvantages of the Unreal Engine
2.4 A list of advantages and disadvantages of the Source Engine
2.5 An overview of four game engines
2.6 A list of references shown in Figure 2.6
2.7 An index of research and training applications powered by game engines, indexed by topic area
2.8 Validation status of the evacuation simulators reviewed in the section

3.1 A fragment of the global path library (GPL) for the example buildings
3.2 Configuration for the evacuation time test
3.3 The summary of four benchmark evacuation times (manually calculated)
3.4 Comparison of evacuation time
3.5 Configuration of the scalability test
3.6 Room capacities of the ES building

4.1 A list of keywords used in evacuation commands
4.2 Average evacuation success rates

6.1 The specifications of the PCs used in the user study
6.2 The three human/bot group combinations (H/BGC) in this user study
6.3 A list of evacuation replays to be reviewed (n = 18)
6.4 An overview of the hypotheses and results


6.5 Four categories of identification results
6.6 Test statistics for the Wald-Wolfowitz runs tests
6.7 Test statistics for the Kruskal-Wallis tests
6.8 Test statistics for the Mann-Whitney U Test
6.9 The report for mean comparisons on judgement scores across two identification results
6.10 The identification results of three H/BGC
6.11 Test statistics for the Kruskal-Wallis tests
6.12 The Kruskal-Wallis test statistics
6.13 The test statistics for the Mann-Whitney U test on the judgement scores
6.14 The statistic report on means comparison between HH and BH in fire evacuation scenario S1

A.1 A fragment of the global path library (GPL) for Engineering S Building
A.2 Three types of participants involved in the Turing Test user study
A.3 Evacuation instructions used in the fire evacuation drill system
A.4 The configurations of evacuation sessions
A.5 A list of evacuation sessions in judgement group J1, J2 and J3

Listings

3.1 A fragment of a virtual human behaviour script in C#. The frequencies of the paths match those in Table 3.1

Abstract

Building realistic virtual environment-based training systems has been a long-term research topic. One issue is the difficulty of embedding expert domain knowledge into the virtual environments. For example, many virtual fire evacuation systems have limited fire science knowledge, resulting in unrealistic evacuation behaviours of computer-controlled virtual humans and inaccurate fire modelling.

This research project has developed and validated a pipeline approach, PREVENT (a Pipeline for pRototyping Evacuation training Virtual ENvironmenT), which reuses domain simulators and game engines to create realistic virtual environments. PREVENT has been shown to be consistent, accurate, and scalable in a series of case studies.

In addition to the reuse of domain knowledge, an interaction framework was designed and demonstrated to effectively enable dynamic interaction in virtual environments created via PREVENT. To evaluate the behavioural realism of the virtual humans, a new experimental protocol was designed based on the conventional Turing Test. A user study demonstrated that the virtual humans generated via PREVENT behaved as realistically as real human participants in an example fire evacuation drill environment.

The following publications have been written within the timescale of this PhD research:

Xi, M., & Smith, S. P. (2014, March). Simulating cooperative fire evacuation training in a virtual environment using gaming technology. In 2014 IEEE Virtual Reality (VR) (pp. 139-140). http://doi.org/10.1109/VR.2014.6802090

Xi, M., & Smith, S. P. (2014, December). Reusing simulated evacuation behaviour in a game engine. In Proceedings of the 2014 Conference on Interactive Entertainment (IE'2014) (pp. 1-8). http://doi.org/10.1145/2677758.2677779

Xi, M., & Smith, S. P. (2015, January). Exploring the reuse of fire evacuation behaviour in virtual environments. In Proceedings of the 11th Australasian Conference on Interactive Entertainment (IE'2015) (Vol. 27, pp. 35-44). Australian Computer Society.

Xi, M., & Smith, S. P. (2016). Supporting path switching for non-player characters in a virtual environment. In 2016 IEEE Virtual Reality (VR) (pp. 315-316). http://doi.org/10.1109/VR.2016.7504780

Xi, M. (2016). A pipeline for fast prototyping training environments. Paper presented at the 2016 IEEE Virtual Reality (VR) Doctoral Consortium. IEEE.

Chapter 1

Introduction

Virtual environments are increasingly used for simulation and training, particularly for dangerous scenarios such as fires, floods, and earthquakes. One typical application is the use of virtual environments for practising safety skills, which provides engaging virtual drill experiences that real-life drills cannot offer. However, how to build realistic interactive fire evacuation systems in virtual environments has been a long-term research question [153, 212, 220]. This thesis focuses on the development and validation of a structured pipeline approach called PREVENT1 that rapidly creates realistic virtual emergency evacuation drill systems by reusing expert domain knowledge. Although the PREVENT approach is capable of building general virtual environments for multiple purposes, this thesis uses virtual fire drill systems as examples to demonstrate the approach. In order to evaluate the realism of the developed virtual environments, this work has also explored a) interactions between human trainees and computer-controlled virtual humans, and b) the measurement of the humanness of virtual avatars via a modified Turing Test. The rest of this chapter introduces the background of this research topic, defines the major research questions and highlights the significance of the research.

1 PREVENT is short for a Pipeline for pRototyping Evacuation training Virtual ENvironmenT. More details are discussed in Chapter 3.

1.1 Introduction

According to Fire and Rescue New South Wales (NSW) Australia, there are over 6,000 building fires in NSW every year [59]. In the USA, there was an average of 377,108 residential fires annually, resulting in 2,687 deaths, 13,081 injuries and 7.54 billion US dollars in damage each year [239]. Deaths also occur due to a lack of fire safety education, such as insufficient practice with fire evacuation plans [149]. As a result, safety plans for fire evacuation are commonly needed.

A fire safety plan includes at least three sub-plans, i.e. staff training, maintenance, and fire action [32]. The staff fire training plan should cover descriptions of staff duties, fire wardens, use of equipment and training in general fire science knowledge. The fire action plan should consist of assisting and reporting to the fire brigade, leading occupants to assembly places, and making roll calls at special venues, such as schools. A fire maintenance plan covers the fire safety system itself, including the maintenance of active systems (e.g. smoke detectors and extinguishers) and evacuation floor plans (e.g. building layout and escape routes).

To enhance people's familiarity with fire safety plans, fire drills in real-world situations are commonly used for fire evacuation exercises. However, such drills have drawbacks; for example, drills are not performed under realistic life-threatening or hazardous conditions, due to practical and ethical reasons [3,64,65]. Also, organising a fire drill can be expensive, needing collaboration between multiple departments (e.g. media, security, and fire departments) [1,52,180,184]. Conducting drills can be very disruptive and affect the normal function of special buildings [210] such as hospitals, banks, prisons and airports. These limitations have made it difficult to provide repeated fire evacuation training experiences in realistic fire emergency environments [184].

An alternative solution is to use virtual reality (VR) based fire evacuation training. Virtual reality is a technology that can provide an immersive and engaging training experience in 3D space [153, 212]; for example, simulating realistic environments for training fire evacuation skills [153, 212]. A number of tools have been developed to prototype virtual environments (VE), such as 3DVIA Virtools2 for general interactive real-time VR design [203] and modern game engines [278].

Game engines, such as Unity3 [234], Unreal Engine [48] and CryENGINE [36], are popular tools for creating realistic virtual environments. A game engine is typically a suite of graphics tools, including highly efficient rendering algorithms, used to render photorealistic virtual scenes. For example, Smith and Trenholme [212] investigated virtual fire drills using the Source Engine [247]. These game engines come with easy-to-use editors that can be used to build first-person, game-like 3D training environments [230].

Besides the graphical effects, another essential component of VR-based training systems is the participants. For example, the participants in a fire emergency can be categorised by their locations and evacuation roles. Rather than fire fighters, who are the main participants outside the building during an emergency, this thesis focuses on the evacuees, who represent all people inside a building during a fire scenario, i.e. residents, visitors and fire wardens.
One main difference between visitors and residents is knowledge of the building: residents are usually more familiar with their buildings and evacuation routes, and normally have evacuation plans that visitors may not have. In terms of evacuation roles, the fire warden is a special type of resident who has the responsibility of giving instructions to support emergency evacuations. One compound research question is how to build realistic virtual fire drill systems with accurate representations of these different roles (e.g. visitors, residents, fire wardens, fire fighters, etc.) and how to identify the core components of such systems. The way these components are interrelated is described in Figure 1.1. The three main components needed are the virtual environment, human behaviour modelling and the integration of fire science.

2 http://www.3dvia.com/fast3d/ [Last Access: 20/01/2017]
3 Unity is also referred to as Unity3D. In this thesis, "Unity" and "Unity3D" are used interchangeably to refer to the same game engine, developed by Unity Technologies and reviewed in Chapter 2.
4 This figure, and the work in this thesis, focus on the technology aspects of developing virtual environments in this context.

[Figure 1.1 (diagram) shows three overlapping components, (1) Virtual Environment (Game Engine), (2) Human Behaviour Modelling and (3) Fire Science Modelling, together with their intersections: (4) Virtual Fire Drills, (5) Fire Evacuation Modelling, (6) Virtual Fire Drills, and (7) the complete system.]

Figure 1.1: Technology components needed for creating a realistic fire evacuation drill system (No. 7).

The first requirement is to create an interactive virtual environment, in this case with a modern game engine (No. 1 in Figure 1.1). Researchers have been using game engines to create virtual fire drill environments for different scenarios, such as campus buildings [212], nuclear institutions [153], and hospitals [209, 210]. Applications of this kind are either single-user walk-throughs or networked evacuation drills with fellow trainees. One concern with these applications is that two main components of a realistic interactive fire evacuation system are missing, i.e. accurate fire science knowledge (No. 3 in Figure 1.1) and computer-controlled virtual humans which simulate the behaviour of humans (No. 2 in Figure 1.1). Thus they are unable to provide a realistic fire evacuation experience with realistic human crowds.

The second key component required to develop a realistic interactive fire drill system is human behaviour modelling (No. 2 in Figure 1.1). Researchers have defined human behaviour based on case studies and on factors which may influence response performance in fire scenes [107, 211]. Compared to those of virtual humans, the behaviours of real human beings are complicated, stochastic, highly personalised, and can be affected by environmental parameters [124,211]. A human behaviour model is a simulation of the human decision-making procedure, where a human is treated as an individual unit that receives input and makes predictions via different approaches. Artificial intelligence (AI) methods, such as the BDI model5 and the Markov Decision Process6, have been frequently used to model human behaviour in emergency situations [166,171,196,232].

The third component is the modelling of accurate fire science knowledge (No. 3 in Figure 1.1). Fire environments are dynamically changing environments with accompanying drastic reactions caused by heat radiation and combustion. The modelling of fire science, including flame, smoke, toxic gases, heat, etc., requires expert domain knowledge. In this area, researchers have created computational fluid dynamics models, such as the Fire Dynamics Simulator, to simulate the evolution of fire, smoke and other related factors [147,148].

The three main components described above also have intersections, which are described further here. To create virtual environments featuring realistic virtual humans, researchers have attempted to integrate simulated realistic human behaviours in virtual environments (No. 4 in Figure 1.1) [185,259]. However, compared to general behaviour models (i.e. models categorised under No. 2), modelling human behaviour in virtual environments has the additional challenge of computation cost. For example, a virtual fire environment requires a huge amount of computational resources for rendering 3D content (e.g. 3D characters, flame, smoke, etc.), which means only limited resources can be allocated to calculating virtual human behaviours [199, 262].

5 Belief-Desire-Intention (BDI) [182] is a model for implementing intentional agents. The belief represents the agent's knowledge, the desire is the agent's goal, and the intention is a sequence of tasks to achieve the specified goal.
6 The Markov Decision Process is a mathematical framework for modelling decision making in situations where outcomes are partly random and partly under the control of a decision maker [91].

As a result, the human behaviour models used in virtual environments are simplified to consider only a few factors [171]. For example, the Legion system employs only four parameters (goal point, speed, distance from others, and reaction time) to represent the complex nature of individual behaviours [171]. As a trade-off for real-time interaction, the realism of the behaviours is sacrificed.

Although the application of advanced AI in game-like virtual environments (No. 4 in Figure 1.1) provides more engaging virtual experiences (e.g. supporting crowd evacuation and featuring an interactive environment), its use of advanced fire science (e.g. the movement and impact of toxic gases, smoke, heat, etc.) is limited. One reason is that fire evacuation modelling is highly interdisciplinary and requires advanced knowledge from multiple science areas, such as fire science, computer science, and psychology. This makes it hard for VR developers and researchers to develop fire-related virtual environments without those high levels of domain knowledge.

To integrate a human behaviour model (No. 2 in Figure 1.1) with advanced fire science knowledge (No. 3 in Figure 1.1), fire scientists and psychologists have developed fire evacuation simulators [119, 191] (No. 5 in Figure 1.1). Fire evacuation simulators, also referred to as evacuation models, simulate the evolution of fire and human evacuation activities numerically and are often used in a building's safety design process [191]. However, the critical problem with evacuation simulators is computation cost; for example, a 2,000-second simulation can take 124 hours to compute [270]. Also, real-time interaction, an essential part of an interactive virtual environment (No. 1 in Figure 1.1), is not supported by simulators. This makes evacuation simulators unsuitable for use in real-time, user-based interactive virtual fire drill systems.

The requirement for advanced domain knowledge also leads to difficulties when combining fire science knowledge with virtual fire environments (No. 6 in Figure 1.1). As far as the author is aware, only a few attempts to do this can be found in the literature. These examples focus either on using fire science knowledge to support fire particle effects [271] or on simulating the spread of smoke in virtual environments [272]. The lack of realistic human behaviour makes them unsuitable for interactive drill experiences.
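To make the simplification discussed above concrete, the following minimal C# sketch shows an evacuee agent driven by a reduced parameter set of the kind used by such systems (goal point, speed, separation distance and reaction time). It is written for illustration only; the class and field names are not taken from the Legion system, PREVENT, or any other tool discussed in this thesis.

```csharp
using UnityEngine;

// Illustrative only: an evacuee driven by four hand-set parameters and nothing
// else. Note what is absent: no perception of smoke, heat or blocked exits,
// no route choice, no social roles.
public class SimplifiedEvacuee : MonoBehaviour
{
    public Transform goalPoint;        // target exit
    public float speed = 1.4f;         // walking speed (m/s)
    public float separation = 0.5f;    // preferred distance from others (m)
    public float reactionTime = 2.0f;  // delay before starting to move (s)

    private float elapsed;

    void Update()
    {
        elapsed += Time.deltaTime;
        if (elapsed < reactionTime || goalPoint == null)
            return;                                    // still "reacting"

        // Candidate step straight towards the goal.
        Vector3 step = Vector3.MoveTowards(
            transform.position, goalPoint.position, speed * Time.deltaTime);

        // Respect the separation distance: wait this frame if another
        // evacuee is too close to the candidate position.
        foreach (var other in Physics.OverlapSphere(step, separation))
            if (other.CompareTag("Evacuee") && other.transform != transform)
                return;

        transform.position = step;
    }
}
```

The point of the sketch is what it leaves out: none of the fire-related reasoning that shapes real evacuation decisions is represented, which is exactly the realism that the reuse of domain simulators in this thesis aims to recover.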

Since the existing virtual fire evacuation drill applications (No. 4 in Figure 1.1) lack advanced fire science knowledge, and evacuation simulators (No. 5 in Figure 1.1) do not support real-time interaction in virtual environments, a new approach that integrates the advantages of both methods is needed. One possible solution is to reuse fire evacuation simulators (No. 5 in Figure 1.1) to support the modelling of computer-controlled avatars, while presenting the virtual environment by reusing game engines (No. 1 in Figure 1.1). In this way, fire science knowledge can be embedded into fire-related virtual environment applications (No. 7 in Figure 1.1). This is the general approach proposed in the work presented here.

In this thesis, expert evacuation models (No. 5 in Figure 1.1) are reused to generate realistic human evacuation behaviour for the computer-controlled avatars in virtual evacuation drill environments built with game engine technology (No. 1 in Figure 1.1). A pipeline was developed to link 3D modelling tools, evacuation simulators and game engines together. In addition, an interaction framework is described to support the interactions between different avatars. The capability of the pipeline to create realistic computer-controlled avatars featuring domain knowledge and dynamic interactions is validated in an example virtual fire drill environment for fire wardens.
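The fragment below sketches the core of that idea: evacuation paths pre-computed by a domain simulator are loaded as plain data and replayed by a computer-controlled avatar inside the game engine, so the embedded domain knowledge comes from the simulator output rather than from hand-written agent logic. This is a minimal illustration only; the waypoint file format, class name and fields are assumptions made for this example and are not the exact PREVENT implementation, which is detailed in Chapter 3.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: replay an evacuation path pre-computed by a domain
// simulator. The assumed export format is one "x z" waypoint pair per line.
public class SimulatedPathFollower : MonoBehaviour
{
    public TextAsset pathFile;   // simulator output attached in the Unity Editor
    public float speed = 1.4f;   // walking speed used during replay (m/s)

    private readonly List<Vector3> waypoints = new List<Vector3>();
    private int next;

    void Start()
    {
        // Parse the simulator output into a list of waypoints.
        foreach (var line in pathFile.text.Split('\n'))
        {
            var parts = line.Trim().Split(' ');
            if (parts.Length < 2) continue;
            waypoints.Add(new Vector3(float.Parse(parts[0]), 0f,
                                      float.Parse(parts[1])));
        }
    }

    void Update()
    {
        if (next >= waypoints.Count) return;   // path finished: avatar has exited

        transform.position = Vector3.MoveTowards(
            transform.position, waypoints[next], speed * Time.deltaTime);

        if (Vector3.Distance(transform.position, waypoints[next]) < 0.05f)
            next++;                            // advance to the next waypoint
    }
}
```

Because the avatar only replays simulator output, any fire-related reasoning that shaped the simulated path (smoke, heat, blocked exits) is inherited without re-implementing it in the game engine; Chapter 4 then adds dynamic path switching for the cases where a replayed path becomes invalid.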

1.2 Statement of the Research Problem

Designing realistic virtual training environments is an extensive research area, including both the design of the environments themselves and the evaluation of training performance. This thesis focuses on the design, development and evaluation of realistic virtual training environments, rather than on measuring the training outcomes (e.g. the effectiveness of the training) of a particular virtual environment. The fundamental requirements for building a virtual fire drill system are:

• Modelled 3D content (e.g. buildings and avatar models),

• Behaviours of computer-controlled virtual humans (e.g. selecting an escape route during an emergency evacuation),

• Domain knowledge of the target scenario (e.g. knowledge of fire dynamics in a fire-related virtual environment),

• Design of interactions (e.g. following another avatar's suggestions, such as instructions from a fire warden).

Firstly, the modelling of 3D content, such as buildings, virtual avatars and water, is usually time-consuming and requires fast rendering techniques, advanced graphics hardware, and a dedicated set of art design skills. Although advancing rendering algorithms or developing faster hardware is beyond the scope of this thesis, to maintain the graphical realism of the modelled environments, existing 3D modelling tools (e.g. AutoCAD) and modern game engines (e.g. Unity) will be used for creating 3D content and presenting the virtual environment.

Secondly, believable computer-controlled virtual humans are one of the core components of a virtual evacuation drill system. For example, it is impractical for crowds of participants to attend a virtual emergency drill concurrently. Instead, computer-controlled virtual humans have frequently been used to simulate evacuation crowds [124, 265]. It is known that the decision-making process of humans is complex, stochastic and sometimes affected by surrounding individuals and the environment [124, 172, 211]. For example, while escaping from a burning building, an evacuee can estimate the location of the fire by observing the smoke and feeling the heat, and can consider which route and exit are more familiar. This also indicates that a person can use his or her domain knowledge (e.g. fire science knowledge) to make decisions.

Thirdly, another core component is domain knowledge. For example, a virtual fire evacuation system requires fire science knowledge to simulate the spread of fire, smoke, toxic gases, etc. In a different domain, an earthquake-related evacuation drill system needs knowledge of earthquake science to simulate how buildings will be damaged (e.g. collapsed or unstable). Embedding domain knowledge into such virtual environments is necessary; otherwise, the system cannot determine key aspects of the simulation, e.g. how spreading smoke changes people's evacuation paths and how collapsed fire exits affect escape behaviour.

The final component is the design of interaction. As a core advantage of virtual reality [21], interaction is of great significance in creating an engaging experience where people can learn and practise their skills in virtual environments. For example, possible interactions in fire drill systems include talking and listening to other avatars (e.g. discussing evacuation paths with other evacuees and receiving evacuation instructions from fire wardens) and picking up objects (e.g. a fire extinguisher or hydrant hose). This thesis focuses on the last three fundamental components (i.e. the behaviours of computer-controlled virtual humans, the integration of expert domain knowledge, and the design of interactions) and breaks the main research question of creating a realistic virtual environment down into three sub-questions:

Questions:

Q1. How is expert domain knowledge reused to support realistic behaviours of computer-controlled virtual humans in virtual evacuation training environments?

Q2. How is dynamic interaction enabled in virtual environments without impacting the embedded expert domain knowledge?

Q3. How is the realism of virtual human behaviours evaluated in a virtual environment?

To answer Q1, a pipeline that can automatically extract expert domain knowledge from domain simulators and reuse that behavioural knowledge in virtual humans was proposed and developed. This improves the realism of a virtual environment by adding realistic computer-controlled virtual humans that are aware of specific domain knowledge. For example, the computer-controlled virtual humans in a fire drill environment are able to consider fire-related factors (e.g. the density of smoke, temperature and other evacuees) when selecting evacuation paths.

Q2 focuses on the user interaction needed to enable dynamic interactive training experiences. The minimum set of interactions needed to make an evacuation training system usable was defined, and a general interaction framework was designed to support the implementation of these interactions.

Q3 concerns the behaviour of the computer-controlled avatars generated by the pipeline (i.e. the answer to Q1) and provides a general test of the effectiveness of the approaches developed for Q1 and Q2. A new experimental protocol based on the conventional Turing Test7 (TT) was developed to evaluate the behavioural realism of virtual humans in virtual environments.

7 The conventional Turing Test is a test introduced by Alan Turing in his paper, "Computing Machinery and Intelligence", to determine whether the intelligent behaviour of a machine is equivalent to, or indistinguishable from, that of a human [231]. More discussion of the Turing Test can be found in Chapter 5.

1.3 Significance of Research

This research is significant for three main reasons.

Firstly, the pipeline proposed in this research provides a new approach to reusing domain-specific knowledge (in this thesis, the case study is fire science) to support realistic human behaviours. Modelling the fire evacuation behaviours of humans (e.g. with a Multi-Agent System or a Markov Decision Process) can be a time-consuming process. More importantly, research on fire evacuation is highly interdisciplinary, covering fire science, computer science, education, social science and psychology. As noted in Section 1.1, one problem is that researchers who are experts in agent behaviour modelling are unlikely to also be fire scientists and VR development experts. This is also the reason why existing fire evacuation training systems lack either realistic human behaviour [99,153,209,210,212], realistic virtual environments [148,184,212,272] or interactions [153,209,210]. The pipeline proposed in this thesis allows VR developers to focus on designing realistic human-computer interactions and realistic virtual environments. Reusing fire science knowledge through this pipeline can supply missing domain knowledge and improve the realism of fire evacuation systems by adding realistic computer-controlled avatars.

Secondly, this research proposes a new, unified test protocol for assessing the realism of characters in real-time interactive virtual environments, such as the computer-controlled evacuees described in Chapter 3 and Chapter 4. Previously, researchers have attempted to use the Turing Test to review the realism of characters such as conversational robots [144, 231] and combat non-player characters (NPC) in computer games [88, 89, 123]. However, these attempts are either only suitable for a particular application or for a particular type of character. Also, the judging process is usually mixed with game play, which makes it difficult for one examiner to assess all characters in one attempt. The new test protocol is the first that is specifically designed for use in virtual environments to review the behavioural realism of avatars via a modification of the conventional Turing Test.

Finally, the thesis presents a case study related to virtual fire evacuation drill systems for fire wardens. As noted in Section 1.1, participants who may be involved in a fire emergency can be categorised as being inside or outside of the building. For participants outside the building, i.e. fire fighters, researchers have developed multiple training systems [25, 213, 220]. For participants inside the building, a considerable number of projects have contributed virtual fire drill systems for visitors and residents, including both single-individual walk-throughs and networked group evacuations. For example, Smith and Trenholme's virtual fire drill system allowed a single evacuee to wander around a building to find a way out [212]. Researchers have also investigated how groups of human evacuees behave in a collaborative virtual fire evacuation system [153,203]. However, what has been missing are systems specifically designed for fire wardens, who have important leadership roles in an evacuation [172]. This thesis demonstrates a possible virtual drill system for fire wardens. The aim was not to develop a final educational tool (i.e. a complete training system), but to demonstrate an integration of technologies from which a fire warden training system could be built. This case study demonstrates how complex interactive environments can be developed by reusing domain knowledge and gaming technologies. It also serves as an ideal test-bed for validating the computer-controlled avatars created with the new pipeline, as it not only allows observation of how avatars select evacuation paths, but also requires interactions between the computer-controlled avatars and human trainees (e.g. sending evacuation commands to residents).

In summary, the innovative contributions of this thesis include, but are not limited to:

• An innovative pipeline approach that can successfully reuse expert domain knowledge to improve the behavioural realism of virtual humans in virtual environments.

• An interaction framework that defines and supports dynamic behaviour in emergency evacuation drill virtual environments.

• Case studies that validate the accuracy, scalability and flexibility of the pipeline.

• A new test protocol for reviewing the realism of virtual humans in virtual environments via a modified Turing Test.

• A case study that implements the new Turing Test protocol to review the behavioural realism of computer-controlled avatars in a fire drill system for fire wardens.

1.4 Structure of the Thesis

The remainder of this thesis addresses the three research questions. Firstly, Chapter 2 reviews the related work required for creating a realistic virtual fire evacuation drill environment for fire wardens, including game engines, evacuation simulators and 3D modelling tools.

Chapter 3 presents a structured pipeline for rapidly prototyping training virtual environments (without dynamic path switching) that can reuse expert domain knowledge to support the behaviour of the computer-controlled avatars (Q1). Chapter 4 introduces an interaction framework that enables dynamic path switching for the virtual humans in scenarios with interactive events (Q2).

In order to validate the realism of the virtual humans, Chapter 5 proposes a new test protocol, via a modified Turing Test, for reviewing the behavioural "humanness" of virtual humans in virtual worlds. An implementation of this new test is presented in Chapter 6, which explores the realism of the evacuation behaviours in a test fire drill system for fire wardens (Q3). Finally, the findings and future work of this research are summarised and discussed in Chapter 7.

Chapter 2

Literature Review

Beginning with a brief overview of human behaviour in fires, this chapter reviews related work on creating interactive fire evacuation training environments with game engines. Technical reviews of modern game engines, evacuation simulators and 3D modelling software are then presented, and the findings are summarised at the end of the chapter.

2.1 Human Behaviour in Fires

Human behaviour in fires has been an active research topic, especially in the disciplines of psychology and fire science. This section briefly reviews the main factors that affect the human response to fire emergencies.

In general, human behaviour in a fire emergency can be divided into two segments: the pre-evacuation period and the evacuation period. The pre-evacuation period usually involves perceiving the risk through cues, such as seeing smoke, hearing explosions, feeling the building sway and smelling burning materials [71,178]. Perceiving the risk quickly and previous experience of emergencies can significantly contribute to a rapid evacuation [66,71,146].

Galea et al. also found that, on average, an evacuee completed multiple activities before evacuating [66]. Common activities include seeking information about the event, collecting belongings, and providing verbal instructions to evacuate [146,177]. Evacuees may also perform non-evacuation activities, such as making a phone call, changing footwear, shutting down computers, and seeking permission to leave [66, 71]. As noted by Galea et al. [66], the number of tasks completed prior to evacuation correlates significantly with the delay to evacuation. Evidence also suggests that drill experience has a positive impact on the delay to evacuation.

During the evacuation period, there are several critical factors that may affect an evacuee's decision-making process. For example, social force factors may lead to situations where members of the same social group (e.g. a family) prefer to evacuate together [113, 114]. Knowledge of the building layout also affects the selection of an evacuation path [71]. This has also been demonstrated by the Department for Communities and Local Government of the UK, which studied human behaviour in actual fire accidents and suggested that familiarity with buildings (e.g. familiarity with doors and exits) and knowledge of the fire can influence escape behaviours [211]. For example, residents may evacuate through fire exits, while visitors may choose the main entrance [211]. In some extreme situations, evacuees may freeze and not take any action, leading to fatalities [126].

In addition, evacuees of different types also behave differently during a fire emergency [211]. For example, children prefer to follow their parents, and disabled people usually need additional help to leave buildings [71, 209, 210]. Gershon et al. [71] pointed out that people completed evacuation faster if they received clear evacuation instructions from authority figures. Others [35, 58, 134, 172] also agreed that leaders (e.g. fire wardens), who can provide clear evacuation instructions and lead the evacuation, have a positive impact on evacuation.

Kobes et al. [107] reviewed human behaviours concerning building fire safety. They summarised several factors that can influence how humans respond to fires, including survival strategies, fire perception, human factors (e.g. individual, group, and situational features), and building factors (e.g. engineered features). Similar conclusions were drawn by Hu et al. [92] from studying fire evacuation cases in China. Proulx [178] also summarised three characteristics that impact human behaviour during fire events, i.e. occupant characteristics, building characteristics and fire characteristics.

This section has outlined how the ways humans react to fires differ due to multiple factors. This increases the difficulty of modelling realistic human behaviour and justifies the need to conduct fire evacuation training, especially for authority figures such as fire wardens.

2.2 Virtual Fire Evacuation Training Systems

Although fire drills have been widely used to train people in fire egress skills, their disadvantages, such as high cost, limited repeatability, and lack of inherent danger, make it impractical for people to practise fire escape skills repeatedly in realistic emergencies (see Chapter 1). This section reviews previous research on building virtual environment-based fire evacuation training systems using game engines (see Section 1.1).

Compared to real fire drills, using virtual environments can significantly reduce the training cost and provide an engaging training experience [99,184,265]. Previous work has shown that virtual fire drills can support training and the observation of human evacuation behaviours in virtual environments [25,31,67,153,184–186,204,209,210,212,220,272]. Researchers have developed virtual environment-based fire evacuation systems for various scenarios, for example, hospitals [210], schools [133,212], nuclear power plants [153], aircraft [203], and train terminals [184].

Smith and Trenholme investigated rapid prototyping of a virtual environment for fire drills using gaming technology [212]. They reviewed the game engines that were appropriate for creating virtual environments for scientific use [230] and prototyped several virtual fire escape scenarios. Their virtual fire drill systems allowed trainees to walk through the building; when a trainee reached a specific location, the system would trigger the fire alarm to alert the trainee to evacuate from the building. The fire scenarios were created with the Source Engine [247] and loaded as a game level in the video game Half-Life 2 [243]. Smith and Trenholme's research demonstrated that a modern game engine could be used to build virtual evacuation training systems. The study's evaluation reported that previous gaming experience may influence the evacuation results. Self-reports from participants also suggested that the fires in the virtual environment were too unrealistic.

Chittaro and Ranon [31] developed serious games for training people in fire safety skills in virtually built scenarios. The NeoAxis 3D Engine [159] was used to mock up the training environment. One advantage of this application is that it can visualise users' navigation patterns by integrating an analytical tool called VU-Flow [30] into the NeoAxis Engine. This enabled the possibility of comparing navigation patterns between individuals and personalising the training experience. However, as pointed out by the authors, the simulation of smoke and fire phenomena was too unrealistic and the modelled human response to fire and smoke (e.g. losing speed when close to smoke) was too simplistic.

Ren et al. simulated emergency evacuations in fires using virtual reality technology [184]. Instead of directly modelling fires in virtual environments (e.g. using the particle system of a game engine), they started by simulating the evolution of fire and smoke in the Fire Dynamics Simulator (FDS) [148] and then reused the simulation data in virtual environments created with a modelling tool called the MultiGen Creator [156]. Their more recent research showed an approach for integrating FDS into virtual reality-based fire training systems and proposed a rational VR-based fire training simulator with smoke hazard assessment [272].

Silva et al. [209,210] built an evacuation training system for a hospital using the Unity game engine [234]. This system could reproduce a recurrent situation, such as a staff member steering a patient in a wheelchair through a hospital ward. The system involved disabled virtual humans, which highlighted the need for cooperative actions to move the disabled individual to a safe zone. Their user survey showed that such a training system could help users gain more awareness of how to behave during emergency situations. However, the system did not support networked training and had no virtual human crowds, which made it unable to provide a collaborative mass evacuation training experience.

Mól et al. [153] developed an evacuation system based on a nuclear power plant. They modelled the 3D buildings with CAD software and imported the 3D models into the Unreal Engine [48]. Their system supported several interactions among avatars by modifying several classes of the Unreal Engine. It allowed multiple participants to attend the same networked evacuation training session. The case study reported congestion around door/exit areas. However, this system did not contain any computer-controlled virtual humans.

Ribeiro et al. [186] presented a virtual fire evacuation training tool in Unity. In their user study, participants practised evacuating a campus building with randomly initiated fires. The results showed that users effectively acquired knowledge of evacuation procedures. Although a game engine (Unity) was used to present the virtual training scenario, the research reported issues with rapidly setting up simulation environments from floor plans. In addition, the system only supported training a single player in one session. Without a networked environment, it was hard to observe any social behaviours that may have appeared during a fire emergency. Also, detailed expert domain knowledge (fire science knowledge) seems to be missing from this work.
Previous virtual environment-based fire evacuation training systems have focused on training a single evacuee in fire escape skills, but did not consider that individual behaviour can be influenced by group behaviour and by instructions from authority roles [54,71]. Another important issue is that many did not integrate expert domain knowledge (e.g. fire science knowledge) into their projects. This led to less realistic fire environments, for example a less realistic spread of fire and smoke, and inaccurate human perception of fire risks such as the heat and poisonous gases released by burning materials [31,212].

2.3 Reusing Gaming Technology

Virtual reality technology has been widely used in scientific research projects. However, building high-quality virtual environments is a time-consuming and error-prone process. Developing a virtual environment can also be technically challenging for inexperienced researchers [212, 230]. Although several virtual environment development tools are available, many of them provide only a limited set of the components required to build a complete virtual environment. One alternative is to use a game engine to create virtual worlds. Game engines are integrated development platforms consisting of the critical modules required to build realistic virtual environments, for example graphics, sound, animation, networking and special effects. Modern game engines are being used in scientific research, ranging across computing, education, psychology, emergency planning and robotic simulation [2, 130, 143, 212]. However, as technology-based products, game engines and the environments they enable are constantly improving. This section reviews four popular game engines and provides a brief guide to game engine selection based on four system design elements.

2.3.1 Unity

Unity is a modern game engine developed by Unity Technologies and first released in 2005 [234]. The latest version is Unity 5.5, released in November 2016. Unity is an easy-to-learn game engine, has excellent support for cross-platform applications, and has been favoured by independent developers and small software studios. Popular games designed with the Unity engine include Monument Valley (iOS/Android), Rochard (PC and Mac), InCell VR (virtual reality platforms) and more, as listed in the Unity showcase gallery [235]. Unity has an official developer community called the Unity Developer Network [233].

The graphics and visual effects of the Unity engine feature real-time lighting and shadows, natural lights, deferred lighting rendering, high dynamic range (HDR) rendering8, surface shaders for multiple devices, a particle system, NVIDIA PhysX9, and DirectX 12. The Unity Editor is a fully integrated and extendible tool that supports lights, audio, avatars, physics, AI customisation and terrain creation (see Figure 2.1). Unity also has a dedicated wheel collider designed for car racing games. The engine has a powerful scripting system that allows developers to quickly develop scripts in JavaScript and C#.

8 High Dynamic Range (HDR) is an image processing technology used to provide a wider range of tonal values, particularly the difference between the lightest and darkest light [183].
9 NVIDIA PhysX is a physics engine that can simulate the mechanics of rigid bodies and soft bodies (building elements) or particles (smoke, fluids) [163].
10 DirectX is a set of low-level Application Programming Interfaces (APIs) that provides Windows programs with high-performance, hardware-accelerated multimedia support [151].

Unity has a built-in scripting environment based on Mono [221], an open-source implementation of Microsoft's .NET Framework. The use of Mono enables Unity developers to easily develop applications for multiple platforms in a single language. For example, applications written in C# with Unity can be exported to iOS without writing Objective-C or Swift code. This feature significantly speeds up project development and eases distribution of the applications.
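As a concrete illustration of this scripting model, the fragment below is a minimal Unity C# behaviour of the kind that is attached to objects in a scene. It is written for this review rather than taken from any of the systems discussed; only standard Unity API calls (MonoBehaviour, Transform, Time, Debug) are used, and the same source compiles unchanged for the different build targets mentioned above.

```csharp
using UnityEngine;

// Minimal example of a Unity C# script: a component attached to a scene
// object that slowly rotates it and occasionally logs its position.
public class SpinAndReport : MonoBehaviour
{
    public float degreesPerSecond = 45f;   // editable in the Unity Editor

    void Update()
    {
        // Called by the engine once per rendered frame.
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);

        if (Time.frameCount % 300 == 0)     // log roughly every few seconds
            Debug.Log(name + " is at " + transform.position);
    }
}
```

In the context of this thesis, the same mechanism is what allows behaviour scripts generated from simulator output (cf. Listing 3.1) to drive the virtual evacuees.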

Figure 2.1: The user interface of the Unity Editor [237].

Another main advantage of this engine is that the Unity Editor has distributions for both PC (Windows and Linux) and Mac, so developers using macOS can build their applications without switching operating systems. Compared to its competitors, such as the CryENGINE (Section 2.3.2) and Unreal (Section 2.3.3) series, Unity has lower system requirements, which makes it usable by independent developers without powerful development environments. Web applications powered by the engine support seamless communication between the player and the containing web page. Also, Unity provides native support for virtual/augmented reality platforms, including Oculus Rift11, Gear VR12, PlayStation VR13, HTC Vive (SteamVR)14, Google Daydream15 and Microsoft HoloLens16.

One disadvantage of the Unity engine is the current license model. The base version, Unity, is free for personal use with limited features, while some advanced features (e.g. navigation meshes) are only available in the Unity Pro version. More details about the licensing can be found on the Unity Store [236]. At the time this thesis was written, Unity did not provide any free licenses for education and research (the license fee was USD 799.00), and the cost of a license might be a concern for projects with limited budgets. Table 2.1 summarises several important advantages and disadvantages of the Unity engine.

Table 2.1: A list of advantages and disadvantages of Unity.

Advantages:

• Excellent operating system support, including Microsoft Windows, MacOS, and Linux.
• Friendly for independent developers and small software studios with lower hardware requirements.
• Supports a wide range of VR/AR platforms, including Oculus Rift, HTC Vive, and Microsoft HoloLens.
• Excellent cross-platform support.
• A large comprehensive asset store.

Disadvantages:

• No free education license is available (costs USD 799.00).
• Graphics and visual quality is at a lower level than that of the Unreal and CryENGINE series.
• Scripting system requires previous programming experience.
• Additional features are only available in the Unity Pro version.

11. https://www.oculus.com/rift/ [Last Access: 20/03/2017]
12. http://www.samsung.com/global/galaxy/gear-vr/ [Last Access: 20/03/2017]
13. https://www.playstation.com/en-au/explore/playstation-vr/ [Last Access: 20/03/2017]
14. https://www.vive.com/ [Last Access: 20/03/2017]
15. https://vr.google.com/daydream/ [Last Access: 20/03/2017]
16. https://www.microsoft.com/microsoft-hololens/ [Last Access: 20/03/2017]

The Unity engine has been a popular choice for creating virtual environments to facilitate education and scientific research. Educational games developed using the Unity engine include, but are not limited to, guiding intuitive learning [45], art history learning [62], teaching children to defend against noise-related hearing loss [46], educational aircraft design systems [136], human anatomy education [138], security education [75], fire evacuation training [186, 266, 267], and tower stability science concept teaching for young children [33]. Benefiting from its support for motion and gesture tracking devices, like Microsoft Kinect17, Leap Motion18, and PlayStation Move19, the Unity engine supports capture-based projects, like visual motion [152, 251], speech rehabilitation [208], vestibular dysfunction rehabilitation [257], phobias treatment [83] and soldier training systems [261]. Other virtual environments developed using Unity 3D include a critical incident management training system [192], a virtual safety assessment system [132], spatial memory training for brain-injury patients [109], cognitive rehabilitation [110], a healthcare facility simulator [120], mechanics simulation [93], virtual heritage [44, 150, 161], a virtual art gallery [145], virtual puppetry [80], narrative simulation [77] and architecture design [169, 170]. Researchers have also used the game-tracing component to create serious games for therapy of mental disorders [116] and fire evacuation training [186].

2.3.2 CryENGINE Series

The CryENGINE series, including CryENGINE I - V, is developed by Crytek [36]. The latest release is CryENGINE V. Games powered by the CryENGINE series usually feature stunning visual effects, such as the Crysis series (PC and console platforms) [38], The Climb (virtual reality platforms) [39] and the examples shown on CryENGINE's showcase site [37]. CryENGINE V is currently freely available to everyone, including its full features and full source code, which can be accessed on GitHub [40]. CryENGINE also offers two optional paid membership subscriptions for additional training and support resources. The new policies and subscription model have led to a fast growing developer community [41]. The CryENGINE series is well known for fast high-end rendering. CryENGINE V comes with Voxel-Based Global Illumination (SVOGI) [222] for creating life-like scenes with photo-realistic lighting. Also, with the support of DirectX 12 and Vulkan20, CryENGINE developers can achieve good performance with existing hardware. In addition, the CryENGINE series includes state-of-the-art AI systems and a unique individual character system. CryENGINE offers a complete suite of tools for developing AAA commercial-level games, including physics, audio, and animation effects in its development kit, the CryENGINE Sandbox (see Figure 2.2). The CryENGINE Sandbox is an all-in-one game solution set that supports a visual flow graph21, dedicated road/river tools, an easy-to-use vehicle creator, integrated vegetation and terrain generation, material editing, and a "Time of Day" system which can dynamically create or change content based on time cycles. The biggest advantages of the CryENGINE series are its "WYSIWYP" (What You See is What You Play) feedback and "Live Create" content creation features, and its high quality graphics and visuals. A developer is able to see AI navigation, physics effects and lighting effects in real time in the editor. Table 2.2 describes some advantages and disadvantages of the CryENGINE. The CryENGINE series has also been used in a wide range of training applications. For example, the Royal Australian Navy developed a realistic interactive virtual military training system for its Landing Helicopter Dock Ship (LHD) [60]. Besides military use, CryENGINE has gained success in a variety of research fields.

17. http://www.xbox.com/xbox-one/accessories/kinect/ [Last Access: 20/03/2017]
18. https://www.leapmotion.com [Last Access: 20/03/2017]
19. https://www.playstation.com/explore/accessories/playstation-move-motion-controller/ [Last Access: 20/03/2017]

20. Vulkan is a new generation graphics and compute API that provides high-efficiency, cross-platform access to modern GPUs used in a wide variety of devices from PCs and consoles to mobile phones and embedded platforms [104].
21. The Flowgraph is a visual scripting system embedded in the CryENGINE Sandbox Editor, which allows users to build gameplay systems and complex levels without needing to write scripts.

Figure 2.2: The user interface of the CryENGINE Sandbox [42].

For example, it has been used for driving CAVE-like displays [102], large-scale virtual world design [29], building design and construction education [162] and the reconstruction of interactive historic scenes [188]. Benefiting from its powerful rendering technique, this engine can also be used as a rendering tool. For example, it has been used for perspective distortion research [214, 215], and for analysing size and distance estimation under varied geometric rendering parameters [19].

2.3.3 Unreal Engine Series

The Unreal Engine series is a family of game engines developed by Epic Games [48]. The Unreal Engine has experienced several major updates up to its latest version, Unreal Engine 4 (UE4), which is one of the most powerful engines freely available to all users (see Figure 2.3). The latest Unreal Engine 4 features HDR rendering, DirectX 12, Vulkan, automatic LOD generation, Blueprint visual scripting and Hot Reload functions. The Unreal Blueprint is a powerful visual scripting system that lets users write complex game logic without programming experience (see Figure 2.4). The Hot Reload feature allows users to edit C++ code and see those changes immediately in-game.

Table 2.2: A list of advantages and disadvantages of the CryENGINE series.

Advantages:

• WYSIWYP feedback development environment.
• All-in-one sandbox enables the user to build virtual environments without using third-party middleware.
• Easy-to-use AI system.
• Very high quality graphic and visual effects.

Disadvantages:

• Limited cross-platform support, such as mobile operating systems.
• Smaller development community and previous paid license model lead to fewer examples of research projects compared to its competitors, e.g. Unity.
• High specification hardware requirements, i.e. fast graphics card and processor.

In contrast, in Unity, a user has to stop the gameplay and recompile the game project before seeing the changes. Also, Unreal Engine 4 supports porting applications to a wide range of platforms, including desktop operating systems (e.g. Microsoft Windows, MacOS), console platforms (e.g. Xbox, PlayStation, Nintendo Wii), mobile platforms (e.g. iOS and Android), and virtual reality platforms (e.g. Oculus). Besides Unreal Engine 4, UDK (the Unreal Development Kit) was another popular version, a free edition of Unreal Engine 3, the engine behind Gears of War by Epic Games [49]. In 2009, Epic Games released the UDK with partial source code for non-commercial licensed users. However, UDK is no longer supported by Epic Games as the new Unreal Engine 4 is more powerful and became free22 to everyone in 2015. Games powered by Unreal include Unreal Tournament (UE4), Gears of War (UDK), and Borderlands 2 (UE3).

22. Users are free to get all tools, all features, all platforms, all source code, complete projects, sample content, regular updates and bug fixes. For projects earning over $3000.00 (except from the Oculus Store), 5% of gross revenue needs to be paid to Epic as a royalty. More details can be found in the Unreal Engine End User License Agreement (EULA) [50].

Figure 2.3: The user interface of the Unreal Editor [51].

The Unreal Engine series shares similar working procedures for creating virtual environments. As projects based on lower versions of Unreal (e.g. UE3) can be ported to the latest UE4, the literature reviewed here considers both Unreal Engine 3 (including UDK) and Unreal Engine 4. The main advantage of the Unreal Engine series is that it has been used in scientific research for a long time; thus numerous successful applications can be referred to when conducting research in relevant areas. Another significant feature is its community support, where developers can find solutions to problems on the online forums23. The CAD-like interface of the Unreal Editor has a familiar look and feel for CAD and 3ds Max users. Table 2.3 outlines some important advantages and disadvantages of the Unreal Engine. Unreal has been a very popular choice for scientific research. It has contributed to research in computer-assisted teaching (e.g. film scripting education [11], coach training [68], and architecture information visualisation [127]), medical therapy research (e.g. visual impairment simulation [129] and visual recovery for amblyopia [10]), and virtual heritage reconstruction (e.g. a virtual Egyptian temple [97]).

23. https://forums.unrealengine.com/ [Last Access: 20/03/2017]

Figure 2.4: The user interface of the Unreal Blueprint editor.

Unreal has also supported special event planning and training systems in various scenarios (e.g. nuclear power plant evacuation [101, 117, 264], virtual fire drills [153], risk training simulations [187], threat identification [218], crime visualization [269] and military tactics training [122]). This engine is able to drive virtual landscape projectors like CaveUT [98] and CaveUDK [137]. Researchers have also used Unreal to develop robotics simulators, for example, USARSim (Unified System for Automation and Robot Simulation) [23]. The physics engine of Unreal can also be used in vehicle simulation, such as for wheelchairs [87].

2.3.4 Source Engine

Source [247] is a 3D video game engine which was developed by Valve Corporation [240] and debuted with Counter-Strike: Source [242] in 2004. Source does not have a concise version numbering scheme; instead, it is developed through constant incremental updates. The most recent major version, Source 2, was announced in 2015 [141, 249]. However, Source 2 was still unavailable to the public at the time this thesis was being written. Recent successful games developed using Source include Counter-Strike: Global Offensive, Half-Life 2, and Defense of the Ancients 2 (Dota 2) [246].

Table 2.3: A list of advantages and disadvantages of the Unreal Engine.

Advantages:

• A history of successful examples to reference and review.
• Large active developer communities and open source (UE4).
• Blueprint visual scripting system is friendly to people without programming experience.
• High quality graphic capabilities.

Disadvantages:

• High specification hardware requirements.
• Only a limited number of plugins.
• Steep learning curve for the development pipeline of the Unreal Engine.

The Source SDK 2013 Edition can be downloaded from Steam24 with the purchase of a Source-based game, for example, Half-Life 2 or Left 4 Dead. The source code of the Source SDK 2013 branch is available on GitHub [248]. This engine is comprehensive and features visual effects, a network-enabled physics system, a sophisticated AI system, advanced character meshes and a customised material system. Although the graphics and visual quality of the Source Engine are not as good as those of the Unreal Engine (see Section 2.3.3) and CryENGINE (see Section 2.3.2), it supports the most frequently used visual presentation technologies. Its level editor, the Valve Hammer Editor, is the Source map creation tool that supports a WYSIWYG (What You See is What You Get) design environment (see Figure 2.5). Developers can use this editor to place and script models, entities, and non-player characters (NPCs), such as virtual humans. The main advantage of the Source Engine is its modular structure, which enables a developer to modify the engine to a very fine degree. Another advantage is its huge community support.

24. Steam is an entertainment platform developed by Valve Corporation, which offers digital rights management (DRM), multiplayer gaming, and social networking services [241].

Figure 2.5: The user interface of the Valve Hammer Editor [244].

A very large community of licensees and individual developers has contributed to collecting and publishing technical documents and tutorials online. The Source Engine also supports film-making. Developers can use Source Filmmaker [245], a movie-making tool built inside the Source Engine, to create videos for their applications. One disadvantage of this engine is that there are no visual scripting tools for researchers with less advanced programming experience. Scripts for the Source Engine are written in C++. Its visual effect quality might also be considered a disadvantage when compared to the other engines mentioned in this chapter. Similar to CryENGINE (see Section 2.3.2), an internet connection is required for launching this engine from the Valve platform with a Steam account. Table 2.4 overviews some important advantages and disadvantages of the Source Engine. The Source Engine has also been used in research projects, such as an adaptive context-aware services evaluation system [164], virtual fire drills [212], mass casualty incident information systems [277], kitchen food education [140], virtual laboratories [6, 26], and real-time strategic planning [100].

Table 2.4: A list of advantages and disadvantages of the Source Engine.

Advantages:

• Modular structure allows developers to modify the engine to a fine level.
• Unique movie-making function with the Source Filmmaker.
• Active community support.

Disadvantages:

• No visual scripting system; scripting requires C++ programming experience.
• Few research projects have used this engine in the last three years as it is an ageing technology.
• Limited visual effects compared to more modern engines, e.g. Unreal Engine 4.

2.3.5 Summary of Game Engines

All the game engines reviewed in this section have the potential to build non-game virtual environments. As noted by Lewis and Jacobson [130], many game engines are very flexible and provide the resources needed to simulate various activities. Thus, for the researcher looking for a suitable development platform, the technical capabilities of the different engines may be less important. Of more interest will be access into the development pipeline of an engine; for example, tool support for adding new virtual content, documentation of the core engine features and online support from fellow mod developers, to ease the acquisition of any new skills as required and to support the production of initial, and ongoing, prototypes. Therefore, the two main aims of this section have been to (i) provide a timely review of the most accessible and extendable game engines for the non-expert developer and (ii) to classify the current usage of these game engines to inform their selection for new developments across different domain areas. The former is summarised in Table 2.5, where the four game engines overviewed in this section are compared. For the latter, this section has compiled two summaries of these game engines from recent literature. Figure 2.6 provides a general classification of the game engines over four relevant design considerations; namely, environment setting (indoor or outdoor), use of artificial intelligence (AI), and use of external input/output (I/O) devices.

• Defining any indoor and outdoor setting is an initial development requirement. This has implications for visual and audio content creation and can scope the interaction activities that the environment is expected to support; for example, navigation in large outdoor spaces or selection and manipulation of indoor features, e.g. doors and light switches. Also, the environment setting can impact graphic rendering issues as, for example, indoor environments can be more easily optimised for rendering speed than large outdoor environments. In this respect, the Unreal and CryENGINE series are suitable for building large-scale outdoor virtual environments, while Unity and the Source Engine are good choices for virtual environments with lower graphical realism. Also, for indoor applications that require very highly detailed textures, the CryENGINE series will be an appropriate choice. For projects that require highly customised functions rather than graphic effects, the modular structure of Source may provide more freedom to edit the underlying source code.

• The provision of non-player characters (NPCs) and the associated AI routines to guide their behaviour is common in game engines, but many systems only provide high-level path following or event-based scripting. This classification highlights systems that have had significant NPC usage, either using game-provided resources or their own multi-agent infrastructures embedded into the system. For the customisation of AI behaviour, selection may vary depending on programming skills. Unreal and CryENGINE, with their visual scripting systems, will be better choices for researchers with no programming experience. Researchers with rich programming experience may be able to handle all the engines, so other elements might need to be taken into consideration when making their selection. However, Source and Unity might be difficult for researchers with no programming experience.

Table 2.5: An overview of the four game engines reviewed in this section on editor, SDK, documentation and additional notes.

• Unreal Series. Editor: Unreal Editor. Source code: Unreal Engine 4 source code is available on GitHub. SDK: Unreal 4 can be downloaded from its official website. Documentation: official documentation and video tutorials are available from its official website. Other: some unofficial tutorials and courses are offered by training institutions and universities.

• CryENGINE. Editor: CryENGINE Sandbox. Source code: full source code is available on GitHub. SDK: CryENGINE can be downloaded from its official website. Documentation: official CryENGINE documentation is available on its official website. Other: a high performing machine is required for development and deployment.

• Unity. Editor: Unity Editor. Source code: source code access is only available with an enterprise subscription. SDK: the Unity engine can be downloaded from the Unity store. Documentation: documentation and video tutorials are available from the Unity official website. Other: additional platform support may need extra purchase from the Unity store.

• Source. Editor: Hammer. Source code: licensees of Source Technology have access to full source code. SDK: the Source SDK can be downloaded from Steam with the purchase of a Source-based game, such as Half-Life 2. Documentation: official documentation and tutorials are available from the Valve Developer Community. Other: large community providing mod tutorials.

Table 2.6: A list of references shown in Figure 2.6.

A: [6, 26, 68, 93, 101, 110, 117, 129, 136, 138, 164, 212, 264, 269]
B: [10, 45, 77, 120, 140, 161, 218]
C: [23, 29, 122, 277]
D: [62, 127, 132, 170, 175]
E: [109, 208]
G: [153]
I: [98, 102, 137, 152, 251, 257]
K: [97]
L: [18, 83, 261]
M: [187]
N: [100, 188]
O: [11, 44, 150]

• Finally, I/O requirements for custom environments may require more than standard monitor output and mouse/keyboard input. The systems identified here have used extended I/O devices, from CAVE-like environments to gesture and biometric interaction. This classification enables researchers to identify examples of external devices and provides insight into either the existence of supported device drivers and/or the suitability of adding new devices to a particular game engine supported environment. There is no doubt that all these engines can support standard I/O devices; however, specific device requirements may make some engines more suitable than others, for example the CryENGINE series for stereo 3D.

There are, obviously, numerous different design elements that could have been considered here to define system clusters. These four features are significant considerations for new developments, and their overlap, as represented in Figure 2.6 and Table 2.6, provides a starting place for examining the current literature. In contrast to Figure 2.6, Table 2.7 provides a domain specific application index across the game engines considered in this review. Again, the aim here is to point virtual environment developers to system examples in similar or analogous domains, both as a first step to identifying and examining the current state-of-the-art and also to highlight the game engines that have been successfully deployed in these areas.

[Figure 2.6 shows the reviewed projects clustered over four design elements: Artificial Intelligence, External I/O device, Outdoor and Indoor, with lettered project groups A to O sized by the number of publications found.]

Figure 2.6: Project’s classification on four design elements across game engines used in the literature. The number in brackets represent the number of publication found (see Table 2.6). both as a first step to identifying and examining the current state-of-the-art and also to highlight the game engines that have been successfully deployed in these areas. The author believes that although game engines are a tool to provide an alterna- tive way to build virtual environments efficiently, the key factor that influences the success of research projects should be the professional research content itself instead of the game engine. This is like choosing a IDE for C++ programming, while consid- ering whether Microsoft Visual Studio, Eclipse or NetBeans will make the program successful or not. Chapter 2. Literature Review 35

Table 2.7: An index of research and training applications powered by game engines, indexed by topic area.

Architecture Information Visualization: Unreal [127]
Adaptive Evaluation System: Source [164]
Aircraft Design Education: Unity [136]
Architecture Design and Lecturing: CryENGINE [162], Unity [169, 170]
Art History Learning: Unity [62]
CAVE Establishment: CryENGINE [102], Unreal [98, 137]
Coach Training: Unreal [68]
Crime Visualization: Unreal [269]
Film Education: Unreal [11]
Healthcare Facility Simulator: Unity [120]
Human Anatomy Education: Unity [138]
Incident Information System: Source [277], Unity [192]
Intuitive Learning: Unity [45]
Kitchen Food Education: Source [140]
Mechanics Simulation: Unity [93]
Military Training: CryENGINE [60], Unity [261], Unreal [122]
Narrative Simulation: Unity [77]
Nuclear Plants Evacuation: Unreal [101, 117, 264]
Phobias Treatment: Unity [83]
Rehabilitation: Cognitive Rehabilitation: Unity [109, 110]
Rehabilitation: Mental Disorder Rehabilitation: Unity [116]
Rehabilitation: Speech Rehabilitation: Unity [208]
Rendering Tool: CryENGINE [19, 214, 215]
Risk Training Simulation: Unreal [187]
Robotics Simulator: Unreal [23]
Science Concept Education: Unity [33]
Security Education: Unity [75]
Strategic Planning: Source [100]
Threat Identification: Unreal [218]
Vehicle Simulation: Unreal [87]
Vestibular Dysfunction Rehabilitation: Unity [257]
Virtual Art Gallery: Unity [145]
Virtual Fire Drill: Source Engine [212], Unreal [153], Unity [186, 210]
Virtual Heritage: CryENGINE [188], Unreal [97], Unity [44, 150, 161]
Virtual Laboratory: Source [6, 26]
Virtual Puppetry: Unity [80]
Virtual Safety Assessment: Unity [132]
Virtual Worlds: CryENGINE [29]
Visual Motion: Unity [152, 251]
Visual Recovery in Amblyopia: Unreal [10]
Visual Impairment Simulation: Unreal [129]

Computing technology continues to change at an alarming rate, and the technology underlying game engines is part of this ongoing revolution. This section only shows the current trends in the reuse of game engine technology. At the time of writing, the video gaming industry was on the verge of releasing the next generation of gaming platforms, such as virtual reality and augmented reality. Any new hardware will be supported by new software infrastructures, and there is significant ongoing potential for the reuse of the high quality environments that these technologies support. With industrial support for game engine reuse, through the distribution of free tools, software development kits (SDKs) and active participation in online communities, it is an exciting time for considering the use of game engines outside of their original purpose, in particular regarding virtual environments for serious applications. With the increasing quantity and variety of game engine based research projects, modern game engines are drawing attention from scientific researchers. However, their huge potential for scientific use is still not fully developed. Higher development efficiency has become a key factor in the current research process. Game engines have the ability to help researchers build their own virtual environments from scratch, so it is reasonable to believe that this will be an ongoing trend.

2.4 Evacuation Simulators

As noted in Chapter 1, fire drills in the real world have many issues, such as ethical constraints on real fires, high resource cost, and limited experiment times, which have led to a lack of quantitative analysis. Compared to real fire drills, virtual environment-based disaster training systems have limited domain knowledge, which leads to less realistic simulation results (as discussed in Section 2.2). In virtual evacuation training systems, the behaviours of computer-controlled virtual humans are driven by AI scripts. Researchers have tried to implement the evacuation behaviours with Finite State Machine (FSM) methods [105], but the rigid nature of FSMs will always lead to a dramatic increase in code complexity [9]. Also, FSMs have difficulty modelling group behaviours, which are considered one of the most important factors of human behaviour [76]. Virtual humans without social awareness usually respond in a predictable way as defined in FSMs. Meanwhile, the decision-making process of a human being is complicated and stochastic. While making a decision in a virtual fire emergency environment, humans can consider multiple factors, including light, fire, smoke, avatar group movement, knowledge of the environment (building) and other factors. To address the social behaviour issue, researchers have attempted to use agent programming methods to improve the behaviour of NPCs (e.g. virtual humans) [274]. Multi-agent systems (MAS) have been widely accepted for modelling human behaviours. For example, Ren [185] used agent programming to model and simulate human behaviours in emergency evacuation, Yoo [276] used a BDI (Beliefs - Desires - Intentions) model to create virtual humans for learning in an online game, and Pan [171] also used MAS for a social behaviour study of egress processes. Most agent-based simulation systems using virtual environments are able to simulate basic behaviours, such as target finding, target recognition and escape pathway finding. However, only a few projects [184, 272] have been able to take fire science knowledge into consideration. Fire science or fire engineering knowledge, covering toxic gas, heat, smoke and fire, describes important factors that influence an evacuee's decision-making process. An alternative is computer-aided evacuation simulators, referred to as evacuation models, which are numerical computational tools developed by fire professionals. These fire evacuation simulators can be used to evaluate the evacuation time from a building [119]. These simulation models have many advantages; for example, the simulations can be run many times to gain quantitative data such as the range or distribution of evacuation times [76]. According to a recent survey, FDS+Evac, buildingEXODUS, Simulex, VISSIM, STEPS, and Pathfinder are the most frequently used simulators [190]. There are review articles and previous projects to refer to when choosing appropriate models [118, 173, 190, 196]. This section reviews the above six popular evacuation simulators (No. 5 in Figure 1.1) that may be able to provide the fire science knowledge required to solve the main research question of the thesis (No. 7 in Figure 1.1).

2.4.1 FDS+Evac

FDS+Evac is the evacuation simulation module for the Fire Dynamics Simulator (FDS) [113, 148]. It should be noted that FDS is an independent numerical simulation software package and can be used to support other fire applications. For example, Ren et al. [184] used FDS to calculate the fire spread trend in their own fire evacuation simulation system. Smokeview (SMV) is a tool used to visualise FDS outputs [61] (see Figure 2.7). FDS+Evac is the evacuation model embedded in FDS, which is used to simulate how humans egress from a scenario under the influence of fire, smoke and toxic gases [113].

Figure 2.7: Example fire simulation presented using Smokeview [61].

In this evacuation simulation model, the human behaviours were modelled from experimental observations. Specific flows through doors and corridors, walking speeds, stair climbing speeds and exit-selection modules are included in this continuous agent-based evacuation model. There are three types of agent behaviour considered in the model, namely "conservative", "active", and "herding" [114]. Game theoretic reaction functions and best response dynamics are applied to model the exit route selection of evacuees. In this model, each evacuee observes the locations and actions of the other evacuees and selects the exit which is estimated to give the fastest evacuation. Thus, exit selection is modelled such that evacuees try to select the exit that minimises their estimated evacuation time, which consists of the walking time and any queuing time at the exit. It is also assumed that people change their course of action only if there is an alternative that is clearly better than the current choice [112]. One advantage of FDS+Evac is that it is open source. The FDS+Evac model also has a technical guide for users [113]. It also supports Windows, MacOS and Linux systems. The disadvantage is that FDS does not have a friendly graphical user interface and has to be operated from the command line. Without detailed tutorials, there is a significant learning curve. FDS+Evac is highly esteemed in the evacuation modelling community and examples of its use appear in the literature. For example, it has been used to simulate fire evacuation in various scenarios, such as office buildings [90], cinemas [85], and road tunnels [189]. FDS can also be used to provide smoke and fire data to help model realistic fire effects [184, 185, 219, 272]. As FDS+Evac is one of the most popular evacuation simulators, developers and companies have built several graphical user interfaces for it. One of them is PyroSim, a graphical user interface for FDS developed by Thunderhead Engineering [225] (see Figure 2.8). PyroSim supports importing CAD models, quickly drawing complex geometry objects with built-in tools, and managing components including materials, surfaces, doors/exits, agent types and the initial positions of virtual humans. In the context of projects like the work described in this thesis, the use of PyroSim would significantly speed up the development procedure, especially for buildings with complex geometry.
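Expressed informally (this is a paraphrase of the exit-selection idea described above, not the exact formulation in the FDS+Evac technical guide [113]), each evacuee estimates an evacuation time for every candidate exit and chooses the minimum:

\[
T_i = t^{\mathrm{walk}}_i + t^{\mathrm{queue}}_i, \qquad i^{*} = \arg\min_i T_i,
\]

where \(t^{\mathrm{walk}}_i\) is the walking time to exit \(i\), \(t^{\mathrm{queue}}_i\) is the expected queuing time at that exit, and \(i^{*}\) is the selected exit; the choice is revisited only when another exit becomes clearly better than the current one.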

Figure 2.8: The user interface of PyroSim [225].

2.4.2 BuildingEXODUS

BuildingEXODUS is a commercial evacuation simulator developed by the University of Greenwich. The EXODUS model uses a 2D grid of nodes to control evacuee movement and behaviours [166, 191]. EXODUS has a family of models consisting of buildingEXODUS, maritimeEXODUS, airEXODUS, railEXODUS and vrEXODUS. Each product is designed for a specific application area. This section only concerns buildingEXODUS, which is used to simulate indoor evacuations (see Figure 2.9). BuildingEXODUS has four main interacting aspects: behaviour, configuration, procedures and environment [166]. The model has a modular structure implemented in C++. It has five core interacting submodels: the Occupant, Movement, Behaviour, Toxicity and Hazard submodels. The Occupant submodel can assign individuals over 20 attributes, which are categorised as physical, physiological, positional and hazard effects. Some attributes are fixed and others can change under the influence of other submodels. The Movement submodel is concerned with the physical movement of the occupants through the different terrain types.

Figure 2.9: An example buildingEXODUS application [238].

The Behaviour submodel was originally designed for aircraft scenarios. However, it can be altered to adapt to different building scenarios [166]. The Behaviour submodel determines an occupant's response to the current prevailing situation. The Toxicity submodel determines the effect of fire hazards upon the occupants, and the Hazard submodel controls the enclosure environment and allows the user to specify the specific simulation scenario. Due to licensing restrictions, only limited information about the current behaviour submodel is publicly available.

2.4.3 Simulex

Simulex is a fine network evacuation model distributed by Integrated Environmental Solutions (see Figure 2.10). Simulex features geometrically accurate simulation of the evacuation movement of each individual in a building [223]. Simulex is not freely available for public use. The major features of Simulex are very similar to those of EXODUS and FDS+Evac (see Section 2.4.2 and Section 2.4.1). Due to this licensing issue, Simulex is not suitable for this project.

Figure 2.10: A screenshot of Simulex [95].

2.4.4 VISSIM

VISSIM is commercial software developed by the PTV Group [179] for simulating vehicle and public transport, and it is still one of the most commonly used traffic planning tools [57]. VISSIM is a microscopic, behaviour-based, multi-purpose traffic simulation used to analyse and optimise traffic flows. The system can be used to investigate private and public transport as well as pedestrian movements [57]. Its pedestrian model is based on Helbing and Molnár's social force model [84]. VISSIM started to fully support pedestrian movement simulation in 2008, and the pedestrian component was later released as an independent simulator named Viswalk, which also remains available as a module of VISSIM. A newly released version of Viswalk provides a graphical editor which supports importing CAD files, pre-defined routes, access to social force parameters and other features. Though Viswalk is not specifically designed for fire evacuation, the SKRIBT25 project extended this model to support human behaviour in fire scenarios.

25. http://skribt.org/ [Last Access: 20/01/2017]
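For context on the pedestrian model mentioned above, a commonly cited form of the social force model (following Helbing and colleagues; the exact parameterisation used inside VISSIM/Viswalk is proprietary and may differ) drives each pedestrian \(i\) with an acceleration made up of a driving term towards the desired velocity plus repulsive interaction forces:

\[
m_i \frac{d\mathbf{v}_i}{dt} \;=\; m_i \frac{v_i^{0}\,\mathbf{e}_i^{0} - \mathbf{v}_i}{\tau_i} \;+\; \sum_{j \neq i} \mathbf{f}_{ij} \;+\; \sum_{W} \mathbf{f}_{iW},
\]

where \(v_i^{0}\mathbf{e}_i^{0}\) is the desired walking velocity, \(\tau_i\) is a relaxation time, \(\mathbf{f}_{ij}\) is the repulsive force exerted by pedestrian \(j\), and \(\mathbf{f}_{iW}\) is the repulsive force exerted by walls and obstacles.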

2.4.5 STEPS

STEPS is a microsimulation tool developed by Mott MacDonald Ltd. for the prediction of pedestrian movement under both normal and emergency conditions. It was originally designed for modelling pedestrian flows in transportation systems, though it is now a general application [258]. STEPS employs a modern agent-based approach which predicts the movement of discrete individuals (virtual humans) through three-dimensional space, providing a realistic representation of pedestrian movement and capturing subtle but important details of that movement. Virtual humans involved in a STEPS simulation are assigned basic attributes including free walking speed, awareness of the environment, patience when waiting in a queue, association with group members (e.g. family members) and pre-movement time in the case of evacuation. STEPS can import 3D models from 3D modelling tools such as 3ds Max. It can also be used to simulate fire evacuation by importing fire data from the FDS model (see Section 2.4.1). This model is commercial software used by a range of leading companies and government organisations. Example applications include the new Yankee Stadium, New York, USA, and Adelaide Oval, Australia [155]. STEPS also allows the user to define specific routes through the use of checkpoints.

2.4.6 Pathfinder

Pathfinder is developed by Thunderhead Engineering [226]. It is agent-based simulation software and supports 3D animation with a friendly graphical user interface (see Figure 2.11). Pathfinder is a commercial simulator which can be purchased from its online store26, while it is freely available for academic use. Pathfinder provides an advanced wayfinding strategy. According to its user manual and technical reference [226, 227], the human behaviour module, similar to other models, uses an agent-based model to simulate the evacuation process. In Pathfinder, each occupant (agent) uses a combination of parameters to select its current path to an exit.

26. http://www.thunderheadeng.com/pathfinder/ [Last Access: 20/03/2017]

Figure 2.11: The user interface of Pathfinder [228].

The parameters include: queue times for each door of the current room, the time to travel to each door of the current room, the estimated time from each door to the exit, and the distance already travelled in the room. The agent responds dynamically to changing queues, door openings/closures, and changes in room speed constraints (simulating smoke and debris). A user can observe pedestrian movement flows with different speeds or directions via the built-in high-quality 3D visualisation tool. In addition, Pathfinder provides robust import options that support the import of AutoCAD models, FDS models, and PyroSim models.

2.4.7 Summary of Evacuation Simulators

The evacuation simulators reviewed in this section all support the simulation of human behaviours (group/individual), pathway finding and agent movement. These evacuation simulators can compute an agent's decision-making process with consideration of fire-related factors (e.g. heat and smoke), represented as rational evacuation paths. These features make the evacuation simulators suitable for assessing the fire safety of buildings. However, a VR-based evacuation training system needs both rational exit route selection and communication among the virtual human evacuees (agents). For example, a small group (e.g. a family) may get together to discuss what is happening before selecting an evacuation path. Also, if someone takes a wrong direction (e.g. running towards the fire), other evacuees may tell them or lead them back to a better route. All these social activities may affect decision making dynamically, which should be reflected in a virtual fire evacuation drill system. Licensing is also an important factor when selecting an evacuation simulator. Of the reviewed simulators, only FDS+Evac is open source and freely available to the public, while the others need to be purchased. Some have listed prices for different licenses on their websites, but some are only available by enquiry. In either situation, the prices may be beyond the budget of research projects. Furthermore, this can bring unnecessary difficulties when distributing the final software product in the future. Finally, an evacuation simulator has to be validated before being reused. As shown in Table 2.8, all the evacuation simulators reviewed in this section have been validated with at least two methods.

2.5 3D Modelling Tools

When building a virtual environment in a game engine (No. 1 area in Figure 1.1), one of the most important tasks is creating 3D models; for example, avatar models, buildings, furniture, stairs, and other 3D content. Although game engines usually come with built-in geometry modelling tools, independent 3D modelling tools are typically more powerful for creating 3D models. This section reviews four widely used modelling tools which have been used in scientific research projects to create 3D content.

2.5.1 AutoCAD

There is a variety of computer-aided design (CAD) software available, including AutoCAD, SolidWorks, Pro/E, Inventor, CATIA and others. AutoCAD [78] is commercial software developed by AutoDesk and has a free education license for academic use.

Table 2.8: Validation status of the evacuation simulators reviewed in this section.

Model: Fire drills or other people movement experiments/trials | Literature on past evacuation experiments | Other models (a) | Third party validation | Code requirements
FDS+Evac: Y | Y | Y | N | Y
Pathfinder: Y | Y | N | N | Y
EXODUS: Y | N | N | N | Y
VISSIM: Y | Y | Y | N | N
Simulex: Y | Y | N | Y | N
STEPS: Y | Y | Y | Y | N

(a) Compared the simulation results of the same environment setting with other evacuation simulators.

AutoCAD has a comprehensive set of features. However, one shortcoming of AutoCAD is that it is significantly more complex and requires extra time to learn compared to other CAD software [115]. AutoCAD has been frequently used to create different virtual environments, such as a data centre building [252], virtual offices [207], and container terminals [63]. AutoCAD supports exporting models to multiple formats (e.g. FBX) to be used in a variety of game engines, including OGRE3D [63], Unreal Engine [207] and Unity [16, 267].

2.5.2 SketchUp

SketchUp [17], formerly known as Google SketchUp, is a 3D modelling tool for a wide range of drawing applications, such as architectural design, interior design and video game design. It has a free version called SketchUp Make and a commercial version named SketchUp Pro. Compared to complex CAD software (e.g. AutoCAD), SketchUp is light-weight with low hardware requirements. SketchUp provides a user-friendly interface and several built-in tools for drawing 3D models which are easy to understand and use. It also has an online repository called 3D Warehouse, which provides free 3D models for users. SketchUp has previously been used to create 3D models for Google Earth [139, 256]. It has also been used to model 3D content for virtual environments, such as large shopping malls [195].

2.5.3 Blender

Blender [13] is a free and open source 3D animation suite developed and maintained by the Blender Foundation. It was initially released in 1995 and its current stable version (March 2017) is 2.78c. Blender supports the entire 3D pipeline, including modelling, rigging, animation, simulation, rendering, compositing and game creation. Blender is cross-platform software that supports Linux, Windows and MacOS. Blender is well suited to individuals and small studios who benefit from its unified pipeline, responsive development process and license. Blender has been used to create 3D models for game engines, such as Unity [96, 186, 216, 224].

2.5.4 Revit

Revit [273] is a popular Building Information Modelling (BIM) tool from AutoDesk used to help construct and render building models. BIM is a set of interacting policies, processes and technologies generating a "methodology to manage the essential building design and project data in digital format throughout the building's life cycle" [174]. Compared to 3D modelling (e.g. SketchUp, Blender, etc.), BIM is regarded as 4D CAD modelling [111], where the fourth dimension represents time analysis or additional information attached to a 3D model. For example, BIM can carry material information for specific building construction components [217]. BIM has contributed to many success cases, such as the Shanghai Tower skyscraper and Heathrow Airport Terminal 5, London [20]. Models created by BIM tools, such as Revit, can also be used in game engines [12]. Silva et al. [209, 210] created a hospital building model in AutoDesk Revit for an emergency evacuation training system.

2.5.5 Summary of 3D Modelling Tools

This section briefly reviewed current 3D modelling software, which can be used as the 3D modelling tools of the pipeline approach described in this thesis (see Chapter 3). Although these tools differ in user interface, drawing tools and modelling procedure, previous research and case studies have demonstrated that they are all capable of creating 3D content for use in game engines. The choice of 3D modelling tool may largely be based on personal experience and knowledge. In this thesis, SketchUp and AutoCAD will be used in the case studies as the author is familiar with both software platforms.

2.6 Summary of Chapter2

This chapter has reviewed the main components that are required to build realistic fire evacuation systems, including human behaviour in fires, VE-based fire drill systems, game engines, evacuation simulators and 3D modelling tools. Section 2.1 showed that human behaviours in fires are complex. In contrast, previous work on building virtual fire evacuation training systems (Section 2.2) is missing the domain knowledge (fire science knowledge), such as accurate fire environments (e.g. the spread of fire and smoke) and realistic virtual humans that can perceive the fire risk and make realistic decisions when choosing evacuation paths. Given the need for fire science knowledge, this chapter further reviewed six validated fire evacuation simulators which were modelled from expert fire science (Section 2.4). Previous work has also shown the possibility of reusing simulation data in a virtual environment to support creating accurate fire environments [272]. This finding showed the feasibility of pursuing an approach that can solve the problem of the lack of realistic virtual humans by reusing such a simulator. In summary, this chapter showed the components that are necessary for a pipeline approach that can rapidly prototype realistic virtual fire evacuation environments by reusing 3D modelling tools, game engines, and fire evacuation simulators. Chapter 3 proposes a general pipeline approach that reuses the tools reviewed here to build virtual fire evacuation training environments without dynamic path finding.

Chapter 3

PREVENT - A Pipeline to Reuse Domain Knowledge in Static Environments

This chapter describes a virtual environment development pipeline named PREVENT (a Pipeline for pRototyping Evacuation training Virtual ENvironmenT). The PREVENT approach supports creating realistic virtual environments by reusing expert domain knowledge (e.g. fire science knowledge) to generate realistic virtual human behaviours deployed within gaming technologies. The pipeline starts with creating a 3D building from a floor plan in a 3D modelling tool, then extracts simulated human behaviours from an expert domain simulator, and finally ends with virtual human behaviour scripts supporting computer-controlled virtual humans in a game engine. An example use of PREVENT will be presented which reuses expert fire science knowledge to support the creation of a realistic virtual environment-based fire drill simulator. This example of PREVENT uses SketchUp to model 3D buildings, then simulates behaviour in FDS+Evac and creates fire evacuation behaviour scripts for Unity. This chapter also explores the validity of PREVENT in terms of consistency, scalability and accuracy in four case studies. The work described here is an expanded version of the research presented in [265, 267].

3.1 Overview of the Pipeline Approach

The PREVENT approach consists of creating 3D building models, generating human behaviours in a domain evacuation simulator and importing them into a game engine. PREVENT enables VR developers to easily prototype virtual drill simulators featuring expert domain knowledge from floor plans (see Figure 3.1). Firstly, 3D buildings are modelled from floor plans in 3D modelling tools, such as AutoCAD and 3ds Max. The 3D buildings are then exported for use in both an evacuation simulator and a game engine. The evacuation simulator runs simulations based on the imported building model and the target scenario to generate evacuation behaviours to be used in the game engine. In PREVENT, the human behaviours are represented as the evacuation paths that the evacuees use to leave the scenario during the simulation process (in the expert domain evacuation simulator). The final step is to script the virtual human behaviours by reusing the evacuation paths generated from the domain evacuation simulator and setting up the virtual environment in a game engine (this includes importing the 3D building model and configuring the evacuation scenario).

3.1.1 Modelling Virtual Environments

3D modelling is the first part of PREVENT, which requires the modelling of the target simulation scenarios (normally buildings) in a 3D modelling tool. In order to create models with accurate dimensions and structures, a floor plan is commonly used [212]. There is a wide range of tools available for this step, such as AutoCAD, Blender, and 3ds Max (see Section 2.5). The choice of modelling tool may depend on the VR developer's previous experience. However, the selected tool should be able to export the created 3D objects, with attached textures, to frequently used formats, for example, AutoCAD DWG files (.dwg), AutoCAD DXF files (.dxf), FBX files (.fbx), or COLLADA files (.dae), either directly or through a plugin. For example, SketchUp can export the COLLADA format via its plugin PlayUp27.

27. http://www.playuptools.com/ [Last Access: 21/01/2017]

Figure 3.1: The main procedures of PREVENT.

Alternatively, format converters can be used to generate the required formats, e.g. the Autodesk FBX Converter28. The exported 3D models will be imported into a domain evacuation simulator (see Section 2.4) to generate human behaviours (see Section 3.1.2), as well as imported into a game engine to create the target virtual environments (see Section 3.1.3).

3.1.2 Generating Human Evacuation Behaviour as Paths

To generate human behaviours in the domain evacuation simulator, the first step is to import the 3D building models into the selected domain evacuation simulator to set up evacuation scenarios. The evacuation scenario is the configuration for the domain simulator, including layout information for the physical environment (from the imported 3D models) and descriptions of the target training scenarios (e.g. fire locations and evacuee population). Given the required information, an evacuation scenario can be quickly set up by an experienced user.

28. http://www.autodesk.com/products/fbx [Last Access: 21/01/2017]

The simulation process is the most time consuming but critical procedure of the entire PREVENT approach. During this procedure, the evacuation simulator with its built-in expert domain knowledge is regarded as a "black box". All decisions made by evacuees are based on the in-built professional and complex domain knowledge. In PREVENT, the behaviours are regarded as the evacuation paths chosen by evacuees in an evacuation simulation. The path selection frequencies observed across evacuees will be used to script the behaviours of the virtual human evacuees in the target game engine. To select a suitable domain evacuation simulator, one should consider the license, documentation and popularity within the community. The choice of the evacuation simulator varies from case to case. For example, when generating human fire evacuation behaviours for virtual environment-based fire drill systems, a number of fire evacuation simulators can be reused, such as STEPS, Pathfinder, and FDS+Evac (see Section 2.4).
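As a concrete illustration of how recorded paths become selection frequencies, the sketch below (hypothetical post-processing code, not part of FDS+Evac or of PREVENT's released tooling; path names such as "RoomX_Path_A" are illustrative) tallies the unique paths observed for agents starting in one room across repeated simulations and converts the counts into percentages:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical helper: turn the paths chosen by simulated evacuees from one
// starting room into percentage selection frequencies for behaviour scripting.
public static class PathFrequency
{
    public static Dictionary<string, double> FromObservations(IEnumerable<string> chosenPaths)
    {
        // Count how often each unique path label occurs.
        var counts = chosenPaths.GroupBy(p => p)
                                .ToDictionary(g => g.Key, g => (double)g.Count());
        double total = counts.Values.Sum();

        // Convert counts into percentages (e.g. 30% Path A, 70% Path B).
        return counts.ToDictionary(kv => kv.Key, kv => 100.0 * kv.Value / total);
    }
}
```

For example, 30 observations of "RoomX_Path_A" and 70 of "RoomX_Path_B" would yield the 30/70 split used to script the behaviours described in Section 3.1.3.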

3.1.3 Presenting Virtual Target Environment in a Game Engine

This step focuses on augmenting the virtual humans in a game engine with expert domain knowledge (e.g. fire science knowledge). The procedure starts with importing the 3D models and textures into the level editor provided by the selected game engine. Then the human behaviour data generated from the selected evacuation simulator can be imported into the game engine to script the evacuation behaviours. The human behaviour scripts should include path selection frequencies, as evacuees may choose different paths during repeated evacuation simulations. The path selection frequencies represent how evacuees chose each available path during the evacuations simulated in the domain evacuation simulator; they are implemented as value ranges that are checked against a random number. A general behaviour script is described in Algorithm 1, which states that the frequency for a virtual human in Room X to choose "Path A" is 30 percent, while 70 percent choose "Path B".

The frequencies shown in Algorithm 1 (i.e. 30 and 70) are only examples to help explain the algorithm. In actual development, these frequencies are decided by the simulated behaviour data. This script will be shared by all virtual human evacuees initiated in Room X. By altering the evacuee's initial position (Room 1, Room 2, ..., Room N), the path labels and the selection frequencies, all behaviour scripts can be quickly generated in the game engine. At the end of this step, each virtual human evacuee should have its own behaviour scripted.

Algorithm 1 Sample Script in Game Engine
  if Random Number < 30 (a) then
      Agent chooses RoomX_Path_A;
  else
      Agent chooses RoomX_Path_B;
  end if

(a) The frequencies shown here (i.e. 30, 70) are only examples to help explain the algorithm. Actual frequencies are determined by simulated data.

In this way, the decision-making process of the virtual humans can be controlled by a random number generator in the behaviour scripts. The case studies presented in the following sections will demonstrate that this method accurately transfers the human behaviours from the source evacuation simulator into the target virtual environment.
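As a concrete sketch of such a behaviour script in Unity (written in C#, the scripting language used in this thesis; the class, fields, waypoint arrays and the 30/70 split are illustrative and mirror Algorithm 1 rather than reproducing the exact scripts used in the case studies), the frequency check and the scripted path following could look like this:

```csharp
using UnityEngine;

// Illustrative behaviour script for a virtual evacuee that starts in Room X.
public class RoomXEvacuee : MonoBehaviour
{
    public Transform[] pathA;           // waypoints describing RoomX_Path_A
    public Transform[] pathB;           // waypoints describing RoomX_Path_B
    public float pathAFrequency = 30f;  // percentage of evacuees taking Path A (from simulated data)
    public float speed = 1.05f;         // walking speed in m/s (the FDS+Evac adult agent walks at 1.05 m/s)

    private Transform[] chosenPath;
    private int nextWaypoint;

    void Start()
    {
        // Draw a random number in [0, 100) and compare it with the frequency,
        // exactly as in Algorithm 1.
        chosenPath = (Random.Range(0f, 100f) < pathAFrequency) ? pathA : pathB;
    }

    void Update()
    {
        if (chosenPath == null || nextWaypoint >= chosenPath.Length) return;

        // Follow the pre-scripted evacuation path waypoint by waypoint
        // (no dynamic path finding, in line with the static PREVENT pipeline).
        Vector3 target = chosenPath[nextWaypoint].position;
        transform.position = Vector3.MoveTowards(transform.position, target, speed * Time.deltaTime);

        if (Vector3.Distance(transform.position, target) < 0.1f)
            nextWaypoint++;
    }
}
```

In a full implementation, one script template would be generated per starting room, with the path labels and frequencies filled in from the data produced in Section 3.1.2.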

3.1.4 Summary of Pipeline

There are three types of software involved in this pipeline: 3D modelling tools, evacuation simulators and game engines. Figure 3.2 overviews the available options for each procedure. The bold path illustrates the example chain of software (SketchUp with the PlayUp plugin, FDS+Evac, and Unity) used in the example PREVENT implementation in this thesis, which reused expert fire science to build realistic virtual fire evacuation systems (see Section 3.2). Although the work presented in this chapter only used a particular combination of tools, the pipeline can be implemented using a variety of tools for each step. For example, instead of SketchUp, AutoCAD was used to model a 3D building in Chapter 6.

[Figure 3.2 shows the pipeline as a flow diagram: a floor plan is modelled in a 3D modelling tool (3ds Max, Maya, AutoCAD, Blender or SketchUp), exported as a CAD or COLLADA format model (via the PlayUp plug-in for SketchUp), passed to an evacuation simulator (STEPS, Pathfinder, VISSIM, Simulex, or FDS+Evac with PyroSim) to produce evacuation paths, and combined in a game engine (CryEngine 3, Unreal Engine 4, Source Engine or Unity3D) to produce the target virtual environment.]

Figure 3.2: A summary of the PREVENT approach. The bold path indicates the tool-chain used in the example in this thesis.

3.2 An Example Implementation of the Pipeline

This section presents a case study which builds a virtual fire evacuation environment based on the PREVENT approach described in Section 3.1. The study details how an example building was modelled in SketchUp, simulated in FDS+Evac and deployed using Unity. An example floor plan was used in this case to build the virtual environment (see Figure 3.3). This floor plan had a very simple layout which was intended to demonstrate the concept rather than to build a final software product. The dimensions of the building were 10 m x 15 m x 3 m. There were three exits in this building: EXIT_TOP, EXIT_LEFT and EXIT_BOT. The building was divided into 9 zones, including 5 rooms and 4 corridor areas. These 9 zones were used as the starting locations in the evacuation scenario where evacuee agents are placed when generating evacuation paths in FDS+Evac.

[Figure 3.3 diagram: floor plan showing rooms RM1-RM5, corridor areas AREA1-AREA4, doors DOOR_1-DOOR_5 and exits EXIT_TOP, EXIT_LEFT and EXIT_BOT.]

Figure 3.3: The floor plan of an example building for demonstrating the concept of PREVENT.

3.2.1 Exporting the 3D Building Model from SketchUp

The PREVENT approach starts with modelling the floor plan in a 3D modelling tool, which was SketchUp in this case. While creating the 3D model, only basic textures were attached to the model, as the initial aim was to demonstrate the concept of PREVENT. The completed model in SketchUp is shown in Figure 3.4.

Figure 3.4: The completed 3D building model in SketchUp.

In order to be used in FDS+Evac and Unity, the modelled 3D building needed to be exported into appropriate software formats. For FDS+Evac, the model was exported to an AutoCAD DWG file via the SketchUp built-in export tool. For Unity, the model was exported to the COLLADA format using a third-party plug-in named PlayUp.

3.2.2 Evacuation Path Generation in FDS+Evac

In this example use of PREVENT, FDS+Evac was used as the expert domain simulator as it is open source and has an active user community. However, one shortcoming of FDS+Evac is that there is no graphical user interface (GUI), which makes it difficult to configure a simulation scenario for fire evacuation in FDS+Evac. Without a GUI and proper editor, the FDS+Evac configuration file has to be written by hand, line by line, which is very time consuming and error prone, especially when creating complex building models. In this example, PyroSim, a graphical user interface for FDS+Evac (see Section 2.4.1), with a free educational license, was used to run simulations and generate paths for the virtual human scripts used in the target virtual environment.

Generating Paths in FDS+Evac

To generate evacuation paths in FDS+Evac, the DWG building model was imported into PyroSim (see Figure 3.5).

Figure 3.5: The imported building model in PyroSim.

All FDS simulations are computed by splitting the environment into rectilinear volumes called meshes (see the FDS User Guide [113,148]). The size of the mesh cells defines the granularity of the mesh and has to be defined before simulation. There is no specific rule for the exact mesh size that should be used for a particular case, and this is out of the scope of this study. However, if the mesh size is too large, the building layout will not be accurately calculated. In contrast, a small mesh size will dramatically increase the required computation and impact simulation time. Following the FDS User Guide [113,148], in this case study the cell size of the evacuation mesh was set to 0.333 m x 0.375 m x 3.25 m as a trade-off between accuracy and computation cost, after comparing different mesh sizes. Apart from the mesh cell size, detectors and vents, which are entities provided by FDS to record specific types of data during simulation, were placed on each checkpoint to record the movements of agents. A checkpoint is used to mark a location in the evacuation scenario, such as a junction of corridors29. A sequence of checkpoints forms a path (for more details, see Section 3.2.2: Analysing path lists). The agents, which represent evacuees in FDS+Evac simulations, were modified based on the default adult model (see the FDS+Evac User Guide [113]), which has a constant movement velocity of 1.05 m/s and a collision radius of 0.3 m. The modified agent was used as the default type of evacuee in this example use of PREVENT, as well as in all cases presented in this thesis. Also, as suggested in the FDS+Evac User's Guide [113], a dozen simulations are needed to minimise the impact of randomness during the simulation30. Defining the number of simulations that are required to find all the possible unique paths is outside the scope of this thesis and will be explored in future work. In this thesis, the simulation stops if no new paths are generated in 5 consecutive simulations, which provides a sufficient set of unique evacuation paths for the purpose of demonstrating the PREVENT approach rather than generating a complete set of paths. In this case study, 30 simulations were conducted, as no new paths were generated after the 25th simulation.

29 The actual set of checkpoints varies from case to case. The number of and distance between neighbouring checkpoints may affect the granularity and complexity of the simulation calculations. However, research on defining the checkpoints is out of the scope of the thesis and is intended to be explored in future work of this research.
30 Randomness functions are used in the simulation process of FDS+Evac.

Analysing Path Lists

The purpose of conducting the simulations in FDS+Evac is to generate the statistics of regularised behaviours, i.e. the paths selected by the agents during the evacuation31. Paths are defined as sequences of checkpoints, beginning at the agent's start location and ending at an evacuation exit. All agents that started in the same location shared the same set of paths, i.e. paths that have the same start area but consist of different checkpoints on the way to the exits. As stated in Section 3.2, this building is divided into 9 zones. As a result, there should be at least 9 sets of paths, i.e. at least one path from each start zone to an exit. Each set of paths may consist of a number of paths, which are determined by the simulation results. Multiple paths may be generated from multiple simulations based on events such as fires or view obstruction. For example, in the context of Figure 3.3, if an agent leaves RM2, then passes AREA2 and AREA4, and finally egresses through EXIT_LEFT, its path is described as:

P_R2 = RM2 → AREA2 → AREA4 → EXIT_LEFT

In order to manage the increasing number of paths, especially when converting them into scripts to be used in the virtual environment, the concepts of a position-based path library (PPL) and a global path library (GPL) were proposed. All agents initiated in the same area share the same set of paths, which is referred to as a PPL, while the GPL is the complete set of all position-based libraries for a specific scenario. The hierarchy of the path libraries is illustrated in Figure 3.6. A fragment of the global path library generated in this study is shown in Table 3.1, which shows that there are five unique evacuation paths shared by evacuees that started in RM2; these were generated from 30 simulations in FDS+Evac.

31 These paths are used to script virtual human behaviours in a game engine.

Figure 3.6: The structure of path libraries incorporating one global path library (GPL) and multiple position-based path libraries (PPL) across n start locations.

Table 3.1: A fragment of the global path library (GPL) for the example building.

PPL Name   Path Name   Selection Probability   Path Description
RM2        RM2_P_R1    55.00%                  RM2 → AREA2 → AREA4 → EXIT_LEFT
RM2        RM2_P_R2    20.00%                  RM2 → AREA2 → AREA1 → EXIT_TOP
RM2        RM2_P_R3    5.00%                   RM2 → AREA2 → AREA1 → AREA2 → AREA3 → EXIT_BOT
RM2        RM2_P_R4    15.00%                  RM2 → AREA2 → AREA3 → EXIT_BOT
RM2        RM2_P_R5    5.00%                   RM2 → AREA2 → AREA1 → AREA2 → AREA4 → EXIT_LEFT
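The path library hierarchy described above maps naturally onto nested data structures in the game engine. The following C# fragment is a minimal sketch of one possible representation; the class names (EvacPath, PositionPathLibrary, GlobalPathLibrary) are hypothetical and do not appear in the thesis implementation, and the selection logic simply mirrors the range check of Algorithm 1.

using System.Collections.Generic;
using System.Linq;

// One evacuation path: an ordered list of checkpoints plus its selection probability.
public class EvacPath
{
    public string Name;              // e.g. "RM2_P_R1"
    public List<string> Checkpoints; // e.g. RM2, AREA2, AREA4, EXIT_LEFT
    public float Probability;        // selection probability in percent
}

// A position-based path library (PPL): all paths sharing one start zone.
public class PositionPathLibrary
{
    public string StartZone;         // e.g. "RM2"
    public List<EvacPath> Paths = new List<EvacPath>();

    // Pick a path using a random value in [0, 100), as in Algorithm 1.
    public EvacPath Select(float randomSeed)
    {
        float cumulative = 0f;
        foreach (var path in Paths)
        {
            cumulative += path.Probability;
            if (randomSeed < cumulative)
                return path;
        }
        return Paths.Last();         // fall back to the last path
    }
}

// The global path library (GPL): one PPL per start zone.
public class GlobalPathLibrary
{
    public Dictionary<string, PositionPathLibrary> Libraries =
        new Dictionary<string, PositionPathLibrary>();
}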

3.2.3 Fire Evacuation Simulation in Unity

The 3D building model that was exported as a COLLADA file from SketchUp via the PlayUp plug-in was imported into Unity (see Figure 3.7). To identify each zone, a floor was added to the original model and the relevant zones were coloured. Exits in PyroSim were replaced by assembly points (green areas in Figure 3.7). As the purpose of this case study is to explore and validate the concept of PREVENT, the graphical realism of the virtual environment is less important. Simple cylinders were used to represent the virtual humans in this case. During this procedure, the key task was coding the AI scripts. Paths were implemented using the Unity scripting API and were triggered when a value fell within a certain range. These ranges match the path selection frequencies analysed from the simulations. The random function provided by Unity32 was used as the random generator to control the path selection. The scripts were written in C# following Algorithm 1 described in Section 3.1.2.

32 An educational license was purchased from Unity Technologies under the name of The University of Newcastle, Australia.

Figure 3.7: The imported building model in Unity. Red cylinders are simulated virtual human evacuees.

An example behaviour script shared by all virtual human evacuees that start in RM2 is shown in Listing 3.1. The behaviour script is implemented as a C# class which extends the MonoBehaviour33 class in Unity. As described in Algorithm 1, a random value RandomSeed is generated via UnityEngine.Random.value. Based on the value of RandomSeed, the virtual human evacuee will follow an evacuation path that was generated from the FDS+Evac simulations. The path selection frequencies used in this script strictly follow the data in Table 3.1.

Listing 3.1: A fragment of a virtual human behaviour script in C#. The frequencies of the paths match those in Table 3.1.

using UnityEngine;

public class Agent_RM2 : MonoBehaviour {
    private float RandomSeed;
    ActionController agent;

    void Start () {
        RandomSeed = UnityEngine.Random.value * 100; // Get a random value in [0, 100)
        agent = GetComponent<ActionController> ();
        InitiatePath_Agent_RM2();
    }

    void InitiatePath_Agent_RM2 () {
        if (RandomSeed <= 55.00)        // 55.00% follow path RM2_P_R1
            agent.FollowPath(GPL.RM2.RM2_P_R1);
        else if (RandomSeed <= 75.00)   // 20.00% follow path RM2_P_R2
            agent.FollowPath(GPL.RM2.RM2_P_R2);
        else if (RandomSeed <= 80.00)   // 5.00% follow path RM2_P_R3
            agent.FollowPath(GPL.RM2.RM2_P_R3);
        else if (RandomSeed <= 95.00)   // 15.00% follow path RM2_P_R4
            agent.FollowPath(GPL.RM2.RM2_P_R4);
        else                            // 5.00% follow path RM2_P_R5
            agent.FollowPath(GPL.RM2.RM2_P_R5);
    }
}

33 MonoBehaviour is the base class from which every Unity script derives. When using C#, the class must be explicitly derived from MonoBehaviour. For detailed documentation about MonoBehaviour, see https://docs.unity3d.com/ScriptReference/MonoBehaviour.html
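The ActionController component referenced in Listing 3.1 is not reproduced in the thesis. The fragment below is a hypothetical sketch of how such a component could move an agent through the checkpoints of a selected path using Unity's built-in NavMeshAgent, under the assumption that each checkpoint has already been resolved to a scene position (Vector3).

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

// Hypothetical controller that walks an agent through a sequence of checkpoints.
public class ActionController : MonoBehaviour
{
    private NavMeshAgent navAgent;
    private Queue<Vector3> waypoints = new Queue<Vector3>();

    void Awake()
    {
        navAgent = GetComponent<NavMeshAgent>();
    }

    // Accepts a path as an ordered list of checkpoint positions.
    public void FollowPath(IEnumerable<Vector3> checkpointPositions)
    {
        waypoints = new Queue<Vector3>(checkpointPositions);
        if (waypoints.Count > 0)
            navAgent.SetDestination(waypoints.Dequeue());
    }

    void Update()
    {
        // Move on to the next checkpoint once the current one has been reached.
        if (!navAgent.pathPending && navAgent.remainingDistance < 0.2f && waypoints.Count > 0)
            navAgent.SetDestination(waypoints.Dequeue());
    }
}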

To automate the scripting procedure, a data parser tool was developed (see Figure 3.8). The data parser first reads the output file and filters the records of evacuee movements, then analyses and stores the filtered data in XML files. By using an XML parser, this tool can automatically generate the core set of scripts (e.g. behaviour scripts and map configuration scripts) for different game engines (e.g. Unity in this case).

Figure 3.8: The procedure for translating raw simulation data into behaviour scripts. The bold path indicates the components used in this thesis.
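The data parser itself is not listed in the thesis; the following C# fragment is a minimal sketch of the intermediate XML step, under the assumption that the evacuee movement records have already been filtered into per-zone path lists, and with element names that are illustrative only.

using System.Collections.Generic;
using System.Xml.Linq;

// Minimal sketch: store filtered evacuation paths as XML so that
// engine-specific script generators can consume them later.
public static class PathExporter
{
    // paths: start zone -> list of (path name, selection probability, checkpoints)
    public static void WriteXml(
        string file,
        Dictionary<string, List<(string Name, float Probability, List<string> Checkpoints)>> paths)
    {
        var gpl = new XElement("GlobalPathLibrary");
        foreach (var ppl in paths)
        {
            var pplElement = new XElement("PositionPathLibrary",
                new XAttribute("startZone", ppl.Key));
            foreach (var p in ppl.Value)
            {
                pplElement.Add(new XElement("Path",
                    new XAttribute("name", p.Name),
                    new XAttribute("probability", p.Probability),
                    new XElement("Checkpoints", string.Join(",", p.Checkpoints))));
            }
            gpl.Add(pplElement);
        }
        new XDocument(gpl).Save(file);
    }
}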

3.2.4 Summary of Pipeline Implementation

An example of the PREVENT approach has been described, which extracted fire science knowledge from FDS+Evac and reused it in a virtual environment created with the Unity engine. SketchUp was used as the 3D modelling tool for modelling the environment. Although the pipeline can be used with various domains and multiple tools, the validity of the end product is important. Thus, there is a need to compare the source behaviour to that in the Unity-driven virtual environment.

3.3 Validating the Consistency of Evacuation Time

During an emergency evacuation, the most important considerations are whether all the evacuees successfully evacuate from the building and how much time the evacuees take to egress. The previous section has described an example use of the PREVENT approach to prototype a fire evacuation simulation system in Unity. However, there are two critical questions about the overall evacuation time: (i) whether there is any difference in the evacuation time between the source simulator and the target virtual environment is unknown; and (ii) if there is such a difference, it is important to know why it happens and whether this difference is acceptable. To answer these questions, the consistency of evacuation time between the Unity and FDS+Evac examples was compared.

Test Configuration

Table 3.2 gives the key configurations of the source simulator (FDS+Evac) and the target virtual environment (Unity). As shown in Table 3.2, there was 1 evacuee for each area in both Unity and FDS+Evac to give a baseline measure without collisions at doors.

Table 3.2: Test configuration for comparing evacuation time between FDS+Evac and Unity.

Test Name: Consistency Test
Test Environment: FDS+Evac / Unity
Evacuee Count per Area: 1
Evacuee Velocity: 1.05 m/s
Pre-evacuation Time: 10 s
Known Exits: Both share the same pre-defined known exits.
Path: Both share the same pre-defined fixed path.

To compare the actual evacuation times, four benchmarks of building egress were set up based on the example building model (Figure 3.3), namely: (i) the optimal case time; (ii) the worst case time; (iii) the case when agent placements in rooms/areas are optimal but they choose the worst building exit; and (iv) the case when agent placements in rooms/areas are the worst but they choose the best possible exits. These combinations are summarised as:

• Best start position and best exit strategy (BPBE)

• Worst start position and worst exit strategy (WPWE)

• Best start position and worst exit strategy (BPWE)

• Worst start position and best exit strategy (WPBE)

For simplicity, it is assumed that all agents move concurrently at a velocity of 1.05 m/s, which is the default velocity of the adult agent model in FDS+Evac. Agents do not collide with each other and all have a pre-evacuation thinking time of 10 seconds34.

Thus the final egress time can be defined by the last agent to leave the building. For example, with the best start position and best exit strategy (BPBE), the slowest agent to exit will be from RM2 (optimally positioned at the door) to the exit (EXIT_LEFT) directly opposite (see Figure 3.3), with a distance of 6 m to travel and an exit time of 15.71 seconds. The worst start position and worst exit strategy (WPWE) will be an agent placed in the back corner of RM1 and exiting through the bottom exit (EXIT_BOT), with an exit time of 27.05 seconds. The other benchmark routes are illustrated in Figure 3.9 and a summary of the manually calculated results is shown in Table 3.3.
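For clarity, each benchmark time follows directly from the travel distance, the constant walking velocity and the fixed pre-evacuation time. For the BPBE case given above:

    t_egress = t_pre-evacuation + distance / velocity = 10 s + (6 m / 1.05 m/s) ≈ 15.71 s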

Table 3.3: The summary of four benchmark evacuation times (manually calculated).

             BPBE      WPBE      BPWE      WPWE
Total Time   15.71 s   17.06 s   24.28 s   27.05 s

The 3D building models in FDS+Evac and Unity have the same dimensions as they are based on the same floor plan. What is different between FDS+Evac and Unity is that they have different agent movement algorithms and collision detection mechanisms to deal with evacuation flow congestion. These differences might result in different evacuation times. In contrast to the fixed initial positions used for the benchmarks (see Figure 3.9), agents were placed with random initial positions and facing directions in both FDS+Evac and Unity. Thirty simulations were performed in both FDS+Evac and Unity, as noted in Section 3.2.

Test Results

The simulation results are shown in Table 3.4. It is clear that the overall evacuation times in both FDS+Evac and Unity fall between the optimal (BPBE) and the worst (WPWE) situations. This is because evacuees have random start positions between the best position (BP) and the worst position (WP) and they do not always choose the best exits, in both Unity and FDS+Evac. However, the time in Unity is approximately 3.85 seconds shorter than that in FDS+Evac. This is caused by the different agent navigation algorithms in Unity and FDS+Evac. Figure 3.10 shows that the virtual human evacuees in Unity always take the shortest path towards the destination (target), while in FDS+Evac the evacuee needs a longer travel distance35. As a result, both FDS+Evac and Unity produce realistic evacuation times, i.e. between the optimal and worst cases, while Unity has better performance because of its optimised path planning function.

34 10 seconds is only set as the default pre-evacuation time for the case study presented here. In an applied context, this should be customised accordingly.

[Figure 3.9 diagram: the example floor plan (rooms RM1-RM5, areas AREA1-AREA4, exits EXIT_TOP, EXIT_LEFT and EXIT_BOT) annotated with the four benchmark start positions and routes (BPBE, WPWE, BPWE, WPBE).]

Figure 3.9: The four benchmark paths. Each red ellipse represents an evacuee and the arrows illustrate their evacuation paths. Four situations (BPBE/WPWE/BPWE/WPBE) are marked on the floor plan.

Table 3.4: Comparison of evacuation time. (Results of Unity and FDS+Evac are averages.)

       BPBE      WPBE      Unity     FDS+Evac   BPWE      WPWE
Time   15.71 s   17.06 s   19.73 s   23.58 s    24.28 s   27.05 s

[Figure 3.10 diagram: two panels (Unity and FDS+Evac), each showing an evacuee, a blocked zone and the target, illustrating the different routes taken.]

Figure 3.10: Comparison of virtual human evacuee movement between Unity and FDS+Evac.

35 According to the FDS+Evac Technical Guide [113], agent movement in FDS+Evac can also be affected by the mesh size. If the mesh size is too large, the building layout will not be accurately calculated, which could affect agent movement. However, as a fine mesh size was used in this test, the impact of mesh size can be discounted.

3.4 Validating the Scalability

Section 3.3 has shown that the evacuation times in both Unity and FDS+Evac fall into a realistic range. However, there were only 9 evacuees involved in that example. Of more interest is exploring the performance of the pipeline with a scenario including a larger population of evacuees, for example, 50 or 100 evacuees. Also, as noted in Section 3.3, there is a difference in the evacuees' navigation between Unity and FDS+Evac. It is necessary to know whether such a difference impacts the functionality of the pipeline. For example, it is possible that multiple evacuees may collide around doors or with other evacuees, which may lead to unexpected delays and congestion. Two scalability tests are presented in this section to demonstrate that the pipeline is still valid with increased numbers of virtual humans participating in the simulation.

3.4.1 Scalability Test I: Simple Building

This scalability test used the same building that was used in the example pipeline in Section 3.2. The floor plan consists of several rooms and corridor areas (see Figure 3.3). This test explored whether the pipeline remained robust in densely populated sites.

Test Configuration

The test configurations are shown in Table 3.5. The number of evacuees placed in each area was increased from 1 to 18, with the total number of evacuees increasing from 9 (16 m2 per person) to 162 (0.92 m2 per person). In FDS+Evac, all evacuees were aware of the locations of all exits, which was achieved by defining the knowledge of the building in the simulation configuration files (for details, see the FDS+Evac User Manual [113]). Thirty simulations were conducted to calculate the average evacuation time for each set of evacuees.

Table 3.5: Configuration of the scalability test.

Test Name: Scalability Test
Test Environment: FDS+Evac / Unity
Evacuee Count per Area: 1/3/6/9/12/15/18
Evacuee Velocity: 1.05 m/s
Pre-evacuation Time: 10 s
Known Exits: FDS+Evac - exits are unknown to all agents; Unity - not available in Unity.
Path: FDS+Evac - generated by FDS+Evac according to the actual building involved in the simulation; Unity - randomly picked from path lists generated from simulations in FDS+Evac.

Test Results

The results are shown in Figure 3.11. The overall evacuation time shows linear growth when the number of evacuees in the scenario is no more than 27. Then the evacuation times in Unity and FDS+Evac both start increasing much faster, but still linearly. However, Unity has a slower growth rate than FDS+Evac, especially when the number of evacuees is over 54. This might be because, with more evacuees in a scenario, more collisions occur, which lead to bottlenecks around doors and exits. This also shows that evacuee navigation and collision detection in Unity are better than those in FDS+Evac, as when bottlenecks happen, the avatars in Unity are able to dynamically replan their movement paths to reduce the congestion. When the total number of evacuees reaches 162, the area per evacuee becomes very small, only 0.93 m2 (150 m2/162), which is unlikely to happen in the real world. Thus, it is reasonable to conclude that the overall evacuation time in Unity will always be shorter than that in FDS+Evac, but within a reasonable range.

3.4.2 Scalability Test II: Large Building

The scalability of the example PREVENT has been validated in Section 3.4.1 in a small building. Demonstrating that the pipeline works with a large-sized real building is also important. A larger real building usually results in a bigger path library and a larger population of evacuees in the simulation. The scalability test presented in this section aims to demonstrate that the overall evacuation time in the target virtual environment is consistent with that in FDS+Evac regardless of the size and complexity of the building model.

[Figure 3.11 plot: overall evacuation time (s) against the total number of evacuees (9, 27, 54, 81, 108, 135, 162) for Unity and FDS+Evac.]

Figure 3.11: The increase in overall evacuation time against the total number of evacuees in Unity and FDS+Evac.

Test Configuration

This test is based on the Engineering S (ES) building, which is located on the Callaghan campus of The University of Newcastle, Australia. The floor plan of the ES building is shown in Figure 3.12. When configuring the number and location of evacuees, the evacuees were put in eight large rooms, including two large lecture theatres (RM 203 and RM 204), four medium-sized lecture rooms (RM 206, RM 209, RM 238, and RM 247) and two computer labs (RM 205 and RM 210). This is because the rest of the rooms on this floor are mainly staff offices or computer server rooms, which normally have fewer than two people in each room. As this is a university building, the main population on this floor at any time is the students who are having lectures or lab tutorials in these eight rooms. Thus the eight rooms were used as the start zones of the evacuees in this test.

Figure 3.12: The floor plan of the Engineering S (ES) building.

This study simulated the evacuation process with four different population densities (25% / 50% / 75% / 100%). The 100% population meant the room had reached its full capacity. The recommended room capacities were retrieved from the "University of Newcastle - Web Scientia Room Booking System" and are listed in Table 3.6.

Table 3.6: Room capacities of the Engineering S building (Total capacity = 411). (Data was retrieved from a university room booking system).

Room Number   Room Type         Capacity
203           Lecture Theatre   111
204           Lecture Theatre   86
205           Computer Lab      25
206           Lecture Room      40
209           Lecture Room      45
210           Computer Lab      24
238           Lecture Room      40
247           Lecture Room      40

Test Results

To minimise the influence of randomness, the simulations were run multiple times until no new paths were generated in 5 consecutive simulations in FDS+Evac36. Several pilot studies were performed based on these criteria, and the results showed that 10 simulations were needed. As a result, for each configuration, 10 simulations were conducted in both FDS+Evac and the Unity game engine. The average overall evacuation times are presented in Figure 3.13.

[Figure 3.13 plot: overall evacuation time (seconds) against the total number of agents involved in the simulations (105 (25%), 207 (50%), 310 (75%), 411 (100%)) for FDS+Evac and Unity.]

Figure 3.13: Overall evacuation time against the total number of evacuees in Unity and FDS+Evac examples (Successful Evacuation Rate = 100%). (The percentage values in the figure are the room density percentages.)

The overall evacuation time grows with the increasing number of agents in the scenario.

36 More details on determining the number of simulations can be found in Section 3.2.2.

However, Unity has shorter evacuation times than FDS+Evac across all configurations. As with the smaller building example, this is caused by the optimisation of collision detection and path finding in Unity (see Section 3.4.1). In conclusion, consistent results are seen across the total number of agents in the simulation, indicating strong scalability of PREVENT with larger and more complex environments.

3.5 Validating the Accuracy under Fire Conditions

The evacuation time (Section 3.3) and scalability tests (Section 3.4) have shown the consistency and stability of PREVENT. However, apart from the total evacuation times, the results were not able to reflect how the evacuees actually behaved during the evacuation simulation in Unity. It is important to know whether the evacuees strictly follow their paths and whether the path planning function in Unity is optimised to allow virtual human evacuees to take the shortest path between two checkpoints (see Figure 3.10). This section explores how the virtual human evacuees behave in a fire environment and whether such behaviours match what happened in the related FDS+Evac simulations. The floor plan used in Section 3.2 (see Figure 3.14) and its 3D building model from SketchUp (see Figure 3.4) were used in this case study, but with fires added to locations in three different scenarios in the test building, specifically AREA3 (see Figure 3.14), AREA1 and AREA4. To generate evacuation paths for each scenario with fires, the configuration in Table 3.2 was used again. Thirty FDS+Evac simulations were performed during the path generation procedure. The behavioural data was then reused in Unity to script the virtual human evacuee behaviours. In this study, fires in Unity were represented as a red cube object (see Figure 3.15a). To observe evacuee movements, the coloured areas from Figure 3.7 were removed and paths were identified by assigning each evacuee a unique colour using the customisable textures provided by Unity. The evacuation path of each virtual human evacuee was represented as a stream of dots, using the Unity instantiate function.

[Figure 3.14 diagram: the example floor plan (rooms RM1-RM5, areas AREA1-AREA4, exits EXIT_TOP, EXIT_LEFT and EXIT_BOT) with an explicit fire placed in AREA3.]

Figure 3.14: An example scenario with an explicit fire included in the path generating simulation.

The instantiate function can create cylinder objects at fixed intervals while the computer-controlled virtual evacuees are moving on the floor, resulting in coloured paths (e.g. the coloured dots in Figure 3.15a). The evacuation paths of the virtual human evacuees are shown in Figure 3.15a, where EXIT_BOT was blocked by fire. The screenshot shows that the virtual human evacuees were strictly following the expected paths and all moved along straight lines between two locations. In addition, compared to the optimal evacuation paths (see Figure 3.15b), evacuees that started from RM3, RM4 and AREA3 chose to evacuate through EXIT_LEFT to avoid the fire near EXIT_BOT. This implies that the evacuees behaved rationally in the FDS+Evac simulations when facing fires and demonstrates that such rational behaviours are accurately reflected in the virtual human evacuees in the target virtual environment (i.e. as demonstrated here in Unity). In addition to the fire at AREA3, the fire evacuation paths of the virtual humans in the other fire scenarios were also captured and are shown in Figure 3.16, including fires in AREA1 (see Figure 3.16a) and AREA4 (see Figure 3.16b). The screenshots further demonstrate that the evacuation paths in Unity worked, and also that the virtual human evacuees in Unity can behave rationally when facing fires, as they did in the source FDS+Evac simulations.

3.6 Summary of Chapter 3

This chapter demonstrated PREVENT, which aims to solve the research question of reusing domain knowledge (fire science knowledge in this chapter) to support realistic evacuation behaviours of the virtual humans in a virtual evacuation environment (Q1). PREVENT is a general pipeline approach for building domain knowledge (e.g. fire, flood and earthquake knowledge) supported virtual environments by reusing 3D modelling tools, evacuation simulators, and game engines (see Section 3.1). The approach is versatile and can also be used for building design, procedure development, and incident recreation. It also supports a variety of software platforms at each stage of the approach, which has been shown in a number of case studies throughout this thesis. However, to focus on the research in this thesis, PREVENT here reuses fire science knowledge in virtual fire evacuation environments (see Section 3.2). Using SketchUp, FDS+Evac, and Unity, the case studies presented in this chapter have demonstrated the concept of PREVENT in terms of consistency (see Section 3.3) and scalability (see Section 3.4), and have shown that PREVENT can accurately reflect the evacuation paths from the source simulator in the virtual environment (see Section 3.5).

Figure 3.15: Actual evacuation paths in Unity and the optimal evacuation paths on the floor plan. (a) Evacuation paths in Unity with EXIT_BOT (see Figure 3.15b) blocked by fire. The red circles were the start positions of each path. (b) The optimal paths to evacuate from the example building.

Figure 3.16: Evacuation paths in Unity with EXIT_TOP or EXIT_LEFT being blocked by fire. (a) Evacuation paths in Unity with EXIT_LEFT being blocked by fire. (b) Evacuation paths in Unity with EXIT_TOP being blocked by fire.

Chapter 4

Dynamic Path Switching in Virtual Environments

Chapter 3 has shown that virtual human evacuees can benefit from the fire science knowledge of an evacuation simulator (e.g. FDS+Evac) by using the PREVENT pipeline. However, there are drawbacks when these behaviours are used in the context of training-based virtual environments, as they cannot provide the interactions required by such VEs. As dynamic interactions are a critical component of many virtual systems, this chapter explores extending PREVENT to support interactive virtual experiences, specifically by enabling path switching for virtual humans without the loss of existing domain knowledge supported behaviours or the cost of a full multi-agent system (Q2). Taking a fire drill system for fire wardens as an example, this chapter presents a case study to demonstrate the effectiveness of the proposed path-switching framework. This chapter is an expanded version of the work presented in [268].

4.1 Introduction

Chapter 3 has shown that PREVENT can extract expert domain knowledge from evacuation simulators (e.g. fire science from FDS+Evac) and reuse the extracted knowledge to support realistic evacuation behaviours of the virtual human evacuees. Example case studies have shown that statically defined paths can be suitable for visualising the evacuation behaviours in virtual environments. However, these paths are not suitable for use in the context of virtual training environments, as they cannot provide the real-time interactions required by the target virtual environments.

During an evacuation, an evacuee may interact with other evacuees and environmental events, for example, becoming blocked by spreading smoke or receiving evacuation suggestions from others [178,191,206]. These unpredictable interactions and events may influence an evacuee's existing behaviour, particularly their evacuation paths [178]. However, the original PREVENT is based on pre-calculated paths and the agents are not able to change their pre-selected paths37. Thus the strength of PREVENT, of embedding domain knowledge into the behaviours of virtual humans, is weakened by a lack of flexibility to respond to new events in dynamic environments. This chapter describes improving the utility of PREVENT by enabling path switching on interactive events for computer-controlled virtual human evacuees. There are a variety of algorithms that can enable dynamic path searching and planning for non-player characters (e.g. virtual humans), for instance, A*, MiniMax and Monte Carlo Tree Search (MCTS) [275]. However, these pathfinding techniques are not suitable for integration into PREVENT as they completely abandon the statically generated paths while planning new paths. In this case, the system will lose the fire science knowledge extracted from the fire evacuation simulator. Modelling new pathfinding will either require expertise in fire science or reduce the accuracy of the reused human evacuation behaviours. In order to retain the expert domain knowledge, an alternative solution is enabled by extending PREVENT with dynamic path switching (i.e. real-time re-planning) for virtual human evacuees. Taking a fire drill system for fire wardens as an example, Section 4.2 proposes a path-switching framework to enable the essential interactions required by a fire drill system for fire wardens (e.g. sending evacuation commands with new event information) with the minimum loss of the embedded fire science knowledge from the reused scripts. A case study is then presented to demonstrate the effectiveness of the proposed path-switching framework (see Section 4.3). At the end of the chapter, the findings and limitations are discussed and summarised.

37 This is because FDS+Evac does not consider real-time interaction with actual human evacuees in its simulations due to the huge computation cost. Thus the dynamic interaction has to be added at the post-scripting level described in this chapter.

4.2 Path-Switching Framework

As described in Section 3.2, a path is defined by a sequence of checkpoints. The path switching here aims to enable virtual human evacuees to switch between existing paths, or to newly generated paths, by reusing these paths at the checkpoint level. The low-level navigation tasks, such as moving between given checkpoints, will be handled by the selected game engine. Before looking at the technical implementation of the path-switching framework, it is necessary to understand what interactions are needed and how these interactions might be initiated. This section takes a virtual fire evacuation drill system for fire wardens as an example. This example is useful as a trained leader can significantly reduce the total evacuation time and save more lives in an emergency. Also, such a system is an ideal test-bed for validating the evacuation behaviours of virtual humans created via PREVENT, as it requires the virtual human evacuees not only to select evacuation paths, but also to consider interactions between the virtual human evacuees and the human evacuees. For example, a fire warden needs to send evacuation commands, in real time, to all evacuees that are controlled by either the computer or humans. Section 4.2.1 reviews the role of fire wardens, including their main duties and the interactions they can initiate. Based on these interactions, an approach that extends PREVENT to support path switching in a virtual environment is discussed. This path-switching framework considers four situations when dynamic events happen; namely, stay on the current path, change to a new local path, borrow a global path, or generate a new path. To exemplify the path-switching framework, this chapter uses the floor plan of the Engineering S building (see Figure 4.1) and a fragment of the path library generated by PREVENT (see Appendix A.1) in the following path switch examples. For all examples in this section, it is assumed that there is a fire warden standing near the checkpoint P11 giving evacuation commands to surrounding evacuees (marked with a green star in Figure 4.1).

Figure 4.1: The floor plan of the Engineering S building. The green star near P11 indicates the location of a fire warden.

4.2.1 Fire Warden as Interaction Initiator

A fire warden is an authority role in fire emergencies who will interact with evacuees to help them evacuate the building by leading the evacuation flow, advising appropriate evacuation paths or giving other escape instructions [53,167,176]. Fire wardens are also referred to as safety wardens [128,206]. By giving clear evacuation instructions, the fire warden can significantly reduce the overall evacuation time [35,58,134,172]. While designing the path-switching framework, the fire warden was regarded as a special type of computer-controlled virtual human that can send evacuation commands to other evacuees to update their evacuation paths. An evacuation command may consist of one or more keywords combined with entities (e.g. checkpoints and exits, see Section 3.2). For the virtual human evacuees, the list of recognisable keywords is given in Table 4.1. In the examples described here, the commands will be sent as messages in Unity and received by approaching virtual human evacuees.

Table 4.1: A list of keywords used in evacuation commands.

Keyword   Description
FOLLOW    Follow the fire warden's movement.
GOTO      Move towards a target given by the fire warden.
USE       A certain path element must be included in the path.
AVOID     A certain path element must be excluded from the path.

With these keywords and entities, multiple combinations can result in different evacuation commands. For example, "USE EXIT 1" contains the keyword "USE" and an entity named EXIT 1, while "USE EXIT 1, AVOID P11" contains the additional keyword "AVOID" and the entity P11. It should be noted that in an ideal software product, natural language would be used for communication (e.g. vocal communication is used in the example in Chapter 6). These types of communication will be parsed into machine-understandable commands, i.e. commands made up of keywords and entities. For example, "Side doors are blocked, evacuate through the front main entrance" can be parsed as "USE EXIT 1, AVOID EXIT 3, AVOID P7" in the ES building. In this path-switching framework, these commands will be used in four situations where a virtual human evacuee receives an evacuation command from a fire warden and needs to determine whether a new path is needed.
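The internal representation of these commands is not specified in the thesis beyond the keyword and entity terminology. The fragment below is a minimal C# sketch of one possible representation, with hypothetical type names, that splits a command string such as "USE EXIT 1, AVOID P11" into keyword-entity clauses before it is sent as a Unity message.

using System;
using System.Collections.Generic;

// The keywords recognised by the virtual human evacuees (see Table 4.1).
public enum CommandKeyword { FOLLOW, GOTO, USE, AVOID }

// One keyword-entity pair, e.g. (USE, "EXIT 1") or (AVOID, "P11").
public struct CommandClause
{
    public CommandKeyword Keyword;
    public string Entity;
}

public static class CommandParser
{
    // Parses "USE EXIT 1, AVOID P11" into its clauses.
    public static List<CommandClause> Parse(string command)
    {
        var clauses = new List<CommandClause>();
        foreach (var raw in command.Split(','))
        {
            var clause = raw.Trim();
            if (clause.Length == 0) continue;

            int space = clause.IndexOf(' ');
            string keyword = space < 0 ? clause : clause.Substring(0, space);
            string entity = space < 0 ? "" : clause.Substring(space + 1).Trim();

            clauses.Add(new CommandClause
            {
                Keyword = (CommandKeyword)Enum.Parse(typeof(CommandKeyword), keyword),
                Entity = entity
            });
        }
        return clauses;
    }
}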

4.2.2 Situation 1: Check Current Path

When a virtual human evacuee receives a path changing command from an external source, such as a fire warden, the evacuee will always start by checking its current path. If its existing path meets all the requirements of the evacuation instruction, the evacuee will continue following its current path to the exit. For example, suppose the current evacuation path of a virtual human evacuee initiated in RM 209 is RM 209 2, and a fire warden standing at P11 requests the evacuee to egress through EXIT 1 (command: USE EXIT 1). The virtual human evacuee will then check its current path to see if it meets the fire warden's command. In this example, the evacuee is egressing towards EXIT 1, which meets the fire warden's command, and thus it will continue following its current path without making any changes.

RM 209 2 = RM 209 → P11 → P10 → EXIT 1
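Situation 1 therefore reduces to checking the remaining part of the current path against the USE and AVOID clauses of the received command. A minimal sketch of such a check, reusing the illustrative CommandClause type from Section 4.2.1 (the method name is hypothetical), is:

using System.Collections.Generic;

public static class PathChecks
{
    // Situation 1: does the remaining part of the current path satisfy the command?
    // remainingPath: the checkpoints not yet passed, ending with an exit.
    public static bool SatisfiesCommand(IList<string> remainingPath,
                                        IList<CommandClause> command)
    {
        foreach (var clause in command)
        {
            bool contains = remainingPath.Contains(clause.Entity);
            if (clause.Keyword == CommandKeyword.USE && !contains) return false;
            if (clause.Keyword == CommandKeyword.AVOID && contains) return false;
        }
        return true; // no clause violated: keep the current path
    }
}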

4.2.3 Situation 2: Search Its Own Path Library

If a virtual human evacuee fails to find a valid path in situation 1, it will move on to situation 2, where it has to search its own position-based path library (PPL, see Chapter 3) to find an alternative path.

If an alternative path is found, the evacuee will switch to the new PPL path and continue from that point to the final target. The aim here is efficiency. Localised path searching will be faster, and each evacuee's path has already been embedded with fire science knowledge of the scenario when generated by the source evacuation simulator. Switching between the validated paths will retain the embedded fire science knowledge. For example, a virtual human evacuee whose current path is RM 209 2 and which is asked to leave through EXIT 5 (command: "USE EXIT 5") by a fire warden standing at checkpoint P11 (see Figure 4.1) will fail in situation 1, as its path does not end with EXIT 5. However, in situation 2, the evacuee can switch to path RM 209 3, the 1.00% probability path, by searching its own PPL (path library RM 209).

RM 209 3 = RM 209 → P1 → P12 → P13 → P1 → P2 → EXIT 5

4.2.4 Situation 3: Search the Global Path Library

When a virtual human evacuee fails to find a valid path in situation 2, it will continue to situation 3, where the search range will be extended to the global path library (GPL), i.e. all the paths generated for the scenario. For example, a virtual human evacuee that has RM 209 2 (see Appendix A.1) as its current path is advised by a fire warden (at P11) to exit through EXIT 4 (command: "USE EXIT 4"). This evacuee will fail to egress in situation 2 as no appropriate path could be found in its own path library RM 209 (see Table A.1). However, after searching the global path library, it will be able to escape the building through EXIT 4 by switching to path RM 210 2, which belongs to path library RM 210 and has a checkpoint shared with the current location, i.e. P11. One situation which should be noted is that the virtual human evacuee may find more than one suitable path in the GPL. In this case, it will always pick the path with the highest selection probability.

RM 209 2 = RM 209 → P11 → P10 → EXIT 1

4.2.5 Situation 4: Generate New Path

With the three situations discussed above, the virtual human evacuees are expected to evacuate the building with pre-generated paths. However, sometimes they may fail to find valid paths if the GPL is not rich enough. This could be solved by two methods, which are: i) enriching the GPL; and ii) allowing the game engine to generate a new path by reusing existing paths. Enriching the GPL would allow the virtual humans to switch to an appropriate path in situations 1 to 3. However, enriching the GPL is out of the scope of the work presented here; the focus of the work here is to generate new paths while minimising the loss of embedded expert domain knowledge. To enable the path search, a path-searching algorithm based on the Breadth-First-Search (BFS) graph search algorithm was designed. The floor plan of the Engineering S (ES) building is used here to illustrate the algorithm (see Figure 4.1). In order to use a graph-searching algorithm, the floor plan should initially be converted into a topological graph. Considering rooms, checkpoints and exits as vertices, the floor plan of the ES building is represented in Figure 4.2. In this way, the predefined evacuation paths become paths on the graph G1.

Figure 4.2: The graph of the Engineering S building (G1(V,E)). Vertices represent rooms, checkpoints and exits. Edges represent available paths between two locations (i.e. corridors).

The position-based path libraries (PPLs) and the global path library (GPL) can be represented as directed graphs (see Figure 4.3). For example, Figure 4.3a shows the graph representation of the position-based path library RM210, while Figure 4.3b gives an example fragment of the GPL of the Engineering S building.

(a) PPL of RM210 (Gp210(V,E)). (b) A fragment of the GPL of the ES building (contains PPLs of RM209, RM210, and RM238) (Gg(V,E)).

Figure 4.3: Graphs of PPL and GPL of the ES building (All the paths marked on the graphs can be found in Table A.1).

Given the graph representations of the path libraries, any fire warden command will alter the structure of the graph. For example, if a virtual human evacuee receives an instruction of "AVOID P7" from the fire warden (located near P11 by default, see Figure 4.1), it will delete P7 from its own instance of the GPL graph Gg (see Figure 4.4). Similarly, "USE P7" will mark vertex P7 as a core vertex that must be included while searching the graph for a new available path. Continuing the case that situations 1 to 3 all fail to give an available path, the priority of situation 4 is not to generate a completely new path directly in the game engine, as this would result in losing the fire science knowledge that has been embedded in the pre-calculated paths. Instead, the priority of situation 4 is to search for the best available checkpoint (vertex) that links to, or is a part of, another path that meets the fire warden's command requirements.

Figure 4.4: The GPL after deleting node P7 (Gg1(V,E)).

To explain the path searching logic, consider an example in which a virtual human evacuee (from RM210) receives an evacuation command from the fire warden (located near P11 by default), asking it to "USE EXIT 3". Assuming the virtual human evacuee's original evacuation path is RM 210 1 (RM 210 → P11 → P10 → EXIT 1), the evacuee will first check whether its current path (RM 210 1) meets the warden's command (situation 1). Situation 1 fails as EXIT 3 is not the target exit of path RM 210 1.

Then it continues to situation 2 to search its PPL (see Gp210 in Figure 4.3a) and fails again, as none of the paths in PPL RM210 end with EXIT 3. Next is situation 3, where a check is done on the GPL to find whether there is any path containing P11 and ending with EXIT 3. After searching the GPL (see Gg in Figure 4.3b), the virtual human evacuee still fails to find a pre-defined available path. Finally, it comes to situation 4, where the virtual evacuee starts a Breadth-First Search from node P11. After checking P12 and P10, it finds that node P7 links to path RM 238 2 (RM 238 → P7 → P8 → EXIT 3), which ends with EXIT 3. In this case, the virtual human can successfully switch from the validated path RM 210 1 to another validated path, RM 238 2, and finally evacuate through EXIT 3. Thus the dynamic path switching can reuse both local (PPL) and full/partial global (GPL) paths.

The general procedure of situation 4 is summarised as Algorithm 2, where the DecisionPoint refers to the location of the virtual evacuee when it encounters a fire warden. If a virtual human evacuee is not at any checkpoint, then the nearest checkpoint will be used as the DecisionPoint38.

Algorithm 2 Situation 4: path search algorithm
  PathList = new List<Path>();
  CheckPointList = BFS(DecisionPoint);
  for CheckPoint in CheckPointList do
    if available paths are found at CheckPoint then
      add these paths to the PathList;
    end if
  end for
  // The best path has the highest selection frequency
  return the best available path from the PathList
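A C# sketch of Algorithm 2 is given below, reusing the illustrative EvacPath type sketched in Chapter 3 and assuming a simple adjacency-list representation of the building graph together with a pre-computed index from checkpoints to the pre-generated paths (already filtered against the warden's command) that pass through them; the type and method names are illustrative rather than those of the actual plugin.

using System.Collections.Generic;
using System.Linq;

public static class Situation4Search
{
    // graph: checkpoint -> neighbouring checkpoints (the building graph, e.g. G1).
    // pathsAt: checkpoint -> valid pre-generated paths passing through that checkpoint.
    public static EvacPath FindPath(
        string decisionPoint,
        Dictionary<string, List<string>> graph,
        Dictionary<string, List<EvacPath>> pathsAt)
    {
        var candidates = new List<EvacPath>();
        var visited = new HashSet<string> { decisionPoint };
        var queue = new Queue<string>();
        queue.Enqueue(decisionPoint);

        // Breadth-first search outwards from the decision point.
        while (queue.Count > 0)
        {
            string checkpoint = queue.Dequeue();
            if (pathsAt.TryGetValue(checkpoint, out var found))
                candidates.AddRange(found);

            if (!graph.TryGetValue(checkpoint, out var neighbours))
                continue;
            foreach (var next in neighbours)
                if (visited.Add(next))
                    queue.Enqueue(next);
        }

        // The best path has the highest selection frequency (Algorithm 2).
        return candidates.OrderByDescending(p => p.Probability).FirstOrDefault();
    }
}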

4.2.6 Summary of Path Switching

To support dynamic path switching for virtual humans in virtual environments created via PREVENT, a path-switching framework that covers four different situations which require path changing was introduced. The general logic of the framework is summarised in Algorithm 3. When a virtual human evacuee is close to a fire warden, it will be able to receive commands from the fire warden. Based on its situation, different path checking or search functions will be called to support the path switching of the virtual human evacuees.

38 The locations of checkpoints are configured in the game engine as Vector3 (x,y,z).

Algorithm 3 Path switch algorithm
  if Distance to FireWarden < 2 metres then
    Command = ProcessCommand();
    if Situation1.CheckCurrentPath() == True then
      return Null
    else if Situation2.SearchOwnPathLibrary() == True then
      return the best available path found in situation 2;
    else if Situation3.SearchGlobalPathLibrary() == True then
      return the best available path found in situation 3;
    else if Situation4.GenerateNewPath() == True then
      // The best path has the highest selection frequency
      return the best (shortest) available path generated in situation 4;
    end if
  end if
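Tying the four situations together, the dispatch logic of Algorithm 3 can be expressed as a short C# routine. The sketch below reuses the illustrative types and helpers from the earlier sketches (EvacPath, CommandClause, PathChecks, Situation4Search); the actual plugin code is not reproduced in the thesis, so all names here are hypothetical.

using System.Collections.Generic;
using System.Linq;

public static class PathSwitcher
{
    // Returns null when the current path already satisfies the command (situation 1),
    // otherwise returns the replacement path chosen by situations 2, 3 or 4.
    public static EvacPath SwitchPath(
        IList<CommandClause> command,
        IList<string> remainingPath,
        PositionPathLibrary ownLibrary,
        GlobalPathLibrary globalLibrary,
        string decisionPoint,
        Dictionary<string, List<string>> graph,
        Dictionary<string, List<EvacPath>> pathsAt)
    {
        // Situation 1: keep the current path if it already satisfies the command.
        if (PathChecks.SatisfiesCommand(remainingPath, command))
            return null;

        // Situation 2: search the evacuee's own position-based path library (PPL).
        var localMatch = ownLibrary.Paths
            .Where(p => PathChecks.SatisfiesCommand(p.Checkpoints, command))
            .OrderByDescending(p => p.Probability)
            .FirstOrDefault();
        if (localMatch != null) return localMatch;

        // Situation 3: search the global path library (GPL) for a path that passes
        // through the decision point and satisfies the command.
        var globalMatch = globalLibrary.Libraries.Values
            .SelectMany(ppl => ppl.Paths)
            .Where(p => p.Checkpoints.Contains(decisionPoint)
                        && PathChecks.SatisfiesCommand(p.Checkpoints, command))
            .OrderByDescending(p => p.Probability)
            .FirstOrDefault();
        if (globalMatch != null) return globalMatch;

        // Situation 4: breadth-first search for a checkpoint linking to a valid path.
        return Situation4Search.FindPath(decisionPoint, graph, pathsAt);
    }
}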

4.3 Case Study: Validating the Path-Switching Framework

This case study aims to demonstrate the effectiveness of the path-switching framework introduced in Section 4.2 by comparing the evacuation rates between the four situations and two benchmark rates. The first benchmark is the evacuation rate when no fire warden is involved. This rate is referred to as the "Static Evacuation Rate" and is always 100%, as all movements are statically pre-defined to successfully evacuate the building, as shown in the case studies in Chapter 3. The other benchmark is referred to as "Situation 0", which means the fire warden is placed in the test scene, but no path switch functions are enabled. Thus all virtual human evacuees that receive commands from the fire warden will go into an idle status. The idle status is not indicative of real behaviour but a measure of the virtual human evacuees that are deadlocked in the scenario. This gives a benchmark of the worst evacuation rates, i.e. it measures the number of evacuees that encounter the fire warden.

While configuring the test scenarios in Unity, all algorithms were implemented in C# and packed as a plugin. This enables the path-switching framework to be applied, in real time, in any other virtual evacuation environment created via the PREVENT pipeline. Four tests are presented to collect data on evacuation rates in the different situations. In these cases, 105 virtual human evacuees (25% of the capacity of the ES building, see Table 3.6) were initiated in the scenario to test whether the virtual human evacuees were able to make correct responses to the fire warden's command while evacuating the building. As noted in Section 4.2, the fire warden had a fixed position at P11 in all the tests (see Figure 4.1). The number of agents in each room and the location of the fire warden are marked in Figure 4.5.

Table 4.2: Evacuation success rates (average value) under different path switching methods (n = 105)

Command      Static Evacuation Rate   Situation 0   Situation 1   Situation 1-2   Situation 1-3   Situation 1-4
USE EXIT 1   100%                     86.38%        98.10%        100.00%         100.00%         100.00%
USE EXIT 3   100%                     86.19%        86.19%        86.19%          86.19%          100.00%
USE EXIT 4   100%                     86.19%        86.19%        92.95%          100.00%         100.00%
USE EXIT 5   100%                     87.14%        87.14%        94.29%          100.00%         100.00%

Table 4.2 shows the commands and benchmarks used in the following tests. The four commands listed in Table 4.2 were sent by the fire warden in each test. As described before, the "Static Evacuation Rate" for all commands is 100%, but the evacuation rates for "Situation 0" vary. For each situation and command, 10 simulations were conducted to minimise the impact of randomness. The overall evacuation rates for each command were then compared with the benchmark evacuation rates to show the functionality of the path switching approach. Each situation was cumulatively enabled in the following tests to demonstrate its impact on evacuation.

4.3.1 Test I: Situation 1

As described in Algorithm 3, when the distance between a virtual human evacuee and a fire warden is within 2 metres, the virtual human evacuee will "hear" the fire warden's command, which will trigger the function ProcessCommand() to analyse the received evacuation command. To show the functionality and limitations of situation 1, the path switch algorithms for situations 2, 3 and 4 were disabled. By turning on situation 1, virtual human evacuees with appropriate paths were able to continue their evacuation after receiving a command, resulting in an increased number of virtual human evacuees that successfully evacuated the building (see Table 4.2).

Figure 4.5: Floor plan of the ES building. Numbers in brackets indicate the number of virtual human evacuees initiated in each room. The green star near P11 indicates the location of the fire warden.

Figure 4.6: Virtual evacuees that failed in situation 1 (marked with red colour) after receiving a command from the fire warden (marked with green colour). Arrows indicate the paths selected by evacuees (see Appendix A.1).

Some virtual human evacuees failed to evacuate because situation 1 could not find them appropriate paths. For example, in Figure 4.6, with the fire warden command "USE EXIT 5", approximately 14 virtual human evacuees (105 x 12.86%) that were not heading to EXIT 5 failed to evacuate and became idle (marked with red colour) under situation 1 after encountering the fire warden (standing at a corner, marked with green colour).

4.3.2 Test II: Situation 2

To deal with the limitation found in situation 1, path-switching situation 2 was then activated. Situation 2 enables virtual human evacuees to search their own position-based path libraries (PPLs) to find available paths to use, and thus some of the virtual evacuees that failed in situation 1 were able to find their way out successfully.

Figure 4.7: An example of situation 2. Virtual human evacuees in RM 209 managed to find a way out (marked with blue colour) by switching to RM 209 3, while others failed (marked with red colour). Arrows indicate the paths selected by evacuees (see Appendix A.1).

For example, in Figure 4.7, where the received command was "USE EXIT 5", the virtual human evacuees initiated in RM 209 with path RM 209 2 as their initial evacuation path would fail to egress in situation 1 (paths are listed in Table A.1). Virtual human evacuees that failed to egress are marked with red colour in Figure 4.7. However, after activating the situation 2 algorithm, they are able to find the appropriate path RM 209 3 in their own path library RM 209 (see Table A.1). After switching to path RM 209 3, these virtual human evacuees are able to evacuate successfully (marked with blue colour in Figure 4.7). This also increased the evacuation rate for "USE EXIT 5" by approximately 7% (see Table 4.2). The average evacuation rates for all commands after enabling situation 2 are presented in Table 4.2. The overall evacuation rates were generally improved as expected. However, some virtual human evacuees still failed to evacuate; for example, evacuees initiated in RM 210 could not find any path that ends with EXIT 5 in their own path library RM 210 (see Appendix A.1), and thus failed to egress when they encountered the fire warden. Virtual human evacuees that failed to egress in situation 2 with the command "USE EXIT 5" are marked with red colour in Figure 4.7.

4.3.3 Test III: Situation 3

As stated in Section 4.2.4, the virtual human evacuees are able to search the global path library (GPL) for an alternative path. This provides a solution for some of the virtual human evacuees deadlocked in situation 2. Again, taking the command "USE EXIT 5" as an example, the virtual human evacuees that were initiated in RM 210 (see Figure 4.5) and failed in Test II can now successfully find and switch to path RM 209 3, which belongs to library RM 209 (see Appendix A.1). As a result, they were able to egress through EXIT 5 (see Figure 4.8). Compared to the results of situation 2, the evacuation rate for command "USE EXIT 5" improved from 94.29% (situation 2) to 100% (situation 3). The overall evacuation rates for all four evacuation commands after enabling situation 3 are presented in Table 4.2. The test results of situation 3 showed that the virtual human evacuees were able to switch to an existing path outside their own path libraries, evidenced by the increased evacuation rates. However, some virtual human evacuees still fail to evacuate the building as there is no path in the global path library (GPL) that meets the requirements. For example, for command "USE EXIT 3", there was no existing path that could meet the command (i.e. paths which contain P11 and end with EXIT 3), resulting in all virtual human evacuees encountering the fire warden failing to evacuate (see Table 4.2). For this situation, the Unity engine could be used to generate a new path, i.e. situation 4 (see Section 4.2.5).

Figure 4.8: An example of situation 3. Virtual human evacuees managed to find a way out (marked with blue colour) after searching the GPL. Arrows indicate the paths selected by evacuees (see Appendix A.1).

4.3.4 Test IV: Situation 4

As described in Section 4.2.5, situation 4 allows the virtual humans to create new paths under certain conditions. This ensures that virtual human evacuees will not become deadlocked: they either switch to another validated path by generating a short connecting path (the ideal case), or generate a whole new path that has no overlap with any existing path (the worst case). For the command "USE EXIT 3" (see Section 4.3.3), the virtual evacuees that remained deadlocked in Tests I to III can now switch to path RM 238 2 (RM 238 → P7 → P8 → EXIT 3) by generating a short connecting path (P11 → P7). The final path of these virtual human evacuees became P11 → P7 → P8 → EXIT 3.

4.3.5 Discussion

Table 4.2 shows a general increase in evacuation rate, indicating the increased flexibility of path switching when interacting with other avatars (i.e. the virtual fire warden in these tests). In the presented tests, situations 1, 2 and 3 strictly search for and/or pick one existing path without any modification, in order to retain the full accuracy of the embedded domain knowledge. However, path-switching performance relies heavily on the richness of the path library, i.e. the number of unique paths generated through the simulation process in a fire evacuation simulator. A drawback is therefore that a virtual human evacuee may abandon its original path without receiving a new one after checking situations 1 to 3. For example, in the case of “USE EXIT 3”, about 14% of the virtual human evacuees failed to find a new path (see Table 4.2). Exploring metrics for generating an appropriate path library from the source simulator may contribute to the robustness of these interactions. In addition to the three situations, situation 4 is only triggered if no path is available from the global path library (GPL), and it hands path planning over to the game engine. This situation has greatly improved the flexibility of PREVENT. However, the graph search algorithm was not fully optimised, as it was used only to demonstrate the concept of situation 4. There are a few aspects that may need to be optimised; for example, refining the algorithm for selecting the best path. Currently, this best-path selection algorithm first compares the distances between the connecting nodes and the location of the virtual evacuee. If multiple paths are found with equal distances, the algorithm selects the path with the highest selection frequency. However, this may not be optimal in all cases. Assume, for example, that two available paths are found: path A (distance 5 m, frequency 5%) and path B (distance 6 m, frequency 95%). The current algorithm will pick path A as it has a shorter connecting path; however, path B is apparently more favoured (a 95% selection frequency is 19 times that of path A). A better algorithm may return more optimal paths; however, as a trade-off, it requires more computational resources and takes longer to find a path. In contrast, a less optimal but faster algorithm may be more suitable for the scenarios described here. Although this work is beyond the scope of this research, finding a more balanced method to rank the available paths would be interesting future work. From a simple path check to a full global path search, this path-switching framework has significantly extended PREVENT with path switching triggered by dynamic interactive events for virtual humans, with minimal loss of embedded fire science, by reusing static paths. Besides the fire wardens described in the examples, this path-switching framework would also work with other event initiators (e.g. human evacuees and firefighters).
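The distance-first, frequency-as-tie-breaker rule described above can be written as a short ranking routine. The sketch below is illustrative only; the CandidatePath type and its field names are assumptions, not part of the thesis implementation.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative candidate record: a path from the GPL that satisfies the current command.
public class CandidatePath
{
    public string Name;              // e.g. "RM 238 2"
    public float ConnectingDistance; // metres from the evacuee to the connecting node
    public float SelectionFrequency; // how often this path was chosen in the source simulation
}

public static class PathRanking
{
    // Shortest connecting distance wins; selection frequency breaks ties.
    public static CandidatePath SelectBest(IEnumerable<CandidatePath> candidates)
    {
        return candidates
            .OrderBy(p => p.ConnectingDistance)
            .ThenByDescending(p => p.SelectionFrequency)
            .FirstOrDefault();
    }
}
```

A weighted score that normalises distance and frequency would prefer path B in the 5 m/5% versus 6 m/95% example above, at the cost of extra tuning and computation, reflecting the trade-off discussed in this section.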

4.4 Summary of Chapter 4

This chapter presented a path-switching framework as a solution for enabling dynamic interaction in virtual environments (Q2, see Section 1.2). The path-switching framework categorised interactions into four situations (situations 1 to 4 in Section 4.2), which can significantly improve the flexibility of responses to dynamic events. Using the PREVENT-prototyped virtual fire drill environment for fire wardens as an example, the case studies demonstrated that the path-switching framework enabled the virtual human evacuees to dynamically change their predefined evacuation paths when interacting with the virtual fire warden (see Section 4.3). Combining the work presented in Chapters 3 and 4, this thesis has introduced an extended PREVENT pipeline that can create interactive virtual environments featuring accurate expert domain knowledge. The next step is to review whether the behaviours (including both the predefined and the dynamic path-switching behaviours) of the virtual human evacuees are realistic from the perspective of human participants in real fire drill sessions. Chapter 5 proposes a general test to review the behavioural realism of computer-controlled virtual humans in virtual environments, which is then used in Chapter 6 to validate the realism of these virtual humans in an interactive fire evacuation drill system for fire wardens.

Chapter 5

Evaluating Virtual Human Behaviour - a New Turing Test

The concept of the conventional Turing Test (TT) was introduced by Alan Turing in his paper, “Computing Machinery and Intelligence”, to determine whether the intelligent behaviour of a machine is equivalent to, or indistinguishable from, that of a human [231]. This proposal has had a significant influence on computer science, cognitive science and philosophy for over fifty years [197]. The emergence of virtual reality and the use of computer-controlled agents, such as virtual humans, present new opportunities, as most efforts to pass either standard or modified versions of the Turing Test have been made in text-only environments. This chapter reviews the existing Turing Test and its extended uses, and proposes a new design for evaluating the behavioural realism of virtual humans in 3D virtual environments.

5.1 The Conventional Turing Test

Introduction

The conventional Turing Test proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two

partners in the conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so that the results would not depend on the machine’s ability to render words as speech [144, 231]. The first entity claimed to be able to pass the Turing Test was known as ELIZA, a program created by Weizenbaum in 1966 [260]. The program behaved as a Rogerian psychotherapist39 and used keywords to generate responding sentences. Later, Colby created a program called PARRY in 1972, which was described as “ELIZA with attitude” [14]. This program simulated the behaviour of a paranoid schizophrenic and was tested using a variation of the conventional Turing Test. In this test, a group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the “patients” were human and which were computer programs. The results showed a correct identification rate of 48%, consistent with random guessing. After the emergence of ELIZA and PARRY, multiple variations of conversational robots have been developed and used widely in dialogue systems40. For example, automated online assistants (e.g. IKEA’s Anna) are programs that use AI to provide customer service or other assistance on a website. These conversational robots are now widely referred to as chatterbots: computer programs that can conduct conversations via auditory or textual methods [144].

39Rogerian psychotherapy, also known as person-centred therapy (PCT), is a form of psychotherapy that aims to provide clients with an opportunity to realize how their attitudes and behaviour are being affected [24]. 40https://www.chatbots.org/ [Last Access: 30/11/2016]

Variations to the Turing Test

In addition to the conventional (standard) Turing Test, researchers have proposed and attempted numerous versions of the Turing Test. One version is the Total Turing Test (TTT) [79, 165], which requires the machine to be able to respond to all input channels, and not merely to text-formatted linguistic inputs. Harnad [79, 165] believed that a fully intelligent robot should have both linguistic and motor skills. However, as pointed out by Hauser [82], Harnad in his own paper stated that linguistic capability (i.e. rendering text to speech) tends to be evidence of motor skills; thus, the TTT is redundant. Schweizer [200] claimed that machines should have linguistic abilities, motor skills, and the ability, as a species, to create. He proposed the TTTT (“Truly Total Turing Test”), which requires an entire species of robots that evolve and create, and argued that intelligence is attributed to a species, not to an individual. However, as argued by LaCurts [121], it would be difficult to generate a species of robots that could pass the TTTT without starting with a single robot that could pass the TT. The Reverse Turing Test (RTT) was proposed by Baird et al. [8] and differs from Turing’s proposal in at least four ways: the judge is a machine; there is only one user; there is only one challenge per test; and the design goal of the RTT is to distinguish, rather than to fail to distinguish, between a human being and a computer. The RTT is widely applied on the internet today to verify whether an internet request was sent by an actual human user or a robot, for example in the CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) project [28, 193, 253, 254].

Summary

The Turing Test, with its tractability and simplicity, is a powerful test that allows the judge (interrogator) to use a wide range of tasks to test whether a machine behaves like a human being. As Turing argued, the question for such a test was “Can machines do what we can do?” Feigenbaum [56] and Gray [74] also agreed that the Turing Test gives a clear and tangible vision, is reasonably objective, and can make concrete ties to human behaviour by using the unarticulated criteria of a human judge.

5.2 The Turing Test in Virtual Environments

This section considers the relevance of the Turing Test to virtual humans in virtual environments. The focus is on the reuse of the Turing Test and its variations for the evaluation of the behavioural realism of virtual humans in virtual environments41.

Challenges

Turing’s proposal restricted the communication and interaction between human and bot to the text channel [79, 160]. Compared to the conventional Turing Test, designing a modified Turing Test for the virtual world faces many challenges. One of the critical challenges is that there are multiple communication channels rather than just the linguistic one: speech and text communication, as well as the movement, gestures and appearance of virtual humans, may all affect the test result [22, 160]. The Total Turing Test (see Section 5.1), to some degree, suggests an approach to evaluating the humanness of avatars through multiple channels.

Related Work

Several attempts have been made at designing, or passing, the Turing Test in virtual environments. Gilbert and Forney conducted the first natural language Turing Test (modified) in Second Life™ and reported that passing the Turing Test was the result of a complex process of human-computer interaction, rather than the sole product of advances in the AI engine [72]. They also suggested that a branch of the Turing Test that isolates the contribution of the AI engine is necessary. Hingston proposed a Turing Test for bots and applied it in the BotPrize competitions [88]. In this variant, each judge was matched against a human player and a bot, and played several rounds of 10-minute deathmatch42 in the computer game Unreal

41The realism of a virtual human consists of both behavioural and graphical realism. Graphical realism, such as the uncanny valley [7, 154, 201], is outside the scope of the work presented here. 42Deathmatch is a game type found in first-person shooter games, such as the Unreal Tournament series. The objective in Deathmatch is to reach the score limit by getting “frags”. A player earns these frags by killing an enemy player; one frag is awarded per kill. If a player kills him/herself, the player loses a frag. The first person to reach the score limit wins.

Tournament 2004 (UT2004) [47]. At the end of each round, judges were required to evaluate the humanness of the bots on a five-level Likert scale. It should be noted that the judges were either expert-level gamers, AI researchers or people with a strong technology background. One shortcoming was that the sample size of judges was limited (N = 5). The test was applied in the BotPrize43 competition in 2008 and 2009, where the results indicated that none of the competing bots passed the test. One issue with this test, as addressed by Hingston, was that it was difficult to organise, and laborious to collect and analyse the results [89]. Hingston then proposed another variant to make the judging process “part of the game” [89]. This variant removed the role of the judge. Instead, the game players judged the bots with the help of a modified weapon called the Link Gun in UT2004. Weapons in UT2004 have two firing “modes”: a primary and an alternate mode. With the modified weapon, the primary fire mode is intended to be used against bots: hitting a bot with the primary fire mode instantly kills the bot and rewards the player with 10 points (compared with the normal 1 point per frag). On the other hand, shooting a human opponent with the primary fire mode results in instant death for the shooter and the loss of 10 points. Therefore, the players needed to be very sure their opponent was a bot before using the primary fire mode. The alternate fire mode was the mirror image: it should only be used to shoot human opponents. This test penalised players for random guessing, and the weapon could only be used on a particular opponent once; subsequent shots simply had no effect. Though the use of the modified Link Gun significantly improved the efficiency of collecting judgement data, this test seemed only to be suitable for shooter games, as the main evaluation cues (e.g. aiming skills and tactical knowledge) are not applicable in other game formats. Also, the participants (human game players) were experienced game players, which limits the generality of the results. This version of the test was applied to BotPrize from 2010, where two bots, namely UT^2 and MirrorBot, passed the test by achieving a 52% humanness rating in BotPrize 201244. Laird and Duchi [123] pointed out that attempting to evaluate behaviour while participating in a simulation/game is not very discriminating. They argued that if a player was performing a task, then he or she was unable to devote full attention to the evaluation. In addition, the player may be limited to seeing only the small part of a bot’s behaviour that was available to the player’s sensors, and would not be able to make a judgement if a bot was out of view. They proposed a variation of the Turing Test which recorded gameplay of the computer game Quake45 as videos, from the views of the avatars (both bots and human players). The videos were then sent to judges for evaluation. Their results indicated that all five human players received higher humanness ratings than the bots in Quake. An issue with this test is the field of view of the recorded videos. Recording video in this test scenario may not impact the results, as the virtual battlefields are usually small areas where all bots would be observed given enough time (e.g. 3 minutes in their example). However, this would be problematic when used in a large scenario with limited time. For example, in a 20-level high-rise building, one bot may never come across other bots in a reasonable time.

43http://botprize.org [Last Access: 20/03/2017]

Summary

Previous work has shown that the Turing Test is a powerful tool for reviewing the humanness of computer-controlled virtual humans. The test proposed by Laird and Duchi, which separates judging from playing [123], is an inspiring format when dealing with multiple subjects (e.g. virtual human shooters). Limitations of the previous work were noted; for example, the tests were designed for a particular type of game, and organising gameplay events and collecting results becomes difficult with increasing variations of bots and participants [89]. Another shortcoming is that these tests only evaluate an individual bot in one attempt, while in recent virtual environment-based applications crowds are being used more frequently, e.g. a massive evacuation flow in a virtual fire emergency drill environment. A new version of the Turing Test with high generality (i.e. one that supports general game formats and is compatible with crowds of avatars) is needed, which should provide a guideline for designing evaluation studies on the realism of virtual human behaviours in virtual environments.

44http://botprize.org/result.html [Last Access: 20/03/2017] 45Quake is a first-person shooter video game, developed by id Software and published by GT Interactive in 1996. More details can be found at http://www.idsoftware.com/

5.3 Design of VHBTT - A Virtual Human Behaviour Turing Test

This section proposes VHBTT (a Virtual Human Behaviour Turing Test), a general procedure for evaluating the realism of the behaviours of virtual humans in virtual environment applications. The VHBTT consists of two phases that separate the judging process from the test session (see Figure 5.1) and uses the game replay features provided by modern game engines to enhance the judging process. The data collection phase (Phase I) collects logged data on the behaviour of virtual humans. The participants do not make judgements during their test sessions, as this may distract them from completing their in-session tasks (e.g. evacuating from a building or driving a car). Also, no matter how a participant controls an avatar navigating in a virtual environment, in either first- or third-person view, he or she is still incapable of making judgements about crowd behaviours. For example, in a virtual fire evacuation drill session there may be a large number of computer-controlled virtual humans, making it very difficult for a participant to judge either every single virtual human evacuee or the crowd as a whole. The data review phase (Phase II) requires a different group of participants who act as judges and evaluate the realism of the behaviours of the avatars by reviewing the recorded test session. Depending on the actual application scenario, the collected test session data may be visualised and produced as videos, or be replayed via the game replay functions provided by some game engines. In practice, more participants are usually required in the data review phase than in the data collection phase. If participants in the data collection phase also took part in the data review phase, the backgrounds of the participants could become significantly unbalanced.

Figure 5.1: A two-phase Turing Test for virtual environments.

This variation of the Turing Test eliminates the impact of learning bias and background bias, and is compatible with customised virtual environments. The goal of the VHBTT is to evaluate whether a virtual human behaves as naturally as a real human, rather than to measure the intelligence of computer-controlled virtual humans.

5.3.1 Phase I: Data Collection

System Design Principle

There are certain design requirements that the target virtual environment should meet. Feigenbaum [56] notes that “humanness” is highly multidimensional; examples of its dimensions are the ability to reason analogically and the ability to concatenate assertions and arrive at a new conclusion [70]. In a virtual environment, these dimensions may become more inclusive and may include the ability to render text to speech (e.g. providing audio feedback during an interactive event), the ability to make believable decisions and the ability to navigate in a human-like manner. A well-designed Turing Test for a virtual environment should reduce the dimensions to a single dimension; in other words, it should eliminate the impact of factors that are not relevant to the single type of behaviour to be assessed by the test. This ensures that the behaviour is the only variable (dimension) that differs between the control groups of human-controlled avatars and computer-controlled virtual humans. For example, if the test is to review the realism of the driving behaviours of AI drivers in a virtual racing system, the system should make sure that the driving behaviour is the only variable. Thus, the system may have to give both human drivers and AI drivers the same cars and put them on the same virtual track. In this way, judges can only determine whether a car is driven by a human or a computer by observing its driving behaviour. Also, the appearance of the avatars should be visually indistinguishable, to avoid unnecessary impacts on interactions between human participants and computer-controlled virtual humans [81, 229]. Research has shown that the appearance of avatars may affect social presence and further impact the observed realism of the behaviour of the avatars [4, 106, 198]. For example, participants with younger avatars are more willing to disclose their personal information than those with older avatars in a virtual interview scenario [81]. It has also been suggested that using visually indistinguishable 3D models for human- and computer-controlled avatars can affect the anthropomorphic assumption [72]. High-level graphical realism is not essential, as in some cases high graphical realism has no effect on task performance [218].

However, this does depend on the actual task performed by participants in the test. When developing a data collection system for the testing framework, data logging should be supported. As the test sessions are to be recorded for review in the data review phase (see Section 5.3.2), the system should be able to log all required data (e.g. environment data and avatar behaviour data), which will be used to replay the test session. For example, a racing system may have to log the track information, weather information, driver actions, car model information and other required data in order to replay the test session for future review.

Participants

The participants in this phase will control an avatar to complete one or more tasks in the designed virtual environments. There are no particular requirements for selecting participants in this phase. However, it is strongly recommended to select participants from the target end-user group, as they may provide more useful feedback to help improve the usability of the target VR applications [135]. These participants are referred to as human players in the VHBTT.

General Procedure

The procedure should follow good human-computer interaction experimental design principles (more details about design principles can be found in [125, 279]). In the VHBTT data collection phase, a tutorial session is compulsory to let the participants practice the basic operations within a 3D virtual environment, especially for participants with limited or no gaming experience [135]. The tutorial session helps the participants gain the minimum level of skill needed to complete the required tasks and sets realistic expectations of the test system [43], which contributes to more reliable results (i.e. collected behavioural data). For example, if the task is to navigate from one location to another, all participants must be able to use input devices (e.g. keyboard and mouse) to control their avatars and navigate around the test environment after taking the tutorial session. Otherwise, novice participants may be unable to perform the required task(s) and bias the result. All test sessions should involve exactly the same sample size, as a test may have different control groups which vary in the ratio between human-controlled avatars and robots and/or in the test scenarios. For example, session A may have 20% of the avatars controlled by humans and the remaining 80% controlled by the computer (20:80), while the ratio may be 50:50 for session B. Regardless of the number of variations, each participant should complete the same number of each type of variation, as unbalanced tests may lead to biased results.

Data Collection and Management

Firstly, all the test sessions should be recorded, either as videos for direct review by judges or in the form of computer-logged data for later video production. If data is to be logged, it should include both the environment data and the behavioural data of the avatars. As shown in Figure 5.2, the designed test system should be able to log general session data, such as the session time, session ID, test scenario, session participant settings and others. Also, the system should be able to log the behavioural data of all avatars in a session, including both the human participants and the virtual humans. For these avatars, the movements (including movement speed, direction and location) and the behaviour of the cameras attached to the human-controlled avatars (human views) should be recorded as time series. The type of each virtual human should also be logged, especially if there are variations in the algorithms behind them. The recorded session data can be managed in a suitable database for future review. Apart from the test sessions, a demographic survey should be collected from the participants, which may help to analyse whether the background of the human players impacts the test results. Also, the usability of the system should be measured (e.g. using the System Usability Scale), as it may impact the judgement results: players may have difficulty completing the test tasks due to unsatisfactory usability of the system, which may lead to abnormal behaviour and further impact the test results. For example, if the navigation in a virtual evacuation drill system is hard to control, the player-controlled avatars might constantly hit walls or spin in the same place. Such abnormal behaviours would make the test results unreliable.

Figure 5.2: An example set of data that should be collected in Phase I of the VHBTT.
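To make the logging requirements above more concrete, the sketch below shows one possible data schema for the kinds of records listed in Figure 5.2, serialised with Unity's JsonUtility. All class and field names are illustrative assumptions rather than part of the VHBTT specification.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Illustrative schema only: one possible shape for the session and avatar data in Figure 5.2.
[Serializable]
public class AvatarSample
{
    public float time;        // seconds since the session started
    public Vector3 position;
    public Vector3 direction;
    public float speed;       // metres per second
}

[Serializable]
public class AvatarTrack
{
    public string avatarId;
    public string controllerType;   // "human" or "bot" (including the bot algorithm variant)
    public List<AvatarSample> samples = new List<AvatarSample>();
}

[Serializable]
public class SessionLog
{
    public string sessionId;
    public string sessionTime;
    public string testScenario;
    public string participantSettings;   // e.g. the human/bot ratio
    public List<AvatarTrack> avatars = new List<AvatarTrack>();

    // Serialise the whole session for storage in a file or database.
    public string ToJson() => JsonUtility.ToJson(this, true);
}
```

Storing one such self-contained record per session means the data review phase does not need access to the original test system in order to replay a session.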

5.3.2 Phase II: Data Review

The data review phase aims at reviewing the test sessions which were recorded in the data collection phase (Phase I) to determine the realism of observed behaviours of virtual humans.

System Design Principle

To support reviewing the realism of the behaviours of the avatars, a reviewing system is required. The system should be able to replay the sessions based on the logged session data (see Figure 5.2) and provide appropriate interfaces that let the judges observe the behaviours of the avatars. For example, virtual CCTV cameras can be used to observe how avatars behave in the test session; these could be implemented either as a complex real-time interactive surveillance system presented by a game engine (e.g. Unity) or as a simplified web-based application with similar interfaces that uses pre-recorded CCTV videos.
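As an illustration of how a game-engine-based reviewing system might replay the logged behaviour, the sketch below interpolates an avatar's position and facing direction between neighbouring log samples. It is a hypothetical component, not part of the VHBTT itself; the Sample structure mirrors the illustrative schema sketched earlier.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical replay component: moves a reviewing-system avatar along its logged samples
// (position and facing direction) by interpolating between neighbouring entries.
public class AvatarReplayer : MonoBehaviour
{
    [System.Serializable]
    public struct Sample { public float time; public Vector3 position; public Vector3 direction; }

    public List<Sample> samples = new List<Sample>();  // loaded from the logged session data
    private float replayStart;
    private int index;

    void Start() { replayStart = Time.time; }

    void Update()
    {
        if (samples.Count < 2) return;
        float t = samples[0].time + (Time.time - replayStart);

        // Advance to the pair of samples that brackets the current replay time.
        while (index < samples.Count - 2 && samples[index + 1].time <= t) index++;

        Sample a = samples[index];
        Sample b = samples[index + 1];
        float u = Mathf.InverseLerp(a.time, b.time, t);

        transform.position = Vector3.Lerp(a.position, b.position, u);
        Vector3 facing = Vector3.Slerp(a.direction, b.direction, u);
        if (facing != Vector3.zero)
            transform.rotation = Quaternion.LookRotation(facing);
    }
}
```

Interpolating between samples hides the fixed sampling interval from the judges, so the replayed motion appears continuous regardless of how often the data collection system logged samples.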

Participants

The participants in this phase act as judges in the VHBTT and are responsible for determining whether an avatar is a human or a bot. One requirement is that the sample size should not be less than 20, as using a small number of participants may impact the reliability of the data [27, 55, 263]. The participants in this phase are also referred to as the judges.

General Procedure

As mentioned in Section 5.3.1, the design of the procedure should follow good human-computer interaction experimental design principles (more details about design principles can be found in [125, 279]). In the VHBTT data review phase, participants should be required to attend a tutorial session, which instructs them in observing the behaviours of the avatars with the reviewing system. The judges should review exactly the same number and types of sessions to prevent a biased result, and the order of the sessions should be randomised to avoid order effects. Similarly to the participants in the data collection phase, the judges may take after-trial surveys, especially usability surveys (e.g. the System Usability Scale), in case the results are biased by low usability of the reviewing system.
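Randomising the session order per judge can be done with a standard Fisher-Yates shuffle; the helper below is a generic sketch, and the SessionOrdering name is chosen only for illustration.

```csharp
using System;
using System.Collections.Generic;

// Generic Fisher-Yates shuffle: gives each judge an independent random review order.
public static class SessionOrdering
{
    public static void Shuffle<T>(IList<T> sessions, Random rng)
    {
        for (int i = sessions.Count - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1);   // 0 <= j <= i
            (sessions[i], sessions[j]) = (sessions[j], sessions[i]);
        }
    }
}
```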

Data Collection and Management

Compared to Phase I (see Section 5.3.1), the data review phase is designed to collect the review results from the human judges. To review the realism of an avatar, a judge has to answer the following two questions:

1. Do you think this avatar is a human or a bot?

2. How confident is the judgement for this avatar?

The first question asks a judge to identify whether an avatar is controlled by a human or a bot (virtual human) based on the observed evacuation behaviours (e.g. through virtual CCTV cameras). The second question requires a judge to evaluate his/her confidence in the answer to the first question. This test uses a 10-point46 Likert scale for each question, which enables researchers to analyse the data with statistical tests. Also, a demographic survey should be collected from the human judges to help analyse whether the participants’ backgrounds impact the judgement results. Finally, usability data (e.g. collected via the System Usability Scale (SUS)) might be collected to analyse whether the usability of the reviewing system impacts the judgement results, in cases where a customised reviewing system is used in this phase.

5.4 Data Report

The actual implementation of a specific VHBTT may have varied hypotheses. This section only provides a guide for exploring the main hypothesis of the VHBTT:

H0 The computer-controlled avatars behave as naturally as the human-controlled avatars in the example virtual environment.

Statistical tests are highly recommended for reporting the results for hypothesis H0. Depending on the actual implementation of the VHBTT, the collected judgement and confidence scores may have different distributions. Thus, it is always necessary to check whether the data meets the assumptions of either parametric or non-parametric tests when choosing statistical tests [34, 58]. Parametric tests assume that the sample data comes from a population that follows a probability distribution based on a fixed set of parameters [58, 69]. Compared to parametric tests, non-parametric tests, also referred to as distribution-free tests, do not rely on the data belonging to any particular distribution [34, 58, 103]. The One-Sample Kolmogorov-Smirnov test47 (K-S test) and the Shapiro-Wilk test48 can be used to test the normality of the data. The homogeneity of variance may also be tested if analysis of variance (ANOVA49) is to be used.

46As suggested by Hui and Triandis [94], a 10-point Likert scale can reduce extreme responses. 47The One-Sample Kolmogorov-Smirnov test procedure compares the observed cumulative distribution function for a variable with a specified theoretical distribution, which may be normal, uniform, Poisson or exponential [58, p. 144]. 48The Shapiro-Wilk test is a statistical procedure for testing a complete sample for normality [58, p. 144] [202].

To explore the hypothesis H0, the data review phase should have collected the data of the judgement scores and confidence scores. Depending on the distribution of data, statistic tests, such as Kruskal-Wallis test50 (non-parametric) or one-way ANOVA (parametric) could be used to explore if significant differences exist between multiple groups. If significant differences are found in the previous Kruskal-Wallis test or ANOVA, then post-hoc tests, such as the t-test51 (parametric), the Mann-Whitney U test52 (non-parametric), or Wilcoxon rank-sum test53, may be performed to analyse how the variable (e.g. judgement scores or confidence scores) differs between each of the two groups. Also, mean comparisons and charts (e.g. box plot) can help with analysis if the difference (even if insignificant) matters in the particular setting, as sometimes an insignificant difference in a large sample size may indicate a significant impact in practical cases [58].

5.5 Summary of Chapter 5

This chapter reviewed the use of the conventional Turing Test and its variations for virtual environment-based applications. A new two-phase Turing Test, the VHBTT, was proposed as a guide for reviewing the behavioural realism of virtual humans in virtual environment-based applications. This chapter also provided guidance on appropriate statistical tests for reporting the test results. The VHBTT design is used for the case study in Chapter 6, which reviews the realism of the evacuation behaviours of PREVENT-developed virtual humans in a virtual fire evacuation training system.

49The ANOVA is used to determine whether there are any statistically significant differences between the means of multiple independent groups [58, p. 347]. 50The Kruskal-Wallis H test is a rank-based non-parametric test that can be used to determine if there are statistically significant differences between two or more groups of an independent variable on a continuous or ordinal dependent variable [58, p. 559]. 51The t-test assesses whether the means of two groups are statistically different from each other [58, p. 324]. 52The Mann-Whitney U test is the non-parametric alternative to the independent-samples t-test. It compares two sample means that come from the same population and is used to test whether two sample means are equal or not [58, p. 540] [142]. 53The Wilcoxon rank-sum test is another non-parametric alternative to the independent-samples t-test, and is equivalent to the Mann-Whitney U test [58, p. 540].

Chapter 6

Evaluating the Realism of Reused Evacuation Behaviours

This chapter presents a user study based on the VHBTT introduced in Chapter 5. The user study details an example use of the VHBTT to review the realism of the simulated virtual human evacuation behaviours that were generated via the PREVENT approach (see Chapter 3). Following the VHBTT design principles, this case study consisted of two phases: data collection (Phase I) and data review (Phase II). The data collection phase collected the behaviours of the virtual human evacuees from 24 networked fire evacuation drill sessions, in which both the human participants and the virtual human evacuees were required to complete six evacuation tasks across three different fire scenarios. The collected behaviour data was then reviewed in the data review phase by 20 human judges. Both human participants and virtual humans attended the data collection phase, and only human participants attended the data review phase (see Appendix A.2). This study was approved by the Human Research Ethics Committee (HREC) at the University of Newcastle, Australia (Approval No. H-2016-0198).

6.1 Data Collection: Generating Gameplay Data

6.1.1 System Implementation

The virtual environment used in this study was modelled on the Engineering S (ES) building on the Callaghan campus of The University of Newcastle, Australia. The floor plan of this building is shown in Figure 6.1. The real building consists of multiple corridors, rooms of various sizes and five fire exits, which ensured a rich variety of evacuation behaviours for the reused virtual human evacuees.

Figure 6.1: The original floor plan of the Engineering S building.

Following the PREVENT approach (Chapter 3), a 3D building model was created with Autodesk AutoCAD 2016 (see Figure 6.2) and then exported to the FBX format, via the Autodesk FBX Converter, for use in Unity (see Figure 6.3). To support the networked fire evacuation drill, the UnityEngine.Networking package was used in this system. For in-game navigation, each human participant controlled a 3D avatar using a keyboard and mouse. In addition to the evacuee avatars controlled by the human participants, the system also had virtual human fire wardens, which were computer-controlled and sent audio fire evacuation instructions54 to the evacuees. The full list of evacuation instructions can be found in Appendix A.3.

54Audio files were created using the TTS service on http://www.fromtexttospeech.com/ [Last Access: 16/11/2016]

Figure 6.2: The completed 3D model of the Engineering S building in AutoCAD.

Figure 6.3: A screenshot of the Engineering S building as rendered in Unity.

Since the focus of this study was on reviewing the behavioural realism of the PREVENT-generated virtual human evacuees, graphical realism was less important. Thus, only essential audio effects (e.g. fire alarms and human voices) and basic particle effects (e.g. fire and smoke) were added to the virtual environment to simulate the fire drill. Also, the human-like avatar models were simplified to 3D cylinders wearing black sunglasses55, which represented both the virtual human evacuees and the virtual human fire wardens (see Figure 6.4).

Figure 6.4: The 3D models used to represent virtual humans in the virtual fire drill environment. All wardens were marked in green and evacuees were marked in white.

The simplified avatars still showed basic characteristics of humans (e.g. wearing glasses and holding road signs), but hid details such as gender, as visible gender differences may critically affect the judgement results [72]. For the evacuee avatars, first-person view cameras were attached to the front of the sunglasses-wearing models. Using a first-person view provides a more immersive virtual experience, as it enables the evacuees to observe the virtual environment during the evacuation as if they were in the real world (see Figure 6.5 and Figure 6.6). The basic evacuee model was shared by both human participants and virtual humans; thus, the human participants did not know whether the other evacuees in the drill were real humans or computer controlled.

55The sunglasses allowed face direction to be determined in the review phase.

Figure 6.5: A screenshot of the view from the perspective of an evacuee avatar in the virtual environment.

Figure 6.6: A photo of the view from the perspective of an evacuee in the real world.

Each of the three green virtual human fire warden avatars held a different sign, depending on the instructions it was to send. For example, the fire warden avatar with a stop sign might give an instruction such as “Don’t go this way, it’s blocked by fire!” (see Appendix A.3). In addition, three fire evacuation scenarios were designed for this study (see Figure 6.7). The flames represent fire zones and each purple icon represents the location of a virtual human fire warden. The starting room icon indicates that evacuees (both real and virtual humans) may be located in this area at the beginning of an evacuation. These three fire scenarios have different fire locations, fire warden positions and start rooms where the evacuee avatars were initiated at the beginning of the evacuation test. Also, a training scenario was created based on this building model to let participants practice their navigation skills. However, the training scenario had no computer-controlled virtual humans (neither evacuees nor fire wardens), particle effects (e.g. fire and smoke) or audio effects (e.g. fire alarms). In this way, the participants would not be able to make evacuation plans for the three fire scenarios prior to the actual evacuation tests. There are four reasons why the training session can, and should, be conducted in the ES building.

• Firstly, the use of this virtual environment has no impact on the evacuation test sessions as it is an established fact that the end users (i.e. students, staff, and fire wardens) of the fire evacuation training system have different levels of knowledge of the ES building. For example, first year students have less knowledge of the building than staff working in it. A short training session (5 mins) would not change this fact.

• Secondly, participants already have some knowledge of the ES building from travelling to it before they reach the test venue.

• Thirdly, giving such brief knowledge of the training environment also helps participants to identify their locations at the beginning of the test session, which mirrors the fact that an evacuee knows where he or she is when a fire emergency happens in real life.

Figure 6.7: Three fire evacuation scenarios S1, S2 and S3.

• Finally, it sets appropriate expectations for users (e.g. of the art style of the target virtual environment), so that the participants are not distracted by looking at textures and furniture in the actual test session.

In order to record the evacuation behaviours of the evacuee avatars, a recorder plugin was created in C# and installed in the fire evacuation drill system. The plugin logged the locations, directions and movement speeds of the evacuee avatars every 0.25 seconds during the evacuation tests. It also recorded environmental data, such as the locations of the fires and fire wardens.
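The sketch below illustrates the sampling behaviour just described as a Unity component. It is a hypothetical reconstruction, not the thesis plugin: the class and field names are invented, and writing the samples to disk or a database is omitted.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical recorder sketch: samples every registered evacuee avatar at a fixed 0.25 s interval.
public class EvacuationRecorder : MonoBehaviour
{
    [System.Serializable]
    public class MovementSample
    {
        public string avatarId;
        public float time;
        public Vector3 position;
        public Vector3 forward;   // facing direction
        public float speed;       // metres per second since the previous sample
    }

    public List<Transform> evacuees = new List<Transform>();   // assigned when a session starts
    public float sampleInterval = 0.25f;

    private readonly List<MovementSample> log = new List<MovementSample>();
    private readonly Dictionary<Transform, Vector3> lastPositions = new Dictionary<Transform, Vector3>();

    void Start()
    {
        InvokeRepeating(nameof(Sample), 0f, sampleInterval);
    }

    void Sample()
    {
        foreach (Transform evacuee in evacuees)
        {
            Vector3 last = lastPositions.TryGetValue(evacuee, out var p) ? p : evacuee.position;
            log.Add(new MovementSample
            {
                avatarId = evacuee.name,
                time = Time.time,
                position = evacuee.position,
                forward = evacuee.forward,
                speed = (evacuee.position - last).magnitude / sampleInterval
            });
            lastPositions[evacuee] = evacuee.position;
        }
    }
}
```

Sampling on a fixed 0.25 s timer (rather than every rendered frame) keeps the log size predictable and independent of the frame rate of the drill system.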

Figure 6.8: The locations of virtual surveillance cameras in the ES building.

In addition to the fire evacuation drill system, a playback system was also created in Unity. This system was designed to load and replay the evacuations from the data recorded by the recorder plugin (e.g. evacuee movements and fire locations) in an offline environment. Also, six virtual CCTV cameras were installed in the playback system (see Figure 6.8). A camera control plugin was created to enable the CCTV cameras to record 720p videos (1280 x 720) at 15 FPS, which were then used by the human judges to review the realism of the evacuation behaviours of the virtual avatars in the data review phase (see Section 6.2). In summary, the completed networked virtual fire drill system consists of two subsystems, i.e. a fire drill subsystem and a playback subsystem (see Figure 6.9). The drill subsystem provides a first-person shooter (FPS) style fire evacuation experience, featuring network support, an imported virtual environment and PREVENT-generated virtual humans.
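For the camera control plugin mentioned above, one possible way to capture fixed-rate frames from a virtual CCTV camera in Unity is sketched below. This is an assumption-laden illustration rather than the actual plugin: the class name, the output folder and the choice of per-frame PNG capture (encoded into a video afterwards) are all hypothetical.

```csharp
using System.IO;
using UnityEngine;

// Hypothetical sketch: renders one CCTV camera to a 1280x720 texture and writes numbered
// PNG frames at a fixed 15 FPS simulation rate, for later encoding into a video clip.
public class CctvFrameCapture : MonoBehaviour
{
    public Camera cctvCamera;              // assigned in the Inspector
    public string outputFolder = "CCTV_01";

    private RenderTexture rt;
    private Texture2D frame;
    private int frameIndex;

    void Start()
    {
        Time.captureFramerate = 15;        // lock simulation stepping to 15 frames per second
        rt = new RenderTexture(1280, 720, 24);
        frame = new Texture2D(1280, 720, TextureFormat.RGB24, false);
        cctvCamera.targetTexture = rt;
        cctvCamera.enabled = false;        // the camera is rendered manually below
        Directory.CreateDirectory(outputFolder);
    }

    void LateUpdate()
    {
        // Render the camera into the off-screen texture and copy it to a readable Texture2D.
        cctvCamera.Render();
        RenderTexture.active = rt;
        frame.ReadPixels(new Rect(0, 0, 1280, 720), 0, 0);
        frame.Apply();
        RenderTexture.active = null;

        string path = Path.Combine(outputFolder, string.Format("frame_{0:D5}.png", frameIndex++));
        File.WriteAllBytes(path, frame.EncodeToPNG());
    }
}
```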

[Figure 6.9 diagram labels: Fire Evacuation Drill System; Fire Drill Subsystem; Playback Subsystem; Evacuation Data; Evacuation Videos.]

Figure 6.9: The system diagram for the completed fire evacuation drill system.

6.1.2 Apparatus

All the PCs used in this case study were owned by the University of Newcastle, Australia (UoN) with the same specifications as listed in Table 6.1.

Table 6.1: The specifications of the PCs used in the user study.

Specification        Description
Operating System     Windows 7
Processor Type       Intel i7-960 (3.20 GHz)
Memory               8 GB
Hard Drive           1 TB
Graphics Card        NVIDIA GTX 460 SLI

6.1.3 Participants

Twelve UoN students (11 male and 1 female) were recruited via posters on campus (see Appendix B.7), as approved by the HREC. Eight of the participants were postgraduates and four were undergraduates. The ages of the participants ranged from 19 to 40 years (M = 27.8, SD = 5.4). All the participants came from technology disciplines, including civil engineering, mechanical engineering, electrical engineering and computer science. The human participants are also referred to as human players in this chapter. Participants were paid $20 AUD for their time.

In addition to the 12 human participants, there were 54 virtual human evacuees involved in the fire evacuation drill sessions, where they completed the evacuation tasks together with the human participants. These virtual human evacuees featured pre-computed evacuation behaviours that were generated via the PREVENT approach (Chapter 3). Also, with the support of the path-switching framework (all situations enabled, see Chapter 4), these virtual human evacuees could dynamically switch between, or create new paths based on, the pre-computed evacuation paths when encountering environmental events, such as receiving egress instructions from the fire wardens. The realism of the evacuation behaviours of these evacuee avatars was reviewed by the human judges in the data review phase of this study (see Section 6.2). In this thesis, these computer-controlled virtual human evacuees are also referred to as bots.

6.1.4 Procedure

The overall procedure of the data collection phase is shown in Figure 6.10. Prior to the tests, the human players (N = 12) were required to read the information sheet for this study (see Appendix B.1), which described the evacuation tasks to be completed. Two data collection sessions were conducted to collect evacuation data. Each session involved six human players from the Group of Players (see Section 6.2.2), who signed the consent forms (see Appendix B.3) and completed pre-trial demographic questionnaires (see Appendix B.4) before taking a five-minute training session. The training session introduced the basics of navigating within a training virtual environment using a mouse and keyboard, and let the human players practice their navigation skills in a test environment.

Figure 6.10: The overall procedure of the data collection phase.

Also, the players were required to refrain from communicating with the other participants and to wear headphones, which allowed them to hear the evacuation instructions from the fire wardens and the fire alarm during the evacuation test. At the beginning of each evacuation test, the six evacuees (both players and bots) joined a Unity lobby hosted on a local network. Once all six evacuees had joined the lobby, the test system loaded a test scenario and randomly placed the evacuees in one of the six lecture rooms in the virtual environment. After a 5-second countdown, the virtual alarm system was triggered to alert the evacuees to evacuate the building and move to the assembly points near the fire exits. The testing system recorded all the evacuation behaviours until all the evacuees reached the assembly points. After completing six evacuation tests, the human players were required to complete a System Usability Scale (SUS) questionnaire on the evacuation training system (see Appendix B.6), which was used to explore whether the usability of the system affected the evacuation experience. The data collection phase took approximately 35 minutes to complete. After completing the study, each participant received a $20 AUD voucher for his or her time, as approved by the HREC.
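A minimal sketch of the per-session setup described above (random placement at one of the lecture-room spawn points, a 5-second countdown, then the alarm) is shown below. The component and field names are illustrative, and the networking and logging steps are omitted.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch of the session setup: place each evacuee at a randomly chosen
// lecture-room spawn point, wait 5 seconds, then trigger the fire alarm.
public class EvacuationSessionSetup : MonoBehaviour
{
    public Transform[] lectureRoomSpawns;   // six spawn points, one per lecture room
    public Transform[] evacuees;            // player and bot avatars in the lobby
    public AudioSource fireAlarm;

    IEnumerator Start()
    {
        foreach (Transform evacuee in evacuees)
        {
            Transform room = lectureRoomSpawns[Random.Range(0, lectureRoomSpawns.Length)];
            evacuee.position = room.position;
        }

        yield return new WaitForSeconds(5f);   // 5-second countdown before the drill begins
        fireAlarm.Play();                      // alert all evacuees to start egressing
    }
}
```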

While completing the evacuation sessions, the human participants were mixed with other human participants and/or bots in three different fire emergency scenarios, as shown in Figure 6.7. Also, the evacuation sessions were categorised into three human/bot group combinations (H/BGC) (see Table 6.2) according to the ratio between the real human participants and the virtual human evacuees (bots) involved. The full list of configurations of the evacuation sessions can be found in Appendix A.4.

Table 6.2: The three human/bot group combinations (H/BGC) in this user study.

Type   Description
6H0B   All six avatars in the evacuation sessions were controlled by human players.
3H3B   Three avatars in the evacuation sessions were controlled by human players, while the other three avatars were bots (virtual humans).
0H6B   All six avatars in the evacuation sessions were bots (virtual humans).

6.1.5 Data Collected

In this phase, all 12 participants completed all the sessions assigned to them, resulting in 24 fire evacuation sessions that were recorded and processed in the playback system (see Section 6.1.1) to be reviewed in the data review phase (see Section 6.2). In addition to the recorded evacuation sessions, 12 SUS forms and 12 pre-trial demographic questionnaires (see Appendix B.4) were collected.

6.2 Data Review: Reviewing the Fire Evacuation Replays

Phase II of the study was the judging process of the VHBTT. This phase evaluated the realism of the evacuation behaviours of the generated virtual humans by comparing the judgement scores and confidence scores, which were collected via evaluation forms (see Appendix B.5). This section introduces the participants (judges) involved in Phase II and the web-based judging system.

6.2.1 System Implementation

The virtual fire evacuation drill system had recorded 24 evacuation sessions (6 x 6H0B, 12 x 3H3B and 6 x 0H6B), as listed in Appendix A.4. To balance the experiment sessions, 18 evacuation sessions were selected for review in this phase, including six 6H0B sessions, six 3H3B sessions and six 0H6B sessions (see Table 6.3). These sessions were also well balanced across the three fire scenarios (see Figure 6.7), including six S1 sessions, six S2 sessions and six S3 sessions. This ensured that all the avatars that appeared in the evacuation sessions were judged the same number of times.

Table 6.3: A list of evacuation replays to be reviewed (n = 18).

Type   Count   Evacuation Session ID
6H0B   6       S1G1G2 6H0B S1, S1G1G2 6H0B S2, S1G1G2 6H0B S3, S2G1G2 6H0B S1, S2G1G2 6H0B S2, S2G1G2 6H0B S3
3H3B   6       S1G1 3H3B S1, S1G2 3H3B S2, S1G6 3H3B S3, S2G1 3H3B S1, S2G2 3H3B S2, S2G6 3H3B S3
0H6B   6       S1 0H6B S156, S1 0H6B S2, S1 0H6B S3, S2 0H6B S1, S2 0H6B S2, S2 0H6B S3
∗Session ID format: “[Session][Groups] [H/BGC] [Fire Evacuation Scenario]”.

As described in Section 6.1.5, each of the selected sessions was processed in the playback system, generating six video clips as recorded by the six virtual CCTV cameras. Each video clip lasted approximately 60 to 70 seconds, depending on the total evacuation time of the session. In total, the 18 evacuation sessions produced 108 video clips. As each judge was to review six sessions, the 18 evacuation sessions were divided into three judgement groups, referred to as J1, J2 and J3 (see Appendix A.5). Each group included two 6H0B sessions, two 3H3B sessions and two 0H6B sessions. Similarly, the sessions in each group were spread evenly across the three fire scenarios. The judges were then randomly and evenly divided across the three groups J1, J2 and J3, which ensured that every judge reviewed exactly the same number and types of evacuation sessions. To manage the video clips and facilitate the judging process, a web-based judging system was developed in HTML/CSS and JavaScript. Figure 6.11 shows the home page of the judging system, where six tasks were listed for the judges to review. Also, as seen on the right side, brief instructions on how to review the fire evacuation replays and how to complete the evaluation form were given to the judges.

Figure 6.11: The web-based judging system - home page.

On clicking a replay hyperlink, the judges were redirected to a related page which contained a floor plan and clickable CCTV icons (see Figure 6.12). The test number at the top of the page showed the judge which evaluation form to fill in. The evaluation forms were marked with a test number from 1 to 6 instead of the real session identification, as the session ID contained critical information such as the number of bots. In the top right corner, a 5-minute timer was used only to indicate to the judges how much time they had spent judging that particular session, rather than to force them to stop the session.

Figure 6.12: The web-based judging system - interactive floor plan.

The functionality of the timer was explicitly explained to the judges during their training sessions. The judges could watch the recorded evacuation by clicking on a CCTV icon. A video player then popped up to show the evacuation process captured by that virtual CCTV camera (see Figure 6.13). To help with judging each individual evacuee, a number57 was attached to each white cylinder evacuee model. A judge could pause and replay a CCTV video clip as many times as required to make their judgements.

6.2.2 Participants

The Group of Judges, also referred to as Group J, comprised 20 UoN students who were recruited via posters on campus (see Appendix B.7), as approved by the HREC. The participants included 11 males and 9 females, and their ages ranged from 18 to 40 (M = 24.55, SD = 5.8). The judge participants in this group were different from those in the player group.

57These numbers only appeared in the review system, as markers for the different evacuees. In the evacuation test sessions, there were no numbers on the evacuee models.

Figure 6.13: The web-based judging system - view of a CCTV camera. The white cylinders represent the evacuees and were marked with numbers to help judges make judgements about each individual evacuee (see Section 6.2.4).

The participants in this group are also referred to as human judges or judges in this thesis.

6.2.3 Procedure

The overall judging procedure is shown in Figure 6.14. The judges (N = 20) were required to read the information sheet for Group J (see Appendix B.2) before attending the on-site judgement sessions. On the day of the test, the judges signed the consent forms and completed the pre-trial demographic questionnaire (see Appendix B.4). They were then given a five-minute instruction session, in which the author briefly introduced the background of this study and instructed the judges in using the web-based judging system (see Section 6.2.1). The judges were then given six identical evaluation forms (with different test IDs on them), which were used to record the judged realism of the avatars and the confidence score for each judgement (see Appendix B.5). An evacuation session could be reviewed multiple times if needed.

Figure 6.14: The overall procedure of the data review phase.

After completing the six evaluation forms, the judges were required to complete a System Usability Scale (SUS) questionnaire (see Appendix B.6) to ensure that the usability of the interface had no impact on the judging process. The judging process took approximately 35 minutes to complete. After completing the study, each participant received a $20 AUD voucher for his or her time, as approved by the HREC.

6.2.4 Data Collected

For each avatar, two ten-level Likert scales were used to collect the judgement score and the confidence score (see an example in Figure 6.15). The first question asks a judge to identify whether the avatar (labelled as No. 1) was controlled by a human or a bot (virtual human) based on the evacuation behaviours observed through the six virtual CCTV cameras shown in Figure 6.8. The second question required a judge to evaluate his/her confidence in the judgement score for this avatar. By the end of the study, 120 evaluation forms (1,440 evaluation records in total) were collected, including 720 evaluation records of the realism of avatars and 720 confidence scores for these judgements. Also, 20 demographic forms and 20 SUS forms were collected for further analysis.

Figure 6.15: A fragment of the evaluation form (the complete form can be found in Appendix B.5).

6.3 Results

The main focus of this study was to investigate whether there was a significant difference between the virtual human evacuees (bots) and the human-controlled avatars in the recorded fire evacuation sessions from the view of the human judges. Also, the study aimed to clarify that the judgement results were reliable, as they were not made on a random basis. As suggested in Section 5.3.2, the main hypothesis here is:

H1 The human-controlled avatars behaved more naturally than bots58.

One sub-hypothesis, H2, was proposed and examined in order to explore H1 (see Table 6.4). Section 6.3.2 investigated H1 and H2 from the perspective of the overall review results. Section 6.3.3 and Section 6.3.4 further discussed H1 across the three H/BGC and the three fire evacuation scenarios.

6.3.1 Metrics

As stated in Section 6.2.4, 10-point Likert scales were used to measure the judgements of the evacuation behaviours of the avatars. Turing [231] suggested that a machine could be treated as passing the test if it could convince a human 30% of the time after five minutes of conversation. In the BotPrize, to pass the Turing Test, a bot should receive a humanness rating of over 50% [88, 89]. Given these metrics, in this test, avatars receiving judgement scores between 6 and 10 were categorised as being identified as human-controlled avatars; otherwise, they were categorised as being identified as bots. There were four categories of judgement results across the human and bot groups (see Table 6.5). These identification codes are used to refer to the identification results throughout this thesis.

58 Although significance tests were performed throughout this section, the tests were looking for non-significant results. The hypothesis can only be rejected if non-significant results are reported, indicating that the bots created via PREVENT can behave at least as realistically as the human-controlled avatars.

Table 6.4: An overview of the hypotheses and results.

Hypothesis                                                            Result
1. The human-controlled avatars behaved more naturally than bots.    Rejected
2. The judgements were made on a random basis.                       Rejected

Table 6.5: Four categories of identification results.

Group   Code   Description
Human   HH     A human-controlled avatar is identified as a human-controlled avatar.
        HB     A human-controlled avatar is identified as a bot.
Bot     BH     A bot is identified as a human-controlled avatar.
        BB     A bot is identified as a bot.
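For readers who wish to script this categorisation, the mapping in Table 6.5 combined with the 6-10 threshold from Section 6.3.1 can be expressed in a few lines. The Python sketch below is illustrative only (the study's analysis was performed in SPSS), and the function name is chosen purely for illustration.

    def identification_code(controller, judgement_score):
        # Map a judgement score (1-10) to an identification code from Table 6.5.
        # 'controller' is the true controller of the avatar: 'human' or 'bot'.
        # Scores of 6-10 count as "identified as a human-controlled avatar",
        # following the threshold adopted in Section 6.3.1.
        identified_as_human = judgement_score >= 6
        if controller == 'human':
            return 'HH' if identified_as_human else 'HB'
        return 'BH' if identified_as_human else 'BB'

    # Example: a bot that received a judgement score of 8 is a BH record.
    print(identification_code('bot', 8))  # prints: BH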

6.3.2 Overview of the Judgement Results

This section investigated H1 and H2 (Table 6.4) by performing statistical tests on all data collected in the data review phase (see Section 6.2.4).

One-sample Kolmogorov-Smirnov (K-S) tests were performed to determine which statistical tests were suitable for analysing the collected data. The distributions of the judgement and confidence scores are shown in Figure 6.16.

Figure 6.16: Distributions of judgement and confidence scores: (a) the judgement scores; (b) the confidence scores.

The judgement scores (Figure 6.16a), D(720) = 0.179, p < .001, were significantly non-normal, with skewness of −0.125 (SE = 0.091) and kurtosis of −1.442 (SE = 0.182). As the distribution of the judgement scores appeared bi-modal, further K-S tests were performed separately for the judgements on bots and the judgements on humans. The results indicated that the distributions of both the judgements on bots (D(360) = 0.844, p < .001) and the judgements on humans (D(360) = 0.880, p < .001) were significantly non-normal. The confidence scores (Figure 6.16b), D(720) = 0.161, p < .001, were also significantly non-normal, with skewness of −0.752 (SE = 0.091) and kurtosis of −0.360 (SE = 0.128). As the data distributions did not meet the assumptions for parametric tests, non-parametric tests were used to analyse the judgement and confidence scores for this study.
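For readers who wish to reproduce a comparable normality screen outside SPSS, the sketch below shows one way it could be done with SciPy. This is an illustrative approximation only: the scores variable is a placeholder, and scipy.stats.kstest with estimated parameters does not apply the Lilliefors significance correction that SPSS uses for its one-sample K-S test.

    import numpy as np
    from scipy import stats

    # Placeholder for the 720 collected judgement (or confidence) scores.
    scores = np.random.randint(1, 11, size=720)

    # One-sample K-S test against a normal distribution fitted to the sample,
    # together with the shape statistics reported in the text above.
    d_stat, p_value = stats.kstest(
        scores, 'norm', args=(scores.mean(), scores.std(ddof=1)))
    skewness = stats.skew(scores)
    excess_kurtosis = stats.kurtosis(scores)  # SPSS also reports excess kurtosis
    print(d_stat, p_value, skewness, excess_kurtosis)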

Before investigating the hypothesis H1, the hypothesis H2 had to be rejected. Unless H2 could be rejected, the judges may have made their decisions by chance (i.e. the scores may have been randomly distributed), which would suggest that there were flaws in the implemented system or in the reused bots.

The following hypotheses must be rejected in order to reject H2:

H2a The judgement scores were determined on a random basis.

H2b The confidence scores were determined on a random basis.

The Wald-Wolfowitz runs tests were performed on all the collected judgement and confidence scores to check for randomness (see Table 6.6). The test results indicate that both scores differed significantly from a random distribution, which rejects both H2a (N = 720, runs = 2, p < .001) and H2b (N = 720, runs = 227, p < .001). Also, the mean value of the confidence scores (N = 720, M = 7.07, SD = 2.008) was significantly higher than 5, indicating that the judgements were not made by chance, as the judges were confident regardless of what judgements were made, thus rejecting H2a again. As a result, the hypothesis H2 was strongly rejected.

Table 6.6: The results of the Wald-Wolfowitz runs tests on the collected judgement and confidence scores.

Score        N     M      SD      Number of Runs   Z         p
Judgement    720   5.50   3.002   2                -26.627   < .001**
Confidence   720   7.07   2.008   227              -7.643    < .001**
**Significant at .001 level.
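Outside SPSS, a comparable runs test can be scripted; the sketch below uses statsmodels' runstest_1samp (from statsmodels.sandbox.stats.runs) as a stand-in for the Wald-Wolfowitz runs test, dichotomising the series at its mean. The scores variable is a placeholder and the choice of cut point is an assumption, so the numbers will not match Table 6.6.

    import numpy as np
    from statsmodels.sandbox.stats.runs import runstest_1samp

    # Placeholder for one of the two 720-value series (judgement or confidence).
    scores = np.random.randint(1, 11, size=720)

    # Runs test for randomness: the series is dichotomised at its mean and the
    # observed number of runs is compared with the expectation under randomness.
    z_stat, p_value = runstest_1samp(scores, cutoff='mean', correction=True)
    print(z_stat, p_value)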

To support the hypothesis H1, the following two sub-hypotheses should be accepted:

H1a The judgement score for human-controlled avatars is significantly higher than that of bots among the four identification results59.

H1b The confidence score for human-controlled avatars is significantly higher than that of bots among the four identification results.

Kruskal-Wallis60 tests were performed to explore whether there were significant differences in both the judgement scores and the confidence scores between the four identification results, as well as between the whole human group and the bot group (see Table 6.7). The metrics described in Section 6.3.1 were applied when categorising the judgement scores into the identification results HH, HB, BH and BB.

59 All the statistical tests applied in the thesis were one-tailed to ensure that all values depart from the reference value in only one direction.

60 In SPSS, the Kruskal-Wallis test reports a Chi-Squared value, which is suggested to be reported as an H value. More details can be found in [58, p. 546]. The version of SPSS used to perform the statistical tests in the thesis was 24.0, with a licence issued to The University of Newcastle, Australia.

Table 6.7: Test statistics for the Kruskal-Wallis tests.

Score        Groups                H       df   p
Judgement    HH, HB, BH, BB        549     3    < .001**
             Total: Human, Bots    1.49    1    .222
Confidence   HH, HB, BH, BB        3.661   3    .300
             Total: Human, Bots    2.91    3    .088
**Significant at .001 level.

Table 6.7 indicates that no statistically significant differences were found for the confidence scores (H(3) = 3.661, p = .300) among the four identification results or between the human and bot groups, which rejected the hypothesis H1b. This indicates that the judges were equally confident regardless of what judgement scores they gave. For the judgement scores between the human and bot groups (H(1) = 1.49, p = .222), no significant difference was reported. However, a significant difference was found for the judgement scores among HH, HB, BH and BB (H(3) = 549, p < .001). This is because, according to the metrics (Section 6.3.1), the judgement scores for HH and BH (6 to 10) always have higher mean ranks than those of HB and BB (1 to 5). Thus, the Kruskal-Wallis test was unable to determine whether H1a should be rejected. To investigate the significance between each pair of identification result groups, the Mann-Whitney U test is recommended [168].

Mann-Whitney U61 tests were performed as post hoc tests to explore how the judgement scores differed between HH and BH, and between HB and BB (see Table 6.8). In addition, a Mann-Whitney U test was also performed on the judgement scores between the whole human group and the bot group, to further determine whether the difference remained insignificant, as indicated by the Kruskal-Wallis tests (H(1) = 1.49, p = .222) (see Table 6.7).

61 When choosing a post hoc test, Wilcoxon's test can also be used, as it is equivalent to the Mann-Whitney U test [58, p. 540].
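As a scriptable companion to the SPSS output, the omnibus and post hoc tests described above could be approximated along the following lines. This is a sketch only: the score vectors are placeholders (only the group sizes are taken from Tables 6.9 and 6.10), and the one-sided alternative mirrors the thesis convention of one-tailed tests (footnote 59).

    import numpy as np
    from scipy import stats

    # Placeholder judgement-score vectors for the four identification results.
    hh = np.random.randint(6, 11, size=173)
    hb = np.random.randint(1, 6, size=187)
    bh = np.random.randint(6, 11, size=205)
    bb = np.random.randint(1, 6, size=155)

    # Kruskal-Wallis omnibus test across the four groups.
    h_stat, p_kw = stats.kruskal(hh, hb, bh, bb)

    # Mann-Whitney U as a pairwise post hoc test, e.g. HH vs. BH, testing
    # whether HH scores tend to be greater than BH scores (one-sided).
    u_stat, p_mwu = stats.mannwhitneyu(hh, bh, alternative='greater')
    print(h_stat, p_kw, u_stat, p_mwu)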

Table 6.8: Test statistics for the Mann-Whitney U Test.

Score       Groups          Mdn       U       z        p
Judgement   HH vs. BH       8 vs. 8   14278   -3.371   < .001**
            HB vs. BB       2 vs. 3   13414   -1.218   .223
            Human vs. Bot   5 vs. 7   61416   -1.222   .222
**Significant at .001 level.

As shown in Table 6.8, no significant difference was reported for the judgement scores between the human and the bot group (Human vs. Bot: p = .222), which is consistent with the result of the previous Kruskal-Wallis test. Similarly, the judgement scores did not significantly differ between HB and BB (HB vs. BB: p = .223). However, the judgement scores of HH did significantly differ from those of BH (HH vs. BH: U = 14278, z = −3.37, p < .001). Post hoc mean comparison tests were then performed on the judgement scores of HH and BH to determine whether the hypothesis H1a could be supported (see Table 6.9).

Table 6.9: The report for mean comparisons on judgement scores across two identification results.

Group   N     M      SD      Mdn   Min   Max
BH      205   7.92   1.131   8     6     10
HH      173   8.31   1.139   8     6     10

Table 6.9 shows that 32 more bots were identified as being controlled by human players (BH: N = 205) than human-controlled avatars were correctly identified (HH: N = 173). This indicates that the human-controlled avatars did not behave more naturally than the bots. In fact, as shown in Figure 6.17, the bots were more likely to be judged as humans than the humans themselves, as 56.94% of the bots were identified as being controlled by the human players (BH), while only 48.06% of the human-controlled avatars were correctly judged (HH). The other measurements (M, SD, Min and Max in Table 6.9) did not indicate any notable differences between BH and HH. Thus the hypothesis H1a was rejected, as no evidence could show that the human-controlled avatars received significantly higher scores than the bots.

Figure 6.17: The overall judgement results for both human-controlled avatars and virtual humans (bots). Bot group: 56.94% identified as human, 43.06% identified as bot; Human group: 48.06% identified as human, 51.94% identified as bot.

In conclusion, firstly, the hypothesis H2 was rejected, indicating that the data collected in the VHBTT data collection phase (see Section 6.2), including both the judgement and confidence scores, was reliable enough to be used for further analysis. Secondly, the hypothesis H1 cannot be accepted, as both H1a and H1b were rejected, which indicates that the human-controlled avatars did not behave more naturally than the bots.

6.3.3 Judgement Results of Three H/BGC

This section further investigated H1 by exploring whether the human-controlled avatars behaved more naturally than the bots across the three H/BGC, i.e. 6H0B, 3H3B and 0H6B (Table 6.2 and Figure 6.18). According to Table 6.2, there were no human-controlled avatars in the 0H6B evacuation sessions, while the 6H0B sessions had no bots. This means that neither the Kruskal-Wallis nor the Mann-Whitney U test could be performed on the judgement and confidence scores between HH, HB, BH and BB across the three H/BGC.

Alternatively, the correctness rate of the judgements can be compared to assess H1. If the human-controlled avatars behaved more naturally than the bots, more of them should have been correctly identified as being controlled by the human players (HH) than misidentified as bots (HB). The numbers of avatars in each identification result across the three H/BGC (0H6B, 3H3B and 6H0B) are shown in Table 6.10 and plotted in Figure 6.18.

Table 6.10: The identification results of three H/BGC.

                            Identified as
H/BGC   Group               Human Controlled Avatar   Bot
0H6B    Bot (N = 240)       132 (BH)                  108 (BB)
3H3B    Human (N = 120)     65 (HH)                   55 (HB)
        Bot (N = 120)       73 (BH)                   47 (BB)
6H0B    Human (N = 240)     108 (HH)                  132 (HB)
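The percentages quoted in the following paragraphs follow directly from the counts in Table 6.10. As a quick illustration of the arithmetic (Python, with the counts transcribed from the table):

    # Counts from Table 6.10: (identified as human-controlled avatar, identified as bot).
    counts = {
        ('0H6B', 'Bot'):   (132, 108),
        ('3H3B', 'Human'): (65, 55),
        ('3H3B', 'Bot'):   (73, 47),
        ('6H0B', 'Human'): (108, 132),
    }

    for (hbgc, group), (as_human, as_bot) in counts.items():
        rate = 100.0 * as_human / (as_human + as_bot)
        print(f'{hbgc} {group}: {rate:.2f}% identified as human-controlled')
    # e.g. 3H3B Bot -> 60.83%, 3H3B Human -> 54.17%, 6H0B Human -> 45.00%, 0H6B Bot -> 55.00%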

Firstly, for the H/BGC 3H3B, 60.83% of the bots were wrongly identified as human-controlled avatars (3H3B: BH), while 54.17% of the actual human-controlled avatars were correctly identified (3H3B: HH). This indicates that the bots were more likely to be judged as being controlled by the human players in the H/BGC 3H3B. Secondly, for the H/BGC 6H0B, only 45.00% of the human-controlled avatars were judged as human-controlled avatars (6H0B: HH). In contrast, 55.00% of the bots were identified as human-controlled avatars (0H6B: BH). These results imply that the bots were actually more likely to be judged as human-controlled avatars in both the H/BGC 6H0B and 0H6B. In conclusion, the evidence presented above implies that the bots behaved at least as naturally as the human-controlled avatars, as they were more likely to be judged as human-controlled avatars.

Additionally, a 9% increase was found in the correctness rate of the judgements on the human-controlled avatars (HH) after replacing a proportion of the human-controlled avatars with bots (i.e. from 45.00% (6H0B: HH) to 54.17% (3H3B: HH)). Similarly, the proportion of the bots that were judged as being controlled by the human players (BH) increased from 55.00% (0H6B: BH) to 60.83% (3H3B: BH). These increases indicate that both the human-controlled avatars and the bots were more likely to be judged as human-controlled avatars in a mixed evacuation H/BGC, i.e. 3H3B.

Figure 6.18: The judgement results over the three H/BGC (identified as human-controlled avatar vs. identified as bot): 6H0B Human 45.00% / 55.00%; 3H3B Human 54.17% / 45.83%; 3H3B Bot 60.83% / 39.17%; 0H6B Bot 55.00% / 45.00%.

6.3.4 Judgement Results of Three Fire Evacuation Scenarios

This section continues to explore whether the human-controlled avatars behaved more naturally than bots in certain fire evacuation scenarios, i.e. S1, S2 and S3 (see Figure 6.7). The following two sub-hypotheses have to be accepted to support the hypothesis H1:

H1c The judgement score for human-controlled avatars is significantly higher than that of bots among the three fire scenarios.

H1d The confidence score for human-controlled avatars is significantly higher than that of bots among the three fire scenarios.

The Kruskal-Wallis tests were performed on both the judgement and confidence scores across three fire evacuation scenarios (see Table 6.11). The results indicate that neither the judgement scores nor the confidence scores differed significantly across S1, S2 and S3, rejecting both H1c and H1d.

Table 6.11: Test statistics for the Kruskal-Wallis tests.

Score        Groups       H       df   p
Judgement    S1, S2, S3   5.496   2    .064
Confidence   S1, S2, S3   3.404   2    .182

Section 6.3.2 suggested that the judgement scores differed among the four identification results (see Table 6.7), particularly between HH and BH, as reported in Table 6.8. It also suggested that the confidence scores should not differ significantly between these groups. Thus, the same differences should be observed among the four identification results within each fire scenario. Accordingly, Kruskal-Wallis tests were performed on the judgement and confidence scores for each scenario (see Table 6.12).

Table 6.12: The Kruskal-Wallis test statistics.

Score        Groups            Scenario   H         df   p
Judgement    HH, HB, BH, BB    S1         213.651   3    < .001**
                               S2         182.165   3    < .001**
                               S3         151.157   3    < .001**
Confidence   HH, HB, BH, BB    S1         4.511     3    .211
                               S2         3.804     3    .283
                               S3         5.452     3    .142
**Significant at .001 level.

The Kruskal-Wallis test results show that no significant differences were found for the confidence scores among the four identification results in any of the three fire evacuation scenarios, thus rejecting H1d. However, the judgement scores were significantly different across HH, HB, BH and BB in all three fire evacuation scenarios (i.e. S1, S2 and S3). This is consistent with the results in Section 6.3.2, as the mean ranks of HH and BH were always higher than those of HB and BB. Mann-Whitney U tests were then performed as post hoc tests on the judgement scores to determine whether those differences remained statistically insignificant between HH and BH, and between HB and BB, in all three fire evacuation scenarios (see Table 6.13). For HB/BB, no significant differences were found for the judgement scores across all three fire evacuation scenarios. However, for HH/BH, a significant difference was found in the fire evacuation scenario S1.

Table 6.13: The test statistics for the Mann-Whitney U test on the judgement scores.

Score       Groups       Scenario   Mdn       U        z        p
Judgement   HH vs. BH    S1         8 vs. 8   2239     -3.360   < .001**
                         S2         8 vs. 8   1724     -0.854   .393
                         S3         8 vs. 8   909      -1.400   .162
            HB vs. BB    S1         3 vs. 3   1584     -0.195   .846
                         S2         2 vs. 2   1516     -1.102   .270
                         S3         2 vs. 3   1106.0   -1.393   .163
**Significant at .001 level.

To analyse whether the significant difference between HH and BH in S1 supports H1c, post hoc tests were performed; the results are shown in Table 6.14 and Figure 6.19. Although the mean comparison results suggest that the human-controlled avatars received higher judgement scores than the bots, they also show that more bots (BH, N = 84) were identified as being controlled by the human players than actual human-controlled avatars (HH, N = 76). Thus, this contradicting evidence is insufficient to prove that the human-controlled avatars behaved significantly more naturally than the bots in fire evacuation scenario S1.

Table 6.14: The statistics report on the mean comparison between HH and BH in fire evacuation scenario S1.

Group   N    Mdn   M      SD      Min   Max
HH      76   8     8.54   1.113   6     10
BH      84   8     7.92   1.111   6     10

Figure 6.19: The distribution of judgement scores for the identification results HH and BH in fire scenario S1.

According to the discussions presented above, this section found insufficient evidence to support the acceptance of the hypothesis H1 (see Table 6.4), as the bots behaved at least as naturally as the human-controlled avatars in all fire evacuation sessions across all the different fire evacuation scenarios, i.e. S1, S2 and S3.

In conclusion, none of the evidence presented in the overall judgement results (Section 6.3.2), the judgement results of the three H/BGC (Section 6.3.3), or the judgement results of the three fire evacuation scenarios (Section 6.3.4) supports the hypothesis H1 (see Table 6.4). As a result, H1 is strongly rejected, indicating that the PREVENT-generated bots behaved at least as naturally as the human evacuees in the test virtual fire evacuation system.

6.3.5 Evacuation Time With Bots

The results of the VHBTT demonstrated strong behavioural realism of the PREVENT-generated bots. Outside the hypotheses of the VHBTT, an interesting observation was that mixing bots with human evacuees in the virtual fire drill sessions (i.e. 3H3B) reduced the total evacuation time (see Figure 6.20). As claimed by Burden [22], a human avatar may behave differently when engaging with a robotic avatar. This section presents a preliminary study of the following hypothesis:

H0 Mixing virtual human evacuees into the virtual fire drill sessions has no significant relationship with the reduction of the total evacuation time.

Figure 6.20 shows the mean total evacuation times (excluding the pre-evacuation time) for the three H/BGC (see Table 6.2) across the three fire evacuation scenarios (see Figure 6.7). The diagram shows a general decrease in the mean total evacuation time as the number of bots involved in the fire evacuation sessions increases. For the fire evacuation scenario S1, the average total evacuation time of 3H3B (N = 2, M = 20.51, SD = 0.27) was 39.27% shorter than that of 6H0B (N = 2, M = 33.77, SD = 9.09), and only 5.75% longer than that of 0H6B (N = 2, M = 19.33, SD = 2.64). A Spearman's correlation was run to determine the relationship between the percentage of bots and the total evacuation times. A strong negative correlation was found between the percentage of bots and the total evacuation times, but it was not significant (rs = −.84, n = 6, p = .11). For the fire evacuation scenario S2, a similar trend was found: the average total evacuation time of 3H3B (N = 2, M = 30.29, SD = 6.15) was 40.49% shorter than that of 6H0B (N = 2, M = 50.90, SD = 3.90), and only 10.30% longer than that of 0H6B (N = 2, M = 27.17, SD = 4.20). A Spearman's correlation showed a significant, strong negative monotonic correlation between the percentage of bots and the total evacuation times (rs = −.84, n = 6, p < .05).
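As an illustrative companion to the SPSS analysis (not the original script), the Spearman correlation for one scenario could be computed as below. The evacuation-time values here are placeholders standing in for the six recorded sessions of a scenario; only the structure of the calculation is intended to match.

    from scipy import stats

    # Six sessions per scenario: two at each bot proportion (0%, 50%, 100%).
    bot_percentage = [0, 0, 50, 50, 100, 100]
    # Placeholder total evacuation times in seconds (the real values were
    # recorded from the virtual fire drill sessions).
    total_evac_time = [40.2, 27.3, 20.7, 20.3, 21.2, 17.5]

    rho, p_two_sided = stats.spearmanr(bot_percentage, total_evac_time)
    # spearmanr reports a two-sided p; for a directional (one-tailed) test the
    # p-value may be halved when the observed sign matches the hypothesised direction.
    print(rho, p_two_sided)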

Figure 6.20: Mean total evacuation time (seconds) of the three H/BGC (6H0B: 0% bots; 3H3B: 50% bots; 0H6B: 100% bots) across the three fire evacuation scenarios S1, S2 and S3. Error bars: +/- 1 SD.

For the fire evacuation scenario S3, the mean total evacuation time decreased gradually from 6H0B (N = 2, M = 58.40, SD = 17.06) and 3H3B (N = 2, M = 54.49, SD = 10.99) to 0H6B (N = 2, M = 47.87, SD = 1.99). A Spearman's correlation test suggested a very weak, non-significant negative monotonic correlation between the percentage of bots and the total evacuation times (rs = −.12, n = 6, p = .822).

In conclusion, the results for the mean total evacuation time have shown that mixing human players with bots can reduce the total evacuation time. Although the Spearman's correlations indicated a significant relationship for the fire evacuation scenario S2, the correlations were not significant for the other two scenarios. As a result, H0 was only partially rejected62.

62 Future work may perform more tests with larger sample sizes and more control groups to determine if the hypothesis H0 can be fully rejected or accepted. However, this is outside the scope of the work presented here.

6.4 Summary of Chapter 6

This chapter presented a case study of using the VHBTT to evaluate the behavioural realism of the PREVENT-generated virtual human evacuees (bots) (Chapter 3). To enable the VHBTT, a networked fire evacuation training system and a judging system were developed.

The results of the study strongly demonstrated that the PREVENT-generated virtual human evacuees (bots) behaved at least as naturally as the human participants in the given fire evacuation training scenarios. They also showed that these realistic evacuation behaviours were consistent across all three H/BGC and all three fire evacuation scenarios. This suggests that the reuse of expert domain knowledge via PREVENT (Chapter 3) and the use of the path switching framework (Chapter 4) can contribute to the development of realistic virtual fire evacuation training systems. The preliminary study of the total evacuation time implied that mixing human participants with expert bots partially contributes to better training outcomes (i.e. shorter total evacuation times) for virtual fire evacuation training.

Chapter 7

Conclusions and Future Work

This chapter concludes the body of the thesis with a review of the aims, approach, and outcomes, an overview of contributions, and some final ideas for future work.

7.1 Summary

The rapid development of virtual reality technologies has made it possible to facilitate training activities in virtual worlds, especially for dangerous scenarios such as fire drills [153, 212] and earthquake training [131]. However, building virtual environment-based training systems is difficult. A major issue, as explored in this thesis, is the need to include expert domain knowledge (e.g. fire science and earthquake science) and the difficulty of incorporating this knowledge in virtual environments. For example, the lack of fire science in a virtual fire evacuation training system can lead to an inaccurate simulation of fire and toxic gases, and to limitations in the realism of virtual human evacuees. The motivation of this research was to find an approach to building realistic virtual environment-based training systems that embed reused expert domain knowledge.

This thesis proposes an innovative pipeline approach, PREVENT, which translates existing expert domain knowledge (in this thesis, fire science knowledge) from offline numerical simulators (e.g. the fire evacuation simulator FDS+Evac) into virtual environments. The reuse of modern game engines also eases the time-consuming process of creating virtual environments. Together with a data parser and code generator, the PREVENT approach can automate the transition of behaviour data from a numerical simulator to the target virtual environment.

Once the original PREVENT approach was defined and validated (Chapter 3), the challenge emerged of turning statically defined virtual human behaviour into dynamic interactive behaviour. This challenge required the original PREVENT approach to be extended to support interaction among computer-controlled virtual humans and their surrounding environment with a minimum loss of the embedded domain knowledge (e.g. fire science in the case study). An interaction framework was developed that combined multiple searching algorithms to support path switching of computer-controlled virtual human evacuees. This successfully enhanced the interactivity of the virtual humans in the PREVENT-generated virtual environments (Chapter 4).

An evaluation of the behavioural realism of the computer-controlled virtual human evacuees generated via the PREVENT approach was then presented. A new test, the VHBTT, for reviewing the realism of virtual human behaviour was defined (Chapter 5). The VHBTT was used to show that the PREVENT approach can generate virtual humans which are judged to behave as realistically as, or even more realistically than, avatars controlled by human evacuees (Chapter 6).

7.2 Overview of Contributions

The primary contributions of this thesis are the design and validation of the PREVENT approach (including an interaction framework to support path switching for computer-controlled virtual humans) and the VHBTT for reviewing the behavioural realism of virtual humans in virtual environments. A complete pipeline for creating interactive virtual environments has been demonstrated. The PREVENT approach performs reliably, and accurately reuses domain knowledge to create realistic virtual humans in virtual environment-based training systems (Chapter 6). The three main contributions for solving the three research questions are summarised here:

Q1 How is it possible to reuse expert domain knowledge to support realistic behaviours of computer-controlled virtual humans in virtual evacuation training environments?

Firstly, in order to reuse domain knowledge in a virtual environment (Q1), the thesis contributed a novel pipeline approach, PREVENT, which successfully reuses domain knowledge to support the creation of realistic virtual environments by combining 3D modelling tools, evacuation simulators and game engines (see Chapter 3). PREVENT is capable of reusing a broad range of domain knowledge in virtual environment-based training systems. As a proof of concept, an example of the use of PREVENT was presented in the thesis, which extracted fire science knowledge from a fire evacuation simulator (FDS+Evac) and reused it to create realistic computer-controlled virtual human evacuees in a virtual environment presented in the Unity engine (see Section 3.2). Evidence from the case study validated the scalability and consistency of the PREVENT approach regardless of the size, complexity and fire conditions of the test environment (see Section 3.4 and Section 3.5). The work towards Q1 was published in [265-267].

Q2 How is it possible to enable dynamic interaction in virtual evacuation training systems without impacting the embedded expert domain knowledge?

Secondly, to enable dynamic interaction in the prototyped virtual environments (Q2), the thesis contributed a path-switching framework that extended the existing PREVENT approach to successfully support interactions between virtual humans with a minimum loss of the reused fire science knowledge (see Section 4.2). The path-switching framework was validated with improved interactivity in the test virtual environment, as shown in several case studies (see Section 4.3). Also, the statistical analysis of the user study presented in Chapter 6 further demonstrated that the PREVENT-generated virtual humans can behave at least as realistically as those controlled by real human participants.

Q3 How is it possible to evaluate the realism of virtual human behaviours in a virtual environment?

The final question was to evaluate the realism of virtual human behaviours in virtual environments (Q3), such as the virtual human evacuees created with the solutions to Q1 and Q2. As far as the author is aware, the VHBTT proposed in this thesis is the first test specifically designed for reviewing the behavioural realism of virtual humans in general virtual environment-based applications (see Chapter 5). Chapter 6 detailed a complete case study of using the VHBTT in a virtual fire evacuation training system. In addition to the three main contributions described above, the thesis has also made the following contributions:

• Developed a test fire evacuation training system based on a real building at The University of Newcastle, Australia. This system consists of a networked training subsystem, where multiple human participants and virtual human evacuees can complete fire egress tasks together, and a playback subsystem that can replay a previously recorded evacuation training session (Chapter 6).

• Developed an example of a web-based reviewing system that facilitates the judgement process of the VHBTT (Chapter 6).

• Presented evidence that mixing virtual human evacuees with human evacuees in virtual fire evacuation training scenarios may enhance the training outcome, as a decrease in total evacuation time was observed (Section 6.3.5).

7.3 Future Work

Chapter 6 presented a user study as part of the evaluation of the PREVENT approach. As a proof of concept, only one full case study was conducted, with virtual fire scenarios based on a campus building. Further studies may explore the use of the PREVENT approach in creating different training scenarios (e.g. fire evacuation in hospitals, banks and airports) with an increased number of participants from various backgrounds (e.g. doctors, patients and passengers).

It would also be worthwhile to investigate the difference in training impact between training users in real world scenarios and in virtual scenarios powered by PREVENT. This would give useful references on whether embedding domain knowledge improves training performance. For example, comparing the training results of real fire drills with those of virtual fire drills (e.g. the study in Section 6.3.5) could indicate whether involving fire science knowledge (e.g. in the form of computer-controlled virtual humans) helps human trainees gain more solid fire safety skills (i.e. faster evacuation).

FDS+Evac simulates not only the behaviour of evacuees, but also produces continuously simulated environment data for smoke, fire, heat and more. In addition to the reuse of evacuation behaviour, future work could explore the reuse of these environmental data in virtual environments. This is of great significance, as it would help create highly realistic, accurate and reliable virtual fire environments to facilitate a wide range of fire-related (including combustion) research. For example, it would give an ideal environment for evaluating and training rescue robots, assessing the safety level of building designs, training fire fighters, and supporting military simulation applications. Apart from FDS+Evac, there are several other domain simulators that produce similar simulation data. Future projects may consider replacing FDS+Evac with other domain simulators such as buildingEXODUS.

Another interesting avenue for future work would be the use of building information modelling (BIM) in the 3D modelling phase of PREVENT, for more accurate simulation configurations of evacuations in simulators. BIM is a digital representation of the physical and functional characteristics of a facility. A building information model has a more precise description of the target environment, such as information about construction materials, which directly contributes to the evolution of fire and smoke and to the behaviours of simulated agents. The automated integration of the knowledge of a building with a fire evacuation simulator via PREVENT could significantly improve the accuracy and reliability of simulated fire environments. This potentially leads to a dramatic increase in computational cost, as the amount of available information is expanded. This brings an opportunity to further strengthen the PREVENT approach by integrating distributed computing frameworks.

The thesis has developed a large code base while implementing PREVENT and supporting the VHBTT. One future development plan is to refactor the code base to produce a complete PREVENT system consisting of a 3D modelling module, a behaviour simulation module, a path switching module, a data reuse module, a game engine reuse module, and a framework for the VHBTT judging system. The system will be designed to be loosely coupled, so that each component (module) can be reused or refactored to suit different research projects and different levels of developers. As a result, users could easily produce interactive virtual systems by simply providing 3D models and simulation configurations (e.g. from BIM), and performing basic virtual environment creation operations (e.g. importing 3D models and attaching auto-generated scripts to these models).

One limitation of the research is that the simulated behaviours are always rational, as the virtual human evacuees always make optimal decisions based on their surrounding environment as defined in the fire simulator. In contrast, human beings may behave irrationally in emergency scenarios, for example running through fires or jumping through windows. This may lead to an interesting connection between the research in this thesis and the area of affective computing. Apart from the behaviours that can be reused by PREVENT, the FDS+Evac simulations also showed other interesting crowd behaviours, such as bottlenecks, herding behaviour and queuing. Future work may investigate how these crowd behaviours can be translated into a game engine.

One aim of researching virtual evacuation training systems is to provide an immersive training experience that can effectively improve people's ability to make correct decisions when emergencies occur. Building on the interactive environment created by PREVENT, the addition of virtual/augmented reality devices, such as the HTC Vive and Oculus Rift, together with multimodal interaction design, could further improve the training experience. For example, adding stereo audio would allow the trainee to hear the crackling sound of burning materials, integrating smoke generators would let the trainee smell smoke, and the use of heat generators would let the trainee feel the heat.

In addition to virtual reality, another interesting line of future work would be expanding PREVENT to support augmented reality (AR)-based training environments. Augmented reality provides a different way of interacting with the world. Future work could investigate in what form expert domain knowledge can be presented in AR environments and what interface would be appropriate to engage human trainees with augmented virtual humans. It would also be interesting to investigate which method, VR or AR, could provide more engaging training experiences and better outcomes. I believe further studies and the integration of existing technologies will push the experience of VR/AR-based evacuation training to a new level. I am strongly confident that PREVENT will make a difference in advancing this field and ultimately help save lives from tragedies.

Bibliography

[1] ACT Fire and Rescue (2012). CFU communications drills 2011-12. ACT, Aus- tralia.

[2] Anderson, E. F., McLoughlin, L., Watson, J., Holmes, S., Jones, P., Pallett, H., & Smith, B. (2013, September). Choosing the infrastructure for entertainment and serious computer games - A whiteroom benchmark for game engine selec- tion. In Games and Virtual Worlds for Serious Applications (VS-GAMES), 2013 5th International Conference on (pp. 1-8). http://doi.org/10.1109/VS- GAMES.2013.6624223

[3] Angelin, J., Blair, P., & Carson, N. (1993). Aircraft evacuation testing: Re- search and technology issues. Office of Technology Assessment-Congress of the United States.

[4] Atkinson, D. J., & Clark, M. H. (2014, October). Methodology for study of human-robot social interaction in dangerous situations. In Proceedings of the Second International Conference on Human-Agent Interaction (pp. 371-376). http://doi.org/10.1145/2658861.2658871

[5] Autodesk. What AutoCAD users are saying. Retrieved February 06, 2017, from http://www.autodesk.com/products/autocad/case-studies/

[6] Aziz, E. S., Chang, C., Arango, F., Esche, S. K., & Chassapis, C. (2007, January). Linking computer game engines with remote experiments. In ASME 2007 International Mechanical Engineering Congress and Exposition (pp. 413- 420). http://doi.org/10.1115/IMECE2007-41969


[7] Bailey, J. D., & Blackmore, K. L. (2017, January). Gender and the perception of emotions in avatars. In Proceedings of the Australasian Computer Science Week Multiconference (p. 62). http://doi.org/10.1145/3014812.3014876

[8] Baird, H. S., Coates, A. L., & Fateman, R. J. (2003). PessimalPrint: A reverse Turing Test. International Journal on Document Analysis and Recognition, 5 (2-3), 158-163. http://doi.org/10.1007/s10032-002-0089-1

[9] Bartish, A., & Thevathayan, C. (2002, July). BDI agents for game development. In Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems: part 2 (pp. 668-669). http://doi.org/10.1145/544862.544901

[10] Bayliss, J. D., Vedamurthy, I., Bavelier, D., Nahum, M., & Levi, D. (2012, September). Lazy eye shooter: A novel game therapy for visual recovery in adult amblyopia. In Games Innovation Conference (IGIC), 2012 IEEE Inter- national (pp. 1-4). http://doi.org/10.1109/IGIC.2012.6329836

[11] Bída, M., Brom, C., Popelová, M., & Kadlec, R. (2011, November). StoryFactory - A tool for scripting machinimas in Unreal Engine 2 and UDK. In Fourth International Conference on Interactive Digital Storytelling (pp. 334-337). http://doi.org/10.1007/978-3-642-25289-1_42

[12] Bille, R., Smith, S. P., Maund, K., & Brewer, G. (2014). Explor- ing building information modelling to game engine conversion. Proceed- ings of the 2014 Conference on Interactive Entertainment (pp. 1-8). http://doi.org/10.1145/2677758.2677764

[13] Blender Foundation. Blender. Retrieved February 06, 2017, from https://www.blender.org/

[14] Boden, M. A. (2006). Mind as machine: A history of cognitive science. Oxford, England: Oxford University Press.

[15] Bolte, B., Steinicke, F., & Bruder, G. (2011, April). The jumper metaphor: An effective navigation technique for immersive display setups. In Proceedings of Virtual Reality International Conference (VRIC) 2011 (pp. 1-7).

[16] Boeykens, S. (2013). Unity for architectural visualization. Birmingham, Eng- land: Packt Publishing Ltd.

[17] Brightman, M. (2013). The SketchUp workflow for architecture: Model- ing buildings, visualizing design, and creating construction documents with SketchUp Pro and LayOut. New York City: John Wiley & Sons.

[18] Bruder, G., Interrante, V., Phillips, L., & Steinicke, F. (2012). Redirecting walking and driving for natural navigation in immersive virtual environments. IEEE Transactions on Visualization and Computer Graphics, 18 (4), 538-545. http://doi.org/10.1109/TVCG.2012.55

[19] Bruder, G., Pusch, A., & Steinicke, F. (2012, August). Analyzing effects of geometric rendering parameters on size and distance estimation in on-axis stereographics. In Proceedings of the ACM Symposium on Applied Perception (pp. 111-118). http://doi.org/10.1145/2338676.2338699

[20] Bryde, D., Broquetas, M., & Volm, J. M. (2013). The project benefits of building information modelling (BIM). International Journal of Project Man- agement, 31 (7), 971-980. http://doi.org/10.1016/j.ijproman.2012.12.001

[21] Burdea Grigore, C., & Coiffet, P. (1994). Virtual reality technology. London, England: Wiley-Interscience.

[22] Burden, D. J. (2009). Deploying embodied AI into virtual worlds. Knowledge- Based Systems, 22 (7), 540-544. http://doi.org/10.1016/j.knosys.2008.10.001

[23] Carpin, S., Lewis, M., Wang, J., Balakirsky, S., & Scrapper, C. (2007, April). USARSim: A robot simulator for research and education. In Proceedings 2007 IEEE International Conference on Robotics and Automation (pp. 1400-1405). http://doi.org/10.1109/ROBOT.2007.363180

[24] Cepeda, L. M., & Davenport, D. S. (2006). Person-centered therapy and solution-focused brief therapy: An integration of present and future aware- ness. Psychotherapy: Theory, Research, Practice, Training, 43 (1), 1-12. http://doi.org/10.1037/0033-3204.43.1.1

[25] Cha, M., Han, S., Lee, J., & Choi, B. (2012). A virtual reality based fire training simulator integrated with fire dynamics data. Fire Safety Journal, 50, 12-24. http://doi.org/10.1016/j.firesaf.2012.01.004

[26] Chang, Y., Aziz, E. S., Esche, S. K., & Chassapis, C. (2011, October). Assessment of the pilot implementation of a game-based gear design lab- oratory. In 2011 Frontiers in Education Conference (FIE) (pp. S4H-1). http://doi.org/10.1109/FIE.2011.6143031

[27] Chen, C. J., Lau, S. Y., & Teh, C. S. (2015). A feasible group testing frame- work for producing usable virtual reality learning applications. Virtual Reality, 19 (2), 129-144. http://doi.org/10.1007/s10055-015-0263-7

[28] Chew, M., & Baird, H. S. (2003). Baffletext: A human interactive proof. In Proc. SPIE (pp. 305-316). http://doi.org/10.1117/12.479682

[29] Childers, R. (2010). A virtual Mars. In Online Worlds: Convergence of the Real and the Virtual (pp. 101-109). http://doi.org/10.1007/978-1-84882-825- 4 8

[30] Chittaro, L., Ranon, R., & Ieronutti, L. (2006). Vu-flow: A visu- alization tool for analyzing navigation in virtual environments. IEEE Transactions on Visualization and Computer Graphics, 12 (6), 1475-1485. http://doi.org/10.1109/TVCG.2006.109

[31] Chittaro, L., & Ranon, R. (2009, March). Serious games for training occupants of a building in personal fire safety skills. In Games and Virtual Worlds for Serious Applications, 2009. VS-GAMES'09. Conference in (pp. 76-83). http://doi.org/10.1109/VS-GAMES.2009.8

[32] Chow, W. K. (2001). Review on fire safety management and application to Hong Kong. International Journal on Engineering Performance-Based Fire Codes, 3 (1), 52-58.

[33] Christel, M. G., Stevens, S. M., Maher, B. S., Brice, S., Champer, M., Jayapalan, L., ... Zhang, X. (2012, July). RumbleBlocks: Teaching sci- ence concepts to young children through a Unity game. In 2012 17th International Conference on Computer Games (CGAMES)(pp. 162-166). http://doi.org/10.1109/CGames.2012.6314570

[34] Conover, W. J. (1980). Practical nonparametric statistics. New York City: John Wiley & Sons.

[35] Connell, R. (2001). Collective behavior in the September 11, 2001 evacuation of the World Trade Center (Preliminary Paper# 313). Newark, DE: University of Delaware Disaster Research Center.

[36] CryTek. (2016a). CryENGINE V. Retrieved January 30, 2017, from https://www.cryengine.com/

[37] CryTek. (2016b). CryENGINE Showcase. Retrieved January 30, 2017, from https://www.cryengine.com/showcase/

[38] CryTek. (2013c). Crysis. Retrieved January 30, 2017, from http://www.crysis.com/

[39] CryTek. (2016d). The Climb. Retrieved January 30, 2017, from http://www.theclimbgame.com/

[40] CryTek. (2016e). CryENGINE. Github Repository, Retrieved January 30, 2017, from https://github.com/CRYTEK-CRYENGINE/CRYENGINE/

[41] CryTek. (2016f). CryENGINE Forums. Retrieved March 16, 2017, from https://forum.cryengine.com/

[42] CryTek. (2016f). CryENGINE Sandbox. Retrieved September 25, 2017, from https://forum.cryengine.com/

[43] Davis, F. D., & Venkatesh, V. (2004). Toward preprototype user accep- tance testing of new information systems: Implications for software project management. IEEE Transactions on Engineering Management, 51 (1), 31-46. http://doi.org/10.1109/TEM.2003.822468

[44] De Amicis, R., Girardi, G., Andreolli, M., & Conti, G. (2009, October). Game based technology to enhance the learning of history and cultural heritage. In Proceedings of the International Conference on Advances in Computer Enter- tainment Technology (pp. 451-451). http://doi.org/10.1145/1690388.1690499

[45] Dunwell, I., Petridis, P., Hendrix, M., Arnab, S., Mohammad, A. S., & Guetl, C. (2012, July). Guiding intuitive learning in serious games: An achievement- based approach to externalized feedback and assessment. In Complex, Intelli- gent and Software Intensive Systems (CISIS), 2012 Sixth International Con- ference on (pp. 911-916). http://doi.org/10.1109/CISIS.2012.205

[46] Eikelboom, R. H., Leishman, N. F., Munro, T. J., Nguyen, B., Riggs, P. R., Tennant, J., ... Robertson, W. B. (2012). “Epic Ear Defence” - A game to educate children on the risks of noise-related hearing loss. Games for Health Journal, 1 (6), 460-463. http://doi.org/10.1089/g4h.2012.0065

[47] Epic Games. (2004). Unreal Tournament 2004.

[48] Epic Games. (2014). What is Unreal Engine 4. Retrieved January 30, 2017, from https://www.unrealengine.com/what-is-unreal-engine-4

[49] Epic Games. (2017a). Retrieved January 30, 2017, from https://www.epicgames.com/

[50] Epic Games. (2017b). Unreal Engine End User License Agreement Version 11. January 30, 2017, from https://www.unrealengine.com/eula

[51] Epic Games. (2017c). Level Editor. Retrieved September 25, 2017, from https://docs.unrealengine.com/latest/INT/Engine/UI/LevelEditor/index.html

[52] Euro Fire Protection and Maintenance Service. (2013, May). Carry- ing out an Effective Fire Drill. Retrieved February 06, 2017, from http://www.eurofireprotection.com/blog/carrying-out-an-effective-fire-drill/

[53] Fahy, R. F., & Proulx, G. (1997). Human behavior in the world trade center evacuation. Fire Safety Science, 5, 713-724. http://doi.org/10.3801/IAFSS.FSS.5-713

[54] Fahy, R. F. & Proulx, G. (2005). Analysis of published accounts of the World Trade Center evacuation (Research Report). Gaithersburg, MD: Department of Commerce, Technology Administration, National Institute of Standards and Technology

[55] Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, and Computers, 35 (3), 379-383. http://doi.org/10.3758/BF03195514

[56] Feigenbaum, E. A. (2003). Some challenges and grand challenges for computational intelligence. Journal of the ACM (JACM), 50 (1), 32-40. http://doi.org/10.1145/602382.602400

[57] Fellendorf, M., & Vortisch, P. (2010). Microscopic traffic flow sim- ulator VISSIM. In Fundamentals of Traffic Simulation (pp. 63-93). http://doi.org/10.1007/978-1-4419-6142-6 2

[58] Field, A. (2009). Discovering statistics using SPSS. London, England: Sage Publications Ltd.

[59] Fire and Rescue NSW. Annual Statistical Report 2006/07. Retrieved January 30, 2017, from http://www.fire.nsw.gov.au/page.php?id=171

[60] Flanagan, K. (2011, June). Avatars train on Navy's future ship. Retrieved January 30, 2017, from http://www.hmascanberra.com/history/nushipcanberra.html

[61] Forney, G. P. (2014). Smokeview, A tool for visualizing fire dynamics simu- lation data. Volume I: User’s guide. Gaithersburg, MD: National Institute of Standards and Technology. http://doi.org/10.6028/NIST.SP.1017-3

[62] Froschauer, J., Arends, M., Goldfarb, D., & Merkl, D. (2011, May). To- wards an online multiplayer serious game providing a joyful experience in learning art history. In Games and Virtual Worlds for Serious Applica- tions (VS-GAMES), 2011 Third International Conference on (pp. 160-163). http://doi.org/10.1109/VS-GAMES.2011.47

[63] Fumarola, M., & Poelman, R. (2011). Generating virtual environments of real world facilities: Discussing four different approaches. Automation in Construc- tion, 20 (3), 263-269. http://doi.org/10.1016/j.autcon.2010.08.004

[64] Galea, E. R., & Galparsoro, J. P. (1994). A computer-based simulation model for the prediction of evacuation from mass-transport vehicles. Fire Safety Jour- nal, 22 (4), 341-366. http://doi.org/10.1016/0379-7112(94)90040-X

[65] Galea, E. R., Owen, M., & Lawrence, P. J. (1996). Computer modelling of human behaviour in aircraft fire accidents. Toxicology, 115 (1-3), 63-78. http://doi.org/10.1016/S0300-483X(96)03495-6

[66] Galea, E. R., Hulse, L., Day, R., Siddiqui, A., Sharp, G., Boyce, K., ... & Greenall, P. V. (2010). The UK WTC 9/11 evacuation study: An overview of the methodologies employed and some preliminary analysis. In Pedestrian and Evacuation Dynamics 2008 (pp.3-24). http://doi.org/10.1007/978-3-642- 04504-2 1

[67] Gamberini, L., Cottone, P., Spagnolli, A., Varotto, D., & Mantovani, G. (2010). Responding to a fire emergency in a virtual environment: Differ- ent patterns of action for different situations. Ergonomics, 8 (46). 842-858 http://doi.org/10.1080/0014013031000111266

[68] Gatzidis, C., Parry, K., Kavanagh, E., Wilding, A., & Gibson, D. (2009, March). Towards the development of an interactive 3D coach training serious game. In Games and Virtual Worlds for Serious Applications, 2009. VS-GAMES'09. Conference in (pp. 186-189). http://doi.org/10.1109/VS-GAMES.2009.28

[69] Geisser, S., & Johnson, W. O. (2005). Modes of parametric statistical inference. New York City: John Wiley & Sons.

[70] Gentner, D. (2003). Language in mind: Advances in the study of language and thought. Cambridge, MA: The MIT Press.

[71] Gershon, R. R., Qureshi, K. A., Rubin, M. S., & Raveis, V. H. (2007). Factors associated with high-rise evacuation: Qualitative results from the World Trade Center evacuation study. Prehospital and Disaster Medicine, 22 (03), 165-173. http://doi.org/10.1017/S1049023X0000460X

[72] Gilbert, R. L., & Forney, A. (2015). Can avatars pass the Tur- ing Test? Intelligent agent perception in a 3D virtual environ- ment. International Journal of Human-Computer Studies, 73, 30-36. http://doi.org/10.1016/j.ijhcs.2014.08.001

[73] Glasa, J., Valasek, L., Weisenpacher, P., & Halada, L. (2012). Use of Py- roSim for simulation of cinema fire. International Journal on Recent Trends in Engineering and Technology, 7 (2), 51-56.

[74] Gray, J. (2003). What next? A dozen information-technology research goals. Journal of the ACM (JACM), 50 (1), 41-57. http://doi.org/10.1145/602382.602401

[75] Guimaraes, M., Said, H., & Austin, R. (2011, June). Using video games to teach security. In Proceedings of the 16th Annual Joint Conference on Innovation and technology in Computer Science Education (pp. 346-346). http://doi.org/10.1145/1999747.1999860

[76] Gwynne, S., Galea, E. R., Owen, M., Lawrence, P. J., & Filippidis, L. (1999). A review of the methodologies used in the computer simulation of evacuation from the built environment. Building and Environment, 34 (6), 741-749. http://doi.org/10.1016/S0360-1323(98)00057-2

[77] Habonneau, N., Richle, U., Szilas, N., & Dumas, J. E. (2012, November). 3D simulated interactive drama for teenagers coping with a traumatic brain injury in a parent. In International Conference on Interactive Digital Storytelling (pp. 174-182). http://doi.org/10.1007/978-3-642-34851-8 17

[78] Hamad, M. (2013). AutoCAD 2014 beginning and intermediate. Herndon, VA: Mercury Learning & Information.

[79] Harnad, S. (1991). Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines, 1 (1), 43-54.

[80] Harthoorn, K., & Hughes, S. (2011, July). Interface design to support situ- ation awareness in virtual puppetry. In International Conference on Human- Computer Interaction (pp. 112-115). http://doi.org/10.1007/978-3-642-22095- 1 23

[81] Hasler, B. S., Tuchman, P., & Friedman, D. (2013). Virtual re- search assistants: Replacing human interviewers by automated avatars in virtual worlds. Computers in Human Behavior, 29 (4), 1608-1616. http://doi.org/10.1016/j.chb.2013.01.004

[82] Hauser, L. (1993). Reaping the whirlwind: Reply to Harnad’s “other bodies, other minds”. Minds and Machines, 3 (2), 219-237. http://doi.org/10.1007/BF00975533

[83] Haworth, M. B., Baljko, M., & Faloutsos, P. (2012, November). Treating pho- bias with computer games. In International Conference on Motion in Games (pp. 374-377). http://doi.org/10.1007/978-3-642-34710-8 36

[84] Helbing, D., & Molnár, P. (1995). Social force model for pedestrian dynamics. Physical Review E, 51 (5), 4282. http://doi.org/10.1103/PhysRevE.51.4282

[85] Henneton, N. (2012). Modelling evacuation in a cinema complex: Validation study and comparison between different egress strategies. In Human Behaviour in Fire Symposium.

[86] Hernández-Orallo, J., & Dowe, D. L. (2010). Measuring universal intelligence: Towards an anytime intelligence test. Artificial Intelligence, 174 (18), 1508-1539. http://doi.org/10.1016/j.artint.2010.09.006

[87] Herrlich, M., Meyer, R., Malaka, R., & Heck, H. (2010, September). Develop- ment of a virtual electric wheelchair - Simulation and assessment of physical fidelity using the Unreal Engine 3. In International Conference on Entertain- ment Computing (pp. 286-293). http://doi.org/10.1007/978-3-642-15399-0 29

[88] Hingston, P. (2009). A Turing Test for computer game bots. IEEE Transactions on Computational Intelligence and AI in Games, 1 (3), 169-186. http://doi.org/10.1109/TCIAIG.2009.2032534

[89] Hingston, P. (2010, August). A new design for a Turing Test for bots. In Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games (pp. 345-350). http://doi.org/10.1109/ITW.2010.5593336

[90] Hostikka, S., Korhonen, T., Paloposki, T., Rinne, T., Matikainen, K., & Heliövaara, S. (2007). Development and validation of FDS+Evac for evacuation simulations (Research Report). VTT Research, Finland.

[91] Howard, R. A. (1960). Dynamic Programming and Markov Processes. Cam- bridge, MA: The MIT Press.

[92] Hu, C., Jiang, C., Xu, M., Ding, H., & Chow, W. K. (2005). Studies on human behaviour in fires in China. International Journal on Architectural Science, 6 (3), 133-143.

[93] Hu, W., Qu, Z., & Zhang, X. (2012, August). A reusable and interoperable component library for mechanics simulation. In 2012 International Conference on Computer Science and Information Processing (CSIP) (pp. 1374-1377). http://doi.org/10.1109/CSIP.2012.6309118

[94] Hui, C. H., & Triandis, H. C. (1989). Effects of culture and response format on extreme response style. Journal of Cross-Cultural Psychology, 20 (3), 296-309.

[95] IES. (2017). SIMULEX. Retrieved September 25, 2017, from http://www.iesve.com/software/ve-for-engineers/module/simulex/480

[96] Indraprastha, A., & Shinozaki, M. (2009). The investigation on using Unity3D game engine in urban design study. Journal of ICT Research and Applications, 3 (1), 1-18. http://doi.org/10.5614%2Fitbj.ict.2009.3.1.1

[97] Jacobson, J., Holden, L., Studios, F., & Toronto, C. A. (2005, June). The virtual Egyptian temple. In World Conference on Educational Media, Hyper- media & Telecommunications (ED-MEDIA), Montreal, Canada.

[98] Jacobson, J., & Lewis, M. (2005). Game engine virtual reality with CaveUT. Computer, 38 (4), 79-82. http://doi.org/10.1109/MC.2005.126

[99] Jarrett, C. (2009). Get a second life - Christian Jarrett on the benefits (and dangers) of using virtual worlds in your psychological research, therapy and teaching. Psychologist, 22 (6), 490-493.

[100] Johnson, M. W. (2010, June). Supporting collaborative real-time strategic planning in multi-player games. In Proceedings of the Fifth International Conference on the Foundations of Digital Games (pp. 265-267). http://doi.org/10.1145/1822348.1822388

[101] Jorge, C. A. F., Mól, A. C. A., Couto, P. M., & Pereira, C. M. N. (2010). Nuclear plants and emergency virtual simulations based on a low-cost engine reuse. Nuclear Power, Pavel Tsvetkov (Ed.), InTech.

[102] Juarez, A., Schonenberg, W., & Bartneck, C. (2010). Implementing a low-cost CAVE system using the CryEngine2. Entertainment Computing, 1 (3), 157-164. http://doi.org/10.1016/j.entcom.2010.10.001

[103] Kendall, M. G., Stuart, A., & Ord, J. K. (1968). The advanced theory of statistics (Vol. 3, pp. 306-311). London, England: John Wiley.

[104] Khronos Group. (2015, March). Vulkan - Industry Forged. Retrieved February 14, 2017, from https://www.khronos.org/vulkan/

[105] Kielar, P. M., Handel, O., Biedermann, D. H., & Borrmann, A. (2014). Concurrent hierarchical finite state machines for modeling pedestrian behavioral tendencies. Transportation Research Procedia, 2, 576-584. http://doi.org/10.1016/j.trpro.2014.09.098

[106] Kim, Y., & Baylor, A. L. (2006). A social-cognitive framework for pedagogical agents as learning companions. Educational Technology Research and Development, 54 (6), 569-596. http://doi.org/10.1007/s11423-006-0637-3

[107] Kobes, M., Helsloot, I., De Vries, B., & Post, J. G. (2010). Building safety and human behaviour in fire: A literature review. Fire Safety Journal, 45 (1), 1-11. http://doi.org/10.1016/j.firesaf.2009.08.005

[108] Kock, N. (2008). E-collaboration and e-commerce in virtual worlds: The potential of Second Life and World of Warcraft. International Journal of e-Collaboration, 4 (3), 1-13. http://doi.org/10.4018/978-1-59904-825-3.ch001

[109] Koenig, S. T., Crucian, G. P., Dünser, A., Bartneck, C., & Dalrymple-Alford, J. C. (2011). Validity evaluation of a spatial memory task in virtual environments. International Journal of Design and Innovation Research, 6 (1), 1-13.

[110] Koenig, S. T., Dünser, A., Bartneck, C., Dalrymple-Alford, J. C., & Crucian, G. P. (2011, June). Development of virtual environments for patient-centered rehabilitation. In 2011 International Conference on Virtual Rehabilitation (pp. 1-7). http://doi.org/10.1109/ICVR.2011.5971838

[111] Koo, B., & Fischer, M. (2000). Feasibility study of 4D CAD in commercial construction. Journal of Construction Engineering and Management, 126 (4), 251-260. http://doi.org/10.1061/(ASCE)0733-9364(2000)126:4(251)

[112] Korhonen, T., Hostikka, S., Heliövaara, S., Ehtamo, H., & Matikainen, K. (2007). FDS+Evac: Evacuation module for fire dynamics simulator. In Proceedings of the Interflam2007: 11th International Conference on Fire Science and Engineering (pp. 1443-1448).

[113] Korhonen, T., & Hostikka, S. (2009). Fire dynamics simulator with evacuation: FDS+Evac. Technical reference and user's guide. VTT Technical Research Centre of Finland, Finland.

[114] Korhonen, T., & Heliövaara, S. (2011). FDS+Evac: Herding behaviour and exit selection. In Proceedings of the 10th International IAFSS Symposium (pp. 19-23). http://doi.org/10.3801/IAFSS.FSS.10-723

[115] Kostic, Z., Radakovic, D., Cvetkovic, D., Trajkovic, S., & Jevremovic, A. (2012). Comparative study of CAD software, Web3D technologies and existing solutions to support distance-learning students of engineering profile. IJCSI International Journal of Computer Science Issues, 9 (4), 7.

[116] Kostoulas, T., Mporas, I., Kocsis, O., Ganchev, T., Katsaounos, N., Santa- maria, J. J., ... Fakotakis, N. (2012). Affective speech interface in serious games for supporting therapy of mental disorders. Expert Systems with Applications, 39 (12), 11072-11079. http://doi.org/10.1016/j.eswa.2012.03.067

[117] Kriz, Z., Prochaska, R., Morrow, C. A., Vasquez, C., & Wu, H. (2010, March). Unreal III based 3-D virtual models for training at Nuclear Power Plants. In Nuclear & Renewable Energy Conference (INREC), 2010 1st International (pp. 1-5). http://doi.org/10.1109/INREC.2010.5462548

[118] Kuligowski, E. D., Peacock, R. D., & Hoskins, B. L. (2005). A review of build- ing evacuation models (Research Report). Gaithersburg, MD: US Department of Commerce, National Institute of Standards and Technology.

[119] Kuligowski, E. D. (2009). The process of human behavior in fires (Research Report). Gaithersburg, MD: Department of Commerce, National Institute of Standards and Technology.

[120] Kumar, S., Hedrick, M., Wiacek, C., & Messner, J. I. (2011). Developing an experienced-based design review application for healthcare facilities using a 3D game engine. Journal of Information Technology in Construction (ITcon), 16 (6), 85-104.

[121] LaCurts, K. (2011). Criticisms of the Turing Test and why you should ignore (most of) them. Official Blog of MIT’s Course: Philosophy and Theoretical Computer Science.

[122] Lai, J., Tang, W., & He, Y. (2011, October). Team tactics in military serious game. In 2011 Fourth International Symposium on Computational Intelligence and Design (ISCID) (Vol. 1, pp. 75-78). http://doi.org/10.1109/ISCID.2011.28

[123] Laird, J. E., & Duchi, J. C. (2000). Creating human-like synthetic characters with multiple skill levels: A case study using the Soar Quakebot. In Proceedings of 2000 AAAI Fall Symposium Simulating Human Agents (pp. 75-79).

[124] Lawson, G. (2011). Predicting human behaviour in emergencies (Doctoral dis- sertation, University of Nottingham, UK).

[125] Lazar, J., Feng, J. H., & Hochheiser, H. (2010). Research methods in human- computer interaction. New York City: John Wiley & Sons.

[126] Leach, J. (2004). Why people ‘freeze’ in an emergency: Temporal and cog- nitive constraints on survival responses. Aviation, Space, and Environmental Medicine, 75 (6), 539-542.

[127] Lee, G. N. E., de Clunie, G. T., & Santana, G. (2012, May). 3D game engine as a visual information system. In Proceedings of the 6th Euro American Conference on Telematics and Information Systems (pp. 331-334). http://doi.org/10.1145/2261605.2261655

[128] Lentz, C. (2006). How one emergency plan works in a complex research facility. Journal of Chemical Health and Safety, 13 (3), 21-25. http://doi.org/10.1016/j.chs.2005.07.015

[129] Lewis, J., Brown, D., Cranton, W., & Mason, R. (2011, November). Simulating visual impairments using the Unreal Engine 3 game engine. In Serious Games and Applications for Health (SeGAH), 2011 IEEE 1st International Conference on (pp. 1-8). http://doi.org/10.1109/SeGAH.2011.6165430

[130] Lewis, M., & Jacobson, J. (2002). Game engines in scientific research. Communications of the ACM, 45 (1), 27.

[131] Li, C., Liang, W., Quigley, C., Zhao, Y., & Yu, L. F. (2017). Earthquake safety training through virtual drills. IEEE Transactions on Visualization and Computer Graphics, 23 (4), 1275-1284.

[132] Li, H., Chan, G., & Skitmore, M. (2012). Visualizing safety assessment by integrating the use of game technology. Automation in Construction, 22, 498- 505. http://doi.org/10.1016/j.autcon.2011.11.009

[133] Li, H., Tang, W., & Simpson, D. (2004, June). Behaviour based motion simulation for fire evacuation procedures. In Proceedings of Theory and Practice of Computer Graphics, 2004. (pp. 112-118). http://doi.org/10.1109/TPCG.2004.1314460

[134] Li, T. Y., Jeng, Y. J., & Chang, S. I. (2001). Simulating vir- tual human crowds with a leader-follower model. In Proceedings of the Fourteenth Conference on Computer Animation 2001. (pp. 93-102). http://doi.org/10.1109/CA.2001.982381

[135] Lin, H. X., Choong, Y. Y., & Salvendy, G. (1997). A proposed index of usability: A method for comparing the relative usability of different software systems. Behaviour & information Technology, 16 (4-5), 267-277. http://doi.org/10.1080/014492997119833

[136] Lu, P., & Liu, H. (2012). Research on web-based educational aircraft de- sign system. In Advances in Technology and Management (pp. 411-419). http://doi.org/10.1007/978-3-642-29637-6 52

[137] Lugrin, J. L., Charles, F., Cavazza, M., Le Renard, M., Freeman, J., & Lessiter, J. (2012, December). CaveUDK: A VR game engine middleware. In Proceedings of the 18th ACM Symposium on Virtual Reality Software and Technology (pp. 137-144). http://doi.org/10.1145/2407336.2407363

[138] Ma, M., Bale, K., & Rea, P. (2012, September). Constructionist learning in anatomy education. In International Conference on Serious Games Development and Applications (pp. 43-58). http://doi.org/10.1007/978-3-642-33687-4_4

[139] Ma, Q., Wang, R., Xin, Q., Wang, F., & Yang, J. (2013, June). The research on the key technology of building 3D virtual scene in Wendeng City based on Google Earth. In 2013 21st International Conference on Geoinformatics (pp. 1-4). http://doi.org/10.1109/Geoinformatics.2013.6626095

[140] Mac Namee, B., Rooney, P., Lindstrom, P., Ritchie, A., Boylan, F., & Burke, G. (2006, November). Serious Gordon using serious games to teach food safety in the kitchen. In Proceedings of 9th International Conference on Computer Games: AI, Animation, Mobile, Educational and Serious Games (pp. 1-7). Dublin Institute of Technology.

[141] Macy, S. G. (2015). Dota2 now Valve’s First Ever Source 2 Game - Welcome to Reborn. Retrieved February 10, 2017, from http://au.ign.com/articles/2015/09/09/dota-2-now-valves-first-ever-source-2- game

[142] Mann, H. B., & Whitney, D. R. (1947). On a test of whether one of two random variables is stochastically larger than the other. The Annals of Mathematical Statistics, 50-60.

[143] Marks, S., Windsor, J., & Wünsche, B. (2007, December). Evaluation of game engines for simulated surgical training. In Proceedings of the 5th International Conference on Computer Graphics and Interactive Techniques in Australia and Southeast Asia (pp. 273-280). http://doi.org/10.1145/1321261.1321311

[144] Mauldin, M. L. (1994, August). Chatterbots, TINYMUDS, and the Turing Test: Entering the Loebner Prize Competition. In AAAI, 94,16-21.

[145] Mazalek, A., Nitsche, M., Rébola, C., Wu, A., Clifton, P., Peer, F., & Drake, M. (2011, November). Pictures at an exhibition: A physical/digital puppetry performance piece. In Proceedings of the 8th ACM Conference on Creativity and Cognition (pp. 441-442). http://doi.org/10.1145/2069618.2069739

[146] McConnell, N. C., Boyce, K. E., & Shields, T. J. (2009). An analysis of the recognition and response behaviours of evacuees of WTC 1 on 9/11. In Proceedings of the 4th International Symposium on Human Behaviour in Fire (pp. 659-696). Interscience Communications, London.

[147] McGrattan, K. B., & Forney, G. P. (2000). Fire dynamics simulator: User’s manual. Gaithersburg, MD: US Department of Commerce, Technology Ad- ministration, National Institute of Standards and Technology.

[148] McGrattan, K. B., Hostikka, S., & Floyd, J. E. (2010). Fire dynamics sim- ulator, User’s Guide (NIST Special Publication 1019), Gaithersburg, MD: National Institute of Standards and Technology.

[149] McNamara, T. (2016). Questioning risk-based fire and life safety education age priorities. Injury Prevention. http://doi.org/10.1136/injuryprev-2016-042014

[150] Merlo, A., Dalcó, L., & Fantini, F. (2012, September). Game engine for cultural heritage: New opportunities in the relation between simplified models and database. In 2012 18th International Conference on Virtual Systems and Multimedia (VSMM) (pp. 623-628). http://doi.org/10.1109/VSMM.2012.6365993

[151] Microsoft Corporation. What is DirectX?. Retrieved February 9, 2017, from https://microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/what_is_directx.mspx

[152] Milam, D., Seif El-Nasr, M., Bartram, L., Lockyer, M., Feng, C., & Tan, P. (2012, May). Toolset to explore visual motion designs in a video game. In CHI'12 Extended Abstracts on Human Factors in Computing Systems (pp. 1091-1094). http://doi.org/10.1145/2212776.2212393

[153] Mól, A. C. A., Jorge, C. A. F., & Couto, P. M. (2008). Using a game engine for VR simulations in evacuation planning. IEEE Computer Graphics and Applications, 28 (3), 6-12. http://doi.org/10.1109/MCG.2008.61

[154] Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19 (2), 98-100. http://doi.org/10.1109/MRA.2012.2192811

[155] Mott MacDonald Ltd. STEPS software - Simulating pedestrian dynamics. Retrieved March 20, 2017, from http://www.steps.mottmac.com/files/page/260409/STEPS_software_leaflet.pdf

[156] MultiGen-Paradigm, Inc. (2005). MultiGen Creator. Retrieved February 8, 2017, from http://www.presagis.com/products_services/products/modeling-simulation/content_creation/creator

[157] Nardi, B., & Harris, J. (2006, November). Strangers and friends: Collaborative play in World of Warcraft. In Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work (pp. 149-158). http://doi.org/10.1145/1180875.1180898

[158] Nilsson, D., Frantzich, H., Husted, B. P., & Bygbjerg, H. (2008). Validation of egress models: Simulation of cinema theatre evacuations with Simulex, STEPS and buildingEXODUS. In 9th International Symposium on Fire Safety Science, 9, 341-352. http://doi.org/10.3801/IAFSS.FSS.9-341

[159] NeoAxis Group Ltd. (2016). NeoAxis 3D Engine 3.5. Retrieved February 09, 2017, from http://www.neoaxis.com/

[160] Neumann, F., Reichenberger, A., & Ziegler, M. (2009, September). Variations of the Turing Test in the age of internet and virtual reality. In Annual Conference on Artificial Intelligence (pp. 355-362). http://doi.org/10.1007/978-3-642-04617-9_45

[161] Neto, J. N., Silva, R., Neto, J. P., Pereira, J. M., & Fernandes, J. (2011, May). Solis'Curse - A cultural heritage game using voice interaction with a virtual agent. In 2011 Third International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES) (pp. 164-167). http://doi.org/10.1109/VS-GAMES.2011.31

[162] Newton, S., & Lowe, R. (2011, September). Using an analytics engine to understand the design and construction of domestic buildings. In Proceedings of RICS Construction and Property Conference, COBRA (pp. 410-419).

[163] NVIDIA. (2011). NVIDIA PhysX Technology. Retrieved February 9, 2017 from http://www.geforce.com/hardware/technology/physx

[164] O'Neill, E., Lewis, D., McGlinn, K., & Dobson, S. (2006, July). Rapid user-centred evaluation for context-aware systems. In International Workshop on Design, Specification, and Verification of Interactive Systems (pp. 220-233). http://doi.org/10.1007/978-3-540-69554-7_18

[165] Oppy, G., & Dowe, D. (2003). The Turing Test. The Stanford Encyclopedia of Philosophy, 1, 1-26.

[166] Owen, M., Galea, E. R., & Lawrence, P. J. (1996). The EXODUS evacuation model applied to building evacuation scenarios. Journal of Fire Protection Engineering, 8 (2), 65-84. http://doi.org/10.1177/104239159600800202

[167] Owen, M., Galea, E. R., & Lawrence, P. J. (1997). Advanced occupant behavioural features of the building-EXODUS evacuation model. Fire Safety Science, 5, 795-806. http://doi.org/10.3801/IAFSS.FSS.5-795

[168] Ortega, J., Shaker, N., Togelius, J., & Yannakakis, G. N. (2013). Imitating human playing styles in Super Mario Bros. Entertainment Computing, 4 (2), 93-104. http://doi.org/10.1016/j.entcom.2012.10.001

[169] Pauwels, P., De Meyer, R., & Van Campenhout, J. (2010). Visualisation of semantic architectural information within a game engine environment. In Proceedings of the 10th International Conference on Construction Applications of Virtual Reality (pp. 219-228).

[170] Pauwels, P., De Meyer, R., Audenaert, M., & Samyn, K. (2011, May). The role of game rules in architectural design environments. In 2011 Third International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES) (pp. 184-185). http://doi.org/10.1109/VS-GAMES.2011.37

[171] Pan, X., Han, C. S., & Law, K. H. (2005). A multi-agent based simulation framework for the study of human and social behavior in egress analysis. Computing in Civil Engineering, 179 (92), 1-12. http://doi.org/10.1061/40794(179)92

[172] Pelechano, N., & Badler, N. I. (2006). Modeling crowd and trained leader behavior during building evacuation. IEEE Computer Graphics and Applications, 26 (6), 80-86. http://doi.org/10.1109/MCG.2006.133

[173] Pelechano, N., & Malkawi, A. (2008). Evacuation simulation models: Challenges in modeling high rise building evacuation with cellular automata approaches. Automation in Construction, 17 (4), 377-385. http://doi.org/10.1016/j.autcon.2007.06.005

[174] Penttilä, H. (2006). Describing the changes in architectural information technology to understand design complexity and free-form architectural expression. Electronic Journal of Information Technology in Construction, 11, 395-408.

[175] Petridis, P., Dunwell, I., De Freitas, S., & Panzoli, D. (2010, March). An engine selection methodology for high fidelity serious games. In 2010 Second International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES) (pp. 27-34). http://doi.org/10.1109/VS-GAMES.2010.26

[176] Poy, H. M., & Duffy, B. (2014). A cloud-enabled building and fire emergency evacuation application. IEEE Cloud Computing, 1 (4), 40-49. http://doi.org/10.1109/MCC.2014.67

[177] Proulx, G., & Reid, I. M. (2006). Occupant behavior and evacuation during the Chicago Cook County Administration Building fire. Journal of Fire Protection Engineering, 16 (4), 283-309. http://doi.org/10.1177/1042391506065951

[178] Proulx, G. (2001, May). Occupant behaviour and evacuation. In Proceedings of the 9th International Fire Protection Symposium (pp. 219-232).

[179] PTV Group. Retrieved February 15, 2017, from http://vision- traffic.ptvgroup.com

[180] Queensland Fire and Rescue Service. (2004, November). Skills and Drills Manual Version 3. Queensland, Australia.

[181] Ranathunga, S., Cranefield, S., & Purvis, M. (2011, May). Interfacing a cognitive agent platform with Second Life. In International Workshop on Agents for Educational Games and Simulations (pp. 1-21). http://doi.org/10.1007/978-3-642-32326-3_1

[182] Rao, A. S., & Georgeff, M. P. (1995, June). BDI agents: From theory to practice. In ICMAS, (Vol. 95, pp. 312-319).

[183] Reinhard, E., Heidrich, W., Debevec, P., Pattanaik, S., Ward, G., & Myszkowski, K. (2010). High dynamic range imaging: Acquisition, display, and image-based lighting. Morgan Kaufmann.

[184] Ren, A., Chen, C., & Luo, Y. (2008). Simulation of emergency evacuation in virtual reality. Tsinghua Science & Technology, 13 (5), 674-680.

[185] Ren, C., Yang, C., & Jin, S. (2009, February). Agent-based modeling and simulation on emergency evacuation. In International Conference on Complex Sciences (pp. 1451-1461). http://doi.org/10.1007/978-3-642-02469-6 25

[186] Ribeiro, J., Almeida, J. E., Rossetti, R. J., Coelho, A., & Coelho, A. L. (2012, June). Using serious games to train evacuation behaviour. In 2012 7th Iberian Conference on Information Systems and Technologies (CISTI) (pp. 1-6). IEEE.

[187] Richards, D., & Porte, J. (2009, December). Developing an agent-based training simulation using game and virtual reality software: Experience report. In Proceedings of the Sixth Australasian Conference on Interactive Entertainment (p. 9). http://doi.org/10.1145/1746050.1746059

[188] Richens, P., & Harney, M. (2011, July). Reconstruction of historic landscapes. In Proceedings of the 2011 International Conference on Electronic Visualisation and the Arts (pp. 1-2). British Computer Society, London, UK.

[189] Ronchi, E., Alvear, D., Berloco, N., Capote, J., Colonna, P., & Cuesta, A. (2010). Human behaviour in road tunnel fires: Comparison between egress models (FDS+Evac, STEPS, Pathfinder). In Proceedings of the 12th International Interflam 2010 Conference, Nottingham, UK (pp. 837-848).

[190] Ronchi, E., & Kinsey, M. (2011). Evacuation models of the future: Insights from an online survey of user's experiences and needs. In Proceedings of the Advanced Research Workshop: "Evacuation and Human Behaviour in Emergency Situations" (pp. 145-155). Universidad de Cantabria, Santander, Spain.

[191] Ronchi, E., & Nilsson, D. (2013). Fire evacuation in high-rise buildings: A review of human behaviour and modelling research. Fire Science Reviews, 2 (1), 1-21. http://doi.org/10.1186/2193-0414-2-7

[192] Rooney, C., Passmore, P., & Wong, W. (2010). Crisis: Research priorities for a state-of-the-art training simulation and relevance of game development techniques. In Proceedings of the Workshop on Crisis Management Training: Design and Use of Online Worlds, NordiCHI. ACM.

[193] Rui, Y., & Liu, Z. (2004). ARTiFACIAL: Automated reverse Turing Test using facial features. Multimedia Systems, 9 (6), 493-502. http://doi.org/10.1007/s00530-003-0122-3

[194] Rüppel, U., & Schatz, K. (2011). Designing a BIM-based serious game for fire safety evacuation simulations. Advanced Engineering Informatics, 25 (4), 600-611. http://doi.org/10.1016/j.aei.2011.08.001

[195] Sangani, S., Weiss, P. L., Kizony, R., Koenig, S. T., Levin, M. F., & Fung, J. (2012). Development of a complex ecological virtual environment. In Proceedings of the 9th International Conference on Disability, Virtual Reality and Associated Technologies, Laval, France.

[196] Santos, G., & Aguirre, B. E. (2004). A critical review of emergency evacuation simulation models. NIST Workshop on Building Occupant Movement during Fire Emergencies, Gaithersburg, MD: National Institute of Standards and Technology.

[197] Saygin, A. P., Cicekli, I., & Akman, V. (2003). Turing Test: 50 years later. In The Turing Test (pp. 23-78). http://doi.org/10.1007/978-94-010-0105-2 2

[198] Schultze, U. (2010). Embodiment and presence in virtual worlds: A review. Journal of Information Technology, 25 (4), 434-449. http://doi.org/10.1057/jit.2010.25

[199] Schwab, B. (2009). AI game engine programming. Boston, MA: Nelson Edu- cation.

[200] Schweizer, P. (1998). The truly total Turing test. Minds and Machines, 8 (2), 263-272.

[201] Seyama, J. I., & Nagayama, R. S. (2007). The uncanny valley: Effect of realism on the impression of artificial human faces. Presence: Teleoperators and virtual environments, 16 (4), 337-351. Boston, MA: The MIT Press.

[202] Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52 (3/4), 591-611. http://doi.org/10.1093/biomet/52.3-4.591

[203] Sharma, S., & Otunba, S. (2012, May). Collaborative virtual environment to study aircraft evacuation for training and education. In 2012 International Conference on Collaboration Technologies and Systems (CTS) (pp. 569-574). http://doi.org/10.1109/CTS.2012.6261107

[204] Sharma, S., Jerripothula, S., Mackey, S., & Soumare, O. (2014, December). Immersive virtual reality environment of a subway evacuation on a cloud for disaster preparedness and response training. In 2014 IEEE Symposium on Computational Intelligence for Human-like Intelligence (CIHLI) (pp. 1-6). http://doi.org/10.1109/CIHLI.2014.7013380

[205] Shieber, S. M. (1994). The Turing Test and the Loebner Prize. Communications of the Association for Computing Machinery, 37 (6), 70-78.

[206] Shields, T. J., Boyce, K. E., & McConnell, N. (2009). The behaviour and evacuation experiences of WTC 9/11 evacuees with self-designated mobility impairments. Fire Safety Journal, 44 (6), 881-893. http://doi.org/10.1016/j.firesaf.2009.04.004

[207] Shiratuddin, M. F., & Thabet, W. (2002). Virtual office walkthrough using a 3D game engine. International Journal of Design Computing, 4.

[208] Shtern, M., Haworth, M. B., Yunusova, Y., Baljko, M., & Faloutsos, P. (2012, November). A game system for speech rehabilitation. In International Con- ference on Motion in Games (pp. 43-54). http://doi.org/10.1007/978-3-642- 34710-8 5

[209] Silva, J. F. M., Almeida, J. E., Pereira, A., Rossetti, R. J., & Coelho, A. L. (2013). Preliminary experiments with EVA - Serious games virtual fire drill simulator. In Proceedings of the 27th European Conference in Modelling and Simulation (ECMS’13) (pp. 221-227). http://doi.org/10.7148/2013

[210] Silva, J. F., Almeida, J. E., Rossetti, R. J., & Coelho, A. L. (2013, May). A serious game for EVAcuation training. In 2013 IEEE 2nd International Conference on Serious Games and Applications for Health (SeGAH) (pp. 1- 6). http://doi.org/10.1109/SeGAH.2013.6665302

[211] Sime, J., Creed, C., Kimura, M., & Powell, J. (1994, November). Human Behaviour in Fires (Research Report). Department for Communities and Local Government, London, UK.

[212] Smith, S. P., & Trenholme, D. (2009). Rapid prototyping a virtual fire drill environment using computer game technology. Fire Safety Journal, 44 (4), 559-569. http://doi.org/10.1016/j.firesaf.2008.11.004

[213] St Julien, T. U., & Shaw, C. D. (2003, October). Firefighter command training virtual environment. In Proceedings of the 2003 Conference on Diversity in Computing (pp. 30-33). http://doi.org/10.1145/948542.948549

[214] Steinicke, F., Bruder, G., & Kuhl, S. (2011). Realistic perspective projections for virtual objects and environments. ACM Transactions on Graphics (TOG), 30 (5), 112:1–112:10. http://doi.org/10.1145/2019627.2019631

[215] Steinicke, F., & Bruder, G. (2012, March). Visual perception of perspective distortions. In 2012 IEEE VR Workshop on Perceptual Illusions in Virtual Environments (PIVE) (pp. 29-32). http://doi.org/10.1109/PIVE.2012.6229798

[216] Suárez, A. A., Santamaría, M., & Claudio, E. (2013). Virtual reality: A tool for treating phobias of heights. In Eleventh LACCEI Latin American and Caribbean Conference for Engineering and Technology (LACCEI'2013).

[217] Succar, B. (2009). Building information modelling framework: A research and delivery foundation for industry stakeholders. Automation in Construction, 18 (3), 357-375. http://doi.org/10.1016/j.autcon.2008.10.003

[218] Sumerfield, J., & Smith, S. P. (2012). Investigating graphical realism in a virtual environment for threat identification. In 10th Asia Pacific Conference on Computer Human Interaction (APCHI 2012) (Vol. 2, pp. 473-479).

[219] Tang, F., & Ren, A. (2012). GIS-based 3D evacuation simu- lation for indoor fire. Building and Environment, 49, 193-202. http://doi.org/10.1016/j.buildenv.2011.09.021

[220] Tate, D. L., Sibert, L., & King, T. (1997). Using virtual environments to train firefighters. IEEE Computer Graphics and Applications, 17 (6), 23-29. http://doi.org/10.1109/38.626965

[221] The Mono Project. (2016d). Mono 4.6.2. Retrieved January 30, 2017, from http://www.mono-project.com/

[222] Thiedemann, S., Henrich, N., Grosch, T., & Müller, S. (2011, February). Voxel-based global illumination. In Proceedings of the I3D'11 Symposium on Interactive 3D Graphics and Games (pp. 103-110). http://doi.org/10.1145/1944745.1944763

[223] Thompson, P. A., & Marchant, E. W. (1995). Testing and application of the computer model ‘SIMULEX’. Fire Safety Journal, 24 (2), 149-166. http://doi.org/10.1016/0379-7112(95)00020-T

[224] Totten, C. (2012). Game character creation with blender and unity. New York City: John Wiley & Sons.

[225] Thunderhead Engineering. (2008). PyroSim User Manual. Manhattan, KS: Thunderhead Engineering.

[226] Thunderhead Eng. (2013). Pathfinder user manual. Manhattan, KS: Thunder- head Engineering.

[227] Thunderhead Eng. (2014). Pathfinder technical reference. Manhattan, KS: Thunderhead Engineering.

[228] Thunderhead Eng. (2017). Pathfinder Fundamentals. Retrieved September 25, 2017, from http://www.thunderheadeng.com/pathfinder/fundamentals

[229] Tombs, G., Bhakta, R., & Savin-Baden, M. (2014, April). ‘It’s Almost Like Talking to a Person’: Student disclosure to pedagogical agents in sensitive set- ting. In Proceedings of the 9th International Conference on Networked Learning 2014, (pp. 296-304). Lancaster University.

[230] Trenholme, D., & Smith, S. P. (2008). Computer game engines for de- veloping first-person virtual environments. Virtual Reality, 12 (3), 181-187. http://doi.org/10.1007/s10055-008-0092-z

[231] Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59 (236), 433-460.

[232] Ulicny, B., & Thalmann, D. (2001). Crowd simulation for interactive virtual environments and VR training systems. In Computer Animation and Simulation 2001, Eurographics (pp. 163-170). http://doi.org/10.1007/978-3-7091-6240-8_15

[233] Unity Technologies. (2015a). Unity community. Retrieved January 30, 2017, from https://forum.unity3d.com/

[234] Unity Technologies. (2015b). Unity game engine. Retrieved January 30, 2017, from https://unity3d.com/

[235] Unity Technologies. (2015c). Unity showcase gallery. Retrieved January 30, 2017, from https://unity3d.com/showcase/gallery/

[236] Unity Technologies. (2015d). Unity store. Retrieved January 30, 2017, from https://store.unity3d.com/

[237] Unity Technologies. (2017). Unity Documentation: Cre- ate Game Script. Retrieved September 25, 2017, from https://docs.unity3d.com/Manual/UnityAnalyticsCreateSDK.html

[238] University of Greenwich. (2017). BuildingEXODUS example application (3). Retrieved September 25, 2017, from https://fseg.gre.ac.uk/exodus/air.html

[239] U.S. Fire Administration. Residential and nonresidential building fire and fire loss estimates by property use and cause (2003-2014). Retrieved January 30, 2017, from https://www.usfa.fema.gov/downloads/xls/statistics/death_injury_data_sets.xlsx

[240] Valve Software. (1996). Retrieved January 9, 2017, from https://developer.valvesoftware.com/wiki/SDK_Docs

[241] Valve Software. (2003). Steam. Retrieved February 8, 2017, from http://store.steampowered.com/

[242] Valve Software. (2004a). Counter-Strike: Source. Retrieved February 8, 2017, from http://blog.counter-strike.net/

[243] Valve Software. (2004b). Half-Life 2. Retrieved February 8, 2017, from http://orange.half-life2.com/hl2.html

[244] Valve Software. (1996). Valve Hammer Editor. Retrieved January 9, 2017, from https://developer.valvesoftware.com/wiki/Valve_Hammer_Editor

[245] Valve Software. (2012). Source Filmmaker. Retrieved February 8, 2017, from http://www.sourcefilmmaker.com/

[246] Valve Software. (2013a). Dota 2. Retrieved February 8, 2017, from http://blog.dota2.com/

[247] Valve Software. (2013b). Source SDK 2013, Retrieved February 8, 2017, from https://developer.valvesoftware.com/wiki/SDK2013 GettingStarted

[248] Valve Software. (2013c). The 2013 edition of the Source SDK. Retrieved Febru- ary 8, 2017, from https://github.com/ValveSoftware/source-sdk-2013/

[249] Valve Software. (2015). Valve Announces Link, Source 2, SteamVR, And More At GDC. Retrieved February 8, 2017, from http://www.valvesoftware.com/news/?id=16000

[250] Veksler, V. D. (2009). Second Life as a simulation environment: Rich, high-fidelity world, minus the hassles. In Proceedings of the 9th International Conference of Cognitive Modeling - ICCM2009. Manchester, UK.

[251] Ventrella, J., Seif El-Nasr, M., Aghabeigi, B., & Overington, R. (2010). Gestural Turing Test: A motion-capture experiment for exploring believability in artificial nonverbal communication. In Proceedings of AAMAS International Workshop on Interacting with ECAs as Virtual Characters.

[252] Verma, D. K., Rajan, A., Paraye, A., & Rawat, A. (2013, December). Virtual walkthrough of data centre. In 2013 IEEE Second International Conference on Image Information Processing (ICIIP) (pp. 51-55). http://doi.org/10.1109/ICIIP.2013.6707554

[253] Von Ahn, L., Blum, M., Hopper, N. J., & Langford, J. (2003, May). CAPTCHA: Using hard AI problems for security. In International Conference on the Theory and Applications of Cryptographic Techniques (pp. 294-311). http://doi.org/10.1007/3-540-39200-9_18

[254] Von Ahn, L., Blum, M., & Langford, J. (2004). Telling humans and com- puters apart automatically. Communications of the ACM, 47 (2), 56-60. http://doi.org/10.1145/966389.966390

[255] Wang, B., Li, H., Rezgui, Y., Bradley, A., & Ong, H. N. (2014). BIM based vir- tual environment for fire emergency evacuation. The Scientific World Journal, 2014. http://doi.org/10.1155/2014/589016

[256] Wang, D. H., Hsieh, H. C., Wu, C. S., Honjo, T., Chiang, Y. J., & Yang, P. A. (2012, October). Visualization with Google Earth and gam- ing engine. In 2012 IEEE Fourth International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA) (pp. 426-430). http://doi.org/10.1109/PMA.2012.6524868

[257] Wang, P. C., Chang, C. H., Su, M. C., Yeh, S. C., & Fang, T. Y. (2011). Virtual reality rehabilitation for vestibular dysfunction. Otolaryngology-Head and Neck Surgery, 145 (2 suppl), 158-159. http://doi.org/10.1177/0194599811415823a81

[258] Waterson, N. P., & Pellissier, E. (2010). The STEPS pedestrian microsimula- tion tool-A technical summary. Mott MacDonald Simulation Group, UK.

[259] Weber, B. G., Mateas, M., & Jhala, A. (2011, November). Building human- level AI for real-time strategy games. In AAAI Fall Symposium: Advances in Cognitive Systems (Vol. 11).

[260] Weizenbaum, J. (1966). ELIZA - A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9 (1), 36-45. http://doi.org/10.1145/365153.365168

[261] Williamson, B., Wingrave, C., LaViola, J. J., Roberts, T., & Garrity, P. (2011). Natural full body interaction for navigation in dismounted soldier training. In The Interservice/Industry Training, Simulation & Education Conference (I/ITSEC).

[262] Woodcock, S., Laird, J. E., & Pottinger, D. (2000). Game AI: The state of the industry. Game Developer Magazine, 8 (1), 7-8.

[263] Woolrych, A., & Cockton, G. (2001, September). Why and when five test users aren’t enough. In Proceedings of IHM-HCI 2001 conference (Vol. 2, pp. 105-108). Toulouse, France.

[264] Xi, C., Wu, H., Joher, A., Kirsch, L., & Luo, C. (2009). 3-D virtual reality for education, training and improved human performance in nuclear appli- cations. In Proceedings of ANS NPIC HMIT 2009 Topical Meeting-Nuclear Plant Instrumentation, Controls, and Human Machine Interface Technology, Knoxville, Tennessee, ANS.

[265] Xi, M., & Smith, S. P. (2014, March). Simulating cooperative fire evacuation training in a virtual environment using gaming technology. In 2014 IEEE Virtual Reality (VR) (pp. 139-140). http://doi.org/10.1109/VR.2014.6802090

[266] Xi, M., & Smith, S. P. (2014, December). Reusing simulated evacuation behaviour in a game engine. In Proceedings of the 2014 Conference on Interactive Entertainment (IE'2014) (pp. 1-8). http://doi.org/10.1145/2677758.2677779

[267] Xi, M., & Smith, S. P. (2015, January). Exploring the reuse of fire evacuation behaviour in virtual environments. In Proceedings of the 11th Australasian Conference on Interactive Entertainment (IE'2015) (Vol. 27, pp. 35-44). Australian Computer Society.

[268] Xi, M., & Smith, S. P. (2016). Supporting path switching for non-player characters in a virtual environment. In 2016 IEEE Virtual Reality (VR) (pp. 315-316). http://doi.org/10.1109/VR.2016.7504780

[269] Xu, F., Shan, D., & Yang, H. (2010, December). Simulation research of crime scene based on UDK. In The 2nd International Conference on Information Science and Engineering (pp. 1-4). http://doi.org/10.1109/ICISE.2010.5690391

[270] Xu, Y., Kim, E., Lee, K., Ki, J., & Lee, B. (2013). FDS simulation high rise building model for Unity3D game engine. International Journal of Smart Home, 7 (5), 263-274.

[271] Xu, Y., Lee, K., Kim, E., Ki, J., Lee, B. (2013). Simulation of smoke to improve Unity3D game engine particle system based on FDS. In Proceedings of the 2nd International Conference on Software Technology (pp. 183-186).

[272] Xu, Z., Lu, X. Z., Guan, H., Chen, C., & Ren, A. (2014). A virtual reality based fire training simulator with smoke haz- ard assessment capacity. Advances in Engineering Software, 68, 1-8. http://doi.org/10.1016/j.advengsoft.2013.10.004

[273] Yan, W., Culp, C., & Graf, R. (2011). Integrating BIM and gaming for real-time interactive architectural visualization. Automation in Construction, 20 (4), 446-458. http://doi.org/10.1016/j.autcon.2010.11.013

[274] Yannakakis, G. N. (2012, May). Game AI revisited. In Proceed- ings of the 9th conference on Computing Frontiers (pp. 285-292). http://doi.org/10.1145/2212908.2212954

[275] Yannakakis, G. N., & Togelius, J. (2015). A panorama of artifi- cial and computational intelligence in games. IEEE Transactions on Computational Intelligence and AI in Games, 7 (4), 317-335. http://doi.org/10.1109/TCIAIG.2014.2339221

[276] Yoo, K. S., & Lee, W. H. (2008, September). An intelligent non player char- acter based on BDI agent. In Fourth International Conference on Networked Computing and Advanced Information Management (NCM’08). (Vol. 2, pp. 214-219). http://doi.org/10.1109/NCM.2008.37

[277] Yu, X., & Ganz, A. (2011, August). MiRTE: Mixed reality triage and evacuation game for mass casualty information systems design, testing and training. In 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 8199-8202). http://doi.org/10.1109/IEMBS.2011.6092022

[278] Zhao, Q. (2009). A survey on virtual reality. Science in China Series F: Information Sciences, 52 (3), 348-400. http://doi.org/10.1007/s11432-009-0066-0

[279] Zimmerman, J., Forlizzi, J., & Evenson, S. (2007, April). Research through design as a method for interaction design research in HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 493-502). http://doi.org/10.1145/1240624.1240704

Appendix A

Table List

A.1 An Example Fragment of GPL

Table A.1: A fragment of the global path library (GPL) for Engineering S Building.

PPL Name | Path Name | Selection Probability | Path Description
RM 209 | RM 209 1 | 45.00% | RM 209 → P 12 → P 13 → P 1 → P 2 → EXIT 5
RM 209 | RM 209 2 | 54.00% | RM 209 → P 11 → P 10 → EXIT 1
RM 209 | RM 209 3 | 1.00% | RM 209 → P 11 → P 12 → P 13 → P 1 → P 2 → EXIT 5
RM 210 | RM 210 1 | 80.00% | RM 210 → P 11 → P 10 → EXIT 1
RM 210 | RM 210 2 | 20.00% | RM 210 → P 11 → P 7 → P 6 → EXIT 4
RM 238 | RM 238 1 | 98.00% | RM 238 → P 6 → EXIT 4
RM 238 | RM 238 2 | 2.00% | RM 238 → P 7 → P 8 → EXIT 3
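The fragment in Table A.1 pairs each candidate escape route with a selection probability. As a minimal sketch of how such a fragment could be consumed at runtime, the Python snippet below encodes two of the personal path libraries (PPLs) above and samples a path according to its listed probability. The data structure, function name and the use of Python are illustrative assumptions for this appendix, not the implementation used in this thesis.

```python
import random

# A minimal, hypothetical encoding of part of the GPL fragment in Table A.1:
# each PPL maps to a list of (path name, selection probability, waypoints).
GPL = {
    "RM 209": [
        ("RM 209 1", 0.45, ["RM 209", "P 12", "P 13", "P 1", "P 2", "EXIT 5"]),
        ("RM 209 2", 0.54, ["RM 209", "P 11", "P 10", "EXIT 1"]),
        ("RM 209 3", 0.01, ["RM 209", "P 11", "P 12", "P 13", "P 1", "P 2", "EXIT 5"]),
    ],
    "RM 210": [
        ("RM 210 1", 0.80, ["RM 210", "P 11", "P 10", "EXIT 1"]),
        ("RM 210 2", 0.20, ["RM 210", "P 11", "P 7", "P 6", "EXIT 4"]),
    ],
}


def choose_path(ppl_name, rng=random):
    """Pick one escape path for a starting room, weighted by the
    selection probabilities listed in the GPL fragment."""
    paths = GPL[ppl_name]
    weights = [probability for _, probability, _ in paths]
    name, _, waypoints = rng.choices(paths, weights=weights, k=1)[0]
    return name, waypoints


if __name__ == "__main__":
    name, waypoints = choose_path("RM 209")
    print(name, "->", " -> ".join(waypoints))
```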

A.2 Types of Participants in User Study

Table A.2: Three types of participants involved in the Turing Test user study.

No. | Phase (I/II) | Group (Player/Judge) | Type (Human/NPC) | Count
1 | I | Player | Human | 12
2 | I | Player | NPC | 54
3 | II | Judge | Human | 20

A.3 Fire Warden Evacuation Instructions

Table A.3: Evacuation instructions used in the fire evacuation drill system.

No. | Text | Speak speed
1 | Don’t go this way, it’s blocked by fire! | Very Fast
2 | Don’t go this way! | Very Fast
3 | Don’t go this way! The main entrance is blocked! | Fast
4 | Go this way! | Medium
5 | This way, use the main entrance! | Medium
6 | Use the fire exit on the other end of the building! | Medium
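The instructions in Table A.3 carry a "Speak speed" label alongside the spoken text. The sketch below is one hypothetical way of encoding that table for playback; the rate multipliers, data structure and use of Python are assumptions made here for illustration only, as the table does not specify how the speed labels map onto a speech engine.

```python
# Illustrative mapping of the "Speak speed" labels in Table A.3 to
# text-to-speech rate multipliers (the numeric values are assumed).
SPEAK_RATE = {"Very Fast": 1.5, "Fast": 1.25, "Medium": 1.0}

# The fire warden instructions from Table A.3, in row order.
INSTRUCTIONS = [
    ("Don't go this way, it's blocked by fire!", "Very Fast"),
    ("Don't go this way!", "Very Fast"),
    ("Don't go this way! The main entrance is blocked!", "Fast"),
    ("Go this way!", "Medium"),
    ("This way, use the main entrance!", "Medium"),
    ("Use the fire exit on the other end of the building!", "Medium"),
]


def warden_utterance(row_number):
    """Return the instruction text and an assumed playback rate for the
    fire warden instruction with the given 1-based row number."""
    text, speed_label = INSTRUCTIONS[row_number - 1]
    return text, SPEAK_RATE[speed_label]


if __name__ == "__main__":
    text, rate = warden_utterance(1)
    print(f"({rate}x) {text}")
```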

A.4 The Configurations of Evacuation Sessions

Table A.4: The configurations of evacuation sessions.

Group | Evacuation ID | Scenario | Human Players (A-F) | Number of Bots
1 | S1G1G2 6H0B S1 | S1 | A1 B1 C1 D1 E1 F1 | 0
1 | S1G1G2 6H0B S2 | S2 | A1 B1 C1 D1 E1 F1 | 0
1 | S1G1G2 6H0B S3 | S3 | A1 B1 C1 D1 E1 F1 | 0
1 | S1G1 3H3B S1 | S1 | A1 E1 F1 | 3
1 | S1G2 3H3B S2 | S2 | B1 C1 E1 | 3
1 | S1G3 3H3B S3 | S3 | A1 B1 C1 | 3
1 | S1G4 3H3B S1 | S1 | B1 C1 D1 | 3
1 | S1G5 3H3B S2 | S2 | A1 D1 F1 | 3
1 | S1G6 3H3B S3 | S3 | D1 E1 F1 | 3
1 | S1 0H6B S1 | S1 | - | 6
1 | S1 0H6B S2 | S2 | - | 6
1 | S1 0H6B S3 | S3 | - | 6
2 | S2G1G2 6H0B S1 | S1 | A2 B2 C2 D2 E2 F2 | 0
2 | S2G1G2 6H0B S2 | S2 | A2 B2 C2 D2 E2 F2 | 0
2 | S2G1G2 6H0B S3 | S3 | A2 B2 C2 D2 E2 F2 | 0
2 | S2G1 3H3B S1 | S1 | A2 E2 F2 | 3
2 | S2G2 3H3B S2 | S2 | B2 C2 E2 | 3
2 | S2G3 3H3B S3 | S3 | A2 B2 C2 | 3
2 | S2G4 3H3B S1 | S1 | B2 C2 D2 | 3
2 | S2G5 3H3B S2 | S2 | A2 D2 F2 | 3
2 | S2G6 3H3B S3 | S3 | D2 E2 F2 | 3
2 | S2 0H6B S1 | S1 | - | 6
2 | S2 0H6B S2 | S2 | - | 6
2 | S2 0H6B S3 | S3 | - | 6

A.5 The Judgement Groups

Table A.5: A list of evacuation sessions in judgement group J1, J2 and J3.

Group | Order | Evacuation Session ID | Fire Scenario
J1 | 1 | S1G1G2 6H0B S1 | S1
J1 | 2 | S2 0H6B S3 | S3
J1 | 3 | S2G2 3H3B S2 | S2
J1 | 4 | S1 0H6B S3 | S3
J1 | 5 | S2G2 3H3B S2 | S2
J1 | 6 | S1G4 3H3B S1 | S1
J2 | 1 | S1G5 3H3B S2 | S2
J2 | 2 | S1G1G2 6H0B S2 | S2
J2 | 3 | S2 0H6B S1 | S1
J2 | 4 | S2G3 3H3B S3 | S3
J2 | 5 | S1 0H6B S1 | S1
J2 | 6 | S2G1G2 6H0B S3 | S3
J3 | 1 | S2G4 3H3B S1 | S1
J3 | 2 | S2G1G2 6H0B S1 | S1
J3 | 3 | S2 0H6B S2 | S2
J3 | 4 | S2G3 3H3B S3 | S3
J3 | 5 | S1 0H6B S2 | S2
J3 | 6 | S1G1G2 6H0B S3 | S1

Appendix B

User Study Materials

B.1 Information Sheet for the Group of Players

Research Team Mr Mingze Xi, Dr Shamus Smith School of Electrical Engineering and Computer Science The University of Newcastle, Australia Contact Email: [email protected]

Information Sheet for the Research Project: Turing Test for NPC Behaviour in a Virtual Training Environment

You are invited to participate in the study of the Turing test in a virtual training environment. This study is part of Mr Mingze Xi’s PhD research project, which is supervised by Dr Shamus P. Smith at the School of Electrical Engineering and Computer Science at The University of Newcastle, Australia.

Why is the research being done? The purpose of the research is to explore the use of a modified Turing test to evaluate the realism of non-player characters (NPCs).

Who can participate in the research? All UoN students are welcome to participate in this research.

What choice do you have? Participation in this research is entirely your choice. Only those people who give their informed consent will be included in the project. Whether or not you decide to participate, your decision will not disadvantage you. If you do decide to participate, you may withdraw from the project at any time without giving a reason. Any questions you answered in the evaluation form will not be recorded in our database, and your responses will be shredded and discarded.

What would you be asked to do? The study involves the following activities. You will be allocated to a group of 6 people before the test. On the day of the test, the research team will first collect pre-session questionnaires and signed consent forms from all participants in your group. You will then be invited to attend a virtual fire evacuation game session, where you will complete 6 evacuation tasks with 5 group members and complete a system usability scale (SUS) form. The testing and training will be conducted in the Engineering Sciences (ES) building on Callaghan campus. Your test session may be rescheduled if any participant in your group fails to show up at the scheduled test time.

How much time will it take? The game play sessions will take approximately 40 minutes to complete the following activities:

• A training session (5 minutes)
• Completion of 6 evacuation tasks (30 minutes)
• A system usability scale form (5 minutes)

What are the risks and benefits of participating? There are no physical risks in participating in either session. Some participants may feel uncomfortable with fire and smoke animation effects in the fire evacuation environment. Please advise the research team if you feel uncomfortable with such animation effects and you can withdraw from this study at any time. Your participation will contribute to the evaluation of intelligent agents in future research in the virtual reality community. In addition, you will receive a $20 Woolworths Essentials Gift Card for completing your session. Also, you can request a summary of the study by emailing the Chief Investigator (Smith) or visiting the I3 Lab website.

How will your privacy be protected? No identifying information will be collected. All paper copies of the evaluation forms will be stored in a locked room, and will be accessible only to members of the research team (Mr Mingze Xi and Dr Shamus Smith). The fire evacuation game replay data will be encrypted and kept on a secured computer drive. This data will be retained for at least 5 years at the University of Newcastle.


How will the information collected be used? The research team will write reports, journal articles and conference papers. No individual will be identified in these publications. Non-identifiable data may also be shared with other parties to encourage scientific scrutiny, and to contribute to further research and public knowledge, or as required by law.

What do you need to do to participate?

Please read this Information Statement and be sure you understand its contents before you consent to participate. To ask any questions about the information sheet please contact the researcher (see below).

If you would like to participate, please return the attached pre-session questionnaire and email Mr Mingze Xi ([email protected]) to arrange a suitable time to sign the consent form and participate in the study.

Thank you for considering this invitation.

Kind Regards,

Mingze Xi and Shamus Smith

Complaints about this research This project has been approved by the University’s Human Research Ethics Committee, Approval No. H- 2016-0198. Should you have concerns about your rights as a participant in this research, or you have a complaint about the manner in which the research is conducted, it may be given to the researcher, or, if an independent person is preferred, to the Human Research Ethics Officer, Research Office, The Chancellery, The University of Newcastle, University Drive, Callaghan NSW 2308, Australia, telephone (02) 49216333, email [email protected].

B.2 Information Sheet for the Group of Judges

Research Team Mr Mingze Xi, Dr Shamus Smith School of Electrical Engineering and Computer Science The University of Newcastle, Australia Contact Email: [email protected]

Information Sheet for the Research Project: Turing Test for NPC Behaviour in a Virtual Training Environment

You are invited to participate in the study of the Turing test in a virtual training environment. This study is part of Mr Mingze Xi’s PhD research project, which is supervised by Dr Shamus P. Smith at the School of Electrical Engineering and Computer Science at The University of Newcastle, Australia.

Why is the research being done? The purpose of the research is to explore the use of a modified Turing test to evaluate the realism of non-player characters (NPCs).

Who can participate in the research? All UoN students are welcome to participate in this research.

What choice do you have? Participation in this research is entirely your choice. Only those people who give their informed consent will be included in the project. Whether or not you decide to participate, your decision will not disadvantage you. If you do decide to participate, you may withdraw from the project at any time without giving a reason. Any questions you answered in the evaluation form will not be recorded in our database, and your responses will be shredded and discarded.

What would you be asked to do? The study involves the following activities. Firstly, the research team will collect pre-session questionnaires and signed consent forms from all participants. Then you will be invited to a game review session, where you will review 6 recorded game replays, complete a survey questionnaire on the realism of the 3D avatars that appear in the fire evacuation game replays, and complete a system usability scale (SUS) form. The testing and training will be conducted in the Engineering Sciences (ES) building on Callaghan campus.

How much time will it take? The review session will take approximately 40 minutes to complete the following activities:

• A training session (5 minutes)
• Review of 6 game replays (20 minutes)
• An evaluation form (10 minutes)
• A system usability scale form (5 minutes)

What are the risks and benefits of participating? There are no physical risks in participating in the review session. Your participation will contribute to the evaluation of intelligent agents in future research in the virtual reality community. In addition, you will receive a $20 Woolworths Essentials Gift Card for completing your session. Also, you can request a summary of the study by emailing the Chief Investigator (Smith) or visiting the I3 Lab website.

How will your privacy be protected? No identifying information will be collected. All paper copies of the evaluation forms will be stored in a locked room, and will be accessible only to members of the research team (Mr Mingze Xi and Dr Shamus Smith). This data will be retained for at least 5 years at the University of Newcastle.

How will the information collected be used? The research team will write reports, journal articles and conference papers. No individual will be identified in these publications. Non-identifiable data may also be shared with other parties to encourage scientific scrutiny, and to contribute to further research and public knowledge, or as required by law.

What do you need to do to participate?

Please read this Information Statement and be sure you understand its contents before you consent to participate. To ask any questions about the information sheet please contact the researcher (see below).

If you would like to participate, please return the attached pre-session questionnaire and email Mr Mingze Xi ([email protected]) to arrange a suitable time to sign the consent form and participate in the study.

Thank you for considering this invitation.

Kind Regards,

Mingze Xi and Shamus Smith

Complaints about this research This project has been approved by the University’s Human Research Ethics Committee, Approval No. H- 2016-0198. Should you have concerns about your rights as a participant in this research, or you have a complaint about the manner in which the research is conducted, it may be given to the researcher, or, if an independent person is preferred, to the Human Research Ethics Officer, Research Office, The Chancellery, The University of Newcastle, University Drive, Callaghan NSW 2308, Australia, telephone (02) 49216333, email [email protected].

B.3 Consent Form

Research Team Mr Mingze Xi, Dr Shamus Smith School of Electrical Engineering and Computer Science The University of Newcastle, Australia Contact Email: [email protected]

Consent Sheet for the Research Project: Turing Test for NPC Behaviour in a Virtual Training Environment If you would like to participate, please read the following:

• I agree to participate in the evaluation of NPC behaviour and give my consent freely.

• I understand that the study will be conducted as described in the Information Statement, a copy of which I have retained.

• I understand I can withdraw from the study at any time and do not have to give any reason for withdrawing.

• I understand that my personal information will remain confidential to the researchers except as required by law.

• I have had the opportunity to have questions answered to my satisfaction.

Participant ID: ______ (see welcome email. ID will be in the form of VST_ID_XX)
Print Name: ______
Signature: ______
Date: ______

Complaints about this research This project has been approved by the University’s Human Research Ethics Committee, Approval No. H-2016-0198. Should you have concerns about your rights as a participant in this research, or you have a complaint about the manner in which the research is conducted, it may be given to the researcher, or, if an independent person is preferred, to the Human Research Ethics Officer, Research Office, The Chancellery, The University of Newcastle, University Drive, Callaghan NSW 2308, Australia, telephone (02) 49216333, email [email protected].

B.4 Pre-trial Questionnaire

PRE-TRIAL QUESTIONNAIRE TURING TEST STUDY

Date: ______

User ID: ______(see welcome email. ID will be in the form of VST_ID_XX)

1. Gender: (please mark one with an X) Male [ ] Female [ ] Others [ ]

2. Age: [ ]

3. What degree are you studying at UoN?

_

4. How long have you been playing video games?

Less than 6 months [ ] 1 Year [ ] 2 – 5 Years [ ] 5 – 10 Years [ ] 10 or more years [ ]

5. Please indicate how much time you spend on playing video games per week: (please mark one)

1 - 5 hours [ ] 6 – 10 hours [ ] 11 – 15 hours [ ] 16-20 hours [ ] 20+ hours [ ]

6. How do you rate your gaming skills? No skill [ ] Not very skilled [ ] Moderately good [ ] Skilled [ ]

7. Have you played first-person shooter games before? Yes [ ] No [ ]

8. Have you participated in fire drill before? Yes [ ] No [ ]

Please check that you have answered all the questions. Thank you.

Please return the completed questionnaire as an attachment to [email protected] or to Mingze Xi, room ES226, Engineering Science, School of Electrical Engineering and Computer Science, UoN, Callaghan campus. If you have questions about this study, please contact Mingze Xi ([email protected]) and/or Dr Shamus Smith ([email protected]).

B.5 Data Review Form

TURING TEST STUDY IN A VIRTUAL ENVIRONMENT Evaluation Questionnaire

Date: ______

Participant ID: ______(see welcome email. ID will be in the form of VST_ID_XX)

1. Do you think Avatar No. 1 is a human or a bot?
   1 2 3 4 5 6 7 8 9 10 (1 = Definitely bot, 10 = Definitely human)

2. How confident is the judgement for Avatar No. 1?
   1 2 3 4 5 6 7 8 9 10 (1 = Extremely unsure, 10 = Extremely confident)

3. Do you think Avatar No. 2 is a human or a bot?
   1 2 3 4 5 6 7 8 9 10 (1 = Definitely bot, 10 = Definitely human)

4. How confident is the judgement for Avatar No. 2?
   1 2 3 4 5 6 7 8 9 10 (1 = Extremely unsure, 10 = Extremely confident)

5. Do you think Avatar No. 3 is a human or a bot?
   1 2 3 4 5 6 7 8 9 10 (1 = Definitely bot, 10 = Definitely human)

6. How confident is the judgement for Avatar No. 3?
   1 2 3 4 5 6 7 8 9 10 (1 = Extremely unsure, 10 = Extremely confident)

7. Do you think Avatar No. 4 is a human or a bot?
   1 2 3 4 5 6 7 8 9 10 (1 = Definitely bot, 10 = Definitely human)

8. How confident is the judgement for Avatar No. 4?
   1 2 3 4 5 6 7 8 9 10 (1 = Extremely unsure, 10 = Extremely confident)

9. Do you think Avatar No. 5 is a human or a bot?
   1 2 3 4 5 6 7 8 9 10 (1 = Definitely bot, 10 = Definitely human)

10. How confident is the judgement for Avatar No. 5?
   1 2 3 4 5 6 7 8 9 10 (1 = Extremely unsure, 10 = Extremely confident)

11. Do you think Avatar No. 6 is a human or a bot?
   1 2 3 4 5 6 7 8 9 10 (1 = Definitely bot, 10 = Definitely human)

12. How confident is the judgement for Avatar No. 6?
   1 2 3 4 5 6 7 8 9 10 (1 = Extremely unsure, 10 = Extremely confident)

B.6 System Usability Scale (SUS)

TURING TEST STUDY IN A VIRTUAL ENVIRONMENT System Usability Scale

Date: ______

Participant ID: ______(see welcome email. ID will be in the form of VST_ID_XX)

Each statement is rated from 1 (Strongly disagree) to 5 (Strongly agree).

1. I think that I would like to use this system frequently   1 2 3 4 5
2. I found the system unnecessarily complex   1 2 3 4 5
3. I thought the system was easy to use   1 2 3 4 5
4. I think that I would need the support of a technical person to be able to use this system   1 2 3 4 5
5. I found the various functions in this system were well integrated   1 2 3 4 5
6. I thought there was too much inconsistency in this system   1 2 3 4 5
7. I would imagine that most people would learn to use this system very quickly   1 2 3 4 5
8. I found the system very cumbersome to use   1 2 3 4 5
9. I felt very confident using the system   1 2 3 4 5
10. I needed to learn a lot of things before I could get going with this system   1 2 3 4 5
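The ten items above follow the standard System Usability Scale format. For reference, the sketch below shows the conventional SUS scoring rule (odd-numbered items contribute response − 1, even-numbered items contribute 5 − response, and the total is scaled by 2.5 to a 0-100 range). This calculation is not part of the questionnaire itself, and the function name and input format are assumptions made here for illustration.

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    responses on the 1-5 scale, given in questionnaire order.

    Standard SUS scoring: odd-numbered (positively worded) items
    contribute (response - 1), even-numbered (negatively worded) items
    contribute (5 - response); the sum is multiplied by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for item, response in enumerate(responses, start=1):
        if not 1 <= response <= 5:
            raise ValueError(f"Item {item} response out of range: {response}")
        total += (response - 1) if item % 2 == 1 else (5 - response)
    return total * 2.5


# Example with a hypothetical participant's responses to items 1-10.
print(sus_score([4, 2, 4, 1, 4, 2, 5, 2, 4, 2]))  # 80.0
```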

B.7 Recruitment Poster

Research Team Mr Mingze Xi, Dr Shamus Smith School of Electrical Engineering and Computer Science The University of Newcastle, Australia Contact Email: [email protected]

VOLUNTEERS NEEDED FOR TURING TEST STUDY IN A VIRTUAL ENVIRONMENT

Turing Test for NPC Behaviour in a Virtual Training Environment You are invited to participate in a research project that is being conducted by Mr Mingze Xi and Dr Shamus Smith from the School of Electrical Engineering and Computer Science at The University of Newcastle.

Why is the research being done? The purpose of the research is to explore the use of a modified Turing test to evaluate the realism of non-player characters (NPC).

Who can participate in the research? All students at UoN are welcome to participate in this research.

How much time will it take? This study involves two groups of participants: you will be allocated to either the group of subjects or the group of judges. The test takes around 40 minutes to complete. You will receive a $20 Woolworths Gift Card after completing your test session.

How can I find out more information? If you are interested in participating in the project, please contact Mr Mingze Xi at [email protected] for more information.

This project has been approved by The University of Newcastle Human Research Ethics Committee: Approval Number H-2016- 0198. Should you have concerns about your rights as a participant in this research, or you have a complaint about the manner in which the research is conducted, it may be given to the researcher, or, if an independent person is preferred, to the Human Research Ethics Officer, Research Office, The Chancellery, The University of Newcastle, University Drive, Callaghan NSW 2308, Australia, telephone (02) 49216333, email [email protected].

[email protected] St Turing Test [email protected] Environment Training Virtual a in Study Turing Test [email protected] Environment Training Virtual a in Study Turing Test [email protected] Environment Training Virtual a in Study Turing Test [email protected] Environment Training Virtual a in Study Turing Test [email protected] Environment Training Virtual a in Study Turing Test [email protected] Environment Training Virtual a in Study Turing Test [email protected] Stu Turing Test [email protected] Environment Training Virtual a in Study Turing Test [email protected] Environment Training Virtual a in Study Turing Test [email protected] Environment Training Virtual a in Study Turing Test [email protected] Environment Training Virtual a in Study Turing Test [email protected] Environment Training Virtual a in Study Turing Test mingze.xi Environment Training Virtual a in Study Turing Test

@

uon

udy in a Virtual Training Environment Training Virtual a in udy

dy in a Virtual Training Environment Training Virtual a in dy

.edu.au

Appendix C

Floor Plans

C.1 Floor Plan A

[Floor Plan A: schematic showing rooms RM1-RM5 (accessed via DOOR_1-DOOR_5), areas AREA1-AREA4, and exits EXIT_TOP, EXIT_LEFT and EXIT_BOT]

C.2 Floor Plan C