
The Experience: Designing Participatory Immersive Environments for Experiential Learning

THESIS

Presented in Partial Fulfillment of the Requirements for the Degree Master of Fine Arts in the Graduate School of The Ohio State University

By

Cheng Zhang

Graduate Program in Design

The Ohio State University

2013

Master's Examination Committee:

Professor Maria Palazzi

Professor Rick Parent

Professor Alan Price (Advisor)

Copyright by

Cheng Zhang

2013

Abstract

The Moon Experience is an interactive and immersive system based on the historic events of the Apollo program (1961-1972). The goal of this thesis project is to demonstrate a solution to the design challenge of framing and creating an effective learning experience in a virtual space that would otherwise be impossible to realize in the real world. I employ technologies and approaches from multiple disciplines in the design and implementation of the system. Through the technology of virtual reality I establish the virtual lunar world, which provides participants with immersive firsthand experiences. Computer game technology reinforces the effectiveness of the learning environment, lending The Moon Experience the qualities of entertainment, deep thought, and knowledge retention. Motion capture and computer animation facilitate real-time interactions between users and the system to sustain the sensation of being on the Moon. I take into account related learning principles and narrative storytelling to give a participant properly situated learning content. Providing a framework of narrative helps heighten the audience’s perception, trigger their imagination, and transcend virtual reality’s current limitations.


Dedication

To my Grandpa, for his unconditional love,

To my parents, for nurturing my creativity and imagination,

To my daughters, for the joys and the challenges they bring to my life and for inspiring

balance between family and career.


Acknowledgments

Working on an MFA thesis project based on one of the most complex, expensive, and influential projects in human history is a daunting task. The Apollo program lasted more than a decade, cost roughly $170 billion (in 2005 dollars), employed some 400,000 people, and relied on the support of over 20,000 industrial firms and universities at its peak.

However, for my MFA thesis project, the available resources to create The Moon Experience based on this historical event were very limited, consisting of the research labs at ACCAD, invaluable guidance from my committee members, unwavering support from friends and family, and a one-person development team (me). I would like to express my gratitude to these people.

I would like to thank my advisor, Professor Alan Price, for his guidance and encouragement throughout my studies at ACCAD. His belief in me gives me confidence in pursuing my creative endeavors independently as a researcher and artist in the future.

I am grateful to Professor Maria Palazzi for her unwavering support and encouragement.

Beyond the subjects of design, animation, and multimedia, she has taught me much about being a creative researcher through her excellent example.


I would like to thank Professor Rick Parent for his advice, encouragement, patience, and understanding, particularly when I started to pursue the dual degrees.

I would also like to thank Professor Barbara Ryden for the early conversations about the nearby star system, which inspired me to work on The Moon Experience.

Extra special thanks to Tom Heban, Samantha Kuhn and Trent Rowland for their collaborations in this project: Tom played the astronaut and allowed me to record his motions in the motion capture lab. He did the voice acting for the control center in Houston. Samantha Kuhn made the mockup space suit for the project. Trent is the voice actor for Jack. I would like to extend my thanks to my friends at ACCAD: Tom Heban, Michael Andereck, and Sheri Larrimer for their generous time to help me with testing and editing.

I owe the greatest gratitude to my family, particularly to my husband, Lixin Ye. His love, patience, encouragement, and support allow me to pursue my dream and make this work possible. Finally, for my parents, words do not exist to express my gratitude for their constant love, concern, and steadfast belief in me. To them I owe the greatest part of what I am and what I have.


Publications

Several Approaches to Solve The Rotation Illusion with Wheel Effect, Cheng Zhang and

Rick Parent, SPIE Conference Proceedings on Computational Imaging VIII, Jan. 2010,

San Jose, CA.

Real-time Reflections on Curved Objects Using Layered Depth Textures, Cheng Zhang,

Hsien-Hsi Hsieh and Han-Wei Shen, IADIS International Conference Proceedings on

Computer Graphics and Visualization 2008, July 2008, Amsterdam, The Netherlands.

Fields of Study

Major Field: Design

Area of Emphasis: Digital Animation and Interactive Media


Table of Contents

Abstract ...... ii

Dedication ...... iii

Acknowledgments...... iv

Publications ...... vi

Fields of Study ...... vi

Table of Contents ...... vii

List of Figures ...... xi

Chapter 1: Introduction ...... 1

1.1 The Challenge ...... 1

1.2 Methodology ...... 3

1.3 Research Scope ...... 5

1.4 Contribution ...... 6

Chapter 2: Background / Contextual Review ...... 8

2.1 Learning theories ...... 8

2.1.1 How People Learn ...... 11

2.1.2 Multimedia Learning ...... 14


2.1.3 Multimodal Learning ...... 17

2.1.4 Constructivism in Multimedia Learning...... 19

2.2 Technology of Virtual Reality...... 22

2.2.1 The properties of VR ...... 24

2.2.2 The components of the virtual reality system ...... 27

2.3 The Apollo Missions ...... 34

Chapter 3: Strategies to Solve Design Issues in The Moon Experience ...... 38

3.1 How to break down the large problem into a manageable design research project 38

3.2 The strategies of implementing The Moon Experience ...... 40

3.2.1 Theme-driven approach – narrative ...... 41

3.2.2 The three scenarios ...... 44

Chapter 4: The Design of Virtual Lunar World ...... 47

4.1 Modeling the Virtual Lunar Terrain ...... 48

4.2 ...... 52

4.2.1 Overview of Apollo Extravehicular Mobility Unit ...... 54

4.2.2 Modeling the Astronauts ...... 60

4.3 Modeling the Lunar Roving Vehicle ...... 67

4.4 Responsive Animations ...... 69

4.4.1 Character Animations ...... 69


4.4.2 The Real-time Animations created in Unity ...... 74

4.4.3 The Hybrid Animations ...... 76

Chapter 5: Interface Design ...... 78

5.1 The interface for the participant ...... 80

5.2 The interface for the operator ...... 85

5.3 The interface for information communication ...... 87

Chapter 6: Interaction Design ...... 91

6.1 Multi-layered Interactions ...... 91

6.1.1 Interactions in the story ...... 91

6.1.2 Interactions involving players ...... 93

6.1.3 Multimodal Interactions ...... 94

6.2 Mechanics...... 97

6.2.1 The project space ...... 98

6.2.2 Three modes and finite states ...... 99

Chapter 7: Conclusions and Future Work ...... 104

7.1 Summary ...... 104

7.2 User Tests, Analysis, and Evaluation ...... 108

7.3 Limitation and Future Work ...... 110

Bibliography ...... 113


Appendix A: A Survey on the Content of The Moon Experience ...... 126

Appendix B: The narratives of The Moon Experience ...... 134


List of Figures

Figure 1 Mayer's multimedia learning model excerpted from (R. E. Mayer, Multimedia

Learning) ...... 14

Figure 2.The structure of VR-Technology field, excerpted from (Blach)...... 23

Figure 3 The first head-mounted display was patented in 1916. The image is excerpted from (Sherman and Craig 24)...... 23

Figure 4 The Reality-Virtuality Continuum proposed by Milgrom et al. (Milgrom,

Takemura and Utsumi) ...... 26

Figure 5 Mixed-fantasy framework excerpted from (Stapleton, Hughes and Moshell). .. 27

Figure 6 Base technologies for VR systems, excerpted from (Blach)...... 28

Figure 7 The structure of a VR system...... 29

Figure 8 Lunar nearside with major maria and craters labeled. The yellow rectangle marks the region where Apollo 17 landed...... 48

Figure 9 Three gradually zoomed-in images of Taurus-Littrow valley...... 49

Figure 10 A close-up of Taurus-Littrow valley...... 50

Figure 11 The 3D model generated with the DEM data of Taurus-Littrow Valley ...... 51

Figure 12 texture, mountain texture, and gravel texture applied to the terrain. ......

Figure 13 A bird’s eye view of Taurus-Littrow valley created digitally in Unity...... 52

Figure 14 A snapshot of the lunar terrain created in the ...... 52


Figure 15 The lunar surface configuration of extravehicular mobility unit, excerpted from

(Apollo Operations Handbook Extravehicular Mobility Unit Volume I)...... 54

Figure 16 Left: () in A7-L PGA. Right: John W. Young (Apollo 16) in A7-LB PGA...... 55

Figure 17 The diagram of PLSS and OPS...... 56

Figure 18 Left: The diagram of LEVA excerpted from (National Aeronautics and Space Administration 2-70). Right: an astronaut wearing the LEVA during training. He had the eyeshades down with the flap up. His sun visor was also two-thirds down...... 57

Figure 19 Jack Schmitt at Taurus-Littrow valley with the sun visor and the center eyeshade down but the viewport door open...... 58

Figure 20 The diagram of glove assemblies (National Aeronautics and Space

Administration 2 - 24)...... 58

Figure 21 Left: Jack Schmitt's Apollo 17 EVA gloves. Right: The close-up of a Chromel-

R patch on Ed Mitchell’s suit...... 59

Figure 22 Left: The diagram of the lunar boots (page 2-30). Right: Apollo 17 Eugene

Cernan's lunar boots...... 59

Figure 23 An Apollo 17 astronaut at Taurus-Littrow valley on the lunar surface...... 60

Figure 24 Eugene Cernan's Apollo 17 flown space suit displayed in the Smithsonian National

Air and Space Museum...... 61

Figure 25 The color map of the astronaut model (except for the helmet)...... 62


Figure 26 Left: The helmet texture created based on the images of the real LEVA. Right:

The interior texture of the helmet...... 62

Figure 27 The low-resolution model of an astronaut after skinning...... 64

Figure 28 The high-resolution model sculpted in Mudbox with 16 million faces...... 64

Figure 29 The normal map created from the high-resolution model...... 65

Figure 30 The resultant glove model...... 65

Figure 31 A snapshot of Jack in the virtual lunar world...... 66

Figure 32 Apollo 17 Lunar Roving Vehicle...... 67

Figure 33 The configuration diagrams of the LRV...... 68

Figure 34 Tom Heban performs the role of “Jack” in the motion capture lab...... 70

Figure 35 The process of animating a character from motion capture to Unity...... 71

Figure 36 The motion capture data and the character model are imported in

MotionBuilder...... 72

Figure 37 The motion capture data are mapped to the model in MotionBuilder...... 73

Figure 38 Snapshot of the real-time animations: the wheel-tracks, footprints and dust. .. 74

Figure 39 Wheel-track shader in Unity...... 75

Figure 40 The footprint shader in Unity...... 76

Figure 41 The prop of a lunar rock and its model in the virtual space...... 77

Figure 42 The physical and virtual interfaces in Schell's book (Schell)...... 78

Figure 43 The Sony Head Mounted Display (HMD) is the major visual display device. 81

Figure 44 Five motion capture markers are placed on the HMD...... 82

Figure 45 Sennheiser wireless microphone used in The Moon Experience...... 82


Figure 46 Several props in the motion capture lab...... 83

Figure 47 A participant wearing the space suit sits on the chair and operates the moon buggy through a game controller...... 84

Figure 48 A dual monitor showing the operator control interface...... 85

Figure 49 The system settings in the motion capture lab...... 86

Figure 50 The key-button layout...... 87

Figure 51 Information flow through the interface in The Moon Experience...... 88

Figure 52 The three modes in The Moon Experience...... 99

Figure 53 The finite states in the onGround mode...... 100

Figure 54 The finite states in the inLRV mode...... 101


Chapter 1: Introduction

1.1 The Challenge

We all learn from experiences. In the process of knowledge accumulation, it is essential to generate meaningful information in the form of experiences that one can pass on to others. However, due to various constraints such as time or location, experiences are not always accessible to everyone. For example, the excitement of scuba diving may never be fully experienced unless one actually goes scuba diving to have the first-hand experience.

There is a long history of people working on solving the issue of experience sharing.

Experiences have historically been described or recorded in some way so that they can be disseminated, often limited in their ability to fully re-create the experience of “being there”.

With the rapid development of human society, the fast pace of our daily life requires people to learn effectively and efficiently. The traditional ways of sharing experiences through written text may no longer be efficient. In addition, the experiences we encounter are getting more complex. This complexity reflects the fast and multi-dimensional development of our society, in particular its technologies and the large volume of information those technologies bring. In other words, human society faces the challenge of how to conduct vast, effective, and efficient learning, especially with complex experiences.

A complex experience refers to one that contains many parts in intricate arrangement and can be difficult to analyze. Complex experiences are often inaccessible to the general public, so one major challenge in design is how to make an inaccessible complex experience accessible as an effective learning environment. The goal of this thesis is to discover underlying design problems and potential solutions to create immersive virtual learning environments in which the user can engage in firsthand experiences otherwise impossible to experience in the real world.

As an illustrative event, I chose the NASA lunar landings of the Apollo program (1961-1972) to create The Moon Experience as an immersive and scaffolding1 learning environment because of its two features: (1) it is a complex experience, and (2) it is inaccessible in the real world. One of the greatest achievements in human history, the Apollo program’s planning and execution is undeniably complex (Brooks and Prasad). The lunar landings were recorded in various forms including photos, audio, and video (Jones and Glover, Apollo Lunar Surface Journal) (Smithwick) (Jones, Apollo 17 Video Library). For the general public, the large volume of documents is extensive and overwhelming. The existing literature and media describing the lunar landings fail to provide audiences with immersive experiences of these events.

More than 40 years after Neil Armstrong landed on the Moon, the experience of the landing is still limited to the twelve lunar astronauts lucky enough to have had the chance to experience it firsthand. The goals of this project include: designing and creating a virtual environment to approximate what the real experience was like for the astronauts, conveying the sensation of being on the Moon through the tangible and multimodal interactions in the virtual lunar world, and designing the system as an effective learning environment. In the following section, I address how to meet these challenges by drawing from concepts and technologies across multiple disciplines.

1 Scaffolding is the sufficient support given during the learning process which is designed to help the student achieve his/her learning goals (Sawyer).

1.2 Methodology

The design and implementation of this project takes a cross-disciplinary approach. Virtual reality, computer games, computer graphics, and motion capture technologies are woven together with the principles of learning theories and narrative storytelling.

Learning from firsthand experience is a well-known effective learning approach. This is advocated by constructivist learning theory, which suggests that learning is accomplished best by following a hands-on approach (Hein) (Glaserfield, Constructivism in Education)

(Glaserfield, An exposition of Constructivism: Why some like it radical). Other learning theories also provide insight into what factors affect a learning outcome and how and why they do so. A contextual review of the related learning theories is given in section

2.1.


Technologies such as virtual reality (VR) can be used to create firsthand learning experiences that are otherwise unavailable in the real world. A virtual environment is a synthetic world created with computer technology that can provide information through stimulation of the five human senses: sight, sound, touch, smell, and taste. Through tangible, multimodal, and embedded interactions, a user can be immersed in the virtual world, providing simulated but believable firsthand experiences. A review of the technology of virtual reality is given in section 2.2. On the other hand, current VR technology cannot completely fill the gap between reality and virtuality through simulations of sensory inputs. Traditional theatrical design’s multimodal artistic conventions can work with VR technology to blend the real and virtual worlds, which also helps a designer cut expensive computation and unnecessary detail. I designed for two types of users in this mixed-fantasy model (Stapleton, Hughes and Moshell) in The Moon Experience. One is a participant, and the other is an operator who assists the participant in having a firsthand experience. The roles of the two types of users are described in detail in section 3.2.

Computer game technology applies to designing an effective learning environment. Games are usually fun and engaging, and are often played with challenge and strategy. They also require deep thinking, which promotes knowledge retention (Squire) (Laughlin and Marchuk). Computer games often embed narratives that directly provide the audience with the pertinent learning context. The Moon Experience uses the story-telling process to heighten the audience’s perception, trigger their imagination, and transcend some of virtual reality’s technical limitations.

The technology of motion capture provides the convincing motions of virtual characters. It is also used to track an object’s position in real time. A combination of motion capture and computer animation allows for real-time interactions between users and the virtual world, which contributes to a believable experience on the Moon.

In summary, The Moon Experience is designed and created using virtual reality, computer game, motion capture, and computer animation technologies combined with learning theories and narrative storytelling. The project explores cross-disciplinary research and spurs a creative solution to the design challenge.

1.3 Research Scope

Although many disciplines are utilized to create The Moon Experience, in this thesis project, I focus on the following three areas:

• Related learning theories provide strong evidence that people learn most effectively through hands-on experience. Narrative storytelling, as a traditional art form, helps engage an audience and ensure they experience the desired content.

• Computing technology, including virtual reality, computer games, motion capture, and computer animation, supports the implementation of the virtual experience.


• The historical Apollo missions (1961 - 1972) provide a content-rich setting for me to demonstrate how to solve the design problem.

This thesis begins with an introduction including the challenge statement, methodology and my research scope, followed by the background / contextual review in Chapter 2. The strategies to solve design issues are presented in Chapter 3, which elaborates on solutions to the design problems. Chapters 4, 5, and 6 discuss the implementation of these solutions and contain the virtual world design, interface design, and interaction design. Chapter 7 is the conclusion.

1.4 Contribution

The Moon Experience can provide a reference for designers who face similar design challenges. It could also serve as a case study for future multidisciplinary research, particularly any research involving virtual environments, computer games, and motion capture in learning applications.

Four decades have passed since the Apollo missions, but interest in the Moon has not declined.2 In fact, many more lunar programs have been launched.3 4 I hope that this project will provide a small virtual world for the general public to explore, to learn, and to enjoy, as “All experience is an arch, to build upon” (Henry B. Adams).

2 After the first Japanese lunar probe, Hagoromo (launched in 1990), Japan launched its second lunar orbiter Kaguya (SELENE) in September 2007. China launched its Chang'e 1 lunar explorer on October 24, 2007 and Chang'e 2 on October 1, 2010. India launched Chandrayaan-1 on October 22, 2008. NASA launched the Lunar Reconnaissance Orbiter in June 2009 and the twin moon-bound GRAIL spacecraft on September 10, 2011. Many countries are planning future manned lunar exploration missions and lunar outpost construction on the Moon between 2018 and 2025.

3 The NASA twin Mars Exploration Rovers Spirit and Opportunity landed on the surface of Mars in January 2004. The Curiosity rover was delivered to the surface of Mars by the Mars Science Laboratory mission launched on November 26, 2011.

4 NASA no longer has a monopoly on space exploration; space travel is moving toward corporate ventures. For example, on 22 May 2012, SpaceX's Falcon 9 rocket carried the unmanned Dragon capsule into space, marking the first time a private company sent a spacecraft to the space station. Recently, Dennis Tito unveiled his plan for a human Mars flyby mission with Paragon (http://www.paragonsdc.com/).

Chapter 2: Background / Contextual Review

Designing and implementing The Moon Experience involves several important areas. In this chapter, I provide a contextual review of these areas. To understand how we learn, and how learning is influenced by contemporary technologies, I will first review the learning theories closely related to The Moon Experience. I will then take a close look at the technology of virtual reality and its key elements. Finally, I will briefly describe the historical event of the Apollo missions.

2.1 Learning theories

Learning is the lifelong activity of transforming information and experience into knowledge, skills, behaviors, and attitudes (Cobb). What is the best way to learn?

Unfortunately there is neither a unified nor a unique answer to this question. The best way for people to learn depends on many factors such as the characteristics of a learner, the subject to be learned, the instructional design, and the learning environment. In the following review of learning theories, I explore the elements and technologies that help discover design problems and potential solutions for The Moon Experience.

In education and psychology, many learning theories have been developed to reveal the learning process or to suggest key factors of the solution to practical learning problems.

A brief description of the major learning theories is as follows:

• Behaviorism considers that learning is manifested by a change in behavior under certain conditions or environments. Behaviorists assume that environment shapes behavior and that the principles of contiguity5 and reinforcement6 are central to explaining the learning process.

• Instructional theory focuses on how to structure learning materials to offer explicit guidance on how to better help people learn and develop.

• Cognitivism looks beyond behavior to explain brain-based learning and considers how human memory works to promote learning. Cognitivists view learning as making changes in long-term memory.

• Constructivism views learning as a process in which the learner actively constructs his/her understanding and builds a sequence of problem-solving steps.

• Multimedia learning theory is derived from cognitive learning theory and focuses especially on multimedia involving computer technology.

• Connectivism takes into account trends in learning, the use of technology and networks, social structures, and the diminishing half-life of knowledge. It is claimed to be the learning theory for the digital age (Siemens).

Each of these learning theories has its own strengths and weaknesses, since they focus on different aspects of learning and cannot substitute for one another. For example, connectivism draws strength from social networks and open-source educational materials. However, as an emerging theory, it lacks a substantial body of empirical research that would lend it greater credibility. Instructional theory approaches are characterized by the systematic structure of curriculum design: students proceed in learning through a series of steps along predetermined linear knowledge paths. These approaches are good for subjects such as math that require strong logical reasoning skills. However, in this pedagogy, students passively accept pre-existing knowledge and easily get bored by material and exams.7 On the other hand, constructivism proposes that learning is an active process in which the learner constructs his/her understanding and skills from experiences. However, without proper instruction, this learning-by-doing can lead learners to frustration and inefficiency. To configure an effective solution to a real learning problem, we should design an approach that draws on the strengths of the different learning theories in a complementary way, one that promotes learning and fits the real learning situation. In the next section, I examine the learning theories that are closely related to my thesis project and identify elements that can benefit my research on The Moon Experience project.

5 How close in time two events must be for a bond to be formed.

6 A reward (or a punishment) increases (or decreases) the likelihood that an event will be repeated.

I will first take a close look at how people learn to reveal the insights of effective and meaningful learning. I will also review multimedia learning theory to show how contemporary technologies affect learning. I will then discuss the benefits and the pitfalls of multimodal learning and constructivism, which are the major approaches in The Moon Experience.

7 In some subjects, this pedagogy may be the most effective and efficient approach to learning. For example, mathematics is one of the subjects that require students to learn pre-existing knowledge step-by-step before they can go on to advanced topics.

2.1.1 How People Learn

Understanding the physiological limitations and characteristics of the human cognitive process is the first step in figuring out how people learn. Research in cognitive science and neuroscience reveals how information is processed in the brain and indicates that there are three types of memory: sensory (short) memory, working memory, and long-term memory (Atkinson and Shiffrin) (Baddeley, Eysenck and Anderson, Memory). Any sensory input about the world from the human senses first arrives in sensory memory. However, sensory memory traces degrade quickly. Only when a person pays attention to elements of sensory memory8 are those experiences introduced into working memory. Working memory (Baddeley and Hitch, Working memory) is the place where the thinking process occurs. Working memory has dual code channels/buffers for storage (Paivio) – one for verbal/text elements and the other for visual/spatial elements. The verbal/text memory and visual/spatial memory work together, without interference, to augment understanding. Long-term memory is the central, dominant structure of human cognition. Everything that we see, hear, and think about is critically dependent on and influenced by our long-term memory. The processing in working memory automatically triggers storage in long-term memory. Research reveals the limitations on thinking processes as follows.

8 Attention serves as a selection mechanism to choose the information to be processed further.

1. If information stored in working memory is not rehearsed within 30 seconds, it will be lost (Peterson).

2. For the verbal buffer, only about 7 objects can be simultaneously stored at a time (Miller), while only about 4 objects can be simultaneously stored in the visual buffer (Cowan). Overfilling either buffer will result in cognitive overload (Sweller, Cognitive load during problem solving: effects on learning).9 (A small illustrative sketch of these limits follows this list.)

3. Multitasking results in delay due to switching from one task to another and leads to a loss of efficiency (Marois and Ivanoff).

4. Sensory inputs from seeing, hearing, touching, smelling, tasting, etc. at the same time can converge into linked memories. This convergence in the creation of memory traces has positive effects on memory retrieval. The triggering of any aspect of the experience will bring to consciousness the entire memory, often with context.
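To make these capacity figures concrete, the following toy sketch (my own illustration, not part of the cited research; the function name check_load and the representation of "items" are assumptions) flags the overload condition described in item 2:

    # Illustrative toy model of the working-memory limits cited above.
    VERBAL_CAPACITY = 7   # about 7 simultaneous verbal/text items (Miller)
    VISUAL_CAPACITY = 4   # about 4 simultaneous visual/spatial items (Cowan)
    DECAY_SECONDS = 30    # unrehearsed working memory is lost after ~30 s (Peterson)

    def check_load(verbal_items, visual_items):
        """Flag cognitive overload when either buffer is overfilled (Sweller)."""
        if len(verbal_items) > VERBAL_CAPACITY or len(visual_items) > VISUAL_CAPACITY:
            return "cognitive overload"
        return "within working-memory limits"

    # Presenting eight new verbal items at once overfills the verbal buffer:
    print(check_load(["item%d" % i for i in range(8)], []))  # -> cognitive overload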

Bransford et al. (Bransford, Brown and Cocking) outline the following principles of how people can learn effectively.

9 These limitations imply that the thinking process happens serially. The limitations of working memory apply only to new, yet-to-be-learned information.


• Student preconceptions of curriculum must be engaged in the learning process. Learning becomes meaningful when learners can build new concepts based on, or connected to, prior knowledge.

• Expertise is developed through deep understanding. Deep understanding means that learning can lead to a build-up of problem-solving skills.

• Learning is optimized when students develop “metacognitive” strategies. This refers to the ability of a student to approach problems by automatically trying to predict outcomes, explaining ideas to themselves, noting and learning from failures, and activating prior knowledge. Given appropriate scaffolding by educators and other adults, all students can learn metacognitive strategies.

These principles reveal the essence of effective learning and provide clear guidance on how to develop an effective approach to a real-world learning problem. The solution I proposed and implemented is guided by these insights; it respects the limitations of our cognition while making use of its strengths to promote learning. The Moon Experience may not provide a learning environment for users to gain expertise, but it can serve as a first step in that direction. The project first creates an immersive virtual lunar environment where an audience can employ multiple modalities such as visual, auditory, and tactile sensory channels to explore the Moon. These hands-on experiences could lead the audience to activate prior knowledge, think deeply, and solve problems as the first step of effective learning.


2.1.2 Multimedia Learning

Multimedia in this context refers to “the capacity of computers to provide real-time representations of nearly all existing media and sensory modes of instruction. Sensory modes are distinguished from media because they relate to the sensory format of information so that it is compatible with one of the five senses.” (Kirschner, Sweller and Clark)

Mainly based on the theories of dual coding (Paivio) and other theoretical understandings of how we learn10, Mayer developed a theory specifically for multimedia learning (R. E. Mayer, Multimedia Learning), as shown in Figure 1.

Figure 1 Mayer's multimedia learning model excerpted from (R. E. Mayer, Multimedia Learning)

Mayer states that in multimedia learning, the learner engages in three important cognitive processes. The first, selecting (attention), is applied to incoming verbal information and visual information to yield a text base and an image base respectively. The second cognitive process, organizing, is applied to the word base and the image base to create a verbally-based model and a visually-based model of the to-be-explained system separately. Finally, the third process, integrating, occurs when the learner builds connections between corresponding events in the verbally-based model and the visually-based model, often involving retrieval of previous knowledge from long-term memory. His preferred mode of presentation is to present words auditorily so they do not conflict with the visual code that is needed for pictures. The sounds are organized into a verbal model and the visual images into a pictorial model. In this way, a learner can make use of available cognitive resources while avoiding overloading either the verbal or the visual buffer.

Working memory is used to integrate the verbal model, pictorial model, and prior knowledge stored in long-term memory. Moreover, cognitive load is a central issue in the design of multimedia instruction; Mayer and Moreno identified nine ways to reduce cognitive load in multimedia learning (Mayer and Moreno, Nine ways to reduce cognitive load in multimedia learning). Mayer and his colleagues conducted dozens of experiments11 12 13 14 15 16 17 and proposed the following principles for the design of multimedia learning.

10 Other theories include working memory (Baddeley, Working memory: the interface between memory and cognition) (Baddeley, Is working memory still working?) and cognitive load (Sweller, Cognitive load during problem solving: effects on learning).

11 Mayer, Richard E. and Richard B. Anderson. "The instructive animation: helping students build connections between words and pictures in multimedia learning." Journal of Educational Psychology 84.4 (1992): 444-452.

12 Mayer, Richard E. and Roxana Moreno. "A split-attention effect in multimedia learning: evidence for dual processing systems in working memory." Journal of Educational Psychology 90.2 (1998): 312-320.

13 Mayer, Richard E. and Roxana Moreno. "The cognitive principles of multimedia learning: the role of modality and contiguity." Journal of Educational Psychology 91.2 (1999): 358-368.

14 Mayer, Richard E. "Multimedia learning: are we asking the right questions?" Educational Psychologist 32.1 (1997): 1-13.

1. Multimedia principle: Students learn better from words and pictures than from words alone.

2. Spatial contiguity principle: Students learn better when corresponding words and pictures are presented near, rather than far from, each other on the page or screen.

3. Temporal contiguity principle: Students learn better when corresponding words and pictures are presented simultaneously rather than successively.

4. Coherence principle: Students learn better when extraneous words, pictures, and sounds are excluded.

5. Modality principle: Students learn better from animation and narration than from animation and on-screen text.

6. Redundancy principle: Students learn better from animation and narration than from animation, narration, and on-screen text.

7. Individual differences principle: Design effects are stronger for low-knowledge learners than for high-knowledge learners and for high-spatial learners than for low-spatial learners.

8. Personalization effect: “Students learn more deeply when words are presented in conversational rather than formal style – both in computer-based environments containing spoken words and those using printed words.” (R. E. Mayer, The promise of multimedia learning: using the same instructional design methods across different media)

15 Mayer, Richard E., Julie Heiser and Steve Lonn. "Cognitive constraints on multimedia learning: when presenting more material results in less understanding." Journal of Educational Psychology 93.1 (2001): 187-198.

16 Mayer, Richard E. and Roxana Moreno. "Verbal redundancy in multimedia learning: when reading helps listening." Journal of Educational Psychology 94.1 (2002): 156-163.

17 Mayer, Richard E. and Roxana Moreno. "Aids to computer-based multimedia learning." Learning and Instruction (2002): 107-119.

These principles provide me with a practical guide for designing an effective learning application. Following them, The Moon Experience is designed as a multimedia application with narrative, visual, and auditory components. The narrative involves conversations, based on the personalization effect (item 8 above). The visual part contains many appropriate animations. The audiences for this project are laymen rather than experts.

2.1.3 Multimodal Learning

Massaro provides a definition of multimodal learning in the Encyclopedia of the Sciences of Learning (Massaro) as “an embodied learning situation which engages multiple sensory systems and action systems of the learner”.

From the cognitive science and neuroscience point of view, the human perceptual system is naturally multimodal, which means that multiple sensory systems work together rather than separately in order to understand (Medina). This sensory integration can be seen in the famous McGurk effect (McGurk and MacDonald), where a visual /ga/ paired with a voiced /ba/ is integrally perceived as /da/ by most subjects. When multiple senses are stimulated simultaneously, the brain can code more information per unit time and remember that information better as well18.

Multimodal learning that activates multiple sensory modalities can lead to “deep understanding of material, which includes attending to important aspects of the presented material, mentally organizing it into a coherent cognitive structure, and integrating it with relevant existing knowledge” (Mayer and Moreno 43).

Multimodal learning includes not only various sensory systems but also action systems.

Engelkamp’s work (Engelkamp, Human memory: a multimodal approach) shows that learning involving actions (so-called self-performed tasks, or SPTs, in Engelkamp’s research) leads to strong memory retention and deep understanding. In one of his experiments, participants “were read lists of simple, unrelated actions – such as comb your hair, open the marmalade jar, bend the wire, close the umbrella, etc. and were requested to remember the phrases” (Engelkamp, Memory for actions). One group of subjects only listened to the phrases, while the other group both listened to the phrases and acted out the tasks described by the phrases. The second group recalled far more phrases than the first. Engelkamp called this the “enactment effect”. One advantage of enacting phrases, as opposed to simply listening to them, is that enactment assures semantic processing of the phrase, because it is necessary to understand a command before carrying it out. The carried-out action can be objectively observed and is consistent with the principles of the behaviorist theory of learning (Graham). Engelkamp developed the multimodal memory theory to explain why SPTs are recalled more successfully than verbal tasks (VTs), in which the participants only heard the phrases. The main point is that actions (motor processes) that take place in the learning phases are retention efficient only within the context of conceptual coding. In other words, the recall of an action concept depends on the movement performed. The details of the multimodal memory theory can be found in his book (Engelkamp, Memory for actions).

18 The brain is capable of multisensory convergence of neurons, provided that sensory inputs from seeing, hearing, touching, smelling, and tasting at the same time can converge into linked memories. This convergence in the creation of memory traces has positive effects on memory retrieval. The triggering of any aspect of the experience will bring to consciousness the entire memory, often with context.

Moreover, multimodal approaches to learning can also accommodate various learners with different learning styles, modal preferences, and different levels of prior knowledge (Sankey, Birch and Gardiner) ( and David F. Feldon). This research highlights the point that multimodal learning is a better approach than single-modal learning. It also reinforces my strong preference for designing an effective learning experience in which self-performed tasks (or actions) are designed into my thesis project. This should assist the audience in achieving strong memory retention and deep understanding of the learning content.

2.1.4 Constructivism in Multimedia Learning

Constructivist learning theory emphasizes that learning is an active process and is performed best through firsthand experience. Learners construct new ideas or concepts based upon their hands-on activities and their current/past knowledge. The learner selects and transforms information, constructs hypotheses, and makes decisions through experiences (Bruner, The process of education) (Bruner, Toward a theory of instruction).

The constructivist approach has many advantages. Learners enjoy and learn more when they are actively involved rather than being passive listeners. By grounding learning activities in an authentic, real-world context, the constructivist approach engages learners and encourages them to apply their natural curiosity to the world.

Multimedia can provide many learning options, in particular simulations of real situations and activities for learners to experience. It seems reasonable to take the constructivist approach as a dominant pedagogy in multimedia learning. For The Moon Experience, the learning solution will be based on the constructivist approach with multimedia and multimodal learning settings.

The constructivist approach is not superior in every learning situation. According to a report (Metiri Group) commissioned by Cisco Systems, contrary to popular opinion, research shows that lessons in which students interact with material, rather than passively absorb it, are not always better. The report concludes that interactivity should be saved for complex subjects.19 From the perspective of human cognitive architecture, Kirschner et al. analyze why minimal guidance approaches are in some cases less effective and less efficient (Kirschner, Sweller and Clark). For instance, asking novices to solve “authentic” problems or acquire complex knowledge in an information-rich setting can lead to frustration or misconception, since these approaches require the learners to search a problem space for problem-relevant information, which imposes heavy demands on limited working memory. The heavy load on working memory caused by problem-based searching does not contribute to knowledge accumulation in long-term memory and is detrimental to learning. This discovery-type approach works only if the learner possesses sufficiently high prior knowledge that can serve as “internal guidance”, since prior knowledge brought back from long-term memory to working memory does not add cognitive load and helps integrate new information into long-term memory. Even in learning-by-doing approaches, well-designed instruction is often needed to provide learners with a scaffolding environment that keeps them from wandering aimlessly.

19 When learning basic skills, the average student's scores increase with multimodal learning. But the increase is greater -- by 21 percentile points -- when the lesson isn't interactive (scores increase by 9 percentile points when it is interactive). When it comes to acquiring more advanced skills, however, the situation is reversed: the average student's scores increase by 32 percentile points with multimodal interactive lessons, compared with 20 points with non-interactive lessons.

To summarize, the existing research on how people learn, and the pros and cons of the learning theories, provide great insights into the learning process. These insights serve as guidance for me in conducting my thesis project. The Moon Experience is therefore designed to be an immersive, multimedia, and multimodal learning system that contains narrative, animations, instructional conversations, and self-performed tasks. The goal of the design is to help audiences achieve a firsthand learning experience effectively.

These hands-on learning experiences in a virtual space are mainly obtained through the technology of virtual reality, which is the main topic of the next section.

2.2 Technology of Virtual Reality

Virtual Reality (VR) technology is a rapidly evolving and diversifying field involving various technologies. Roland Blach explains VR-technology in terms of the structure of the field in his paper (Blach) as follows.

Most VR technology comprises multimodal input systems, content technology, and multimodal rendering and display. Specific applications are built upon this three-part core, adapting the basic technologies to their specific needs. Interaction concepts and evaluative criteria relate to both the core and the specific applications. All these elements work in conjunction with a computing infrastructure concerned with real-time performance (see Figure 2).

VR technology can be traced back to the first head-mounted display, invented by Albert B. Pratt and patented in 1916 (see Figure 3). Sherman and Craig provide a detailed history (1916-2000) of VR technology in their book (Sherman and Craig 24-36). They cover the key figures and innovations in the field, such as Morton Heilig’s Sensorama, Ivan Sutherland’s head-mounted display, Myron Krueger’s Videoplace, Grimes’ glove, CyberGlove, and the CAVE, to name a few. Youngblut et al. also wrote a comprehensive review of early VR technology in their IDA paper (Youngblut, Johnson and Nash).

Figure 2.The structure of VR-Technology field, excerpted from (Blach).

To understand the field of VR technology, an explanation of the essential properties of VR is given below, followed by a description of the common components of a VR system.

Figure 3 The first head-mounted display was patented in 1916. The image is excerpted from (Sherman and Craig 24).

2.2.1 The properties of VR

There are several essential properties or elements of VR, and understanding these elements gives insight into the structure, complexity, and interplay between the components of VR technology. These properties can also serve as a set of metrics for objective evaluation of VR systems. Burdea and Coiffet describe the most important properties of VR as the “Three I’s”: immersion, interaction, and imagination. Sherman and Craig instead offer four essential elements of virtual reality: virtual world, immersion, sensory feedback, and interactivity. Blach provides yet another list of essential properties of VR: 3D representation and perception, spatial interaction in real-time, and a sense of presence and immersion.

Immersion and presence are especially pertinent and thus require further clarification. Immersion is a sensation of being in an environment. It can be a purely mental state (mental immersion) or can be accomplished through physical means (physical immersion). Mental immersion requires an individual to be deeply engaged or involved to the point of suspension of disbelief. Physical immersion is defined as bodily entering into a medium; this can lead to synthetic stimulus of the body’s senses via the use of technology (Sherman and Craig 9).

Presence is often referred to as a state of mental immersion causing a subjective experience of being in one place or environment, even when one is physically situated in another (Witmer and Singer 225). Like those in a dream, presence can be a sensation purely within the mind and is not necessarily accompanied by sensory input. Inducing such a sensation involves some mechanism for extracting memory traces from long-term memory. Presence can also be brought about by classic art forms, particularly narrative work such as novels, plays, and movies. Simply put, imagination plays an important role in generating the sense of presence and suspension of disbelief.

A virtual world is a virtual space generated by computers that may include a set of 3D objects and their accompanying rules and relationships governing those objects. These rules often reflect the content of an individual VR application. The virtual world is a place where a participant can be immersed with the sensation of presence.

Interactions between a user and the virtual world are bidirectional, through input and output devices. The interactions are often real-time and multimodal, consistent with human sensory channels. The virtual world, multimodality, and real-time interaction are the essential elements that help generate and sustain the sense of immersion and presence, since the human brain cannot differentiate between virtual experiences and real ones if the simulations are believable enough. It is actually difficult to draw a clear line between reality and virtual reality. Milgrom et al. proposed the concept of a reality-virtuality continuum, which indicates the relationship among reality, mixed reality (augmented reality and augmented virtuality), and virtuality (see Figure 4). Although it is difficult to draw a clear line between reality and virtual reality, one of the most obvious differences is the source of sensory inputs. For reality, the sensory inputs are from the ‘real’ world, while the sensory inputs of virtual reality are from computer simulations. This difference causes the gap between reality and virtual reality. The field of VR technology hence focuses on how to generate convincing synthetic stimuli to feed human sensory channels, just as ‘real world’ stimuli would. Can VR technology alone fill the gap effectively and ergonomically? This question is complex and involves many aspects, such as spatial multi-sensory representation, interaction, and presence, combined with real-time simulation techniques and high-level management of the handled virtual environment (Blach).

Figure 4 The Reality-Virtuality Continuum proposed by Milgrom et al. (Milgrom, Takemura and Utsumi)

Stapleton et al. (Stapleton, Hughes and Moshell) propose a framework in entertainment which expands this technical concept (the RV continuum) into the mixed-fantasy framework (see Figure 5). The framework combines Barnum’s reality-imagination continuum and Aristotle’s media-imagination continuum, two approaches that are critical in the entertainment business20. They suggest that theatrical design’s multimodal artistic conventions can work with VR technology to blend the real and virtual worlds. We can then use the story-telling process to heighten the audience’s perception, trigger their imagination, and transcend augmented reality’s current limitations. When all our senses validate a virtual event, the experience moves us across a credibility threshold, suspending our disbelief.

Figure 5 Mixed-fantasy framework excerpted from (Stapleton, Hughes and Moshell).

2.2.2 The components of the virtual reality system

Creating a virtual reality system is complex, requiring a broad range of knowledge in the areas of interactive 3D computer graphics, tracking, networking, mechanics, ergonomics, design, project management, and more (see Figure 6). A VR system generates synthetic stimuli that are fed into a participant’s sensory channels; this helps create synthetic experiences for the user. In their book (Burdea and Coiffet), Burdea and Coiffet describe five classic components of a VR system: the VR engine, I/O devices, a participant, software/databases, and tasks. Based on the models of Burdea and Coiffet (Burdea and Coiffet) and Blach (Blach), I re-designed a diagram of the VR system model, shown in Figure 7. It includes a VR engine, a human-computer interface (such as display21 and input devices), the user’s physical interaction space, and a network. The interaction between the users and the VR system is not shown in the figure, but it is a necessary component that is realized through the input/output devices.

Figure 6 Base technologies for VR systems, excerpted from (Blach).

20 P. T. Barnum’s approach focuses on how to structure people’s expectations and excite their imaginations so that they perceive ordinary objects in extraordinary ways. This transforms the perception of reality, limiting it only by the audience’s imagination. This approach is the principal skill the theme park design industry uses for bending the audience’s perception of reality. Aristotle’s method emphasizes using the author/storyteller’s intent to excite the audience’s imagination, encouraging them to creatively supply the story’s unseen parts; it is the foundation that informs today’s theatrical, film, and television industries.

21 Display refers to all output formats with different modalities, including not only visual output but also auditory, haptic, olfactory, and gustatory output feeding into human sensory channels.

For a VR system, a user’s physical interaction space must first be anchored into the virtual world generated by the VR engine, which is a prerequisite for the interface devices to work properly. The input devices can then track the user or monitor the physical world space. The recorded data are sent to the VR engine for further analysis. The VR engine computes the changes, dynamically updates the virtual world, and renders the displays for the different modalities. The interaction between a user and the computer through the interface loops continuously until the VR engine receives the signal to terminate. A VR system may accommodate a single user or multiple users, who may share the same physical space or may interact over a network.
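The loop just described can be sketched in code as follows. This is a minimal, hypothetical illustration of mine rather than the project’s actual implementation; the names (VRSystem, poll, update, render, and so on) are assumptions, not the API of any particular engine.

    import time

    class VRSystem:
        """Minimal sketch of the anchor -> track -> compute -> render loop."""

        def __init__(self, engine, trackers, displays):
            self.engine = engine      # the VR engine: computes changes to the world
            self.trackers = trackers  # input devices, e.g. {"head": ..., "controller": ...}
            self.displays = displays  # per-modality outputs: visual, auditory, haptic

        def run(self, target_hz=60):
            # Anchor the physical interaction space into the virtual world first;
            # this is the prerequisite for interface devices to report useful data.
            self.engine.align_physical_space(self.trackers["head"].poll())
            frame = 1.0 / target_hz
            while not self.engine.should_terminate():
                start = time.monotonic()
                inputs = {name: dev.poll() for name, dev in self.trackers.items()}
                self.engine.update(inputs)       # dynamically update the virtual world
                state = self.engine.world_state()
                for display in self.displays:
                    display.render(state)        # render each output modality
                # Sleep off any remaining frame time to hold the target update rate.
                time.sleep(max(0.0, frame - (time.monotonic() - start)))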

Figure 7 The structure of a VR system.


In a VR system, the inputs can be passively sensed attributes, such as the position of the user, or events that the user must specifically activate. The data tracked can be the user’s position, heart rate, blood pressure, etc., or the time, temperature, or humidity of the physical space. The targets tracked by the system can be the user’s head, eyes, body, hands, fingers, feet, and/or other biological attributes. Common input devices include a keyboard, mouse, and joysticks. A camera is another often-used device to track the user, particularly in a VR system using image-based software. For example, a motion capture system uses cameras to track a user wearing a suit with reflective markers. Other tracking devices like data gloves, hand-held wands, and other electric/magnetic sensors may allow the user to more naturally navigate through and interact with a virtual environment. Directional sound, tactile and force feedback devices, voice recognition, and other technologies may be employed to enrich the immersive experience and to create more "sensualized" interfaces. Input data can be considered to be how the computer "sees" (by body tracking), "hears" (through voice/sound recognition), and "feels" (with physical controllers) the user and his/her physical world.

Since vision is the primary sense, visual display is the main output used to immerse a user in a virtual world. Visual display devices include computer monitors, lightweight glasses, head-mounted displays (HMDs), various projection systems such as the cave automatic virtual environment (CAVE), multi-view displays, stereoscopic displays, etc. Computer monitors and projection VR systems are stationary; lightweight glasses and HMDs are portable. All visual displays have visual presentation and logistic properties22 associated with them. These properties help me understand the strengths, the weaknesses, and the ergonomic logistics of these visual displays, and then help me make implementation decisions.

Computer monitors and lightweight glasses are low cost. However, computer monitors lack the ability to provide immersive 3D virtual environments, and lightweight glasses are restricted by a small field of view and low resolution. HMDs are also relatively low cost compared with the large projection systems. They can block out the real world and are highly portable, with a complete field of regard (FOR)23. These benefits allow a participant to more easily feel immersed. HMDs are usually limited by their field of view (FOV). However, this is not a disadvantage in The Moon Experience, since the astronauts’ helmets actually limited their view. Since HMDs visually separate the user from the real world, anything that the user sees, including the user’s own body, must be rendered in whole or in part by the computer. Seamlessly rendering the user’s avatar for proper use in an occlusive HMD is a challenge, since proper registration between the real body parts and the virtual avatar can prove difficult. In addition, some people may experience simulator sickness when wearing an HMD over an extended period of time.

22 Sherman and Craig, in their book (Sherman and Craig 121-122), describe visual presentation properties as color, spatial resolution, contrast, brightness, number of display channels, focal distance, opacity, masking, field of view, field of regard, head position information, graphics latency tolerance, and temporal resolution (frame rate). Logistic properties include user mobility, interface with tracking methods, environment requirements, associability with other sense displays, portability, throughput, encumbrance, safety, and cost.

23 A display’s field of regard is the amount of space surrounding the user that is filled with the virtual world; in other words, how much the viewer is enveloped visually.

Auditory display is another important output in a VR system. Auditory cues increase awareness of surroundings, prompt visual attention, and convey a variety of complex information without taxing the visual system. Auditory display includes several configurations: headphones, speaker (transaural) systems, multi-channel audio systems, and speaker array (wavefield synthesis) systems. In headphone and stereo speaker systems, sounds are usually computed using Head-Related Transfer Functions (HRTFs). An HRTF is an impulse response created based on the position of the sound source and allows precise stimulus control, especially if the acoustic signals change with listener movement. Multi-channel audio systems distribute the spatial sound over different speakers. If the listener position is tracked, the output can be adapted to various positions.
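For illustration, spatializing a mono source then amounts to convolving it with the left- and right-ear impulse responses measured for the source’s direction. The following is a minimal sketch in C# (array names are illustrative; a real system would also interpolate between measured HRTFs as the source or the listener’s head moves):

    // Minimal sketch: binaural rendering by convolving a mono signal with
    // the left- and right-ear HRTF impulse responses (HRIRs) for the
    // current source direction.
    public static class HrtfSketch
    {
        // Plain discrete convolution: output length = signal + impulse - 1.
        public static float[] Convolve(float[] signal, float[] impulse)
        {
            float[] output = new float[signal.Length + impulse.Length - 1];
            for (int n = 0; n < signal.Length; n++)
                for (int k = 0; k < impulse.Length; k++)
                    output[n + k] += signal[n] * impulse[k];
            return output;
        }

        public static void Binauralize(float[] mono, float[] hrirLeft,
                                       float[] hrirRight,
                                       out float[] left, out float[] right)
        {
            left = Convolve(mono, hrirLeft);    // left-ear channel
            right = Convolve(mono, hrirRight);  // right-ear channel
        }
    }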

Vector-based amplitude panning (VBAP) and Ambisonics are the most common methods in multi-channel audio systems. The well-known Dolby surround systems are implemented with the VBAP method. The wavefield synthesis method is based on the Huygens principle24 and is used to recreate sound sources in a predefined space surrounded by speaker arrays. This method can be used to synthesize acoustic wavefronts of an arbitrary shape.

24 Huygens’s principle is a simple method of constructing the position of a wave at successive times; it was developed by the Dutch scientist Christiaan Huygens (1629 – 1695).
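To give a sense of how VBAP computes its gains, consider the two-speaker (stereo) case: the source direction is written as a weighted sum of the two speaker direction vectors, and the weights fall out of a 2x2 matrix inversion. A minimal sketch, assuming unit direction vectors in the horizontal plane:

    using UnityEngine;

    // Minimal sketch of two-speaker VBAP: the desired source direction p
    // is expressed as p = g1*l1 + g2*l2, so the gains come from inverting
    // the 2x2 matrix [l1 l2], then normalizing to preserve loudness.
    public static class Vbap2D
    {
        public static Vector2 Gains(Vector2 source, Vector2 spk1, Vector2 spk2)
        {
            // Solve [spk1 spk2] * g = source by Cramer's rule.
            float det = spk1.x * spk2.y - spk2.x * spk1.y;
            float g1 = (source.x * spk2.y - spk2.x * source.y) / det;
            float g2 = (spk1.x * source.y - source.x * spk1.y) / det;

            // Normalize so that g1^2 + g2^2 = 1 (constant perceived power).
            float norm = Mathf.Sqrt(g1 * g1 + g2 * g2);
            return new Vector2(g1 / norm, g2 / norm);
        }
    }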

The goal of haptics in a VR system is to achieve tactile/touch sensations and, to a degree, the kinesthetic sense, which contributes to creating believable immersion and presence. Touch information is critically important for fast and accurate interaction with the virtual environment (Robles-De-La-Torre). The illusory tactile sensation results from both the user’s perceptual system and the technical qualities of the interface devices (Hayward, Astley and Cruz-Hernandes). Tactile perception can be achieved through any part of the human body, most commonly through the hands, fingers, and skin (receptors). The skin is sensitive to pressure, vibration, temperature, and electric voltage and current. The major applications for haptic display in VR systems include medical applications (particularly virtual surgery), text and graphics (e.g., Wacom interactive tablets), military applications, entertainment and educational applications, and tactile displays embedded in consumer electronics and wearable devices. With inexpensive off-the-shelf sensing devices, nearly anything can become an interface device for simulation in VR.

VR technology can be used to construct an immersive virtual environment that is otherwise hard to access in the physical world. With appropriate input/output devices, the created virtual world allows audiences to immerse themselves and have the firsthand experience of being there. VR technology thus seems to be an ideal tool to solve the issue of inaccessible experience. However, due to limited computing resources and other constraints, gaps remain between reality and virtual reality that are hard to fill with VR technology alone.

Traditional art forms can be used to transcend current limitations of VR technology.


A narrative story can engage audiences with the right situated content, trigger their imagination, and suspend their disbelief. In my thesis project, I needed to design a narrative story and combine it with VR technology to create an immersive virtual world in which audiences can have believable firsthand learning experiences. That is one of the reasons I chose the Apollo program.

2.3 The Apollo Missions

To illustrate how to meet the design challenge, I selected a demo case with the following features: (1) it is a complex experience, and (2) it is inaccessible in the real world. The Apollo program seems to be a perfect case with these features. Moreover, a great volume of existing documentation can be used as valuable reference. In order to recreate the lunar explorations of the Apollo program in a virtual space, I need to understand this historical event.

The Apollo program was the United States’ spaceflight effort designed to land humans on the Moon and return them safely to the Earth. President John F. Kennedy announced the challenge to Congress on May 25, 1961. The original motivation was competition with the Soviet Union for supremacy in space exploration25, which was perceived as necessary for national security and symbolic of technological and ideological superiority.

25 The Soviet Union launched the world’s first artificial satellite, Sputnik 1, on 4 October 1957. Soviet cosmonaut Yuri Gagarin became the first human to travel into space aboard the Vostok 1 spacecraft on 12 April 1961.

The Apollo program was carried out by the National Aeronautics and Space Administration (NASA) from 1961 to 1972. The program included a large number of unmanned test missions and 11 crewed missions. President Kennedy’s goal was accomplished with the Apollo 11 mission when Neil Armstrong first set foot on the lunar surface on July 20, 1969, and all three crewmembers returned safely to the Earth on July 24, 1969. Apollo 17 was the last lunar landing mission; Eugene Cernan left the last human footprint on the Moon, and the crew returned safely to the Earth on December 19, 1972. In total, six manned missions (Apollo 11, 12, 14, 15, 16, and 17) successfully landed twelve astronauts on the Moon and returned them safely to the Earth.

The Apollo program is one of the most magnificent and complex projects in human history, lasting more than a decade and costing $25.4 billion (in 1973 dollars, equal to roughly $170 billion in 2005 dollars) (CBO) (Butts and Linton). At its peak, the Apollo program employed 400,000 people and required the support of over 20,000 industrial firms and universities.

The Apollo program has had a great influence on human society. It set several major milestones in human space exploration: spaceflight beyond Earth orbit, orbiting another celestial body, and landing on the Moon are just a few examples. About 400 kilograms of lunar samples were brought back and helped answer many important cosmological questions, such as the origin and evolution of the Moon (Compton 432). The Apollo program stimulated technological development in avionics, telecommunications, computers, and civil, mechanical, and electrical engineering. The program also generated many spinoffs that have changed the public’s daily lives (NASA). For example, fire-resistant textiles for use in space suits and vehicles were developed after the Apollo 1 tragedy26. These materials are now used in numerous firefighting, military, motor sports, and other applications. Reflective materials used to protect astronauts and their delicate instruments from radiation and heat are now found in common home insulation. Even athletic shoe design and manufacturing benefited from the Apollo program’s space suit technology.

In addition to the burst of scientific discovery and technological innovation, the Apollo program greatly influenced human culture and our perception of the Earth. As astronaut Eugene Cernan said, “We went to explore the Moon, and in fact discovered the Earth.” For example, the photos of the Earth taken from space put the fragility of our planet into perspective, motivating the environmental movement. This is a kind of learning from visuals that could not have been taught in any other form. The lunar landing also sparked a boom of popular interest in space exploration. The Apollo astronauts became cultural icons, representing the leading virtues of American culture (Launius). The space suit became one of the most popular Halloween costumes.

As an engineering accomplishment, the Apollo program is unparalleled in history. As a scientific project, it enabled researchers on the Earth to study documented specimens from another body in the solar system. As an exercise in the management of an unprecedentedly large and complex effort, it stands alone in human experience (Compton). To recreate The Moon Experience based on such a complex project is a great challenge.

26 A cabin fire killed all three crewmembers (command pilot Gus Grissom, senior pilot Edward White, and pilot Roger Chaffee) during a pre-launch test of the Apollo 1 mission on 27 January 1967.

Although it was the last lunar landing mission, Apollo 17 is the most representative, since it was the climax of the whole Apollo program. It hosted the first scientist-astronaut, Harrison Schmitt, to land on the Moon. Apollo 17 was the longest manned lunar landing flight (12 days, 13 hours, 52 minutes); it had the longest total lunar surface extravehicular activities (the LRV traversed 30.5 kilometers, and total lunar surface time was 75 hours), the largest lunar sample return (110.4 kilograms, or 243 pounds), and the longest time in lunar orbit (17 hours).

I chose Apollo 17 as the major reference to construct The Moon Experience project. In the next chapter, I address how the principles of learning theories, the technologies of VE, and the information on the Apollo program informed my design and creation of the virtual lunar world.


Chapter 3: Strategies to Solve Design Issues in The Moon Experience

One of my major design challenges is how to make an inaccessible, complex experience accessible as an effective learning environment. Recreating the astronauts’ experience on the Moon as a way of demonstrating how to meet this design challenge is not an easy task, particularly within the scope of an MFA thesis project. In this chapter, I address each design issue I encountered and describe how I solved it.

There are two major issues I need to solve in order to meet this challenge. One is how to break the large problem domain down into a manageable research project, i.e., how to select and identify the appropriate subsets/contents/targets to work with. The other is how to implement the solution to achieve the goal efficiently and ergonomically. Both issues contain many smaller design problems. In the following sections, I’ll describe each of these design problems in depth and present solutions in detail.

3.1 How to break down the large problem into a manageable design research project

The first problem is how to break down the gigantic Apollo program into a manageable design research project. The twelve astronauts’ explorations on the Moon are the focal point of the Apollo program and manifest the entire mission. Choosing this point of focus helps me dramatically reduce the problem domain. However, even within this more specific topic, the astronauts’ activities on the Moon are still too large to simulate in their entirety. Among the six successful lunar landings, the last mission, Apollo 17, is the climax of the whole Apollo program27. For this reason, I chose Apollo 17 as the major reference to build The Moon Experience.

To further narrow down the content, the audience is the key factor to be considered. The Moon Experience is for the general public rather than professionals such as researchers, experts, or scientists. Because the audience is assumed to lack advanced knowledge and professional skills related to the Apollo program, The Moon Experience can forgo heavy use of technical and scientific details. Instead, it hints at the technical complexity and scientific rigor.

I have reduced the problem domain to Apollo 17’s three Extra-Vehicular Activities (EVAs) on the lunar surface. However, the 22-hour-long EVAs could each be their own design research project. To further refine the problem size, I designed and conducted an online survey28 to identify what most interests an audience. The survey is not a flawless way to identify the most important aspects of the Apollo mission, but it provides a good reference point for the interests of the general public. The details of the survey and the demographics of the respondents are in Appendix A. The survey identifies nine concepts in the lunar landing: lunar geological features, prominent lunar landmarks, motions of the Sun and the Earth, lunar navigation, lunar gravity, astronaut’s mobility, space suit features, communication, and cooperation. The lunar geological features refer to the hallmarks of the lunar landscape, including craters and mountains, and the prominent lunar landmarks refer to the unique features seen from the lunar surface, such as the Sun, the Earth, and the black sky. My design process intends to embed these concepts in the system. For example, ‘lunar navigation’ consists of two forms of lunar traversal: a user can walk on the lunar surface, and he/she can drive the Moon buggy in the virtual world as well.

27 The first scientist-astronaut joined the mission and landed on the Moon. Apollo 17 was the longest manned lunar-landing flight (12 days, 13 hours, 52 minutes); it had the longest total lunar surface extravehicular activities (the LRV traversed 30.5 kilometers, and total lunar surface time was 75 hours), the largest lunar sample return (110.4 kilograms, or 243 pounds), and the longest time in lunar orbit (17 hours). 28 The survey contained 15 questions and was sent to 21 people who are either peer students or faculty at The Ohio State University and Aalborg University Copenhagen. 16 people responded (a return rate of about 76%).

3.2 The strategies of implementing The Moon Experience

The goal of my thesis project is to create an immersive virtual reality system in which audiences can have effective firsthand learning experiences. As discussed in section 2.1, learning in a real or situated environment that engages the learner with the right content is effective. Constructing a situated learning environment is therefore the first design task. The landing site of Apollo 17, Taurus-Littrow Valley, is ideal for the learning environment in this case. The DEM29 data of Taurus-Littrow Valley are openly available from several sources, so a 3D model of Taurus-Littrow Valley can be created in the computer.

29 DEM stands for Digital Elevation Model. It is a geospatial file format developed by the United States Geological Survey for storing a raster-based digital elevation model. A DEM file is a simple, regularly spaced grid of elevation points.

For the VR system The Moon Experience, the Unity game engine is used. A Sony head-mounted display (HMD) is the visual display. An HMD usually has a limited FOV, which can be considered appropriate for this application since it echoes the limited view of an astronaut’s helmet. The Moon Experience is set up in the motion capture laboratory30, which serves as the tracking system for The Moon Experience. Five retro-reflective markers are attached to the HMD (see Figure 44), creating a tracking target for the software. The twelve cameras in the motion capture system track the markers on the HMD to obtain the user’s head position and rotation. The data is sent to the VR engine (see Figure 46, Figure 49, and Figure 51). The VR engine, Unity, renders a convincing virtual world including Taurus-Littrow Valley, the Sun, the Earth, the Lunar Module, the Lunar Roving Vehicle (LRV), and an astronaut. With the HMD, the participant can see the lunar landscape (the black sky, numerous craters, mountains, and hills) as if he/she were standing in the Taurus-Littrow Valley. However, several issues may emerge.
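A minimal sketch of this tracking loop, assuming the marker-derived head pose arrives once per frame (the ReceiveHeadPose helper below is a hypothetical stand-in for however the Vicon data is actually streamed into Unity):

    using UnityEngine;

    // Minimal sketch: drive the camera rig from the mocap-derived head
    // pose each frame. ReceiveHeadPose() is a hypothetical stand-in for
    // however the Vicon data arrives (e.g. streamed from Vicon Tracker).
    public class HeadTracking : MonoBehaviour
    {
        public Transform cameraRig;  // parent transform of the stereo camera

        void Update()
        {
            Vector3 position;
            Quaternion rotation;
            if (ReceiveHeadPose(out position, out rotation))
            {
                // The HMD view follows the participant's physical head.
                cameraRig.position = position;
                cameraRig.rotation = rotation;
            }
        }

        // Placeholder: read the latest rigid-body pose of the HMD markers.
        bool ReceiveHeadPose(out Vector3 pos, out Quaternion rot)
        {
            pos = Vector3.zero;
            rot = Quaternion.identity;
            return false;
        }
    }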

3.2.1 Theme-driven approach – narrative

As one of the common issues in game and VR applications, one may wonder whether the application should be theme-driven or information-driven. In a data-driven application, a participant may easily become bored with aimless wandering in the virtual world. Especially for beginners who lack the background knowledge, aimless exploration often leads to frustration and ineffective learning. As mentioned in section 2.1, certain guidance and instruction are essential for beginners to learn in this situation. In addition, VR technology still cannot exactly simulate stimuli from the real world. Filling the gap between the stimuli the computer synthesizes for multiple modalities and real stimuli from the real world by VR technology alone is very difficult, particularly on a small budget. The approach of classic narrative art forms such as novels, plays, and movies can immediately draw an audience’s attention and directly engage them with the right context. Within a short period of time, an audience may completely understand a story or event that lasts decades or a century. One principal skill demonstrated in plays, films, and television is that authors and performers often excite the audience’s imagination and encourage them to creatively supply the story’s unseen parts. In the design of theme parks, visitors’ expectations are influenced by the park’s design so that they perceive ordinary objects in extraordinary ways. This transforms the perception of reality through the audience’s generous imagination by suspending disbelief. These principles and skills can be used in VR to bridge the gap between reality and virtual reality, as Stapleton et al. (Stapleton, Hughes and Moshell) suggest. Artistic conventions in classic art forms can be used to overcome the difficulty that VR technology is facing. We can then use the storytelling process to heighten the audience’s perception, trigger their imagination, and transcend augmented reality’s current limitations. When all the senses validate a virtual event, the experience moves the audience across the credibility threshold.

30 ACCAD’s motion capture lab has 2 Vicon Giganets, 12 Vicon T40 cameras with interchangeable lenses (12.5 mm or 24 mm), 3-24 mm retro-reflective markers, and video documentation equipment. The software the motion capture lab uses includes Vicon Blade, Vicon Tracker, and Autodesk MotionBuilder.


Inspired by traditional narrative artwork, I created a short story consisting of three scenes in The Moon Experience; it is listed in Appendix B. As in each lunar landing31, the story has two major characters: the astronaut commander and the lunar module pilot, who is played by the participant. The purpose of creating these scenes is multi-fold. First, the storyline sets up a specific context in which the audience may explore. This reduces inefficient wandering and improves learning efficiency. It avoids unnecessary multimodal interactions, cuts their costly implementations, and closes the door on the disbelief and disruption of presence caused by the audience’s aimless drifting. The narrative play also promotes the audience’s imagination and perception to overcome the limitations of our VR system. For example, lunar gravity is a key concept but is very difficult to present in the VR system. It is impossible for me to build a NASA-like facility32 to provide a first-person experience of lunar gravity. However, the story can promote the user’s imagination and perception, which helps the user understand lunar gravity. In the first scene, the lunar-gravity effect on an astronaut’s movement is explained by the commander during the conversation. Then, the commander’s fall provides a good example of imbalance caused by the lunar gravity. Motion capture and keyframe animation techniques were used to give the commander’s walk cycle a buoyant, lunar feel. The explanation, the example, and the walking movement are intended to suspend the audience’s disbelief and sustain the audience’s sense of presence on the lunar surface.

31 In each Apollo mission, three astronauts were aboard: the commander, the Lunar Module Pilot (LMP), and the Command Module Pilot (CMP). The commander and LMP landed on the Moon’s surface while the CMP remained in the command module orbiting the Moon. 32 To prepare astronauts for weaker gravity, NASA has several costly facilities used to simulate lunar gravity, for example, the KC-135 (Vomit Comet) at NASA JSC, the Zero Gravity Research Facility at NASA Glenn Research Center, and the Neutral Buoyancy Simulator at Marshall Space Flight Center.

3.2.2 The three scenarios

The scenario is composed of three scenes that are created based on the key concepts identified in section 3.1 to make the whole project theme/goal-driven rather than data-driven. In the first scene, entitled “Hello, welcome to the Moon,” the commander greets the participant, introduces the geological features of Taurus-Littrow valley and lunar gravity, unexpectedly falls down, and suggests that the user experiment with the lunar gravity. The second scene, “Houston, we have a problem,” explores how an astronaut should react to a potentially dangerous situation. This scene also highlights features of the space suit as well as the importance of communication and cooperation during a lunar mission. The final scene is “Driving the moon buggy.” The participant can observe and learn how to drive the buggy, which demonstrates the astronaut’s mobility, communication, and cooperation. The scripted events directly guide and engage the audience in the appropriate learning context after they are immersed in the virtual world.

The scenes also provide a vehicle to embed other learning principles into the system. For example, learning theory suggests that basic instruction should be provided, particularly for beginners in a situated, free-exploring learning environment. Providing instructions through conversation is much more effective than in a written format. To create a better learning application, I designed the commander, Jack, as an instructor figure who provides the audience with conversational guidance.


This theme-driven series of events also makes the interface design and interaction design goal-oriented, which in turn makes the implementation of the interface and interaction more effective and efficient. In the interface design, I used the motion capture system as the tracking system, the HMD as the major visual display, old-time-radio-style conversations as the sound display, and physical props (the steering joystick and rocks within the motion capture lab) as the simple haptic display. The script and the interface design helped me determine the interaction design. The major interactions are those between the two astronauts (one being the participant) mediated by an operator33, and those between the participant and other objects such as a lunar rock and the moon buggy in the virtual world. Interactions comprise animations, dialogues, and computer programs that detect an event and generate or play responsive animations. These animations can be pre-created or generated in real time. For example, the commander’s motions (such as walking, falling, and waving) are recorded in the motion capture system and then imported to the astronaut model, while the moon buggy’s motion is generated by the program in real time.

The purpose of my thesis is to create an effective learning experience for an audience in virtual space. The strategy of design and implementation is goal-oriented rather than data-driven, which has the following benefits:

 It breaks down the large design issue into manageable and solvable design problems;
 It cuts out unnecessary computing and details;
 It prevents the audience from aimless and inefficient wandering by engaging them in narrative.

33 The Moon Experience has two types of users. One is the participant, who experiences hands-on learning in the system. The other is the operator, who assists the participant in having a firsthand learning experience by manipulating the virtual character or moving the Moon buggy.

I will address the design of the virtual world in detail in Chapter 4, and the design of the interface and interaction is presented in Chapter 5 and Chapter 6.


Chapter 4: The Design of Virtual Lunar World

The goal of the virtual world is to provide an immersive virtual environment that helps set up an “authentic” situated learning environment and fully engages a participant, as constructivism advocates. Based on the real Apollo missions, the lunar virtual world contains a lunar terrain, two astronauts, a vehicle (the LRV), a lunar module (LM), the Sun, and the Earth.

One astronaut is the commander (named Jack); the other is the avatar of a participant (named G, standing for guest), who enters the virtual world when putting on the HMD. Except for the twelve astronauts who landed on the Moon, people have no experience of being on the lunar surface. It is necessary to provide a participant with sufficient guidance to explore the virtual lunar world. For this reason, Jack is designed to play an instructor role. All necessary information is provided to the participant through the conversations between Jack and G. In other words, Jack is an intelligent, dynamic information vehicle that naturally conveys instructions to G within an “authentic” situated lunar world.

In the following sections, I first describe how I model the virtual lunar environment, a lunar terrain. Then I give a brief overview of the Apollo Extravehicular Mobility Unit (EMU). Based on the overview of the EMU and the information about the Apollo program, I address how I modeled the two astronauts, the LRV, and the other objects in the virtual lunar world.

4.1 Modeling the Virtual Lunar Terrain

The major component of the virtual lunar world is a section of lunar terrain, which is the base on which the virtual lunar landscape is built. It should have craters, highlands, mountains, and other lunar geological features, presenting a dry, dusty, lifeless, and bleak character dominated by grey, dark colors. The goal of creating the lunar terrain is to help establish a lunar environment in which a participant has the feeling of being on the lunar surface.

Figure 8 Lunar nearside with major maria and craters labeled. The yellow rectangle marks the region where Apollo 17 landed.


I have been studying the six landing sites of the Apollo program and find that Taurus-Littrow valley (the Apollo 17 landing site) well presents the lunar geological features. The Valley of Taurus-Littrow (see Figure 9 and Figure 10) is on the southeastern edge of the Sea of Serenity (Mare Serenitatis). It was formed about 3.8 to 3.9 billion years ago, when a large asteroid or comet hit the Moon and blasted out a basin nearly seven hundred kilometers in diameter. Around the rim of the Sea of Serenity, great blocks of rock were pushed out and up, forming a ring of mountains. In places, the blocks quickly fell again and left radial valleys among the mountains. Taurus-Littrow is one such valley, located just south of Littrow Crater in the southwestern Taurus Mountains that form the highlands east of the Sea of Serenity. At its inner, southeastern end is a large, blocky mountain called the East Massif (shown in Figure 10). The South Massif forms the southwestern wall of Taurus-Littrow valley. North of the East Massif, across an outlet into another small valley, lie the Sculptured Hills; farther to the west, the North Massif forms the remaining wall of Taurus-Littrow valley. Between the North and South Massifs, the main exit from the valley leads out toward the Sea of Serenity.

Figure 9 Three progressively zoomed-in images of Taurus-Littrow valley.


Figure 10 A close-up of Taurus-Littrow valley.

Luckily, I obtained the digital elevation model (DEM) data34 of Taurus-Littrow valley (see Figure 11). The ranges in this DEM file are as follows: its longitude ranges from 29.904445 to 31.477203, its latitude ranges from 20.853697 to 19.386834, and its height ranges from 1420 meters to 4109 meters. In the original file, the cell size in the projection is 10 meters per pixel. The DEM data were converted into a tiff file; the terrain was then created directly from the tiff file through the terrain engine in Unity.

34 DEM data of Taurus-Littrow valley can be accessed through the Japan SELENE project or the United States Geological Survey (USGS).
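As a rough sketch of this import step (assuming the converted tiff has been brought into Unity as a readable grayscale Texture2D; the field names and sizes below are illustrative, not the project’s actual asset setup):

    using UnityEngine;

    // Minimal sketch: build a Unity terrain from a grayscale heightmap,
    // e.g. a tiff converted from the Taurus-Littrow DEM.
    public class DemTerrainBuilder : MonoBehaviour
    {
        public Texture2D heightTex;      // grayscale DEM image (readable)
        public TerrainData terrainData;  // assigned to a Terrain component

        void Start()
        {
            int res = terrainData.heightmapResolution;
            float[,] heights = new float[res, res];

            for (int y = 0; y < res; y++)
            {
                for (int x = 0; x < res; x++)
                {
                    // Sample the DEM image; the grayscale value 0..1 maps
                    // to the terrain's minimum..maximum height.
                    float u = (float)x / (res - 1);
                    float v = (float)y / (res - 1);
                    heights[y, x] = heightTex.GetPixelBilinear(u, v).grayscale;
                }
            }
            // 10 m/pixel cells and the 1420-4109 m elevation range of the
            // DEM determine the world-space terrain size.
            terrainData.size = new Vector3(res * 10f, 4109f - 1420f, res * 10f);
            terrainData.SetHeights(0, 0, heights);
        }
    }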

Figure 11 The 3D model generated from the DEM data of Taurus-Littrow Valley.

I have studied many lunar surface photos taken by the astronauts who landed there. In order to achieve a convincing appearance in our virtual environment, I created three tileable textures in Photoshop based on photos of the real surface of the Moon: a lunar soil texture, a lunar mountain texture, and a gravel texture, shown in Figure 12. These textures are applied to the terrain seamlessly through the paint-terrain-texture tools in Unity. Figure 13 shows the created Taurus-Littrow valley in a bird’s-eye view. Figure 14 shows a close-up in the valley.

Figure 12 Lunar soil texture, mountain texture, and gravel texture applied to the terrain.

Figure 13 A bird’s eye view of Taurus-Littrow valley created digitally in Unity.

Figure 14 A snapshot of the lunar terrain created in the virtual world.

4.2 Astronauts

Astronauts were critical to the success of NASA’s space effort. They put a very human face on the grandest technological endeavor. Since the Mercury Seven35, astronauts have been heroes and cultural icons representing the best values of American culture. In our system, the digital models of the astronauts also play vital roles. They are involved in almost all the interactions designed to support effective learning in the believable lunar environment. I have tried to reflect the public view of Apollo astronauts with my astronaut models. I created two astronauts in my system based on the fact that two astronauts landed on the lunar surface in every Apollo mission while the third astronaut manned the command module in lunar orbit. One astronaut, named Jack, is a composite figure representing typical Apollo astronauts: highly motivated, technically skilled, and extremely disciplined pilots. Jack is an avatar controlled completely by the operator.

35 The Mercury program was the first American manned space program. The Mercury Seven were the first group of astronauts chosen for the Mercury program: Scott Carpenter, L. Gordon Cooper, Jr., John Glenn, Jr., Virgil I. “Gus” Grissom, Walter Schirra, Jr., Alan B. Shepard, Jr., and Donald K. “Deke” Slayton. They were the best the nation could offer and embodied the deepest virtues of the United States.

The other astronaut named G is the avatar of the participant who enters the virtual lunar world when putting on the HMD. Jack oversees the exploration on the lunar surface. He plays the instructor role and gives all orders and guidance to the astronaut G (the participant).

Astronauts have been characterized by their space suits. The space suit represents protection and symbolizes our connection to the environment of Earth, the solar system, and the universe. It represented the triumph of technology. Essentially, the space suit has been the core representation of the astronaut, and modeling an astronaut means modeling the space suit. To create convincing astronauts in my system, I had to understand the space suits worn by the Apollo astronauts and the major gear/gauges used in the Apollo program. In the next section, I review this important equipment.


4.2.1 Overview of Apollo Extravehicular Mobility Unit

Figure 15 The lunar surface configuration of the extravehicular mobility unit, excerpted from (Apollo Operations Handbook Extravehicular Mobility Unit Volume I).

As a mini spacecraft for one, the space suit is a highly complex technical system. Space suits evolved from those used in high-altitude aeronautical missions in the late 1950s and 1960s, progressed to those used in zero gravity, and culminated in those worn on the Moon. Lutz et al. described the development and performance history of the Apollo extravehicular mobility unit (EMU) and its major subsystems in their report (Lutz, Stutesman and Carson). The EMU consisted of three major subsystems: the pressure garment assembly (PGA)36, the portable life-support system (PLSS), and the oxygen purge system (OPS) (see Figure 15).

In the Apollo program, the space suits had two basic configurations (Yong): the intravehicular (IV) space suit and the extravehicular (EV) space suit. The IV space suit was designed for the Command Module Pilot (CMP), who remained in the Command Module orbiting the Moon during the mission. EV space suits were worn by the Crew Commander (CDR) and the Lunar Module Pilot (LMP) during the extravehicular activities (EVA) on the lunar surface.

Figure 16 Left: Neil Armstrong (Apollo 11) in the A7-L PGA. Right: John W. Young (Apollo 16) in the A7-LB PGA.

36 The space suit (EV PGA) was made of approximately 26 layers of materials to provide a high level of protection from abrasion, micro-meteoroids, and sharp objects on the lunar surface.

International Latex Corporation (ILC)37 designed two versions of space suits: the A7-L and the A7-LB. A7-L space suits were worn from Apollo 7 to Apollo 14. A7-LB space suits were worn in the last three missions38 (Apollo 15, 16, and 17), Skylab, and ASTP39. The most obvious and visible difference between the A7-L and A7-LB suits was that the A7-LB had a wider seat than the A7-L, which necessitated a change in zipper location: the zipper had to be repositioned from the back of the suit to the side. In turn, the repositioning of the zipper required that the hose connectors on the chest be repositioned as well. Figure 16 shows the differences between these two versions of space suits.

Figure 17 The diagram of PLSS and OPS.

37 ILC was the prime contractor for the Apollo suit program.

38 The last three Apollo missions were designed to enable the astronauts to travel farther afield from the Lunar Module with the battery-powered lunar roving vehicle (LRV). Using an LRV meant that the astronauts would need suits that enabled them to sit without putting undue stress on the zipper and to bend their knees more easily. The A7-LB was made for this purpose based on the A7-L.

39 Skylab and ASTP were the last two major programs in the Apollo era. Skylab (launched on May 14, 1973) was intended to test the astronauts’ ability to remain in space for extended periods of time and to conduct a series of other experiments. The Apollo-Soyuz Test Program (ASTP) is best known for the historic “handshake in space,” in which the U.S. Apollo spacecraft docked with the Soviet Soyuz 19 spacecraft (in July 1975).

The PLSS and OPS, shown in Figure 17 and Figure 19, were generally worn as backpacks. The PLSS was a device connected to an astronaut’s space suit that allowed EVA with maximum freedom. It regulated suit pressure; provided breathable oxygen; removed carbon dioxide, humidity, odors, and contaminants; cooled and re-circulated oxygen; provided two-way communications; and displayed health parameters. The OPS supplied the EMU with oxygen purge flow and pressure control for certain failure modes of the PLSS or PGA during EVA.

Other very specialized components of the space suit used on the Moon were the lunar extravehicular visor assembly (LEVA)40 and the extravehicular gloves and boots. The LEVA provided visual, thermal, and mechanical protection to the crewman’s helmet and head (see Figure 19). The LEVA consisted of one thermal cover, two visors (the inner protective visor and the outer sun visor), and three eyeshades (the center eyeshade and two side eyeshades), shown in Figure 18.

Figure 18 Left: The diagram of the LEVA, excerpted from (National Aeronautics and Space Administration 2-70). Right: Alan Shepard wearing the LEVA during training. He had the eyeshades down with the flap up. His sun visor was also two-thirds down.

40 The LEVA was designed to fit over the Apollo Pressure Helmet Assembly (aka Bubble Helmet) and latch into place.

Figure 19 Jack Schmitt at Taurus-Littrow valley with the sun visor and the center eyeshade down but the viewport door open.

The EV glove assembly (National Aeronautics and Space Administration) is a protective hand cover interfaced with the torso limb suit assembly prior to egress for extravehicular operations. The EV glove consists of a modified IV pressure glove assembly covered by the EV glove shell assembly41 (see Figure 20).

Figure 20 The diagram of glove assemblies (National Aeronautics and Space Administration 2-24).

41 The EV glove assembly provides scuff, abrasion, flame-impingement, and thermal protection to the pressure glove and crewman. The thumb and finger shells are made of high-strength silicone rubber reinforced with nylon cloth, which provides improved tactility and strength.

Chromel-R42 fabric is incorporated over the hand area for added protection from abrasion (see the right-side image in Figure 21).

Figure 21 Left: Jack Schmitt’s Apollo 17 EVA gloves. Right: A close-up of a Chromel-R patch on Ed Mitchell’s Apollo 14 suit.

The lunar over-boots were designed to give the crewmen thermal and abrasion protection, as well as extra traction in the slippery lunar dust. Abrasion protection was provided by the upper layer of Chromel-R. They were pulled on like slippers over the PGA boots.

Figure 22 Left: The diagram of the lunar boots (page 2-30). Right: Apollo 17 astronaut Eugene Cernan’s lunar boots.

The Apollo program produced a large volume of documents and materials about the space suit and the various gear used in the extravehicular activities on the lunar surface. However, when creating the astronaut models in my system, I am interested only in the appearance of the A7-LB EV suit (i.e., the outermost layer), as shown in Figure 23.

42 Chromel-R is a steel fabric usually used for protection from abrasion.

Figure 23 Apollo 17 astronaut Gene Cernan at Taurus-Littrow valley on the lunar surface.

4.2.2 Modeling the Astronauts

Modeling the astronauts and integrating them into the real-time system is a challenge because the astronaut model must be convincing in appearance and behavior43, which requires a high-resolution model and a large file size44. The astronauts are modeled after the space suits shown in Figure 19, Figure 23, and Figure 24. Minute details such as dirt and wrinkles reflect the dry and dusty lunar environment and add to the realism and believability of the models. These details give visual cues that hint at the intensive EVAs that the astronauts performed on the lunar surface. Adding this extra layer of realism to the space suit helps convey that Jack is experienced, hard-working, technically skilled, and wise. He needs to be a character that a beginner would want to approach for guidance and advice during his/her experience in the virtual lunar world.

43 Detailed information about the astronauts and their activities can be found in the NASA Apollo Lunar Surface Journal (Jones and Glover, Apollo Lunar Surface Journal), which includes an Apollo image gallery, the transcript of all recorded conversations, and extensive commentary by the editor and the moonwalking astronauts. The Smithsonian National Air and Space Museum maintains the largest collection of Apollo space suits, providing another invaluable resource.

Figure 24 Eugene Cernan’s Apollo 17 flown space suit displayed in the Smithsonian National Air and Space Museum.


44 A large file size is usually avoided in a real-time system since it can slow down real-time performance.

Adobe Photoshop, along with available reference images, was used to create the textures (see Figure 25 and Figure 26) for the “Jack” astronaut model. His rank as commander of the mission is shown by the red stripes on his helmet, arms, and legs.45

Figure 25 The color map of the astronaut model (except for the helmet).

Figure 26 Left: The helmet texture created based on images of the real LEVA. Right: The interior texture of the helmet.

45 The red stripes on the commander’s suit started with Apollo 13, evolving from the fact that it was difficult to distinguish between the two astronauts in the bulky space suits of Apollo 11 and 12.

It is not enough to simply have the correct color map; adding a layer of 3D “bumpiness” makes the texture behave in a manner closer to the real-life object. These 3D details can be created by sculpting a high-resolution model. The downside to this practice is that the large file size does not lend itself to real-time interactions involving the astronauts. In general, the more data, the more time Unity needs to update, and the longer the response time. In other words, a high-resolution model could slow down or freeze the real-time application. A balance must be found between the goals of vividly realistic models and real-time interactions in a virtual environment. To solve the issue, normal maps are applied to the astronaut model. Two versions of the astronaut model were created: a low-resolution model and a second, higher-resolution model. The high-resolution model is used to create a normal map that is later applied to the low-resolution model. In this way, our system can achieve good real-time performance with the low-polygon model while maintaining a realistic appearance. The low-resolution model was created in Autodesk Maya 2012 (see Figure 27); it has 5,948 polygonal faces (11,640 triangles). The high-resolution model was sculpted in Autodesk Mudbox 2012 based on the low-resolution model; it has about 16 million polygonal faces (shown in Figure 28). The normal map created from the high-resolution model is shown in Figure 29.


Figure 27 The low-resolution model of an astronaut after skinning.

Figure 28 The high-resolution model sculpted in Mudbox with 16 million faces.


Figure 29 The normal map created from the high-resolution model.

Figure 30 The resultant glove model.

To have the greatest flexibility in animating the character, it is modeled as a hierarchy. The helmet (head) and the main body (PLSS, OPS, etc.) are separated and use different textures (see Figure 25 and Figure 26). This allows the helmet to be animated easily and independently from the rest of the body. For example, for reflections, the texture of the sun visor could be an environment map of the virtual world, or a real-time texture rendered by a camera positioned inside the helmet. Within the main-body hierarchy, the astronaut’s body, gloves, boots, PLSS, OPS, and RCU were broken into small, easy modeling tasks (see Figure 30). Figure 31 shows the final version of Jack in the virtual lunar environment.
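A minimal sketch of the second option, assuming a dedicated camera is parented inside the helmet (the component and field names here are hypothetical):

    using UnityEngine;

    // Minimal sketch: feed a camera's view into the sun visor's material
    // so the visor shows a live "reflection" of the virtual world. The
    // camera is assumed to be parented inside the helmet hierarchy.
    public class VisorReflection : MonoBehaviour
    {
        public Camera visorCamera;       // camera placed inside the helmet
        public Renderer visorRenderer;   // renderer of the sun visor geometry

        void Start()
        {
            // Render the camera into a texture instead of the screen, and
            // use that texture on the visor material.
            var rt = new RenderTexture(512, 512, 16);
            visorCamera.targetTexture = rt;
            visorRenderer.material.mainTexture = rt;
        }
    }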

Figure 31 A snapshot of Jack in the virtual lunar world.

The astronaut model for G is the same as the one for Jack, but since G is the avatar of the participant, it does not have red stripes. When putting on the HMD, the participant should feel as if he or she is inside the space suit. S/he should be able to see part of the interior of the LEVA (helmet) when rotating his/her head within a given range. This required that a model and texture be created for the interior of the LEVA (the image on the right in Figure 26).

Figure 32 Apollo 17 Lunar Roving Vehicle.

4.3 Modeling the Lunar Roving Vehicle

The Lunar Roving Vehicle (LRV) was a battery-powered vehicle designed to operate on the Moon. It allowed the Apollo astronauts to extend the range of their surface extravehicular activities (see Figure 32). As shown in Figure 33, the LRV46 has a T-shaped hand controller situated between the two seats, which controls the four drive motors, the two steering motors, and the brakes. I have used these data to ensure that the scale representation of the LRV in the virtual lunar world is as accurate as possible.

46 The LRV has a mass of 210 kg and was designed to hold a payload of an additional 490 kg. The maximum height was 1.14 meters. The frame was 3.1 meters long with a wheelbase of 2.3 meters. The wheels consisted of a spun aluminum hub and 81.8 cm diameter, 23 cm wide tires made of zinc-coated, woven 0.083 cm diameter steel strands attached to the rim and discs of formed aluminum. Inside the tire was a 64.8 cm diameter bump stop frame to protect the hub. Dust guards were mounted above the wheels. A large mesh dish antenna was mounted on a mast on the front center of the rover (The Boeing Company, LRV Systems Engineering).

Figure 33 The configuration diagrams of the LRV.

The LRV involves a large amount of real-time animation to allow participants to replicate the experience of driving the vehicle. Like the astronaut, the LRV must be modeled as a hierarchy. For example, the four wheels with their dust guards are independent from the main vehicle body so that the wheels can be animated when the astronaut makes turns. The details of the LRV animation are covered in section 4.4.2.


The Lunar Module (LM), the Sun, and the Earth were also modeled to add to the realism of the virtual lunar world. The process for modeling these objects is straightforward, and the same care and attention to accuracy was taken as with the astronaut and LRV.

4.4 Responsive Animations

In an interactive and immersive virtual reality system, animation is a must-have component for making the virtual experience believable. In this project, three methods were used to produce the responsive animations. The character animation relies largely on motion capture technology; the real-time animations of the moon buggy, lunar dust, wheel-tracks, footprints, etc. are generated by scripts in Unity; and the remaining animations use hybrid techniques. The motion of the lunar rock, for example, uses both the real-time data obtained through motion capture and scripts in Unity. The responsive animations are essential content for the interaction design described in Chapter 6.

4.4.1 Character Animations

To give lifelike qualities to the virtual astronaut, Jack, I used the inherent advantages of motion capture technology. The most significant advantage is that the character’s motions appear natural since they are recordings of actual human movements. While the motion capture data must still be tailored to meet the requirements of this specific project, motion capture offers a more efficient solution than traditional character animation since rigging and most keyframing can be skipped.

Figure 34 Tom Heban performs the role of “Jack” in the motion capture lab.

When using motion capture to animate a character, a couple of steps should be followed to avoid unnecessary complications. First, the character model should have a skeleton with the same naming convention and number of joints as the one generated from the motion capture pipeline. This facilitates the transfer of data from a real person’s motion to the digital model. It is also very important to maintain a nondestructive workflow so changes can be made throughout the process without starting from scratch. For example, the model’s geometry skin weights might need to be adjusted to compensate for unexpected deformations when motion capture data is imported. We want to keep the character model as a separate Maya file so that we can modify skin weights whenever needed without affecting any resulting animations. Also, after editing motion capture data, we can save each resulting animation as a .move file with the Maya export option. In this way, we can modify the character model and edit the motion capture data independently of one another. For example, the geometry of the astronaut in Figure 36 and Figure 37 is different, which does not affect editing the motion data. To complete the character animation, one can open the file containing the geometry, import the .move files (the resulting animations), and save it as an animated character Maya file.

Figure 35 The process of animating a character from motion capture to Unity.

The whole animation process is illustrated in Figure 35, starting with the capture of the actor’s motions in the motion capture lab. Tom Heban (an MFA candidate in DAIM) played the role of astronaut Jack (see Figure 34) and acted out various actions including walking, running, hopping, shifting left or right while walking, falling, turning a device off and on, getting on and off the moon buggy, driving the buggy, etc. In VICON BLADE, I cleaned the captured data and exported it as .fbx files. Then, the 3D astronaut model (.fbx file) and Tom’s motion data (also .fbx) were imported into MotionBuilder47 (see Figure 36). The captured motion data was applied to the astronaut model (see Figure 37) and exported as an .fbx file (see step C in Figure 35). The astronaut can now move as naturally in the virtual lunar world as Tom did in the motion capture space.

Figure 36 The motion capture data and the character model imported into MotionBuilder.

In Maya48, I modified the astronaut’s animations to more accurately represent a real astronaut’s movements. For example, the rotation of a real astronaut’s helmet is limited: turning left or right has to align with his body rotation. So I deleted all rotation attributes of the neck and its child nodes in the graph editor in Maya. Moreover, the astronaut’s animations were also adjusted to suit the operations in the game engine Unity. For example, a walk cycle was created based on the captured walking motion. The walk cycle was modified to ensure no change in the x and z directions (assuming that y is the vertical direction). This gives a character controller in a game engine the flexibility to control the character’s speed and direction. Removing the motion in the x and z directions can be done efficiently by deleting the translateX and translateZ curves in the graph editor of Maya.

47 MotionBuilder is one of Autodesk’s 3D character animation software packages. It is used for virtual production, motion capture, and traditional keyframe animation. http://usa.autodesk.com/adsk/servlet/pc/index?id=13581855&siteID=123112 48 Editing animation in this step can be done in MotionBuilder as well.

Next, in Maya, I exported the animation as a .move file independent of the model file. The model file (for example, its skin weights) often needs to be updated to ensure the most natural deformation possible. After completing all character animations as .move files, I imported them back into the model file and saved it as the animated character file (see step D in Figure 35), which was loaded into Unity as imported animations.

Figure 37 The motion capture data are mapped to the model in MotionBuilder.


4.4.2 The Real-time Animations created in Unity

In The Moon Experience, several situations require real-time animations. For example, when driving the moon buggy with a keyboard or a game controller, the buggy’s wheel-tracks and dust, as well as the footprints left by the astronauts, must be generated in real time. In this section, I describe how these real-time animations are created through scripts in Unity.

Figure 38 Snapshot of the real-time animations: the wheel-tracks, footprints and dust.

The animation of the Moon Buggy is created based on a Unity Car Tutorial.49 Each wheel has a Unity WheelCollider object to handle the interactions between the vehicle and the ground (more vehicle physics can be found in the Car Tutorial’s three-part PDF document). The heavy dust on the lunar surface requires that the moving buggy leave tracks constantly. The original script Skidmark.js does not handle this situation; it allows only a limited number of marks. I modified the script so that wheel tracks are left on the ground whenever the LRV moves. I used an ArrayList in C# as the data structure to store the wheel-tracks, since the size of an ArrayList can be changed dynamically. This removes the original script’s limitation on the maximum number of skid marks.

49 Car Tutorial by Unity Technologies – Unity Asset Store, 2012, http://u3d.as/content/unity-technologies/car-tutorial/1qU

Figure 39 Wheel-track shader in Unity.

The wheel-track in my code is a piece of geometry, a plane, which grows as the vehicle moves. A shader with a bump map for the wheel-track is created in Unity (see Figure 39). Every n frames (default n = 8), the data structure is checked to see whether any new track has been created. If so, a new track plane (a quad) is created and merged with the existing wheel-track geometry. Then the wheel-track objects are rendered.
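The following is a simplified sketch of that quad-appending idea; it is not the actual modified Skidmark.js, the field names are illustrative, and the mesh is rebuilt naively each time rather than merged incrementally:

    using System.Collections.Generic;
    using UnityEngine;

    // Minimal sketch of an ever-growing wheel-track strip: sample the
    // wheel's ground-contact point every few frames and append a new quad
    // to the track mesh. Unlike the original script, the lists grow
    // without a fixed cap.
    public class WheelTrack : MonoBehaviour
    {
        public WheelCollider wheel;
        public MeshFilter trackMesh;     // holds the accumulated track strip
        public float width = 0.23f;      // LRV tire width in meters

        private List<Vector3> verts = new List<Vector3>();
        private List<int> tris = new List<int>();
        private Mesh strip;
        private int frameCount;

        void Start()
        {
            strip = new Mesh();
            trackMesh.mesh = strip;
        }

        void FixedUpdate()
        {
            WheelHit hit;
            if (++frameCount % 8 != 0) return;          // every n = 8 frames
            if (!wheel.GetGroundHit(out hit)) return;   // wheel must touch ground

            // Two new vertices across the tire, slightly above the terrain
            // to avoid z-fighting.
            Vector3 side = transform.right * (width * 0.5f);
            Vector3 p = hit.point + Vector3.up * 0.02f;
            verts.Add(p - side);
            verts.Add(p + side);

            if (verts.Count >= 4)
            {
                int i = verts.Count - 4;   // stitch a quad (two triangles)
                tris.AddRange(new[] { i, i + 2, i + 1, i + 1, i + 2, i + 3 });
                strip.Clear();
                strip.vertices = verts.ToArray();
                strip.triangles = tris.ToArray();
                strip.RecalculateNormals();
            }
        }
    }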

Meanwhile, a particle system, groundDustEffect, is created. Its position and emitter are updated to generate dust while the buggy is moving.


Figure 40 The footprint shader in Unity.

Each foot of the astronaut also has a collider and a particle system. A shader with bumpiness for a footprint is created in Unity (see Figure 40). When an astronaut’s foot collides with the ground, the particle system emits particles and creates dust. When the foot lifts off the ground, a footprint remains.
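A minimal sketch of this per-foot behavior (assuming the terrain is tagged "Terrain" and a footprintPrefab quad uses the footprint shader; both names are hypothetical):

    using UnityEngine;

    // Minimal sketch: when a foot collider touches the terrain, puff some
    // dust; when it lifts off, stamp a footprint quad at the contact point.
    public class FootprintEmitter : MonoBehaviour
    {
        public ParticleSystem dust;         // dust particle system on the foot
        public GameObject footprintPrefab;  // quad with the bump-mapped footprint

        private Vector3 lastContact;

        void OnCollisionEnter(Collision collision)
        {
            if (!collision.gameObject.CompareTag("Terrain")) return;
            lastContact = collision.contacts[0].point;
            dust.transform.position = lastContact;
            dust.Emit(20);                  // kick up a small burst of dust
        }

        void OnCollisionExit(Collision collision)
        {
            if (!collision.gameObject.CompareTag("Terrain")) return;
            // Leave a footprint where the foot was, laid flat on the ground.
            Instantiate(footprintPrefab,
                        lastContact + Vector3.up * 0.01f,
                        Quaternion.Euler(90f, transform.eulerAngles.y, 0f));
        }
    }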

4.4.3 The Hybrid Animations

The Moon Experience allows the participant to experience lunar gravity by kicking a rock in physical space and observing the motion of the rock in virtual space. To implement this, I designed a prop to represent a lunar rock in the motion capture lab. The prop is made of a Styrofoam ball, wood, black tape, and four retro-reflective markers (see the left image in Figure 41). I created geometry for a lunar rock in Maya as a digital counterpart and loaded it into Unity (see the right image in Figure 41).


Figure 41 The prop of a lunar rock and its model in the virtual space.

The animation of the rock is created from the data obtained in motion capture and a script that calls a built-in function in Unity. When the participant kicks the prop in physical space in the motion capture lab, the positions of the prop in the first several frames are used to compute an initial velocity and torque. The velocity and torque are then applied to move the lunar rock in the virtual world. This strategy allows the participant to use his/her own physical movement to kick the rock and observe the motion of the lunar rock in the HMD.
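A minimal sketch of this hand-off between tracked motion and physics, assuming lunar gravity is set globally and GetTrackedRockPose is a hypothetical stand-in for the motion capture stream:

    using UnityEngine;

    // Minimal sketch: estimate the prop's velocity from the tracked frames
    // after a kick, then hand the virtual rock over to Unity's physics
    // under lunar gravity.
    public class KickableRock : MonoBehaviour
    {
        public Rigidbody rock;
        private Vector3 prevPos;
        private bool launched;

        void Start()
        {
            Physics.gravity = new Vector3(0f, -1.62f, 0f);  // lunar gravity, m/s^2
            rock.isKinematic = true;                        // follow the prop at first
            prevPos = rock.position;
        }

        void FixedUpdate()
        {
            if (launched) return;
            Vector3 trackedPos = GetTrackedRockPose();
            Vector3 velocity = (trackedPos - prevPos) / Time.fixedDeltaTime;
            prevPos = trackedPos;
            rock.MovePosition(trackedPos);

            // A sudden speed jump means the prop was kicked: switch to
            // physics and launch the rock with the estimated velocity.
            if (velocity.magnitude > 1.5f)
            {
                rock.isKinematic = false;
                rock.velocity = velocity;
                rock.AddTorque(Vector3.Cross(Vector3.up, velocity)); // rough spin
                launched = true;
            }
        }

        Vector3 GetTrackedRockPose()
        {
            // Placeholder: return the latest marker-derived prop position.
            return prevPos;
        }
    }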

In the next chapter, I discuss the interface design, which allows the participant to enter and interact with the virtual world.


Chapter 5: Interface Design

Interface design is critical to the success of the entire system because all interactions take place through the interface. The interface between the user and the virtual reality system is the only place where the user has access to the system, so it must ensure a meaningful user experience. The interface is also the information channel through which input data from the user enters the system and output data computed by the system is displayed to the user.

Figure 42 The physical and virtual interfaces in Schell's book (Schell).

In general, the purpose of an interface is to make users feel like they have control of their experiences. An interface can be classified as a physical interface or a virtual interface, as Schell describes in his book (Schell), shown in Figure 42. The physical interface in this project mainly refers to physical devices such as the VICON motion capture equipment, the HMD, wireless microphones, the keyboard, the mouse, and the game controller. The virtual interface is defined as the GUI components of the system, which can process a user’s input and may also display the resultant actions (see Schell’s virtual interface in Figure 42 and The Moon Experience’s virtual interface in Figure 48). The information channels are also shown in Figure 42. The data flow of this project through the interface is described in section 5.3.

The goal of The Moon Experience is to create an effective learning experience for the user, and the interface is designed to do just that. The interface design is user-centric; as described in Chapter 2, a user can learn effectively when he/she is in a situated learning environment with proper guidance. So I created a 3D virtual lunar world based on the real geological data of Taurus-Littrow valley. I also wrote a three-scene narrative to guide the participant’s experience toward the right learning content without being too direct. Conversations with the virtual figure Jack are designed to follow this outlined narrative. However, the participant may ask anything about the Moon, and this interaction may lead to more dynamic experiences akin to the non-linear experiences in computer games. Non-linearity introduces a complication when the virtual figure Jack needs to handle inquiries outside the original script. A possible solution would be an intelligent module that can analyze the voice input and find a reasonable voice output from a database, like Siri on Apple’s iPhone 4S. This issue could be a complicated research topic in itself and is beyond the scope of this project. Given the constraints of time and resources, I chose to handle it with another type of user: the operator.

Basically, the operator’s role is to assist the whole process. The operator can choose which mode or which scene to play. The operator can also control how Jack moves and what Jack says. The operator’s manipulation breaks the limitations of a linear experience and allows for spontaneity in the participant’s experience. The Moon Experience has two types of users – the participant and the operator. The participant is the main audience and the individual intended to have an effective learning experience. The operator assists the participant in achieving that goal. The interface is designed for these two users.

Generally speaking, an interface should be simple, easy, and close to the natural way that people perceive their surroundings through the five senses: sight, sound, touch, smell, and taste. The more intuitive the interface is, the less visible it becomes to the participant, and the more complete the participant's immersion in the virtual world. In this project, the visual, auditory, and tactile channels are given precedence in the interface design. In the following sections, I describe the strategies for designing the interfaces for these two users in terms of sensory information channels.

5.1 The interface for the participant


To allow the participant to have a firsthand learning experience, I designed an interface with three sensory channels – a visual display device, an auditory display device50, and props that allow tactile interaction.

Figure 43 The Sony Head Mounted Display (HMD) is the major visual display device.

I decided to use a first-person view to mimic how the participant would explore the lunar world through his/her own eyes. The Sony Head Mounted Display (HMD), model HMZ-T1 (see Figure 43), is the major visual display device for the participant. The HMD has adjustable 3D stereoscopic capabilities built in. Wearing the HMD, the participant sees the realistically rendered lunar environment in a 3D stereoscopic view, which gives the effect of being on the Moon. When combined with the retro-reflective markers of the motion capture system (as shown in Figure 44), the HMD also becomes an input device: the VICON motion capture system constantly tracks the position of the participant, this positional data is passed to the game engine Unity, and Unity updates the 3D stereoscopic HMD display in real time.
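A minimal sketch of this tracking loop is shown below, under the assumption that the mocap plugin drives a trackedHead transform in the scene and that the two eye cameras are children of the rig; the names and the fixed interpupillary distance are illustrative, not the project's actual configuration.

using UnityEngine;

// Sketch: the tracked HMD pose drives a camera rig whose two child cameras
// are offset by half the interpupillary distance for stereo rendering.
public class HeadTrackedRig : MonoBehaviour
{
    public Transform trackedHead;  // transform driven each frame by the mocap bridge
    public Camera leftEye;         // child camera of this rig
    public Camera rightEye;        // child camera of this rig
    public float ipd = 0.064f;     // interpupillary distance in meters

    void Start()
    {
        // Offset the eye cameras symmetrically about the head's center.
        leftEye.transform.localPosition  = new Vector3(-ipd / 2f, 0f, 0f);
        rightEye.transform.localPosition = new Vector3( ipd / 2f, 0f, 0f);
    }

    void LateUpdate()
    {
        // Move the whole rig to the tracked head pose; both eyes follow.
        transform.position = trackedHead.position;
        transform.rotation = trackedHead.rotation;
    }
}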

50 In this thesis, display refers to the 'output' of the VR system; it may be visual, auditory, or tactile.

Figure 44 Five motion capture markers are placed on the HMD.

Trent Rowland (an undergraduate student from the Theatre Department at OSU) and Tom Heban were the voice actors for Jack and the "Houston Control Center" respectively. Their performances were recorded and tailored into individual audio clips loaded into Unity as part of the audio channel of the interface. The Sony HMD has built-in headphones that provide the participant with an ideal audio device: Jack's pre-recorded conversations can be played back as though Jack is talking to the participant. The participant carries a wireless microphone so that his/her voice can be heard by the operator sitting in the control room (see Figure 45).
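A sketch of how such a voice bank might be wired up in Unity follows; the clip organization and names are hypothetical, but the AudioSource playback pattern is standard.

using UnityEngine;

// Sketch: Jack's pre-recorded lines live in an array, and the operator
// triggers playback through an AudioSource attached to Jack's model so the
// voice appears to come from him in the scene.
public class JackVoiceBank : MonoBehaviour
{
    public AudioClip[] clips;      // one clip per scripted or spare response
    public AudioSource jackVoice;  // AudioSource on Jack's head

    // Called by the operator GUI, e.g., one button per clip.
    public void Say(int clipIndex)
    {
        if (jackVoice.isPlaying) jackVoice.Stop();  // interrupt the current line
        jackVoice.clip = clips[clipIndex];
        jackVoice.Play();
    }
}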

Figure 45 Sennheiser wireless microphone used in The Moon Experience.


Several props were created and used in the motion capture lab to further immerse the participant in the learning experience. The space suit from the Apollo program was made of approximately 26 layers of materials to provide a high level of protection from abrasion, micro-meteoroids, and sharp objects on the lunar surface. It is not enough for the participant to have only a visual perception of the space suit; the tactile feeling of cumbersomeness and restrained motion is more effective. In collaboration with Samantha Kuhn (an MFA candidate in the Theatre Department), a mockup space suit (shown in Figure 46 and Figure 47) was designed and created using a boning technique and a foam layer between the outside and inside layers. The purpose of the space suit is to provide the user with tactile feedback in the form of a limited view, a closed and bulky feeling, and restrained movements similar to those of a lunar astronaut wearing the 26-layer space suit described in section 4.2.

Figure 46 Several props in the motion capture lab.

A chair in the motion capture lab is aligned with the driver seat of the moon buggy in the virtual space. The participant can sit in the chair as if he/she were sitting in the driver seat of the moon buggy, and his/her virtual counterpart does the same. To experience driving the buggy, a game controller with a shape similar to the T-handle on the moon buggy is placed beside the chair. When the system is in driving mode, the participant operates the vehicle through the game controller (see Figure 47).

Figure 47 A participant wearing the space suit sits on the chair and operates the moon buggy through a game controller.

A final tactile information channel allows the participant to experience the effects of lunar gravity through an input device. The device in this case is a prop (with four markers) representing a lunar rock, shown in Figure 41. The prop's positional data is captured by the motion capture system and transferred to Unity, and Unity outputs the lunar rock's movement to the HMD, where the participant observes the effect of his/her kick on the virtual lunar rock under lunar gravity.

5.2 The interface for the operator

The goal of the interface for the operator is to provide a tool for controlling the whole system so as to assist the audience in having a meaningful learning experience. Like most computer games and applications, the interface for the operator involves computer equipment for input and output with physical and virtual interfaces such as monitors, GUIs, keyboards, mice, and game controllers. Because of the role the operator plays, I chose a third-person view for the operator. Since the whole system is set up in the motion capture lab, the operator sits in front of several computers in the control room (see Figure 49).

Figure 48 A dual monitor showing the operator control interface.

The main display device for the operator is a dual monitor configured with NVIDIA Mosaic, shown in Figure 48 and Figure 49. In Figure 48, the image on the left (the left monitor in Figure 49) is a view of the virtual lunar world, generated by a camera following Jack. A small window at the upper right corner of the left image shows the participant's view in the HMD. The window at the upper left corner is the operator control interface, which lists all of Jack's audio clips. A receiver for the wireless microphone is placed in the control room (see Figure 49), allowing the operator to hear what the participant says to Jack; the operator can then choose a proper answer to play back as Jack talking to the audience. The image on the right in Figure 48 (the right monitor in Figure 49) is the 3D stereoscopic split screen representing the left and right eyes in the HMD.

Figure 49 The system settings in the motion capture lab.


A keyboard and a mouse are the input devices for the operator to control the system.

Through keys or buttons, the operator can choose a different scene or mode to play. By pressing the "n" key, the operator lets the participant linearly experience the scenario outlined in the story script. Through the other keys defined in the system, the operator breaks the linearity of the story and allows a dynamic, non-linear experience for the audience. The operator can determine how Jack acts (see Figure 50) and what Jack says (see the operator control interface in Figure 48), which further enriches the non-linear experience for the participant.
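A minimal sketch of the "n"-key progression is shown here; the event list and its contents are hypothetical, with only the "n" binding taken from the thesis.

using UnityEngine;
using UnityEngine.Events;

// Sketch: the scripted scenario is stored as an ordered list of events, and
// each press of "n" fires the next one. Other bound keys (not shown) can
// trigger Jack's actions out of order, breaking the linear sequence.
public class StoryProgression : MonoBehaviour
{
    public UnityEvent[] scriptedEvents;  // one entry per beat in the story script
    private int next = 0;

    void Update()
    {
        // Linear path: "n" advances to the next scripted event.
        if (Input.GetKeyDown(KeyCode.N) && next < scriptedEvents.Length)
            scriptedEvents[next++].Invoke();
    }
}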

Figure 50 The key-button layout.

5.3 The interface for information communication


One of the most important functions for any interface is to provide communication channels between users and the system. Exchanging information between users and the system is essential for immersive interactivity.

The major components of the interface are the physical and virtual interfaces shown in Figure 42. The Moon Experience has separate physical interfaces for the participant and the operator. In terms of input and output, the physical interface is further classified into physical input and physical output. Each channel for exchanging information can therefore be characterized as input or output, as belonging to the participant or the operator, and as visual, auditory, or haptic.

Figure 51 Information flow through the interface in The Moon Experience.

At the top level of The Moon Experience, there are three exclusive modes – onGround, driving, and inLRV (see section 6.2 and Figure 52). The system must be in exactly one of these three modes at any given time. The mode combined with the current stage/scene defines the running status of the system. The system also tracks other important data such as the positions of the participant, the rock, and the LRV. This information about the virtual lunar world, computed in the system, must be delivered to the users as well. The major data flows of the system, both physical and virtual, include:

1. Status - current mode/stage/scene;

2. The position of a participant;

3. The position of the rock;

4. Visual information of the virtual world;

5. Aural information between Jack and the participant;

6. Tactile information such as kicking the rock;

7. Closed and bulky feeling and restrained movements;

8. Manipulating the virtual figure Jack;

9. Driving the Moon buggy.

These data are mapped onto the interface channels (see Figure 51) so that information can flow through The Moon Experience. In other words, the information flow partially defines how users see, hear, and touch the lunar world. The data types listed above correspond to the red numbers along the arrows in Figure 51.

For example, the current status of the system (data type 1) can be changed by the operator through his/her physical input interface – keyboard, mouse, or game controller – or updated automatically by the system. The positions of the participant (data type 2) and the rock (data type 3) are mapped to the markers on the HMD and on the prop representing the lunar rock; the motion capture system tracks and sends these data to the system, and in real time Unity updates the camera's position, computes the initial velocity and torque, and applies them to the virtual rock. Not all data streams need to be shown to both the operator and the participant. The current visual information (data type 4) of the lunar world is displayed in the HMD (for the participant) and on a monitor (for the operator). Similarly, the commands to manipulate Jack (data type 8) are mapped only into the operator's physical interface (a keyboard, a mouse, or a game controller) and the virtual interface, whereas the control information of the Moon buggy (data type 9) is mapped into both the operator's physical input and the participant's game controller.

In The Moon Experience, the interface follows a user-centric design intended to achieve the most effective learning experience for the participant. The interface is also designed to facilitate a meaningful flow of information between users and the system, because this exchange of information is essential for immersive interactivity. In the next chapter, interaction design is discussed in detail.


Chapter 6: Interaction Design

Interactions play a key role in achieving an effective learning experience, and they comprise the core components of The Moon Experience. The interactions reinforce the engagement of the participant in the virtual lunar world; through firsthand interactions, the participant is able to better comprehend key concepts and information. In this chapter, I discuss the multi-layered interactions that consist of storyline, players, and sensory channels. I also show how the underlying mechanics lay out the basic structures of interaction and how the story and game structure artfully merge to indirectly guide the learning experience.

6.1 Multi-layered Interactions

Interactions in this project are not simply built along a single thread. Rather, they are designed to make use of story, players, and sensory channels. These factors weave together and synthesize to form a unique quality of interaction that each component cannot offer on its own.

6.1.1 Interactions in the story


In order to place the participant directly into situated learning content, I wrote a short script that includes three scenes (see Appendix B for details). Each scene describes several interactions between the participant and the virtual lunar world. The interactions embedded in the story are a means to cover the important concepts and knowledge that the participant should learn from the system, including three in particular: lunar gravity, danger, and driving the LRV.

In the first scene, one of the main goals is to help the participant understand lunar gravity, and the participant is given three different learning opportunities to do so. Jack first talks to the participant about lunar gravity, introducing the concept. Then Jack accidentally tumbles to the ground; Jack's fall gets the participant's attention and delivers an implicit message that gravity significantly affects human movement and that it is hard to keep one's balance on the Moon. Finally, Jack suggests that the participant experience lunar gravity by kicking a lunar rock and observing its motion. Through this variety of learning opportunities, the participant will hopefully better retain the information.

Human space exploration is an inherently dangerous proposition, and the Apollo program was no exception. A fatal fire during the Apollo 1 mission took the lives of three astronauts (Chaikin, A man on the Moon). Even for the first successful landing mission, Apollo 11, NASA calculated only a 50% chance of success (To the Moon). Apollo 13 is another vivid demonstration of danger and heroism, as the crew managed to turn a near-disaster into a miraculous recovery. In the second scene, I planned a small mechanical glitch in the participant's EMU. The interactions in this event are meant to give the participant an experience of the emotions related to a life-threatening situation and its recovery. The process of the interaction also reveals that teamwork is often the key to preventing a catastrophe.

The interactions in the third scene are designed to be the most fun part of the whole experience. The scene provides a chance for the participant to experience driving the lunar roving vehicle as if he or she were on the Moon's surface. Jack first provides instructions on how to operate the Moon buggy and then encourages the participant to give it a try.

The interactions described in the storyline do not mean that the participant can only have a linear experience. The inputs from the participant or the operator make interactions dynamic for the player and produce non-linear experiences. The script is merely a means for framing the experience and allowing a specific set of concepts to be relayed to the participant.

6.1.2 Interactions involving players

Despite the necessary linear interactions embedded in the narrative, there is room for spontaneous events. As discussed in the previous chapter, I introduce another type of player, the operator, into the system to handle unscripted events. The interactions of The Moon Experience can then be considered in two ways:

• Direct interactions between players and The Moon Experience;

• Indirect interactions between the participant and the operator through the system.

The first way allows the participant and the operator to interact with the virtual lunar world directly. The participant directly interacting with the virtual world includes (1) talking to the virtual character Jack, (2) kicking a lunar rock, (3) driving the Moon buggy.

The operator can directly manipulate Jack’s movement and speech. The operator can also use the “n” key to guide an interaction into the next event in the story. As a result, the sequence of events in the story is up to the operator’s discretion.

The direct interactions between players and the system also rely on underlying indirect interactions between the participant and the operator, since the participant's interactions need the operator's actions to progress the play or change states. Jack's bank of voice recordings helps the operator start, maintain, and direct the conversation between Jack and the participant. For example, the participant may be very curious about something on the Moon's surface and try to do something that would not be safe in the actual situation. In this case, the operator may direct Jack to say, "Don't do that. It could be dangerous," to stop the participant's action. Whether direct or indirect, these interactions facilitate the participant's firsthand learning experience.

6.1.3 Multimodal Interactions


The interactions in The Moon Experience are designed to be multimodal, because experiences via multiple sensory channels are more natural, efficient, and engaging.

Multimodal interactions allow users to communicate information naturally, since humans perceive the world through their five senses (sensory input) and act on it through effectors such as arms, legs, fingers, eyes, body positions, and the vocal system. Multimodal interactions are also efficient because activating more sensory channels means higher information-processing bandwidth; users can perceive or perform multiple things at once.

For example, the participant may talk to Jack while moving around to find a rock to kick in order to experience lunar gravity. Multi-channel redundant information can also lead to mutual disambiguation of recognition errors. Interactions through multimodal channels further provide users with a flexible learning environment, since different modalities excel at different tasks and users may have different learning styles.

Multimodal interactions activate more of users' senses, involving cognitive activities (such as decision making) and sensory-motor coordination (such as perception for action and action for perception); users are therefore potentially more engaged in their learning experience.

To achieve the above advantages of multimodal interactions, however, I needed to solve the following essential issues in the design and implementation of The Moon Experience:

• Aligning interactive tasks with modality and vice versa;

• Multimodal fusion – combining information from different input modalities in interactive systems (Liao);

• Multimodal fission – repartitioning of information among several output modalities (Cody, Cummins and Maguire).

Mapping interactive tasks onto modalities is effectively accomplished through multimodal interfaces (Chittaro). In this project, the Sony HMD is a visual and audio output device that allows the participant to see and hear; the wireless microphone is an audio input device that collects the participant's spoken information; and the mockup space suit is a bidirectional tactile information channel that creates clumsy, enclosed, and movement-confined feelings for the participant even though it does not send any digital signals. The Moon Experience also provides the participant with a game controller and props that make use of the participant's motor skills. For the operator, the monitors are common bidirectional visual channels for input and output; the mouse and game controller are tactile input devices; and the wireless microphone receiver and the sound system embedded in the computers are auditory output devices.

Except for the information from the tactile mockup space suit, all information from the different modalities is digitally integrated into The Moon Experience. For the participant, his/her voice from the wireless microphone, body position, inputs from the game controller, and actions on the lunar rock prop are transferred into the system.


Likewise, the actions taken on the keyboard, the mouse, and the game controller for the operator are sent into the system as well.

The visual information from the system is the main area for multimodal fission. The stereoscopic visual information from the camera set at the participant's avatar's head (in the virtual world) is delivered simultaneously to the HMD and to one of the dual monitors for the operator. The overall visual information from the camera that tracks Jack is displayed on the other monitor for the operator.

Multimodal fusion and fission are related to the information flows in the system, which in turn are determined by the underlying mechanics of the system. In fact, all interactions in The Moon Experience, regardless of the factors involved, are defined by the mechanics, which is the topic of the next section.

6.2 Mechanics

Mechanics51 are the nuts and bolts of the system and usually refer to a set of rules that define the finite states of the system. Schell says they "are the core of what a game truly is. They are the interactions and relationships that remain when all of the aesthetics, technology, and story are stripped away" (Schell). Miguel Sicart, in his paper "Defining Game Mechanics," defines game mechanics "as methods invoked by agents, designed for interaction with the game state" (Sicart). As the core of the system, the mechanics control all actions and lay out the structure of the system.

51 In this thesis, I use the term 'mechanics' to refer to 'game mechanics'.

6.2.1 The project space

First, I need to define the project space where the interactions take place. The Moon Experience is set up in the motion capture laboratory at ACCAD. The physical space of the project includes the motion capture volume and the control room (see Figure 49). The motion capture volume is a limited 20'x20'x10' space where the participant has his/her firsthand experience. The control room is connected to the motion capture volume and is where the operator performs his/her tasks. The virtual lunar world is created in the system and output to the HMD. Independent of the physical confines of the motion capture space, The Moon Experience has its own space52 containing objects, attributes, and states, in which mechanics define all the relationships, actions, and interactions between users and the system.

52 In this thesis, I use the term 'space' to refer to 'game space' in game design.

Figure 52 The three modes in The Moon Experience.

6.2.2 Three modes and finite states

Three modes specify the behavior of The Moon Experience at the top level: onGround, inLRV53, and driving (shown in Figure 52). The Moon Experience must be in exactly one of these three modes at any given time.

As the name suggests, onGround mode is used to control Jack’s actions rather than the buggy’s movements. This mode governs the interactions in the first two scenes in the story. inLRV mode refers to the states in which a participant and/or Jack are in the buggy before driving it. inLRV is a transitional mode between onGround and driving events.

The purpose of this mode is to help a participant or Jack get on the LRV and to prepare the system for the driving mode. In driving mode, the operator manipulates the LRV rather than Jack. Like a switch, the same key is used to enter and exit a state/mode. Each mode has finite states that define the interactions and information flow.

53 inLRV refers to the states in which a participant or Jack is on the moon buggy but the system is not yet in driving mode.
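A minimal sketch of this top-level mode switch is shown below; the key bindings here are illustrative stand-ins for the actual bindings in Figures 52-54.

using UnityEngine;

// Sketch: the three top-level modes as an enum, with toggle-style keys.
// Pressing the same key again exits the state it entered.
public class ModeController : MonoBehaviour
{
    public enum Mode { OnGround, InLRV, Driving }

    public Mode Current { get; private set; }

    void Start() { Current = Mode.OnGround; }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.G))   // toggle between onGround and inLRV
            Current = (Current == Mode.OnGround) ? Mode.InLRV : Mode.OnGround;

        if (Input.GetKeyDown(KeyCode.B))   // toggle driving on and off
            Current = (Current == Mode.Driving) ? Mode.InLRV : Mode.Driving;
    }
}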

In onGround mode, the actions and finite states are shown in Figure 53. "idle" is a stable state; the others are temporary states that return to idle after an action (animation) is finished. From the start state "idle", the operator can press a key (shown along a black arrow line in Figure 53) to have Jack carry out one of ten actions. Once the action is completed, Jack returns to the "idle" state, as the blue arrow lines in the figure indicate. The operator can also press the "n" key to progress the story forward, which is not indicated in the diagram.

Figure 53 The finite states in the onGround mode.

inLRV is a transitional mode that prepares the system for the driving mode. First, the system needs to know who the potential driver and passenger are; the mode provides flexible combinations of driver and passenger. The combinations, corresponding to the six states shown in Figure 54, are as follows:

• Jack is the driver and there is no passenger (state "Jack-driver");

• Jack is the driver and the participant is the passenger (state "Jack-driver & G-passgr");

• The participant is the driver and Jack is the passenger (state "G-driver & Jack-passgr");

• The participant is the driver and there is no passenger (state "G-driver");

• Jack is the passenger and there is no driver (state "Jack-passgr");

• The participant is the passenger and there is no driver (state "G-passgr").

Figure 54 The finite states in the inLRV mode.

The first four are the common cases for astronauts riding on the buggy. I also define a transition from onGround mode (the idle state) directly to driving mode without any astronaut in the buggy; this case and the last two states54 are useful for debugging during implementation and testing. In Figure 54, a line connecting two states with arrows at both ends indicates that one can move back and forth between the states by pressing an input "code". For example, by pressing input "g", the operator can move the "idle" state to the "Jack-driver" state; the operator can then progress to "Jack-driver & G-passgr" by pressing input "9" or go back to "idle" by pressing "g" again.

A line connecting two states with an arrow at one end indicates that the transition is only possible in the direction of the arrow. In order to move from an inLRV state to the "driving" state, input "B" and one of the following six conditions are required:

• C1 – if Jack is the driver and the participant is the passenger;

• C2 – if the participant is the driver and Jack is the passenger;

• C3 – if Jack is the driver and there is no passenger;

• C4 – if the participant is the driver and there is no passenger;

• C5 – if Jack is the passenger and there is no driver;

• C6 – if the participant is the passenger and there is no driver.

The double-edged states, such as "driving" in Figure 52, Figure 53, and Figure 54, not only mark accepting states (like the start state "idle") but may also represent the modes at the top level.

54 Three cases are mainly used for debugging: (1) Jack is in the passenger seat with no driver, then "B" is pressed; (2) the participant is in the passenger seat with no driver, then "B" is pressed; (3) in onGround mode, the operator presses "b" to change directly to driving mode without anyone on the buggy.

To become a driver or passenger, a sequence of actions must be activated. For example, if Jack is the driver:

1. Jack moves to a certain position relative to the LRV (on the driver side in this case);

2. Jack's get-on-buggy animation (driver side) is played;

3. Jack's sitting-idle animation on the LRV is played.

A similar procedure applies when Jack is the passenger, and when the participant is the driver or the passenger (without step 1). The next step is to make the model of Jack and/or the avatar of the participant a child of the LRV object, so that when the LRV moves, the driver and the passenger (if any) move with it.
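A sketch of this boarding sequence for Jack as the driver might look like the following; the clip names and the seat mount point are hypothetical, but the parenting step at the end is the essential part.

using System.Collections;
using UnityEngine;

// Sketch: move Jack to the boarding position, play the get-on animation,
// loop the sitting idle, then parent Jack to the LRV so he rides along.
// Usage: StartCoroutine(board.BoardAsDriver(jackTransform));
public class BoardLRV : MonoBehaviour
{
    public Transform lrv;         // the moon buggy
    public Transform driverSeat;  // mount point on the driver side
    public Animation jackAnim;    // Jack's animation component

    public IEnumerator BoardAsDriver(Transform jack)
    {
        // 1. Move Jack to the boarding position on the driver side.
        jack.position = driverSeat.position;
        jack.rotation = driverSeat.rotation;

        // 2. Play the get-on-buggy animation and wait for it to finish.
        jackAnim.Play("getOnBuggyDriver");
        yield return new WaitForSeconds(jackAnim["getOnBuggyDriver"].length);

        // 3. Loop the sitting-idle animation, then parent Jack to the LRV
        //    so the driver (and passenger, if any) move with the vehicle.
        jackAnim.CrossFade("sitIdle");
        jack.parent = lrv;
    }
}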

In the driving mode, the transition of states is kept as simple as possible. The keys "a", "w", "s", "d" and the four arrow keys control the motion of the buggy. The space bar is no longer used for Jack to jump; instead, it stops (brakes) the vehicle.

In summary, interactions in The Moon Experience are created from elements of the narrative, players, and different modalities. The underlying mechanics define the mode and the finite states of the project, and lay down the structure for the interactions.


Chapter 7: Conclusions and Future Work

7.1 Summary

My thesis project began with a challenge faced by society at large – how to conduct hands-on learning for complex experiences. In the process of conducting this project, I refined my research into a design problem: how to create a firsthand learning experience in a virtual space that could otherwise never be had in the physical world. I selected and worked with the historical Apollo program to demonstrate a solution to this design problem. The goal of my thesis is to create an interactive and immersive virtual space where a user can have a firsthand learning experience.

To achieve this goal, I drew upon various technologies and approaches from multiple disciplines, because the nature of the problem is not restricted to a single field of study.

Some of the related research areas are learning theories and virtual reality technology.

The first question I needed to answer was: What is the best learning approach for a complex experience? Unfortunately, there is no definitive answer to that question, but it is well accepted that learning from firsthand experience is one of the most effective learning approaches. The existing learning theories reveal several factors that affect learning outcomes:


• Proper guidance or instruction is a key element for students to learn efficiently and effectively.

• Multimodal learning is better than mono-modal learning.

• Students learn better from the combination of words, pictures, animation, and narrative than from words alone.

• Students learn better when animation and narration (sound) are presented simultaneously.

• Students learn more efficiently when words are presented in conversational rather than formal style.

The above insights provided guidance for the design of The Moon Experience.

To create a virtual lunar world, I employed virtual reality technologies including computer graphics. However, simply exploring a virtual world does not necessarily lead to effective learning; on the contrary, it may result in aimless wandering. I needed to find an engaging way to guide the user to explore the virtual world effectively and efficiently. Moreover, completely synthesizing sensory inputs to close the gap between reality and virtuality with current VR technology alone is extremely difficult and prohibitively expensive. Skills and principles from classic art forms such as storytelling can help overcome the shortcomings of VR technology, as suggested by Stapleton et al.'s mixed fantasy framework (Stapleton, Hughes and Moshell) (see Figure 5). Storytelling was therefore integrated into the design of The Moon Experience for these purposes:

• To place participants in the pertinent learning context;

• To heighten the audience's perception, trigger their imagination, and transcend some of virtual reality's technical limitations.

I wrote a short script covering three scenarios for these purposes. The narrative also served as a vehicle to embed the principles and features of effective learning. For example, the conversations between Jack and the participant were designed based on the principle that students learn better when instructions are presented in a conversational style. The narrative script also helps me (as a designer) focus on the design and implementation germane to the three scenarios. The design of interfaces and interactions is therefore theme-driven and user-centric based on the story.

Computer game technology further engages users in their learning experiences by providing elements of fun, challenge, and strategy. Games also require sophisticated thinking, which promotes knowledge retention. Applying computer game technology to The Moon Experience also facilitates the design of invisible interfaces and of the interactions between users and the system, which contributes to achieving a believable sensation of being on the Moon.

During the implementation, I encountered the issue that a participant could ask questions leading to experiences not designed in the script. To solve this, I chose to create a secondary role – an operator who has access to a repertoire of Jack's recordings. When an unexpected question is asked, the operator can choose an appropriate pre-recorded response with which to reply. This approach kept me from getting bogged down in implementing intelligent conversational software, which is not a concern of my thesis. At the same time, it expands the linear storyline into a non-linear user experience when spontaneous interactions occur: the operator becomes a part of the play. This choice opened up an opportunity to integrate theatrical design's multimodal artistic conventions into the VR system to blend the real and virtual worlds.

Creating the operator role equips the project with traditional entertainment techniques such as Barnum's reality-imagination continuum and Aristotle's media-imagination continuum, shown in Figure 5. These artistic approaches excite the audience's perception, trigger their imagination, and move them across the credibility threshold, suspending their disbelief. By integrating these artistic elements, The Moon Experience overcomes current limitations of VR technology and provides the participant with an effective environment for hands-on learning. During the user tests, The Moon Experience was further developed into a theatrical event with two crew members and a staging area just outside the motion capture space. The two crew members play the role of NASA technicians, assisting the participant in putting on the HMD and other gear and asking the participant several questions in the staging area to get him/her ready for the lunar exploration. The theatrical aspects work well with the technologies applied to The Moon Experience, allowing it to pass beyond the current limitations of VR technology.

Because of the choice to include an operator, the interfaces and interactions are designed to accommodate the role of the operator alongside the participant. For the participant, a Vicon motion capture system, the Sony head-mounted display, a wireless microphone, and a game controller make up the major physical interface. For the operator, computers, a keyboard, a mouse, and a game controller form the physical interface. The virtual interfaces include the GUI interfaces and the software used to transfer data in the application. Thus, interactions are not only between the participant and the system but also incorporate the role of the operator.

The Moon Experience is a multidisciplinary design research project. It provides a reference for designers who face similar design challenges, and it serves as a case study for future interdisciplinary research, particularly research involving virtual environments, computer games, and motion capture in learning applications.

I hope that this project will provide a small virtual world for the general public to explore, to learn, and to enjoy, as "All experience is an arch, to build upon" (Henry B. Adams).

7.2 User Tests, Analysis, and Evaluation

Many people have tried out The Moon Experience. Through these user tests, I have learned which designs and implementations work and which do not, prompting me to carefully analyze and evaluate them.

The technology of VR is a powerful tool for simulating, in a virtual space, an event that is inaccessible in the real world, and it can thus create a sensation of "being there" for the user. In The Moon Experience, the virtual lunar world created in Unity is delivered to the user through an HMD with a 3D view. The 3D view of the lunar world is updated dynamically through the motion capture tracking system, with the HMD as the tracking target. This real-time 3D view allows the user to immerse himself or herself completely in the virtual lunar world and creates a believable experience of being on the Moon. The feeling of being on the Moon is convincing and realistic, which met the expectations of the project. Narrative is a necessary component for engaging the user in the simulated virtual event to achieve the goal of the specific application. It is also a medium that combines traditional art skills and principles with VR technology to effectively blend reality and virtuality, allowing the project to transcend the limitations of VR technology. The linear storyline is expanded into non-linear scenarios by introducing the role of the operator. The user experience in The Moon Experience becomes dynamic and appealing through spontaneous interactions between the participant and the operator via the system. This design works around the issue that conversations between the participant and Jack may fall outside the original script, which would otherwise require a complicated intelligent voice module like Siri on Apple's iPhone 4S. To support this expansion from a linear to a non-linear story, the system must have a communication channel between the participant and the operator.

Of the three scenarios in the storyline, kicking a lunar rock (in the first scene) and driving the moon buggy (in the last scene) are the highlights of the user experience. Participants are fascinated by firsthand learning experiences involving tactile, self-performed actions such as kicking a lunar rock and driving the moon buggy. On the other hand, "Houston, we have a problem" (the second scene) did not seem to bring about the expected sensation of danger for most users. Reflecting on the second scene, I realized a flaw in the design of the scenario: the scene does not require any action from the participant. The participant is passive and simply waits for Jack to help. Even while experiencing the feeling of being on the Moon, the participant does not have the immersive sensation of being in danger. The two audience favorites, kicking a rock and driving the moon buggy, both involve the user's self-performed actions. This suggests that a user's self-performed action is an important element affecting the outcome of a VR application.

During the user tests, I made use of the theatrical aspect of The Moon Experience to compensate for this flaw. When Jack told a participant, "Stand still, let me check your backpack," one of the crew members would go behind the participant and pretend to work on the backpack. This immediately activates the participant and makes him/her wonder what is wrong with his/her gear. The theatrical element in a VR application can thus be used not only to overcome the shortcomings of VR technology but also to help fix some design flaws.

7.3 Limitation and Future Work


My thesis project is a small-scale case demonstrating a solution to the design challenge. It simulates only a tiny portion of the massive historical Apollo program, the portion most exciting for users. The intended users are the general public rather than experts and professionals (in space, astronomy, and cosmology).

In the motion capture lab, only the participant's head is tracked, not the whole body. The avatar is a rigid body; it moves as one piece without any articulation of its subparts.

Since The Moon Experience is set up in a motion capture laboratory, portability is another limitation.

Future work should focus on alleviating the limitations of the current project mentioned above. First, the rigid movement of the participant's avatar might be addressed using the Microsoft Kinect. The Kinect could replace the Vicon motion capture system as a tracking device; in this configuration, the avatar's motion would be driven by the participant's entire body movement rather than just the head position in the motion capture volume. The data transformation among the several devices and corresponding software would need to be investigated further and would introduce new programming. Motion capture technology could still be used for creating character animation (like Jack's animation loaded in Unity). This approach might also solve the issue of portability, since the Kinect is smaller and cheaper than a Vicon motion capture system. However, the downside of the Kinect approach is its very limited tracking range (roughly 1-2 meters), which is much smaller than that of the motion capture volume; if a system requires a larger tracking range to perform, the Kinect will not be a good solution.

Another area that can be improved is the evaluation method. For example, players' actions can be recorded as they play, a typical component of game design called analytics. With such data available, useful analysis can be conducted to reveal how participants have learned. A comprehensive evaluation may not be possible, but this would be a promising starting point.
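A minimal logging sketch of this kind of analytics is shown here; the file name and event vocabulary are hypothetical.

using System.IO;
using UnityEngine;

// Sketch: append each player action to a CSV file with a timestamp, so a
// session can later be analyzed for what participants did and learned.
public class SessionLogger : MonoBehaviour
{
    private StreamWriter log;

    void Awake()
    {
        // One file per session, overwritten each run.
        string path = Path.Combine(Application.persistentDataPath, "session.csv");
        log = new StreamWriter(path);
        log.WriteLine("time,event,detail");
    }

    // Call from gameplay code, e.g., Log("kickRock", "speed=1.2") or Log("mode", "driving").
    public void Log(string evt, string detail)
    {
        log.WriteLine(string.Format("{0:F2},{1},{2}", Time.time, evt, detail));
        log.Flush();
    }

    void OnDestroy() { log.Close(); }
}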


Bibliography

Adams, Ernest. Fundamentals of Game Design. 2nd edition. Berkeley, CA: New Riders, 2009.

Astleitner, Hermann and Christian Wiesner. "An integrated model of multimedia learning and motivation." Journal of Educational Multimedia and Hypermedia 13.1 (2004): 3-21.

Atkinson, R. C. and R. Shiffrin. "Human memory: a proposed system and its control processes." The Psychology of Learning and Motivation 2 (1968): 89-195.

Autodesk, Inc. MotionBuilder - 3D Character Animation Software. 2013. 24 Feb. 2013.

Baddeley, Alan and Graham Hitch. "Working memory." Psychology of Learning and Motivation 8 (1974): 17-90.

Baddeley, Alan. "Is working memory still working?" American Psychologist 56 (2001): 851-864.

—. The Essentials of Human Memory. Hove: Psychology Press, 1999.

—. Working Memory: The Interface Between Memory and Cognition. Cambridge: MIT Press, 1994.

Baddeley, Alan, Michael W. Eysenck and Michael C. Anderson. Memory. Hove: Psychology Press, 2009.

Benjamin, Marina. Rocket Dreams: How the Space Age Shaped Our Vision of a World Beyond. Free Press, 2003.

Blach, Roland. "Virtual reality technology - an overview." Talaba, Doru and Angelos Amditis. Product Engineering. Springer Netherlands, 2008. 21-64.

Blascovich, Jim and Jeremy Bailenson. Infinite Reality: Avatars, Eternal Life, New Worlds, and the Dawn of the Virtual Revolution. William Morrow, 2011.

Bransford, John D., Ann L. Brown and Rodney R. Cocking. How People Learn: Brain, Mind, Experience, and School. Washington, DC: The National Academies Press, 2000.

Brooks, Charles A. and Sivram Prasad. "Apollo Program Summary Report." NASA Lyndon B. Johnson Space Center, 1975.

Bruner, Jerome. The Process of Education. Cambridge, MA: Harvard University Press, 1977.

—. Toward a Theory of Instruction. Cambridge, MA: Belknap Press of Harvard University Press, 1974.

Brunken, Roland, Jan L. Plass and Detlev Leutner. "Direct measurement of cognitive load in multimedia learning." Educational Psychologist 38.1 (2003): 53-61.

Burdea, Grigore C. and Philippe Coiffet. Virtual Reality Technology. 2nd edition. Wiley-IEEE Press, 2003.

Chaikin, Andrew. A Man on the Moon. New York: Penguin Books, 2007.

—. A Man on the Moon. New York, 1994.

Chaikin, Andrew and Victoria Kohl. Voices from the Moon: Apollo Astronauts Describe Their Lunar Experiences. New York: Viking Studio, 2009.

Chittaro, Luca. "Distinctive aspects of mobile interaction and their implications for the design of multimodal interfaces." Journal on Multimodal User Interfaces 3.3 (2010): 157-165.

Chouvardas, Vasilios G., Amalia N. Miliou and Miltiadis K. Hatalis. "Tactile displays: a short overview and recent developments." 5th International Conference on Technology and Automation. 2005. 246-251.

Clark, Richard E. and David F. Feldon. "Five common but questionable principles of multimedia learning." Mayer, Richard E. Cambridge Handbook of Multimedia Learning. Cambridge: Cambridge University Press, 2005.

Cobb, Jeff. Mission to Learn. 21 May 2009. 6 July 2012.

Cody, Mick, et al. "Research project on adaptive multimodal fission and fusion." MIT, 2004.

Compton, William David. Where No Man Has Gone Before: A History of Apollo Lunar Exploration Missions. Washington, DC: United States Government Printing Office, 1989.

Cowan, Nelson. "The magical number 4 in short-term memory: a reconsideration of mental storage capacity." Behavioral and Brain Sciences 24 (2001): 87-114.

Craig, Alan B. Developing Virtual Reality Applications: Foundations of Effective Design. Morgan Kaufmann, 2009.

Craig, Alan B., William R. Sherman and Jeffrey D. Will. Developing Virtual Reality Applications: Foundations of Effective Design. Burlington: Morgan Kaufmann, 2009.

Doolittle, Peter E. "Multimedia learning: empirical results and practical applications." n.d.

Engelkamp, Johannes. Human Memory: A Multimodal Approach. Hogrefe & Huber Publishers, 1994.

—. Memory for Actions. Hove: Psychology Press, 1998.

Garrett, Jesse James. The Elements of User Experience. 2nd edition. Berkeley, CA: New Riders, 2011.

Glasersfeld, Ernst von. "An exposition of constructivism: why some like it radical." Davis, R. B., C. A. Maher and N. Noddings. Constructivist Views of the Teaching and the Learning of Mathematics. 1991.

—. "Constructivism in education." Husen, T. and T. N. Postlethwaite. The International Encyclopedia of Education. Oxford/New York: Pergamon Press, 1989. 162-163.

Graham, George. "Behaviorism." The Stanford Encyclopedia of Philosophy (Fall 2010 Edition). Ed. Edward N. Zalta. 2010. 4 July 2012.

Hayward, Vincent, et al. "Haptic interfaces and devices." 2004.

Hein, George E. Constructivist Learning Theory | Exploratorium. 1996. 2011.

Herrington, Jan and Peter Standen. "Moving from an instructivist to a constructivist multimedia learning environment." Journal of Educational Multimedia and Hypermedia 9.3 (2000): 195-205.

In the Shadow of the Moon. Dir. David Sington. 2007.

Jacobs, Robert. Apollo: Through the Eyes of the Astronauts. New York: Abrams, 2009.

Jacobs, Robert, et al. Apollo: Through the Eyes of the Astronauts. New York: Abrams, 2009.

John Jones (NASA). Apollo Spinoffs. 1 May 2011. 18 July 2012.

Johnson, Jeff. Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Rules. Burlington, MA: Morgan Kaufmann, 2010.

Jones, Eric M. and Ken Glover. Apollo Lunar Surface Journal. 28 April 2012. 25 July 2012.

Jones, Eric M. Apollo 17 Video Library. 1995. 2011.

Kirschner, Paul A. "Cognitive load theory: implications of cognitive load theory on the design of learning." Learning and Instruction 12 (2002): 1-10.

Kirschner, Paul A., John Sweller and Richard E. Clark. "Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching." Educational Psychologist 41.2 (2006): 75-86.

Laughlin, Daniel and Nick Marchuk. "A Guide to Computer Games in Education for NASA." 2005.

Launius, Roger D. "Heroes in a vacuum: the Apollo astronaut as cultural icon." 43rd AIAA Aerospace Sciences Meeting and Exhibit. Reno: American Institute of Aeronautics and Astronautics, 10-13 January 2005.

Lawless, Kimberly A. and Scott W. Brown. "Multimedia learning environments: issues of learner control and navigation." Instructional Science 25 (1997): 117-131.

Liao, Hank. "Multimodal Fusion." 2002.

Lutz, Charles C., et al. Apollo Experience Report: Development of the Extravehicular Mobility Unit. Technical report. Lyndon B. Johnson Space Center. Washington, DC: National Aeronautics and Space Administration, 1975.

Marois, Rene and Jason Ivanoff. "Capacity limits of information processing in the brain." Trends in Cognitive Sciences 9.6 (2005): 296-305.

Massaro, Dominic W. "Multimodal learning." Seel, Norbert M. Encyclopedia of the Sciences of Learning. Springer US, 2012. 2375-2378.

Mayer, Richard E. and Roxana Moreno. "A cognitive theory of multimedia learning: implications for design principles." n.d.

Mayer, Richard E. and Richard B. Anderson. "The instructive animation: helping students build connections between words and pictures in multimedia learning." Journal of Educational Psychology 84.4 (1992): 444-452.

Mayer, Richard E. and Roxana Moreno. "A split-attention effect in multimedia learning: evidence for dual processing systems in working memory." Journal of Educational Psychology 90.2 (1998): 312-320.

—. "Aids to computer-based multimedia learning." Learning and Instruction (2002): 107-119.

—. "Animation as an aid to multimedia learning." Educational Psychology Review 14.1 (2002): 87-99.

—. "Nine ways to reduce cognitive load in multimedia learning." Educational Psychologist 38.1 (2003): 43-52.

Mayer, Richard E. Multimedia Learning. Cambridge, UK: Cambridge University Press, 2001.

—. "Multimedia learning: are we asking the right questions?" Educational Psychologist 32.1 (1997): 1-13.

—. "The promise of multimedia learning: using the same instructional design methods across different media." Learning and Instruction 13 (2003): 125-139.

Mayer, Richard E., Gayle T. Dow and Sarah Mayer. "Multimedia learning in an interactive self-explaining environment: what works in the design of agent-based microworlds?" Journal of Educational Psychology 95.4 (2003): 806-813.

Mayer, Richard E., Julie Heiser and Steve Lonn. "Cognitive constraints on multimedia learning: when presenting more material results in less understanding." Journal of Educational Psychology 93.1 (2001): 187-198.

McGurk, Harry and John MacDonald. "Hearing lips and seeing voices." Nature 264 (1976): 746-748.

Medina, John. Brain Rules: 12 Principles for Surviving and Thriving at Work, Home and School. Seattle: Pear Press, 2009.

Metiri Group. "Multimodal learning through media: what the research says." Cisco public information. Cisco Systems Inc., 2008.

Milgram, Paul, et al. "Augmented reality: a class of displays on the reality-virtuality continuum." SPIE (1994): 282-292.

Miller, G. A. "The magical number seven: some limits on our capacity for processing information." Psychological Review (1956): 81-97.

Mistry, Pranav and Pattie Maes. "SixthSense: a wearable gestural interface." ACM SIGGRAPH Asia 2009 Sketches. New York, NY: ACM, 2009. 11:1-11:1.

Mistry, Pranav. SixthSense - a wearable gestural interface (MIT Media Lab). 2010. 30 July 2012.

Moreno, Roxana and Richard E. Mayer. "A coherence effect in multimedia learning: case for minimizing irrelevant sounds in the design of multimedia instructional messages." Journal of Educational Psychology 92.1 (2000): 117-125.

—. "A learner-centered approach to multimedia explanations: deriving instructional design principles from cognitive theory." Interactive Multimedia Electronic Journal of Computer-Enhanced Learning. Wake Forest University, 2003.

—. "The cognitive principles of multimedia learning: the role of modality and contiguity." Journal of Educational Psychology 91.2 (1999): 358-368.

—. "Verbal redundancy in multimedia learning: when reading helps listening." Journal of Educational Psychology 94.1 (2002): 156-163.

Moreno, Roxana. "Interactive multimodal learning environments." Educational Psychology Review 19.3 (2007): 309-326.

—. "Who learns best with multimedia representations? The cognitive theory implications for individual differences in multimedia learning." ED-MEDIA 2002 World Conference on Educational Multimedia, Hypermedia, and Telecommunications. n.d.

Murray, Charles and Catherine Bly Cox. Apollo. Simon & Schuster, 1989.

NASA. Apollo Spinoffs. Ed. John Jones. 1 May 2011. NASA. 16 July 2012.

National Aeronautics and Space Administration. "Apollo Operations Handbook: Extravehicular Mobility Unit." Technical report. Manned Spacecraft Center, 1971.

Orloff, Richard W. and David M. Harland. Apollo: The Definitive Sourcebook. New York: Springer, 2006.

Ormrod, Jeanne Ellis. Human Learning. Boston: Prentice Hall, 2012.

Paas, Fred, Alexander Renkl and John Sweller. "Cognitive load theory and instructional design: recent developments." Educational Psychologist 38.1 (2003): 1-4.

Paivio, Allan. "Mental imagery in associative learning and memory." Psychological Review 76 (1969): 241-263.

Peterson, L. and M. Peterson. "Short-term retention of individual verbal items." Journal of Experimental Psychology 58 (1959): 193-198.

Psotka, Joseph. "Immersive training systems: virtual reality and education and training." Instructional Science 23 (1995): 405-431.

Reed, Stephen K. "Cognitive architectures for multimedia learning." Educational Psychologist (2006): 87-98.

Robles-De-La-Torre, Gabriel. "The importance of the sense of touch in virtual and real environments." IEEE MultiMedia 13.3 (2006): 24-30.

Rogers, Scott. Level Up!: The Guide to Great Video Game Design. Wiley, 2010.

Salen, Katie and Eric Zimmerman. Rules of Play: Game Design Fundamentals. Cambridge, MA: The MIT Press, 2004.

Sankey, Michael, Dawn Birch and Michael Gardiner. "Engaging students through multimodal learning environments: the journey continues." Proceedings ascilite. Sydney, 2010. 852-863.

Sawyer, R. Keith. The Cambridge Handbook of the Learning Sciences. New York: Cambridge University Press, 2006.

Schell, Jesse. The Art of Game Design: A Book of Lenses. Burlington, MA: Morgan Kaufmann Publishers, 2008.

Sherman, William R. and Alan B. Craig. Understanding Virtual Reality: Interface, Application, and Design. Morgan Kaufmann, 2002.

Shilling, Russell D. and Barbara Shinn-Cunningham. "Chapter 4: Virtual Auditory Displays." Stanney, K. Handbook of Virtual Environment Technology. Lawrence Erlbaum Associates, Inc., 2000.

Shinn-Cunningham, Barbara. "Applications of virtual auditory displays." Proceedings of the 20th International Conference of the IEEE Engineering in Medicine and Biology Society. 1998. 1105-1108.

Sicart, Miguel. "Defining Game Mechanics." Game Studies 8.2 (2008).

Siemens, George. "Connectivism: a learning theory for the digital age." International Journal of Instructional Technology and Distance Learning 2.1 (2005).

Smithwick, Mike. Apollo 11 PAO radio record. 2001. 2012.

Squire, Kurt D. "Video games and education: designing learning systems for an interactive age." Educational Technology Magazine: The Magazine for Managers of Change in Education 28.2 (2008): 17-26.

Stapleton, Christopher, et al. "Applying Mixed Reality to Entertainment." Computer 35.12 (2002): 122-124.

Steuer, Jonathan. "Defining virtual reality: dimensions determining telepresence." Journal of Communication 42 (1992): 73-93.

Summerfield, Quentin. "Lipreading and audio-visual speech perception." Philosophical Transactions: Biological Sciences 335.1273 (1992): 71-78.

Sweller, John. "Cognitive load during problem solving: effects on learning." Cognitive Science 12.2 (1988): 257-285.

—. "Evolution of human cognitive architecture." The Psychology of Learning and Motivation (2003): 215-266.

—. "Instructional design consequences of an analogy between evolution by natural selection and human cognitive architecture." Instructional Science 32 (2004): 9-31.

Tarr, Michael J. and William H. Warren. "Virtual reality in behavioral neuroscience and beyond." Nature Neuroscience 5 (2002): 1089-1092.

The Boeing Company, LRV Systems Engineering. "Lunar Roving Vehicle Operations Handbook." Technical report. 1971.

To the Moon. Dir. Alan Ritsko. NOVA. 2000.

Tracy, Ryan. E-Learning Provocateur. Seattle: CreateSpace, 2011.

Unity Technologies. Car Tutorial by Unity Technologies -- Unity Asset Store. 2012. Feb 2012.

Witmer, Bob G. and Michael J. Singer. "Measuring presence in virtual environments: a presence questionnaire." Presence 7.3 (1998): 225-240.

Wolfe, Tom. The Right Stuff. Farrar, Straus and Giroux, 1979.

Yong, Amanda. Spacesuits: The Smithsonian National Air and Space Museum Collection. Brooklyn: PowerHouse Books, 2009.

Youngblut, Christine. "Educational uses of virtual reality technology." Institute for Defense Analyses, Alexandria, VA, 1998.

Youngblut, Christine, et al. "Review of virtual environment interface technology." IDA Paper P-3186. Alexandria: Institute for Defense Analyses (IDA), March 1996.


Appendix A: A Survey on the Content of The Moon Experience


Appendix B: The narratives of The Moon Experience

Settings:

Where: Taurus-Littrow Valley (Apollo 17 landing site)

Who: Astronaut Jack Cernan (named after astronaut "Jack" Harrison Schmitt and Gene Cernan), who is in the virtual lunar world.

Astronaut Guest – a participant who enters the lunar virtual world when s/he puts on a head mounted display (HMD).

Scenes:

Scene I – “Hello, welcome to the Moon”

When astronaut G first enters the lunar world as s/he puts on the HMD, s/he should see various lunar objects (such as the ground, craters, mountains, the Sun, the Earth, the lunar module, and the moon buggy) and Jack moving around or working on something

……

When Jack notices G, he waves to G, says hello and asks G’s name.

Jack: "Hello, welcome to the Moon. I'm Jack Cernan, your commander for your Moon exploration. What's your name?"

Wait for G’s reply ……


Jack: “I bet this is your first time on the Moon, isn’t it?”

G’s reply:

Jack: “Right now, we are at the Apollo 17 landing site - Tauras-Littrow Valley.

Taurus-Littrow is a narrow valley in the Montes Taurus mountains, which form the rim of the Serenitatis impact basin. You may want to have a look at your surroundings. We’re a team so stay close by and watch out for craters.”

Jack: “Be careful when you walk. Lunar gravity is about 83% weaker than on the

Earth. It takes a little practice to keep your balance.”

While talking, he tumbles to the ground.

Jack: “See what I mean? Lead by example, I always say!”

Jack manages to get up and continues his work.

Jack: “After I wrap up the work at hand, I’ll show you around.”

Jack: “Wait, you know what? If you see a rock, give it a kick to see how the lunar gravity affects it.”

Astronaut G walks around, leaving his/her footprints. When s/he sees a rock (ranging in size from a grapefruit to a basketball), s/he kicks it. The rock rolls ……
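The rock kick is the script’s first physics interaction: under lunar gravity of about 1.62 m/s² (roughly one sixth of Earth’s 9.81 m/s², hence Jack’s “83% weaker”), a kicked rock arcs farther and settles more slowly than intuition expects. Below is a minimal sketch of how such an interaction could be scripted in Unity, the engine the thesis’s own bibliography points to; the class and parameter names are illustrative assumptions, not identifiers from the actual project.

using UnityEngine;

// Hedged sketch of the Scene I rock kick; "KickableRock", kickForce, and
// kickRange are hypothetical names, not identifiers from the project.
public class KickableRock : MonoBehaviour
{
    public float kickForce = 4f;   // impulse magnitude, hand-tuned
    public float kickRange = 1.5f; // how close the guest's foot must be (meters)

    void Start()
    {
        // Lunar gravity is about 1.62 m/s^2, one sixth of Earth's 9.81.
        // In a full project this would live in a scene-setup script
        // rather than on each individual rock.
        Physics.gravity = new Vector3(0f, -1.62f, 0f);
    }

    // Called by the player controller when G swings a foot at the rock.
    public void TryKick(Vector3 footPosition, Vector3 kickDirection)
    {
        if (Vector3.Distance(footPosition, transform.position) > kickRange)
            return; // the kick misses

        // One-shot impulse; under one-sixth g the rock travels farther and
        // takes noticeably longer to come to rest than it would on Earth.
        GetComponent<Rigidbody>().AddForce(
            kickDirection.normalized * kickForce, ForceMode.Impulse);
    }
}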

Scene II – “Houston, we have a problem”

While G is walking around and waiting for Jack, suddenly a warning sound is heard from the intercom embedded in G’s helmet, indicating that something is wrong with G’s EMU (Extravehicular Mobility Unit).


See the details on page 2-95 of the EMU Operations Handbook (http://www.hq.nasa.gov/alsj/alsj-EMU1.pdf).

G sounds panicked and may yell something.

The alarm and G’s yelling get Jack’s attention. He hops quickly toward G while reporting to the command center in Houston.

Jack: “Houston, we have a problem. It seems something wrong is with my guest’s

EMU. I’m moving toward the guest, standby.”

Jack: “Calm down. Don’t panic. I’m coming to help you”.

Jack approaches G, arrives, and starts to check G’s EMU.

Jack: “Houston, the guest’s LOW VENT FLOW is on. I’m going to check other components of the EMU.”

Jack: “Stand still, let me check your backpack.” Jack moves behind G.

Jack: “Houston, the guest’s OPS and PLSS work fine”.

Huston: “Jack, first turn off the cycle fan. Hopefully, the flag will clear after 10 seconds. Otherwise, you will need to activate OPS”.

Jack: “Roger, I’m doing it. Now the cycle fan is off”.

After several seconds, the LOW VENT FLOW flag clears.

Jack: “Houston, the LOW VENT FLOW flag is cleared.”

Huston: “Jack, please check the other three displays on your guest’s RCU.”

Jack checks them and replies: “Houston, the three displays are all good, over.”


Huston: “Copy that, we also checked the data in the other channels from the

EMU. Everything is normal, over.”

Jack: “Your Extravehicular Mobility Unit works fine now. Are you ready to move around? We can explore by driving the moon buggy. ”

Scene III – Driving the Moon Buggy

Both Jack and G walk toward the moon buggy.

Two options: Jack drives the buggy or G drives it.

Jack: “Come on, hop on.” G gets on the buggy. Jack also hops on it.

Jack: “Ready? Let’s go!” Jack starts to drive.

Jack: “The top speed of the buggy is about 8 miles an hour.”

Jack: “Push the T-handle forward to increase the forward speed.”

Jack: “Move the T-handle left or right to steer.”

Jack: “Pull back the T-handle to brake. Pull all the way back to engage the parking brake.”

Jack: “You got it?”

G replies. Jack may repeat the instructions. While Jack is driving, G may look around.

Jack: “Want to try it?”

G drives the buggy and Jack sits in it as a passenger. When G looks down at the LRV’s control console, the detail of the LRV hand controller should be shown.

G may ask questions about driving the moon buggy. Jack replies.
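Jack’s driving lesson specifies a complete control mapping for the LRV’s T-handle: push forward to accelerate toward the roughly 8 mph (about 3.6 m/s) top speed, move it left or right to steer, pull back to brake, and pull all the way back for the parking brake. A rough Unity sketch of that mapping follows; the input axis names and tuning constants are stated assumptions, not values from the project.

using UnityEngine;

// Hedged sketch of the T-handle mapping Jack teaches in Scene III.
// "LrvController", the axis names, and the tuning constants are assumed;
// only the control scheme and the ~8 mph cap come from the script above.
public class LrvController : MonoBehaviour
{
    const float TopSpeed = 3.6f;      // ~8 miles per hour in meters/second
    public float acceleration = 1.5f; // m/s^2, tuned for a leisurely drive
    public float turnRate = 40f;      // degrees/second at full deflection
    float speed;
    bool parkingBrake;

    void Update()
    {
        float pitch = Input.GetAxis("Vertical");   // push/pull the T-handle
        float tilt  = Input.GetAxis("Horizontal"); // move it left or right

        if (pitch > 0f && !parkingBrake)
        {
            // "Push the T-handle forward to increase the forward speed."
            speed = Mathf.Min(speed + pitch * acceleration * Time.deltaTime, TopSpeed);
        }
        else if (pitch < 0f)
        {
            // "Pull back the T-handle to brake." Braking is stronger than
            // accelerating; pulling all the way back engages the parking
            // brake once stopped (releasing it is omitted from this sketch).
            speed = Mathf.Max(speed + pitch * 3f * acceleration * Time.deltaTime, 0f);
            if (pitch <= -0.95f && speed == 0f) parkingBrake = true;
        }

        // "Move the T-handle left or right to steer."
        transform.Rotate(0f, tilt * turnRate * Time.deltaTime, 0f);
        transform.Translate(Vector3.forward * speed * Time.deltaTime);
    }
}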

Scene IV – It’s time to say good-bye!

Houston: “Jack, you have a new guest ready to explore. Over.”

Jack: “Roger, Houston. We’re on our way back. Over.”

Jack: “It’s time for you to go home.”

Jack: “I hope you enjoyed the tour.”

Jack: “Thank you for coming.”

Jack’s additional lines:

Sure.

Yes.

No.

Where are you going?

This way!

Follow me.

Come over here.

Look over here.

Go left.

Go right.

Go straight.

Good job!

Fantastic!


Good idea.

That’s a good idea.

That’s not a good idea.

Don’t do that.

It could be dangerous.

How are you doing there?

Are you doing okay?

We have limited time.

Let’s keep it simple.

Can you ask me a yes-no question?

Goodbye!
