Mälardalen University Press Licentiate Theses No. 181

TOWARDS IMMERSIVE MOTION CAPTURE ACTING

DESIGN, EXPLORATION AND DEVELOPMENT OF AN AUGMENTED SYSTEM SOLUTION

Daniel Kade

2014

School of Innovation, Design and Engineering

Abstract

Current and future animations seek realistic motions to create a perception of authentic and human-like animation. A technology widely used for this purpose is motion capture: motion capture actors enrich the movements of digital avatars with realistic and believable motions and emotions. Acting for motion capture, as it is performed today, does not provide a natural acting environment. This is mostly because motion capture actors cannot see and feel the virtual environment they act for while acting. In many cases this results in unnatural motions, such as stiff-looking and emotionless movements.

To investigate ways to solve this, we first identify the challenges actors are facing, as well as concepts to support a motion capture actor. Furthermore, we discuss how the task of supporting motion capture actors was approached and which factors were discovered to provide support when designing and implementing a solution. Initial prototypes have been created to address the mentioned issues and to find suitable solutions to support and immerse motion capture actors during their performance. One goal of this thesis was to conduct research focusing on the question: What are the experiential qualities of immersion in an interactive system to create an immersive acting environment that supports motion capture actors?

The developed application provides the flexibility to set up and modify digital assets and scenes quickly, with an easy-to-use interface. Furthermore, the prototype helps to provide an understanding of how hardware and software prototypes can be designed and used to build an immersive motion capture environment. The built prototype makes it possible to investigate user experiences, user satisfaction and their effects on motion capture acting through user tests.

Copyright © Daniel Kade, 2014
ISBN 978-91-7485-161-8
ISSN 1651-9256
Printed by Arkitektkopia, Västerås, Sweden


Swedish Summary / Sammanfattning

Inom dagens och framtidens animation eftersträvas realistiska rörelser för att ge betraktaren en uppfattning om autentiska och människoliknande animeringar. Motion capture används i stor utsträckning för detta ändamål. Motion capture-skådespelare ger digitala avatarer realistiska och trovärdiga rörelser och känslor. Dessvärre erbjuder inte nuvarande motion capture-teknik en naturlig skådespelarmiljö. Det är främst på grund av att skådespelare inte ser eller på annat sätt uppfattar den virtuella miljön de agerar i. Det här leder i många fall till onaturliga rörelser.

För att undersöka lösningar till detta problem identifierar vi först de utmaningar som skådespelarna ställs inför samt vilka stöd som finns för motion capture-skådespelare. Vidare diskuteras hur det är möjligt att stödja motion capture-skådespelare och vilka faktorer som måste beaktas när en ändamålsenlig lösning designas och implementeras. Efter detta skapar vi de första prototyperna för att ta itu med utmaningarna runt motion capture och för att finna lämpliga lösningar för att stödja och skapa immersion för motion capture-skådespelare.

Ett av avhandlingens mål har varit att fokusera på frågan: Vilka upplevelsekvaliteter runt immersion för ett interaktivt system skapar en immersiv miljö som stödjer motion capture-skådespelare?

Den interaktiva forskningsprototypen ger en flexibilitet och ett lättanvänt användargränssnitt för att ändra i scener och digitala objekt. Forskningsprototypen bidrar också till förståelse för hur hårdvara och mjukvara kan utformas och användas för att bygga en immersiv motion capture-miljö. Prototypen möjliggör experiment, användartester och undersökningar av användarupplevelse, användartillfredsställelse och dess effekter på motion capture-skådespeleri.


German Summary / Zusammenfassung

Aktuelle und zukünftige Animationen streben nach realistischen Bewegungen, um eine Wahrnehmung von authentischen und menschenähnlichen Animationen zu erstellen. Eine Technologie, die häufig für solche Zwecke verwendet wird, ist Motion Capture. Deswegen bereichern Motion-Capture-Schauspieler die Bewegungen der digitalen Avatare mit realistischen und glaubwürdigen Bewegungen und Emotionen, um solche menschenähnlichen Animationen zu erstellen. Schauspiel für Motion Capture, wie es heute durchgeführt wird, bietet keine natürlich wirkende Schauspielumgebung. Dies beruht hauptsächlich darauf, dass Motion-Capture-Schauspieler die virtuelle Umgebung, in der sie spielen, nicht während des Schauspiels sehen und fühlen können. In vielen Fällen kann dies dann zu unnatürlichen Bewegungen führen.

Um Lösungswege zu finden, haben wir zunächst die Herausforderungen, mit denen Schauspieler konfrontiert sind, sowie die Bedürfnisse und Anforderungen an einen Motion-Capture-Schauspieler identifiziert. Dann argumentieren wir, wie die Unterstützung von Motion-Capture-Schauspielern möglich ist und welche Faktoren bei der Konzeption und Implementierung einer Lösung berücksichtigt werden müssen. Danach wurden erste Prototypen erstellt, um die genannten Probleme zu adressieren und geeignete Lösungen zu finden, um Motion-Capture-Schauspieler auch während des Schauspiels zu unterstützen und diese in die Umgebungen des Aktes zu vertiefen. Ein Ziel dieser Licentiate-Arbeit war es, Forschung bezüglich folgender Frage durchzuführen: Was sind die Erlebnisqualitäten der Immersion an ein interaktives System, um eine immersive Umgebung zu erstellen, welche Motion-Capture-Schauspieler unterstützt?


Die entwickelte Applikation bietet Flexibilität bei der schnellen Einrichtung und Modifizierung von digitalen Objekten und Szenen durch eine einfach zu bedienende Oberfläche. Des Weiteren hilft die Applikation, ein Verständnis zu entwickeln, welche Hardware- und Software-Prototypen entwickelt und benutzt werden können, mit denen eine immersive Motion-Capture-Schauspielumgebung aufgebaut werden kann. Der entwickelte Prototyp ermöglicht es, Nutzererfahrungen, Nutzertests und die Zufriedenheit der Nutzer und deren Effekt im Motion-Capture-Schauspiel zu erforschen.

Acknowledgements

At this point I would like to express my sincere gratitude to my supervisors, Professor Dr. Oğuzhan Özcan, Dr. Rikard Lindell and Professor Dr. Dr. Gordana Dodig-Crnkovic, who have guided and encouraged me throughout my studies. I have learned a lot from their advice, feedback and guidance. Moreover, I would like to thank Oğuzhan for his efforts in providing the opportunity to study abroad and to participate in a larger research project. A special thanks also goes to the people from Imagination Studios, who provided support and a motion capture studio. Another thank you goes to Kaan Akşit for the good research cooperation, the good fellowship during research exchanges and the jointly achieved results.

Furthermore, I would like to thank all students who contributed with their implementations to my research work. Claudio Redavid implemented an initial design concept idea, providing pseudo 360-degree vision on four screens to an actor, during his master thesis. Robert Gustavsson and David Reypka developed software to map motion capture data in real time to the UDK game engine. Patrick Sjöö coded the walk algorithms and the camera and sensor usage of a mobile phone that we used for our proof-of-concept implementation.

I would also like to thank all teachers, administrative staff, colleagues and friends at the department, the company and during my research stays, who have helped me to evolve as a researcher.

This work has been supported by the Swedish Knowledge Foundation (KKS), Mälardalen University and Imagination Studios within the PhD school ITS-EASY. This support has made it possible for me to be an industrial PhD student, so I would also like to express my gratitude for that. It has been an interesting and fun time for me in which I learned a lot.

Daniel Kade
Västerås, October 2014



List of Publications

Papers Included in the Licentiate Thesis¹

Paper A An Immersive Motion Capture Environment. Daniel Kade, Oğuzhan Özcan and Rikard Lindell. Proceedings of the ICCGMAT 2013 International Conference on Computer Games, Multimedia and Allied Technology, vol. 73, WASET, Zurich, pp. 500-506. Available from: World Academy of Science, Engineering and Technology, January 2013.

Paper B Towards Stanislavski-based Principles for Motion Capture Acting in Animation and Computer Games. Daniel Kade, Oğuzhan Özcan and Rikard Lindell. CONFIA 2013 International Conference in Illustration & Animation, Porto, November 2013.

Paper C Ethics of Virtual Reality Applications in Computer Games Production. Daniel Kade. IACAP 2014 International Association of Computers and Philosophy, Thessaloniki, Greece, July 2014. (Paper accepted)

Paper D Head-worn Mixed Reality Projection Display Application. Kaan Akşit, Daniel Kade, Oğuzhan Özcan and Hakan Ürey. ACE 2014, Advances in Computer Entertainment Technology, November 11-14th 2014, Funchal, Portugal. (Paper accepted)

¹ The included articles have been reformatted to comply with the licentiate thesis layout.


Term Descriptions

Interaction Design

Actor We consider an 'actor' to be someone who acts in a motion capture shoot. This 'actor' can have different skills or training, but does not necessarily have acting training of some sort.

Application Here we mean software that is designed to help perform specific tasks.

Stakeholder Stakeholders are persons involved in, or to be considered within, the design of a solution. In our research these are mainly actors, directors, motion capture staff and researchers.

System A system combines software and hardware with their interactions and components to act as a whole.

Tester A tester is someone who tests our prototypes, designs and systems. A tester can, but does not need to, have specific skills.

User A user is someone who uses or tests our prototypes, designs and systems. A user can, but does not need to, have specific skills.


Computer Science

AR Augmented Reality: An augmented reality superimposes digitally created content onto the real world. This allows a person using AR to see both the real and the digital world. Usually AR uses some means of connecting real-world objects with the augmented reality.

AV Augmented Virtuality: Augmented virtuality aims to include and merge real-world objects into the virtual world. Here, physical objects or people can interact with, or be seen as, themselves or digital representations in the virtual environment in real time.

Avatar An avatar is usually the representation of a person or player in a digital or virtual environment. In our definition an avatar is equal to a digital character.

DOF Degree-of-freedom: We use the term from computer graphics and animation to describe possible movements and rotations around different axes.

HD High Definition

HDMI High-Definition Multimedia Interface

LED Light Emitting Diode

MHL Mobile High-Definition Link

MR Mixed Reality: Mixed reality merges the real world with the virtual world. This can be achieved with different means and technologies for different purposes. The Milgram Taxonomy [1] uses MR as an umbrella term that includes VR, AR, Augmented Virtuality (AV) and the real world.

VR Virtual Reality: A virtual reality creates a reality that is not real. The vision of a person is in many cases occluded by digitally created content or the person is placed in a virtual reality environment. Synonyms are: virtual world, virtual environment, and cyberspace.

Contents

I Thesis

1 Introduction
1.1 Introduction
1.2 Motivation
1.3 State-of-the-Art

2 Research Description
2.1 Research Questions
2.2 Proposed Solution to the Research Problem
2.3 Identified Scientific Challenges
2.4 Research Methods
2.5 Expected Results

3 Thesis Contributions

4 Approach, Prototype and Findings
4.1 From Vision to Prototype (Part I)
4.2 Technology Review
4.2.1 Why not VR?
4.2.2 Projection Mapping
4.2.3 Augmented Reality
4.2.4 Mixed Augmented Reality (AR)
4.3 Personas
4.4 Improved Motion Capture Process
4.5 From Vision to Prototype (Part II)
4.6 Initial Design Concepts
4.6.1 Component Based Setups


4.7 From Vision to Prototype (Part III)
4.8 Initial Prototypes
4.8.1 Screens around an actor
4.8.2 Using a motion capture system
4.8.3 Portable screen
4.8.4 Using a Pico projector
4.8.5 Design Decisions
4.9 Proof-of-Concept Prototype
4.9.1 Technological Categorization of our Prototype
4.9.2 Hardware
4.9.3 Software
4.9.4 Use of Reflective Materials in Optical Motion Capture?
4.9.5 User Experiences

5 Conclusions
5.1 Conclusion
5.2 Future Work

Bibliography

II Included Papers

6 Paper A: An Immersive Motion Capture Environment
6.1 Introduction
6.2 Current State-of-the-Art
6.2.1 Visualization
6.2.2 Tracking
6.2.3 Interaction
6.2.4 Combined Research Areas
6.3 Conducted Research
6.4 Findings
6.5 Open Issues
6.6 Future Solution
6.7 Conclusion
Bibliography

7 Paper B: Towards Stanislavski-based Principles for Motion Capture Acting in Animation and Computer Games
7.1 Introduction
7.2 What is acting?
7.3 Which principles should we support in motion capture?
7.4 What is the nature of Motion Capture Actors?
7.5 Do we need to adapt major acting techniques in motion capture actor training?
7.6 What motion capture actors think and need?
7.7 How to improve current motion capture structures?
7.8 Conclusion
Bibliography

8 Paper C: Ethics of Virtual Reality Applications in Computer Games Production
8.1 Introduction
8.2 State-of-the-Art
8.2.1 Ethics in Computer Games
8.2.2 Ethics in Virtual Realities
8.2.3 Ethics in Acting
8.3 Possible Ethical Issues Within Motion Capture
8.4 Ethical Analysis
8.5 Discussion Towards an Ethical Guideline
8.6 Conclusion
Bibliography

9 Paper D: Head-worn Mixed Reality Projection Display Application
9.1 Introduction
9.2 State-of-the-Art
9.3 Head-worn Projection Display
9.3.1 Hardware Description
9.3.2 Software Description
9.4 Application of the Prototype in Motion Capture
9.5 Functionality Test
9.6 Conclusion
9.7 Future Improvements


Bibliography

I

Thesis



Chapter 1

Introduction

1.1 Introduction

Today's video games are becoming more and more realistic [2], not only because of hardware and software innovations but also because of the use of highly realistic animations of humans, animals, objects and environments in these games. In many cases, cinematic elements shown during game-play almost feel like watching a movie.

It is to a large part thanks to motion capture technology that we perceive motions in a gaming environment as more realistic than in older video games, which did not use this technology. To create this sense of realism, human motions are recorded from skilled performers and then mapped to virtual avatars. Motion capture actors play an important role when creating these realistic motions and performances. Therefore, actors, stuntmen and athletes perform recordings of motions to bring virtual characters closer to realism.

Motion capture technology is used for various applications, from medical to training and entertainment purposes. In this work we focus especially on motion capture for animations, mainly used in computer games. Movements captured from human actors are mapped to virtual avatars. The avatars then perform these movements during gameplay or in short video clips during the transition between parts of the game, so-called in-game cut scenes.

This research started with the goal to explore solutions to support a motion capture studio in their daily work. Quick responses to customer and actor needs, scenery changes, as well as the need to improve their motion capture procedures started our research investigations.
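The mapping step described above can be sketched in a few lines: per-frame joint rotations recorded from a performer's skeleton are copied onto the corresponding joints of a virtual avatar. This is a minimal illustrative sketch, not the pipeline used in the thesis; all class names, joint names and the bone map are assumptions made up for the example.

```python
# Illustrative sketch of motion capture retargeting: copy one frame of
# recorded joint rotations from a performer rig onto an avatar rig.
from dataclasses import dataclass, field


@dataclass
class Joint:
    name: str
    rotation: tuple = (0.0, 0.0, 0.0)  # Euler angles in degrees (x, y, z)


@dataclass
class Skeleton:
    joints: dict = field(default_factory=dict)

    def add(self, name: str) -> None:
        self.joints[name] = Joint(name)


def retarget_frame(capture: Skeleton, avatar: Skeleton, bone_map: dict) -> None:
    """Copy one captured frame onto the avatar.

    bone_map translates performer joint names to avatar joint names,
    since the two rigs rarely share a naming convention.
    """
    for src_name, dst_name in bone_map.items():
        if src_name in capture.joints and dst_name in avatar.joints:
            avatar.joints[dst_name].rotation = capture.joints[src_name].rotation


# Minimal usage: one captured frame drives the avatar's elbow.
performer = Skeleton()
performer.add("LeftForeArm")
performer.joints["LeftForeArm"].rotation = (0.0, 45.0, 10.0)

avatar = Skeleton()
avatar.add("elbow_L")

retarget_frame(performer, avatar, {"LeftForeArm": "elbow_L"})
print(avatar.joints["elbow_L"].rotation)  # (0.0, 45.0, 10.0)
```

A real system would run this per frame for a full joint hierarchy and account for differing bone lengths and rest poses, but the name-mapped copy of rotations is the core idea.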

3 Chapter 1

Introduction

1.1 Introduction

Todays video games are becoming more and more realistic [2], not only be- cause of hardware and software innovations but also because of the use of highly realistic animations of humans, animals, objects and environments in these games. In many cases, cinematic elements, shown during game-play al- most feel like watching a movie. This is to a large part thanks to motion capture technology that we per- ceive motions in a gaming environment as more realistic than in older video games which did not use this technology. To create this sense of realism, hu- man motions are recorded from skilled performers and then mapped to virtual avatars. Motion capture actors play an important role when creating these re- alistic motions and performances. Therefore, actors, stuntmen and athletes perform recordings of motions to bring virtual characters closer to realism. Motion capture technology is used for various applications from medical to training and entertainment purposes. In this work we focus especially on mo- tion capture for animations, mainly used in computer games. Captured move- ments from human actors are used to map these movements to virtual avatars. The avatars then perform these movements during gameplay or in short video clips during the transition of parts of the game, so called in-game cut scenes. This research started with the goal to explore solutions to support a motion capture studio with their daily work. Quick responses to customer and actor needs, scenery changes, as well as the need to improve their motion capture procedures started our research investigations.


Through observations and interviews with 18 actors, 10 directors and 4 motion capture operators, we came to the conclusion that the current acting environment in a motion capture studio does not provide a natural acting environment for an actor, especially when compared to stage or film acting. This is because the virtual environment the actors are acting for is neither visible nor perceptible while acting. Objects, obstacles, other virtual characters or events have to be memorized and are not yet visualized in real-time without requiring the actor to turn towards a screen while acting. In a film acting environment, an actor is placed in a real-world environment and is surrounded by proper physical objects as well as real persons in appropriate costumes. Even surrounding conditions like environmental sounds, smoke, rain and similar can be created, which allows actors to concentrate more on the acting itself. For current motion capture shoots, an actor has to memorize the scenario, build the character, imagine the environment and do the acting, sometimes with only a brief preparation time. Through our observations and questioning we experienced that motion capture performances are very much dependent on the actors' capability to imagine the scenery and to put themselves in the desired role and mood quickly [3].

With this research we aim at creating a better motion capture environment to support actors with their task of delivering realistic and believable performances. Yet to achieve this, we first needed to create an understanding of the motion capture actor, as well as the needs and wants of an actor. Also, outlining the challenges a motion capture actor is facing in current motion capture shoots was important in order to find a basis to design innovative solutions that solve the identified issues and support actors with their work.

Understanding the users of an immersive motion capture system has been a major task before implementing any prototypical solutions. This complies with the fact that for this thesis, approaches from the fields of computing sciences as well as interaction design have been merged to create and explore solutions towards an immersive motion capture acting environment. Therefore, we also discussed the nature of a motion capture actor, pointed out skills and demands, and developed principles to support motion capture actors with their work. Prototypes have been developed to test the applicability of the solutions. Further developed prototypes are planned to be introduced into the research and development cycles. By constructing an immersive environment, our vision is that actors should be capable of perceiving a virtual environment, visually and emotionally, while acting in a way that does not hinder them in their task to act for motion capture shoots.

As literature studies have shown, there seems to be no out-of-the-box solution that serves all demands, especially not for an application in a motion capture environment. For this thesis we intend to explore how an immersive motion capture environment can be created and what needs to be considered when developing such a system. Working prototypes are meant to show the potential of overcoming the identified issues. Designing prototypical solutions, as well as exploring these designs through constructed prototypes, is meant to show and evaluate this potential.

1.2 Motivation

The industrial motivation for the research presented in this thesis is how to create an acting environment that enables actors to perform more efficiently and naturally. There is also a need for faster motion capture outcomes with improved quality.

For the academic motivation of this research, the areas of interaction design and computation are of main interest. Here, the interest lies in understanding the modalities of interaction between users and interactive artefacts designed for a motion capture environment, as well as having a guideline on how interactive systems can be developed and identifying what the user experiences in the specific area of motion capture are.

My motivation for this research is built on the above-mentioned interests and is also driven by the interest to create immersive virtual reality environments. Furthermore, a personal interest in computer games and the desire to find smart solutions to improve their quality drive this research. Therefore, my motivation can also be described as the interest to gain knowledge on how immersive environments for gaming and motion capture can be realized.

The overall motivation of this thesis results in a vision. This vision can be described by saying that:

Actors will perceive the virtual environment they are acting in, visually and emotionally, through the design of an immersive environment.

A combination of technologies that address multiple human senses could be of use to create such an environment, namely: vision, sound, smell, the feeling of touch and haptic feedback. Nevertheless, we limited the focus for this thesis to visual prototypes.


1.3 State-of-the-Art

Designing and implementing an immersive motion capture environment implies different research areas, like motion capture acquisition and tracking, research considering the visualization of virtual content while acting, as well as creating the experience with hardware and software. From the literature, there are technologies and solutions that can be of use to work towards the goal of creating a more immersive acting environment [4, 5, 6, 7]. Some related research in similar research areas, or research that we see as potentially useful, is listed below.

Acting in a virtual environment

There have been research projects exploring the use of virtual reality to support actors with their work. In one research project, acting training and improving the final performance were approached through sensory-motor rhythm neurofeedback training. Actors were exposed and trained to different lighting conditions, reactions of the audience, and the look of the theatre from stage [8].

Moreover, acting support through a virtual environment was researched as a distance rehearsal system. This system was used to study to which extent virtual reality can be used by actors and directors to rehearse their performances without being physically present [9].

Furthermore, research and explorations within the areas of 'cyberdrama' [10], 'digital theatre' [11] and 'narrative in cyberspace' [12] have been conducted. In these forms of acting, participants create the story through active engagement and interactions between technology and participants. According to other research, this interactivity can be grouped into 'Navigation', 'Participation', 'Conversation' and 'Collaboration', which allows participants to steer the play and get involved before the final performance [13].

Other research explained the setup of a completely virtual theatre where actors steer a virtual character in real-time from their computers with data gloves and keyboard inputs, and the audience can listen and react to the performance and the theatre by choosing a seat in the theatre and by applauding and booing [14].

In other research, motion capture techniques have even been used during a live theatrical performance [7]. On-stage actors were interacting with digital avatars that were controlled by actors wearing motion capture suits throughout the theatrical act. A screen installed in front of the on-stage actors displayed the avatars controlled by the motion capture actors, who performed their acting in real-time in a close-by motion capture area. Virtual scenery in the context of each act was displayed on a background screen, visible to the audience. In the mentioned research, the displayed scenery and avatar content around the on-stage actors was used to interact with their acting.

The above-mentioned research projects show that theatre and acting adapt to the digital age and integrate not only technology but also the audience and other participants during the play, preparations and rehearsals. Nonetheless, less research focuses on acting for motion capture, or on acting within a virtual environment in the sense of physically being in the virtual environment, where the environment is seen as digital acting support while acting.

Designing immersive environments

Designing an immersive environment involves two basic concepts, 'Immersion' and 'Presence', which have already been explored and defined [15]. 'Immersion' provides the functionality and usability of an immersive environment and provides the opportunities to be immersed into the virtual environment. 'Presence' creates the perception of being in the virtual environment, where 'Presence' can be seen as an "increasing function of immersion" [15].

Through the perception of 'Presence', it is believed that behaviours in the virtual world are consistent with behaviours in the real world under similar circumstances [15]. It was furthermore mentioned that 'Presence' could influence the performance of a person using a virtual environment. This statement was made to point out that a well-designed interface and hardware can increase 'Presence' and provide a "greater vividness in terms of richness of the portrayed environment", which improves task performance [15]. Moreover, it is mentioned that 'Presence' is of importance to train e.g. fire-fighters or surgeons within a virtual environment corresponding to behaviours in the real world. Therefore, a virtual environment must be well designed and must allow for immersion and the feeling of being in the environment.

Another interesting statement is that the impact of the feeling of immersion is dependent on "the application or task context and the perceptual requirements of the individual" [15]. Basically, this means that different aspects need to be considered when creating an immersive environment. First, it is important to create the application according to the task to be performed, so the virtual environment should focus on the most important aspects that the real world would present in a similar situation. When a motion capture actor is supposed to act as a musician, aural feedback might be more important to create an immersive feeling. This might differ when visual aspects are more important, e.g. when acting on a futuristic space ship using futuristic technologies. On other occasions, a mixture of a visual and aural environment might allow to increase immersion; we could think of acting for a war zone where audiovisual scenery and effects could provide the feeling of presence.
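This task-dependent weighting of sensory channels can be sketched as a tiny configuration example. Everything in it (the scenario names, the weights and the threshold) is a hypothetical illustration of the design consideration, not data from this thesis or from the cited work.

```python
# Hypothetical per-scenario emphasis of sensory channels; the numbers are
# illustrative design weights, not measured values.
SCENARIO_MODALITIES = {
    "musician":  {"audio": 0.7, "visual": 0.2, "haptic": 0.1},
    "spaceship": {"visual": 0.7, "audio": 0.2, "haptic": 0.1},
    "war_zone":  {"visual": 0.45, "audio": 0.45, "haptic": 0.1},
}

def dominant_modalities(scenario, threshold=0.4):
    """Return the sensory channels a designer would emphasise for a scenario."""
    weights = SCENARIO_MODALITIES[scenario]
    return sorted(m for m, w in weights.items() if w >= threshold)

print(dominant_modalities("musician"))   # audio-first for the musician scene
print(dominant_modalities("war_zone"))   # combined audiovisual emphasis
```

The point of the sketch is only that an immersive acting environment would pick different dominant channels per scene, rather than one fixed configuration.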

Secondly, one needs to consider that individuals are different and react to audiovisual stimulation differently. Therefore, we need to design a virtual environment that allows creating different audiovisual scenarios, and conduct research to find out which virtual environments we can build that support motion capture actors.

Other research in the area of presence in virtual environments states that presence can only be supported when the technologies or devices used to create the virtual environment feel non-existent to the users [16]. This means that when creating an immersive virtual environment for motion capture acting, one needs to consider designing solutions that do not limit the actors' freedom of movement and are not uncomfortable to wear.

Considering the elements of immersion and presence in the designs for our immersive environments, as well as testing and evaluating them, must be of importance. It has to be researched how this can be applied and created for immersive environments as acting support in motion capture.

Interaction

We consider designing useful interaction scenarios for an immersive motion capture system as important, and believe that research on interaction and perception in virtual environments could deliver important insights for interacting within an immersive motion capture system. In this respect, some research on body-centered interaction in immersive virtual environments provides such insights and shows how interaction and navigation in such virtual environments might be possible [17]. This body of research was done for other purposes but might be adaptable to develop more immersive environments for game-based motion capture shoots. It still needs to be answered how more natural interactions between actors and the virtual environment can be achieved during motion capture shoots when using these techniques. Furthermore, it would need to be tested how this might work for multiple performers.

In a virtual environment, the action and reaction of virtual personas and objects is in most cases fairly primitive. In some research, characters in virtual environments have been trained to react to predicted and unpredicted events in order to maintain realism. The approach to solving the problem of motion synthesis for interactive, humanlike characters was to combine dynamic simulations and human motion capture data [18]. It needs to be investigated how and if the virtual content in a motion capture environment can be used to steer the actors while acting. Other than in the above-mentioned research, the actors do not directly steer or interact with virtual content; they rather use virtual and mediated objects like real objects.

Developing a virtual environment

At this point, we assume that making the virtual content visible yields the greatest positive result in helping an actor perform. This is why we focus on exploring ways to provide vision first, before tackling other senses. Several applications that offer such an ability are already available.

A mixed reality environment, which could possibly be used in a motion capture application, is used in many industries, for example for military-based training environments. In this respect, some research projects are capable of providing an immersive environment without using virtual or augmented reality (VR/AR) glasses [4]. To achieve this aim, an immersive environment is created by using a mixture of real and projected objects, transparent digital flat screens, and the capability to add smell and temperature changes to the environment. There is also a wide-area mixed reality application which was realized to create an immersive virtual reality environment in which users can walk and run freely among simulated rooms, buildings and streets. In such applications, large rear-projection screens that employ digital graphics are used to depict a room's interior, a view of an outside world, or a building's exterior. The applications also provide life-size projection displays with physical props and real-time 3D graphics [4]. A scenario like the one mentioned above has its limits when considering it for a motion capture shoot. Large projection walls or other props cannot be placed in front of the optical motion capture cameras because the recordings would be blocked and occlusions would occur. Setting up such a training environment requires time and planning, which is not economical in a motion capture environment because the scenery for a motion capture shoot changes often, sometimes multiple times during a shoot day, and needs to be dynamic. Therefore, this solution cannot be used, as is, to create an immersive environment for motion capture actors with the goal to support and allow more natural acting.

A novel optical see-through head-worn display that is capable of mutual occlusions could also be considered for motion capture shoots. Here, mutual occlusion is an attribute of an augmented reality display where real objects can occlude virtual objects and virtual objects can occlude real objects [19]. Mutual occlusions are one of the problems in visualizing 3D content in a real-world environment. Research is also conducted to test the perception of image motion during head movement [20]. The perception of image and head motion is tested when wearing a head-mounted display (HMD). In another significant work, in addition to virtual and augmented reality, two control conditions were studied: viewing real-world objects, and viewing real-world objects through a head-mounted display.
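The mutual occlusion behaviour described above can be illustrated as a per-pixel depth comparison. This is a toy sketch under an explicit assumption: it presumes a depth value is already available for both the real scene and the virtual content, whereas real AR displays must estimate scene depth from sensors or models.

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel mutual occlusion: a virtual pixel is shown only where the
    virtual surface is closer to the viewer than the real surface."""
    virt_wins = virt_depth < real_depth          # boolean mask, shape HxW
    out = real_rgb.copy()
    out[virt_wins] = virt_rgb[virt_wins]         # virtual occludes real here
    return out

# Toy 2x2 scene: a virtual avatar at depth 1.0 in front of a real wall at 2.0,
# except in the bottom row, where a real object at depth 0.5 occludes it.
real_rgb   = np.zeros((2, 2, 3), dtype=np.uint8)       # black real scene
real_depth = np.array([[2.0, 2.0], [0.5, 0.5]])
virt_rgb   = np.full((2, 2, 3), 255, dtype=np.uint8)   # white virtual avatar
virt_depth = np.full((2, 2), 1.0)

result = composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth)
print(result[0, 0], result[1, 0])  # top: virtual visible; bottom: real occludes
```

The same closest-surface-wins rule is what a mutual-occlusion display has to realise optically, which is what makes it hard in hardware even though it is trivial in software.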
Introduction 1.3 State-of-the-Art 11 was crossed with all conditions. Like many previous studies, another research games has been researched in multiple research already [24, 25, 26]. This could found that depth perception is ”underestimated” in virtual reality, although the provide a basis to assess the feeling of immersion in our research as well. magnitude of the effect was surprisingly low. Underestimation was described Even though there are not many guidelines on how to design and evaluate as a misperception of distances in the virtual reality in comparison to the real virtual environments, a methodology for ensuring the usability of virtual envi- world. The most interesting finding was that no underestimation was observed ronments through user-centered design and evaluation of virtual environment in AR [21]. For motion capture shoots, the interactions between actors and user interaction has been provided [27]. the possibility to see props are important. This is another reason why a VR Measuring user satisfaction [28] and emotions [29] as explained in the cited environment is not the best solution when an actor needs to be able to see real research might be another way to support our task to assess and evaluate our world objects and persons, as well as virtual content at the same time. A gen- efforts towards a more immersive motion capture system. eral problem with HMD’s remains, regardless of future AR glasses solutions: According to in literature mentioned research we will need to identify a significant part of the actor’s face is covered and can limit motion capture which of the mentioned methods to assess and evaluate our research suits best shoots. The glasses especially limit the freedom of movement of an actor and and how it might need to be modified for the area of motion capture acting. the shoots are limited when facial motion captures are of importance. 
To make virtual environments as visually immersive as possible another so- This state-of-the-art section shows that there are research projects that provide lution has been widely researched by using flat panel 3D displays [5]. Equip- technologies that could be used to create and explore solutions towards an im- ping an entire motion capture room with these 3D displays, or even by just mersive motion capture acting environment. Nevertheless, research needs to using a single flat screen needs to be investigated if this might be a solution be done on how such an environment can be created and how it can be used that could create an immersive acting environment and still is economic and to support actors while acting. To approach this, we describe in the following usable in daily motion capture business. how our research and approach towards a solution is set up. Another method which could be of possible use to display virtual content to the actor, while acting, is by using the emerging laser based pico projector technology [6]. These projectors are of small size [22, 5] and come in differ- ent technologies based on micro-LCDs, the Texas Instruments’s DLP technol- ogy, which uses an array of microelectromechanical systems (MEMS) micro- mirrors and LEDs, or projectors based on laser scanning [23]. The projected data can be shown on small screens or can be reflected to polarized video con- tact lenses which a user is wearing [5]. A screen, which is placed right in front of the actor might limit the actor in some cases but might be applicable to some motion capture shoots. Nevertheless, it would be good to answer the question on how virtual content can be shown to the motion capture actors and it needs to be researched how new technologies can be applied to a motion capture sce- nario.

Assessment and Evaluation Finding ways of assessing and evaluating our research and prototypes accord- ing to its usability, functionality and its ability to provide immersion is what we are aiming for. Some research has already been performed that can help within this matter and is described below. Measuring immersion and presence in virtual environments and computer 10 Chapter 1. Introduction 1.3 State-of-the-Art 11 was crossed with all conditions. Like many previous studies, another research games has been researched in multiple research already [24, 25, 26]. This could found that depth perception is ”underestimated” in virtual reality, although the provide a basis to assess the feeling of immersion in our research as well. magnitude of the effect was surprisingly low. Underestimation was described Even though there are not many guidelines on how to design and evaluate as a misperception of distances in the virtual reality in comparison to the real virtual environments, a methodology for ensuring the usability of virtual envi- world. The most interesting finding was that no underestimation was observed ronments through user-centered design and evaluation of virtual environment in AR [21]. For motion capture shoots, the interactions between actors and user interaction has been provided [27]. the possibility to see props are important. This is another reason why a VR Measuring user satisfaction [28] and emotions [29] as explained in the cited environment is not the best solution when an actor needs to be able to see real research might be another way to support our task to assess and evaluate our world objects and persons, as well as virtual content at the same time. A gen- efforts towards a more immersive motion capture system. 
eral problem with HMD’s remains, regardless of future AR glasses solutions: According to in literature mentioned research we will need to identify a significant part of the actor’s face is covered and can limit motion capture which of the mentioned methods to assess and evaluate our research suits best shoots. The glasses especially limit the freedom of movement of an actor and and how it might need to be modified for the area of motion capture acting. the shoots are limited when facial motion captures are of importance. To make virtual environments as visually immersive as possible another so- This state-of-the-art section shows that there are research projects that provide lution has been widely researched by using flat panel 3D displays [5]. Equip- technologies that could be used to create and explore solutions towards an im- ping an entire motion capture room with these 3D displays, or even by just mersive motion capture acting environment. Nevertheless, research needs to using a single flat screen needs to be investigated if this might be a solution be done on how such an environment can be created and how it can be used that could create an immersive acting environment and still is economic and to support actors while acting. To approach this, we describe in the following usable in daily motion capture business. how our research and approach towards a solution is set up. Another method which could be of possible use to display virtual content to the actor, while acting, is by using the emerging laser based pico projector technology [6]. These projectors are of small size [22, 5] and come in differ- ent technologies based on micro-LCDs, the Texas Instruments’s DLP technol- ogy, which uses an array of microelectromechanical systems (MEMS) micro- mirrors and LEDs, or projectors based on laser scanning [23]. The projected data can be shown on small screens or can be reflected to polarized video con- tact lenses which a user is wearing [5]. 
A screen, which is placed right in front of the actor might limit the actor in some cases but might be applicable to some motion capture shoots. Nevertheless, it would be good to answer the question on how virtual content can be shown to the motion capture actors and it needs to be researched how new technologies can be applied to a motion capture sce- nario.
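As a sketch of how questionnaire-based immersion or satisfaction measures could be aggregated in such an evaluation, the snippet below computes a per-participant mean score from Likert-scale answers; the item count, scale range and reverse-scored item are illustrative assumptions, not taken from the questionnaires cited above:

```python
def immersion_score(answers, reverse_items=(), scale_max=5):
    """Mean score of a Likert questionnaire (ratings from 1 to scale_max).

    answers: list of integer ratings, one per questionnaire item.
    reverse_items: zero-based indices of negatively worded items, which
    are flipped so that a higher score always means more immersion.
    """
    flipped = set(reverse_items)
    adjusted = [
        (scale_max + 1 - a) if i in flipped else a
        for i, a in enumerate(answers)
    ]
    return sum(adjusted) / len(adjusted)


# Hypothetical participant: six items on a 1-5 scale, item 2 reverse-scored.
print(immersion_score([4, 5, 2, 4, 3, 5], reverse_items=(2,)))
```

Scores aggregated this way can then be compared between shoots with and without the immersive support, which is one simple way to operationalize the measures from [24, 25, 26, 28].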

Chapter 2

Research Description

2.1 Research Questions

For this thesis, one main research question drives the research and forms the basis for sub-questions that help to answer the main research question, as well as to convert the vision into a working prototype. The main research question of this thesis is:

What interactive system can we build to increase motion capture actors' immersion into their virtual acting environment?

This new solution is meant to immerse and support motion capture actors so that they get into their acting faster, by providing a visual reference of the virtual environment the actors are supposed to act in. To support the main research question, two sub-questions will deliver a deeper understanding of the main research question. Designing an immersive acting environment usually involves building either an extensive scenery or wearing head-mounted hardware, as explained in the state-of-the-art section. For our research we need to focus on building a system that allows a large extent of freedom of movement and is therefore, to some extent, mobile. This is why the first sub-question provides knowledge on which software and hardware setups can be used to build the system while securing its usability and flexibility. It also needs to be considered that for motion capture it is important that movements, and the capture of movements, are not restricted. Therefore, our first sub-question is:



What interactive prototypes can be developed that are easy to use and usable within a motion capture environment?

To validate that our interactive system is usable and provides the claimed features, as well as an increase in immersion, we test the interactive artefacts through answering the second sub-question:

What user experiences do actors have when using the developed interactive artefacts?

To our knowledge, the originality of these research questions lies in the application of technology to the field of motion capture acting. Another novelty is the combination of the hardware used and the software developed, especially designed to comply with the challenges in motion capture acting.

2.2 Proposed Solution to the Research Problem

The proposed solution to the research problem for this thesis describes how we intend to answer our main research question. Our application will allow actors to experience a mixed reality while acting for motion capture. The actors' vision will not be occluded and the hardware setup will be lightweight, comfortable to wear and usable within motion capture production. To realize this application and to perform user tests, we need a wearable, mobile projector solution, a retro-reflective foil placed around the actors and a game engine to show digital content. This proposed solution is depicted in figure 2.1.

Currently, motion capture actors can neither see, hear and feel nor interact with the virtual environment they are acting for, while acting. Minor help is provided through using props or showing the environment on a screen. Nevertheless, this cannot be compared to a movie shoot, where the scenery consists of real world objects and allows the actor to be fully immersed. In a current motion capture environment, this is more challenging for the actor because the scenery is not present and needs to be imagined.

To solve this issue, the best existing technological solution is to augment the actors' vision by using a laser pico projector showing digital content from a game engine. The actor can see the scenery by looking at a reflective foil which is mounted on the walls and at important locations. The laser projector sends the picture towards the foil, which reflects it back to the source. As the projector is head-mounted on a wearable frame or strap, the reflected image can be seen clearly by the actors. This allows a hands-free 360° view of the virtual environment and the real world at the same time without occluding the actors' vision. Actors can also perform even bodily demanding shoots without any movement limitations or any sign of motion sickness.

Vision is essential to create an immersive motion capture environment, but we are aware that other inputs and senses would improve our explorations. Therefore, we think about sounds and animations that can be triggered and arranged in real-time to influence the actors' performances and to secure a quickly adaptable setup. We plan to do this through a mobile device or server interface that communicates with the projector carried by the actor and the sound equipment installed in the room, as well as a tablet connected to the system allowing a director to control the scenery. This allows directors and motion capture operators to modify, create and arrange digital assets, digital scenery and 3D sounds. Figure 2.1 depicts the scenario of our proposed solution.

Figure 2.1: Future motion capture environment, showing an actor wearing a head-mounted projector, a retro-reflective foil reflecting the digital content and a director who controls the digital environment.

To ensure the usability of our solution and to identify helpful features within the process of a motion capture shoot, we need to investigate the actors', directors' and motion capture operators' needs and wishes, as well as the current issues with the motion capture procedure and the issues actors are currently facing.
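To give a feel for the display geometry of such a head-mounted projector and retro-reflective foil setup, the sketch below estimates the projected image width on the foil from the projector's throw ratio; the throw-ratio value used is an illustrative assumption, not a property of any specific device:

```python
import math


def image_width(distance_m: float, throw_ratio: float) -> float:
    """Width of the projected image at a given distance.

    Throw ratio is defined as distance / image width,
    so image width = distance / throw ratio.
    """
    return distance_m / throw_ratio


def horizontal_fov_deg(throw_ratio: float) -> float:
    """Horizontal opening angle of the projection cone, in degrees."""
    # At unit distance, the half-width of the image is 1 / (2 * throw_ratio).
    return 2 * math.degrees(math.atan(1 / (2 * throw_ratio)))


# Assumed throw ratio of 1.0 for a small laser projector (illustrative only):
print(round(image_width(3.0, 1.0), 2))    # image width in metres at 3 m from the foil
print(round(horizontal_fov_deg(1.0), 1))  # opening angle of the projection in degrees
```

Such back-of-the-envelope numbers indicate how much foil area a single projector can cover at typical acting distances, and hence how the reflective foil would need to be placed around the capture volume.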

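The real-time scene control described above, where a director's tablet triggers sounds and animations on the rendering host, could rest on very simple control messages. The field names, transport and port below are our assumptions for illustration, not a specification of the implemented system:

```python
import json
import socket


def make_trigger_message(asset_id: str, action: str, position) -> bytes:
    """Encode a hypothetical control message from the director's tablet.

    asset_id and action names are assumed examples, e.g. "door_creak"
    with action "play_sound", or a character asset with "start_animation".
    """
    return json.dumps({
        "asset": asset_id,      # name of the digital asset in the scene
        "action": action,       # what the rendering host should do with it
        "position": position,   # x, y, z in scene coordinates (for 3D sound)
    }).encode("utf-8")


def send_trigger(message: bytes, host: str = "127.0.0.1", port: int = 9000) -> None:
    # UDP keeps latency low for real-time scene control; losing an occasional
    # message is an accepted trade-off in this sketch (a design assumption).
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (host, port))


msg = make_trigger_message("door_creak", "play_sound", [1.5, 0.0, 3.2])
print(json.loads(msg)["action"])
```

On the receiving side, the game engine would decode the message and trigger the corresponding asset, which is what would let directors rearrange digital scenery and 3D sounds during a shoot.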

2.3 Identified Scientific Challenges

For this thesis work, we identified scientific challenges that will be considered and addressed within this research. The challenges that we see are described below.

Interaction Design

* Building solutions that meet the users' needs
* Exploring technologies and solutions that are usable in motion capture production
* Evaluating the experiential qualities of our built prototypes within the area of acting

Computer Science

* Software development complying with rapid prototyping of hardware and design prototypes
* Software development ensuring component-based integratability of software components into a dynamically changing overall system

2.4 Research Methods

The research methods used within this Licentiate thesis can be placed within the area of intervention research, which is characterized by problem solving where typically researchers enter an environment and intervene with its structures and processes [30]. Based on the knowledge generated by intervention from individuals with a perspective from within the system, the academic part of the study is performed from an outside perspective, connecting new insights with the existing academic knowledge. Traditional intervention research ranges from studies with a strong focus on practical solutions, such as found in Action research and Clinical research, to the significantly more academically oriented Action science and Design science [30]. In our case, the main knowledge production method is constructive, where the construction is both on the level of the technological artifact and on the level of theoretical analysis [31]. For our research, we introduce and explore solutions towards a more immersive acting environment and place our research within and around the stakeholders and the traditional ways of working in the field of motion capture acting.

To comply with these research goals and to produce knowledge through the creation of artefacts, we see the paradigm of constructive research as suitable and beneficial. Constructive research suits engineering and science through the creation of new objects from existing ones and through the formation of new concepts and models. It furthermore implies the creation of artifacts such as models, system designs, human-computer interfaces, software and hardware that aim at solving a domain-specific problem and at creating knowledge about the problem and its solutions [31]. Furthermore, it builds a deepened understanding of the problem at hand and its domain before approaching an engineering and problem-solving phase. Knowledge creation is not only defined through theoretical results but also through practical and analytical results. Even the creation of an artefact itself can be seen as a knowledge contribution.

Producing knowledge through building prototypes is what we apply in our methods as an essential part of approaching our research questions, as well as of answering whether produced artefacts can be used within motion capture and what is practical. We then analyse the use of the designed artifacts in order to understand, explain and improve the designed systems. In the following, we explain which methods we use from the constructive research methodology to comply with our goal to produce knowledge through the creation of artefacts.

To approach the beforementioned research questions and to prove the hypothesis, we conduct multi-disciplinary research and address in particular the areas of interaction design and computer science. The problem at hand and the research questions already indicate that a pure interaction design or computer science approach might not completely address or solve the given problem of supporting actors by immersing them into a virtual environment. When purely focusing on a computer science approach, technology would be created to provide such an environment. On the other hand, using an interaction design approach would focus on the design, the interactions and the look of, e.g., interfaces.

In other words we can say, as mentioned by others [32], that an engineering approach focuses on finding a solution to a problem, while a design-oriented approach focuses on understanding the problem, the final artefacts and the different paths leading towards a solution. By looking at this simple explanation, one can see that the focus and the goals of the above-mentioned approaches are slightly different. One approach focuses on finding a determined solution and developing a final artefact for it; the other approach concentrates more on the problem definition and the different possible variations of an artefact that can lead to a final solution. By merging those two approaches, we assume to have a clear problem definition, multiple artefacts that allow exploring the possibilities towards a final prototype, and furthermore to be able to validate the design through providing a final prototype that can be tested and allows capturing the user experiences with it.

Therefore, it is the intention for this thesis to make a hybrid of the above-mentioned research areas, where we intend to design interactive artefacts and interfaces, and to explore and craft different ways of designing, using and interacting with them. This sets the focus on the user and the design of the system. On the other hand, we also develop the system to prove the concept and to get hands-on experiences from the end users, as well as guidelines that describe which created artefacts can actually be used for motion capture. To do this we need to close the gap where interaction design ends and computer science begins. In other words, we intend to merge the two idioms of doing research to comply with the goals of exploring different designs, interactive artefacts and ways of interacting, but also to develop a working prototype that can be used for actual motion capture shoots. Conducting user experience studies, testing the users' level of immersion and capturing the satisfaction of the system's users is therefore also an important part we need to consider.

An indication of the change of thinking in interaction design, which basically addresses the need of bridging gaps in the way research is done and in what can be considered as design artefacts, has been shown in other research [33, 34, 35]. Therefore, we borrow these thoughts on how to bridge these gaps and build our methods upon them. Crafting design artefacts through interactive means, like using code as an interactive artefact, has been addressed as a way of doing research with modern technology [33]. Furthermore, the method of 'Strong Concepts' was introduced, which allows for knowledge creation where research in interaction design lies between design theory and the abstraction of design instances [34]. Researchers in the field of interaction design and HCI have also already addressed the lack of practice and the gap between design thinking and practical proof of the theories or design artefacts [35].

With this thesis we are aiming at integrating interaction design and computer science to work towards design-based research results that are applicable and usable in practice. The following methods explain how we intend to get to this point in more detail.

Phase 1: Problem Identification and Background

At first we need to understand the problem we are addressing and the users of the system to be created, as well as to understand what other research has been done in similar or related projects. Therefore, we use empirical qualitative methods such as conducting Interviews and Surveys with motion capture actors, directors and other professionals. Participant Observations [36] of motion capture shoots will be conducted to get a deeper understanding of the problems and as a means of collecting more data to support the beforehand identified issues. We intend to perform an ethical analysis on how virtual realities placed around an actor as a work environment can imply current or possible future ethical issues. From all data collected, we can then analyse and understand the problems at hand, as well as Identify a Persona that describes the users of the system. A Literature Review to establish the state-of-the-art and to prove the originality of the research problem was also chosen as a method. We see these methods as suitable for Phase 1, as they are commonly used methods in computation as well as in HCI and Interaction Design. Furthermore, we see the value in getting a qualitative overview of the problem at hand.

Phase 2: Design Scenarios

After collecting data and knowledge about the issues to overcome, we approach designing a solution through discussing ideas in Brainstorming sessions, creating Design Scenarios and discussing them through Video Sketches [37]. A Reality Check was then performed to evaluate if and which design scenarios can be implemented. We see this phase as important to generate ideas, and to assess and discuss the feasibility and applicability of the created ideas and scenarios. Furthermore, this phase is meant to reevaluate whether the design scenarios match the identified problems. An implementation of the design scenarios is then performed in Phase 3. Prototypes, design scenarios and the understanding of the problem will be shaped throughout the process of building prototypes, as occurring issues or changes might require rethinking of the design scenarios or the identified problems. This process is depicted in figure 2.2, where the reevaluation between phase 1 and 2, as well as the loops with the reality check, is shown.
A ing design artefacts through interactive means like using code as an interactive Reality Check was then performed to evaluate if and which design scenar- artefact have been addressed as a way of complying and doing research with ios can be implemented. We see this phase as important to generate ideas, to modern technology [33]. Furthermore, the method of ’Strong Concepts’ was assess and to discuss the feasibility and applicability of the created ideas and introduced that allows for knowledge-creation where research in interaction scenarios. Furthermore, this phase is meant to reevaluate if the design scenar- design lies between design theory and the abstraction of design instances [34]. ios match the identified problems. An implementation of the design scenarios Researchers in the field of interaction design and HCI have also already ad- is then performed in Phase 3. Prototypes, Design Scenarios and understand- dressed the lack of practice and the gap between design thinking and practical ing the problem will be shaped throughout the process of building prototypes proof of the theories or design artefacts [35]. as occurring issues or changes might need rethinking of the design scenarios With this thesis we are aiming at integrating interaction design and com- or the identified problems. This process is depicted in figure 2.2 where the puter science to work towards design-based research results that are applicable reevaluation between phase 1 and 2 as well as the loops with the reality check and useable in practice. The following methods explains how we intend to get is shown. to this point in more detail. 20 Chapter 2. Research Description 2.4 Research Methods 21

Phase 3: Building Prototypes

For the Licentiate thesis, and to address the mentioned research questions, we chose a constructive paradigm in which we create digital artefacts by applying hardware, using existing software frameworks and developing new software to build the interactive system. On top of this, we use an iterative design process to create and evaluate the constructed prototypes and artefacts.

Producing digital artefacts and prototypes has been discussed and proven in the fields of HCI and Interaction Design as a valid practice for knowledge creation and as a research method [38, 35]. Even for computer science research it has been argued that experiments and observations could benefit the field, and that more computer science research should be based on or proven through experiments and observations [39]. For our approach, building prototypes which are tested and assessed through an iterative design process and by conducting experiments, we will also ensure a user-oriented approach as well as the usability of the prototypes and the thesis outcomes. The method used here is based on Krippendorff's way of conducting research by running 'Experiments' on the basis of prototypes within the research area of computing [40]. Also in the research area of interaction design, the use of prototypes for empirical evaluation, with the intention to study the qualities of new design ideas in use, as well as observations as an empirical grounding, has been used and argued to be of value [38]. We see building prototypes and running experiments with users of our system as an important step to iteratively evaluate the design, usability and functionality of the system to be built, and as the best method for Phase 3. This is especially the case as these methods suit our research goals and comply very well with the research areas involved, as all of them share the general idea of building prototypes, running experiments and conducting user tests. Figure 2.2 shows the loop, which we see as important, between creating prototypes and testing, as explained above.

Two main prototypes have been built for this research project. One visualizes a virtual environment to the actors. This includes adapting hardware and software to be portable and lightweight, and requires development in terms of moving in a mixed reality as well as creating the digital environment. It also involves finding a solution for tracking persons' positions and transmitting data between mobile devices, projectors and a server. For this kind of prototype we have currently built initial prototypes which need more development and user tests. Another prototype is an interactive system allowing the use of 3D sound to enhance the environment through environmental sound and by triggering 3D sound events to steer the natural reactions of the actors. This prototype is still in a design exploration phase, as the functionality was only just set up and provided. Therefore, we see 'sound' as future work.

Phase 4: Testing and Evaluation

For testing the developed prototypes we will use research methods from software engineering and Human-Computer Interaction. An experience- and decision-based approach to compare and test the system in actual use was therefore chosen. To do this, we will isolate several prototypes and test them in experiments that integrate and consider the users. Feedback collected through experiments, user tests, surveys, observations and interviews will help to verify the design decisions. User tests will also continuously ensure the functionality of the system. We see the methods mentioned in Phase 4 as common and proven methods in the areas of HCI, Interaction Design and computation. Furthermore, these methods suit our need to test the created artefacts for usability and functionality, and allow us to capture the user experiences of the created digital artefacts. In figure 2.2 we can see that the testing and evaluation phase could lead to a possible loop back to the start. This would imply that our gained knowledge leads to changes and modifications that require a reevaluation of the previous phases and might help to shape the overall outcome.

Figure 2.2: Used research method
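The visualization prototype's need to transmit tracked actor positions between mobile devices, projectors and a server is not specified further in this chapter. As a purely illustrative sketch, assuming a JSON-over-UDP message format of our own invention (the field names, `encode_position`/`broadcast` functions and transport are not the thesis implementation), position updates could be serialized and fanned out like this:

```python
import json
import socket
import time

def encode_position(actor_id, x, y, z):
    """Pack one tracked position into a JSON datagram (hypothetical format)."""
    return json.dumps({"id": actor_id, "pos": [x, y, z], "t": time.time()}).encode("utf-8")

def decode_position(datagram):
    """Inverse of encode_position; a mobile projector client would call this."""
    return json.loads(datagram.decode("utf-8"))

def broadcast(datagram, clients):
    """Send one update to every registered (host, port) client over UDP, fire-and-forget."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for addr in clients:
            sock.sendto(datagram, addr)

if __name__ == "__main__":
    msg = encode_position("actor_1", 1.2, 0.0, 3.4)
    print(decode_position(msg)["pos"])  # [1.2, 0.0, 3.4]
```

Fire-and-forget UDP is a plausible choice here only because a dropped position update is immediately superseded by the next one; a reliable stream would add latency that a moving actor would notice.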

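The 3D sound prototype is described only at the level of triggering positional sound events. As a minimal sketch, assuming a constant-power panning law (our assumption; the thesis does not name the implemented technique), a sound event's azimuth relative to the actor could be mapped to stereo gains like this:

```python
import math

def stereo_gains(azimuth_deg):
    """Constant-power pan: azimuth -90 (left) .. +90 (right) -> (gain_left, gain_right).

    The gains satisfy gain_left**2 + gain_right**2 == 1, so the perceived
    loudness stays constant while a sound source moves around the actor.
    """
    azimuth_deg = max(-90.0, min(90.0, azimuth_deg))  # clamp to the frontal arc
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)  # map to 0..pi/2
    return math.cos(theta), math.sin(theta)

# A sound event straight ahead is heard equally on both ears:
left, right = stereo_gains(0.0)
```

A full 3D implementation would add distance attenuation and head-related filtering, but even this two-gain version lets a director place a cue ("a roar from the left") to steer an actor's natural reaction.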

To provide an overview of the used methods, figure 2.2 shows how we see the above-mentioned phases connected. The black arrows show a more traditional computer-science-related approach. We see the blue arrows as a necessary complement that adds the missing connections and loops back which we need for our method, which merges interaction design and computer science. The colouring of the ellipses only serves as a means of distinguishing the different areas in the figure.

2.5 Expected Results

In the above perspective we addressed and introduced our goals and research description, which show the 'big picture' of the research project. For this Licentiate thesis we intend to provide a foundation of the challenges to be solved and an understanding of the users and their needs, as well as to provide prototypes to prove the feasibility of our approach. For the Licentiate thesis we will perform Phase 1 and Phase 2 and give an initial insight into Phase 3, explained in the previous method section.

The expected results for this thesis are:
* Knowledge about persona, limitations and challenges
* Software prototypes with UI
* User experiences with the software
* A guideline to develop such an interactive system

Chapter 3

Thesis Contributions

This Licentiate thesis is written as a collection of papers which describe the contributions made in more detail. The contributions provided within this Licentiate thesis are listed below in relation to the method overview from the previous chapter.

Contribution A:

For the first contribution, the purpose was to show the research community that certain issues exist within current motion capturing environments. Therefore, a state-of-the-art overview of the addressed research areas was provided. After founding a basis of what research has been done in motion capture and in areas that we think are of interest for creating an immersive motion capture environment, we stated that little research has been done that considers supporting a motion capture actor's performance. Furthermore, we identified challenges that actors are currently facing in a motion capturing environment. A potential solution was introduced as a basis for discussions to solve the identified issues. Moreover, a method to address the interaction and systems design issues was proposed. In relation to our used method (figure 2.2), this contribution lies within the yellow 'Problem Identification' phase.

Contributions:
* Make aware of issues in mocap acting
* Identified acting challenges in mocap
* Discussion towards a potential solution
* Paper A



Contribution B:

This contribution lies in deepening the understanding of the issues identified in contribution A. Therefore, we addressed these issues and argue how to support motion capture actors, especially when acting for computer games. We discuss the nature of motion capture acting in view of Stanislavski's acting principles and point out the actors' skills and demands. These findings allow a reflection on the basic skills and needs of a motion capture actor and show which points need to be addressed to help an actor perform better. The identified principles show where technical solutions, but also improvements in motion capture procedures, can be used to create a more natural and immersive motion capture acting environment. This contribution lies, like contribution A, within the yellow 'Problem Identification' phase in figure 2.2.

Contributions:
* Identifying the needs and demands on mocap actors
* Discussing ways to support mocap actors
* Paper B

Contribution C:

In an ethical investigation, we discuss the influences and impacts of virtual realities used for creating and playing computer games. We address the aspects of virtual realities used for computer game production, but also for the future of gaming and home entertainment. A special emphasis is on how virtual realities placed around an actor as a work environment can imply ethical issues. With this research we point out identified issues and possible future issues. This contribution rounds up our investigations on the users of a more natural motion capture system and also shows issues that need to be considered when designing and developing such a system. In relation to our used method, this contribution also lies within the yellow 'Problem Identification' phase in figure 2.2.

Contributions:
* Investigating possible ethical implications when creating a VR/MR for motion capture
* Paper C

Contribution D:

The findings up to contribution C formed the basis for designing smart solutions and creating first prototypes that address the identified issues. This contribution is therefore built upon the previous contributions and addresses a major point in creating a more immersive motion capture acting environment. Here we intend to solve the issue of augmenting a virtual reality through hardware and software that can be used especially in a motion capture environment. The hardware is lightweight and mobile, and allows dynamic, hands-free movements of actors without occluding their natural vision. Our research prototype also allows a virtual reality to be overlaid onto the real world. This happens through a projection technique using laser projectors and retro-reflective foils that throw the digital image back into the user's eyes. This research contributes to different research areas and serves as a proof of concept for this Licentiate thesis. Contributions are made within the green 'Design Scenario' phase and the orange 'Prototypes' phase in figure 2.2. This contribution was made in collaboration with the Koç University Micro Optics Lab, where the lab provided the initial hardware topology of the stripped-down laser projector. In cooperation we created a wearable solution and shaped its design to be more comfortable and useful for daily but also motion capture uses. Software and digital content to create a virtual and augmented world was provided through this thesis research. Part of contribution D is paper D, which was written in shared collaboration with the first author (50% first author, 50% author of this thesis).

Contributions:
* Mixed reality prototype
* Proof-of-concept implementation
* Paper D

The contributions made up to the Licentiate thesis provide a deeper understanding of the problems identified within motion capture acting, as well as showing ways towards a solution. Furthermore, we developed a prototype that solves parts of the identified issues and will serve as a basis to drive further research.


Chapter 4

Approach, Prototype and Findings

4.1 From Vision to Prototype (Part I)

The starting point for this thesis was the vision to create a more immersive motion capture acting environment that complies with the challenges and the needs of a motion capture environment. We started the project with brainstorming sessions with representatives of the involved stakeholders. These brainstorming sessions served as a tool to trigger inspiration, to exchange knowledge between the participating stakeholders, and to investigate the current state of practice in motion capture.

We found the phase of understanding the current state of practice for motion capture shoots, as well as acting for motion capture, to be important for our research. Therefore, we observed five motion capture shoots in a motion capture studio and had discussions and interviews with, and questionnaires filled out by, stakeholders of this project. These are further described in paper A and paper B of this thesis.

Afterwards, we did literature research on current technologies on the market and on research that could be of potential use towards the goal of providing a more immersive motion capture acting experience. In this step we realised that a vast number of technologies and research directions could be considered for creating immersive virtual environments. Initial ideas led our thoughts from VR, AR, mixed AR, 3D sound, smell, ambient lighting, wind, temperature and haptic feedback to projected environments and many other technologies. One can clearly see that different areas of technology and research could be addressed to create an immersive virtual environment. Nonetheless, it has to be mentioned that a major task was to apply technologies that comply with the motion capture environment. We also concentrated on the foundations and essential technologies to approach the task of creating a more immersive acting experience and supporting actors in their work. This led to the decision to focus on getting an understanding of the stakeholders in motion capture, understanding their needs, and supporting the motion capture actors.

While conducting literature and technology research, many ideas seemed plausible for providing visual aid to actors while acting. Therefore, we had to reevaluate the technologies that seemed promising as a solution to our research questions. This reevaluation is based on our gained insights into the workflow and business of motion capture and the needs of the actors and stakeholders.

4.2 Technology Review

In the sections below, we provide an excerpt of our findings from reviewing technologies that might be considered for approaching a solution to our research questions and a visual prototype.

4.2.1 Why not VR?

Intuitively, state-of-the-art virtual reality (VR) glasses such as the Oculus Rift [41], or devices from other developers such as Vuzix [42], seemed usable to create a visually perceivable virtual reality. For some motion capture shoots this might be true. However, for most motion capture shoots one would rather avoid the use of such devices, for three main reasons. One, current VR glasses block important parts of the face such as the eyebrows, eyelids and forehead, and therefore do not allow facial motion capture shoots. Two, the majority of motion capture shoots require bodily demanding movements or interactions with real objects or persons. VR glasses limit the actors in these shoots in terms of movement and the vision of persons and objects. Three, VR glasses impede interactions with real objects and persons. Interacting with real-world objects requires very precise movements and hand-eye coordination. Current VR cannot yet provide this in a sufficient manner.

Furthermore, it needs to be made clear that most off-the-shelf VR systems are powered and driven via cable connection at the moment. This further limits the usability of current VR systems for the creation of immersive motion capture environments.

A last point to consider when thinking about using VR for motion capture is that a normal motion capture shoot day can last between ca. 4 and 8 hours. Using a VR system for a long time, or even when performing fast movements or showing fast animations, can create side effects such as motion sickness. It has been discussed in different contexts and evaluations that using VR systems can have side effects for the users [43, 44, 45]. It is therefore questionable whether actors should be using such a system when it could cause nausea. These practical as well as ethical questions should be considered for VR and other systems before their use in practice. Such questions, and other ethical questions that could occur within motion capture, gaming and the use of virtual reality technology, have been further discussed in paper C.

In conclusion, using VR glasses for motion capture is possible but must be considered limited in its usability for most capture shoots.

4.2.2 Projection Mapping

An ambient projected environment for actors could also be created with the help of projection mapping and light effects projecting imagery onto the roof and the floor. As this technology seems an interesting way to extend our prototype, especially for static projections in areas where no motion capture camera is directly influenced, we consider projection mapping for future development. Supporting the actors by showing environmental surroundings such as the sky, thunderstorms or treetops in a forest on the roof, and environmental projections such as roads, sand dunes and other terrains on the floor, could be of use with this technology.

Our ideas also included using ambient projection techniques to project and display specific locations that could be used as displays, similar to research that has been done before [46]. In that research, ambient projection was used to project screens at intelligent locations to show a photo wall or other content. In our idea, as depicted in figure 4.1, we would define the locations according to the desired acting scenery and use the projections to create a 'window' into the digital world the actors are acting for, or even overlay an image onto objects, e.g. on a table.
This then led to the decision Using a VR system for a long time, or even when performing fast movements of focusing on getting an understanding of the stakeholders in motion capture, or showing fast animations can create side effects such as motion sickness. It understanding their needs and focusing on supporting the motion capture ac- has been discussed in different contexts and evaluations that using VR systems tors. can have side effects for the users [43, 44, 45]. It is therefore questionable While conducting literature and technology research, many ideas seemed to if actors should be using such a system when it could cause nausea. These be plausible to be used for providing visual aid to actors, while acting. There- practical as well as ethical questions should be considered for VR and other fore, we had to reevaluate the technologies that seemed to be promising to systems before their use in practise. Such questions and other ethical questions provide a solution to our research questions. This is based on our gained in- that could occur within motion capture, gaming and the use of virtual reality sights into the workflow and the business of motion capture and the needs of technology have been further discussed in paper C. the actors and stakeholders. In conclusion, using VR glasses for motion capture is possible but must be considered as limited in its usability for most capture shoots. 4.2 Technology Review In the sections below, we provide an excerpt of our findings when reviewing 4.2.2 Projection Mapping technologies that might be of thought to approach a solution towards solving our research questions and towards a visual prototype. An ambient projected environment for actors could also be created with the help of projection mapping and light effects to project imagery to the roof and the floor. As this technology seems to be an interesting way to extend our proto- 4.2.1 Why not VR? 
type, especially for static projections in areas where no motion capture camera Intuitively, state-of-the-art virtual reality (VR) glasses such as the Oculus Rift is directly influenced, we consider projection mapping for future development. [41] or from other developers such as Vuzix [42] or others, seemed usable to Providing support to the actors by showing environmental surroundings like create a visually perceivable virtual reality. For some motion capture shoots the sky, thunderstorms or treetops in a forest for the roof projection and en- this might be true. However for most motion capture shoots one would rather vironmental projections like roads, sand dunes and other terrains for the floor avoid the use of such devices, because of three main reasons. One, current projection could be of use with this technology. VR glasses block important parts of the face such as eyebrows, eyelids and Our ideas also included to use ambient projection techniques to project and forehead and therefore do not allow for facial motion capture shoots. Two, the display specific locations that could be used as displays, similar to research that majority of motion capture shoots imply to perform bodily demanding move- has been done before [46]. In that research ambient projection has been used to ments or interactions with real objects or persons. VR glasses limit the actors project screens at intelligent locations to show a photo wall or other content. In in these shoots in terms of movement and vision of persons and objects. Three, our idea, as depicted in figure 4.1, we would define the locations according to VR glasses impede interactions with real objects and persons. Interacting with the desired acting scenery and use the projections to create a ’window’ into the real world objects requires very precise movements and hand-eye coordination. 
digital world, the actors are acting for or even overlay an image onto objects Current VR cannot yet provide this, in a sufficient enough manner. e.g. on a table. 30 Chapter 4. Approach, Prototype and Findings 4.2 Technology Review 31
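To illustrate the 'window' idea: if the actor's position is tracked, the system can compute which slice of the digital world is visible through a physical window region, similar to an off-axis projection. The sketch below is a minimal 2D illustration under our own assumptions (a wall running parallel to the x-axis, a hypothetical function name); it is not part of the prototype described here.

```python
import math

def window_view_angles(actor_x, actor_y, win_left_x, win_right_x, win_y):
    """Horizontal angles (degrees) from an actor's position to the left
    and right edges of a wall-mounted projection 'window'.  The wall is
    assumed to run parallel to the x-axis at depth win_y; the names and
    geometry are illustrative assumptions only."""
    left = math.degrees(math.atan2(win_left_x - actor_x, win_y - actor_y))
    right = math.degrees(math.atan2(win_right_x - actor_x, win_y - actor_y))
    return left, right

# An actor standing centred 2 m in front of a 2 m-wide window would see
# the virtual scene through a span of roughly -26.6 to +26.6 degrees.
l, r = window_view_angles(0.0, 0.0, -1.0, 1.0, 2.0)
```

As the actor moves, the angular span changes, which is what makes the projection read as a window into the digital world rather than a flat picture.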

Figure 4.1: Idea of how to integrate projection mapping. The window, the landscape and the dice are projected objects here.

We see projection and ambient mapping as a supportive technology that can be used to explore ways to support actors in their work. Nonetheless, we set our primary focus on exploring technologies that display the digital world as a first-person experience showing the complete digital environment, rather than a third-person view showing only the top and bottom of the digital world.

At a later stage, when other prototypes have evolved, projection mapping might still be of interest for our research questions. In this case it has to be considered that projectors emit light and can therefore interfere with the motion capture cameras. This means that the limitations of a projection need to be considered when creating an ambient projection or using projection mapping. A horizontal projection would most likely affect the motion capture cameras, whereas ceiling and floor projections must be planned carefully, considering the room and the camera setup, so that interference with the motion capture system can be reduced or avoided.

4.2.3 Augmented Reality

Augmented reality allows virtual realities to be superimposed onto the real world, and this technology shows potential for our purposes. In particular, products like CastAR from Technical Illusions [47], and innovations from other developers, are getting closer to final off-the-shelf products that could be used to create immersive augmented realities. However, many AR glasses and projection solutions are still in a research or prototype stage, and consumer versions are not yet available. To our knowledge, there is currently no off-the-shelf product that exactly suits our needs and can be used for all common motion capture shoot scenarios in computer game productions. One research prototype that uses technologies similar to our proof-of-concept prototype (see section 4.9) is the CastAR system. CastAR serves as an example of the issues that might occur when using similar AR systems in motion capture. This is further explained in the following section.

Why not just use CastAR?

CastAR is a projector-based augmented reality system using two HD projectors, head tracking electronics, shutter glasses and a camera to track markers. The projectors send out an image which is then reflected by a retro-reflective foil back into the user's eyes. CastAR allows for 6 DOF and uses tracking markers (5 infrared LEDs on a panel) that are placed on the retro-reflective material as an additional aid for navigation and orientation in 3D space. These markers need to be placed at least every 110 degrees from the previous tracking marker.

There are three reasons why CastAR cannot be used for our purposes in its current state. One, even though the CastAR glasses, and similar AR glasses, are about the size of normal sport sunglasses, the eyelids and eyebrows are still covered when acting for facial motion capture. Two, since two projectors are used, the issue of combining the left- and right-eye images into one image without crosstalk remains, which could potentially lead to motion sickness when wearing the glasses for a motion capture shoot. As this has not yet been tested, one cannot state this for certain, but it is essential to keep it in mind when using CastAR or similar glasses for motion capture.

Three, another, more practical issue lies in the fact that the CastAR system uses tracking markers with infrared LEDs on its reflective display material. Motion capture cameras would pick up this light and could be affected by it. A standard practice to avoid this could be to mask the regions of those markers within the motion capture software so that these regions are simply ignored while acquiring motion capture data. For stationary reflective surfaces or objects this can be a solution, but for dynamic setups it is rather unrealistic to mask and recalibrate a motion capture system after every setup change; it is simply ineffective.
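The masking practice described above can be sketched in a few lines. Assuming the motion capture software exposes per-camera 2D marker detections, any detection inside a masked screen region can simply be discarded before 3D reconstruction. The function and data layout below are hypothetical illustrations of the principle, not the API of any real system.

```python
def filter_detections(detections, masked_regions):
    """Drop 2D detections that fall inside any masked rectangle.

    detections: list of (x, y) centroids from one camera image.
    masked_regions: list of (x_min, y_min, x_max, y_max) rectangles.
    Illustrative sketch only; real motion capture systems perform this
    masking per camera inside their own software."""
    def inside_mask(point):
        x, y = point
        return any(x0 <= x <= x1 and y0 <= y <= y1
                   for (x0, y0, x1, y1) in masked_regions)
    return [p for p in detections if not inside_mask(p)]

# A reflection from a static foil at the top of the image is ignored,
# while a marker on the actor's body passes through.
kept = filter_detections([(120, 30), (400, 250)], [(0, 0, 640, 60)])
# kept == [(400, 250)]
```

This also shows why dynamic setups are problematic: every time the reflective material moves, the rectangles would have to be redefined.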

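The marker-spacing rule mentioned in the CastAR description above (a tracking marker at least every 110 degrees) can be expressed as a simple coverage check over the markers' angular positions. The sketch below is our own illustration; only the 110-degree threshold is taken from the description, everything else is assumed.

```python
def coverage_ok(marker_azimuths_deg, max_gap_deg=110.0):
    """Check that tracking markers placed around the capture volume leave
    no angular gap larger than max_gap_deg.  Azimuths are in degrees;
    the wrap-around gap across 0/360 is included.  Illustrative only."""
    angles = sorted(a % 360.0 for a in marker_azimuths_deg)
    if not angles:
        return False
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(360.0 - angles[-1] + angles[0])  # gap across 0/360
    return max(gaps) <= max_gap_deg

# Four markers every 90 degrees satisfy the rule; three markers bunched
# on one side of the room do not.
assert coverage_ok([0, 90, 180, 270])
assert not coverage_ok([0, 30, 60])
```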
4.2.4 Mixed Augmented Reality (AR)

Mixed AR is another way to create an immersive environment. Here, real-world objects and decorations are used to enrich the digital environment with a touch of realism. Depending on the depth of the mixed AR installation, an immersive environment can be created. Such an installation has already been demonstrated in other research [4]. This type of setup is assumed to be a good support for actors, as it resembles scenery as in a theatre or for a movie. Nonetheless, there are two issues with mixed AR for motion capture shoots. One, static mixed AR installations are too inflexible and not efficient enough for computer games, short movies, or animations. Scenery for motion capture shoots, depending on the shoot, can change quickly and must therefore be dynamic and quick to set up or modify.

Two, most indoor motion capture studios use optical camera setups, so scenery should not occlude the cameras' view of the actors or decrease the cameras' capturing ability.

Nonetheless, we think that some prebuilt components and elements can be added to our research to create an even more immersive experience. Components meant to increase the actors' immersion could include moveable screens, effects such as wind or light sources, and props that allow for augmented digital content, for example the dashboard of a car.

4.3 Personas

As planned for the 'Problem Identification' phase, we identified different personas that are stakeholders and will also be users of our system. These personas helped us to understand what the users want and were used to create the design concepts (see section 4.6) as well as to suggest improvements to the current motion capture procedure (see section 4.4).

Actor 1: John, 25 years old.
John is a passionate young stuntman. He works for a stunt academy and trains stunt moves and martial arts weekly. Three years ago he started to act for motion capture; since then, it has become his second job. John has developed a passion for performing for motion capture as well. The thought of shaping a character that will hold his movements makes him proud and drives his interest to deliver a good performance. The commands and procedures of motion capture, as well as the bits and pieces that he needed to act and to set the character into scene in a computer game, he learned by doing and through experience. One can say John has a talent for performing for motion capture. He has got used to the environment and the procedures within motion capture. John is certain that he performs well, as directors tell him so, and he is booked often. However, he is very curious about technologies and would like to improve his acting skills. Moreover, he wonders how he can act out more natural emotions on the spot.

Actor 2: Piet, 34 years old.
Piet is an experienced stuntman and has already performed for several movies and motion capture shoots. He works for a stunt academy and trains stunt moves and martial arts together with John. Piet is very experienced in the procedures of motion capture, as he has been performing for motion capture for 8 years. Piet even took acting classes to be better prepared when he works for movie shoots. Piet usually works with John on motion capture shoots when two or more actors with stunt and martial arts experience are needed. He knows very well how John moves and reacts. They are a good team. The short preparation times and the limited acting support in motion capture do not bother Piet; only if he has to take over a role that involves acting in the sense of movie or theatre acting, rather than movement or body acting, would he prefer to have some support.

Actor 3: Sven, 36 years old.
Sven has been an actor for 16 years now. He worked in theatre for 8 years and then changed to film acting. He went through traditional acting training at major schools in Europe and in New York. Sven has even performed in two well-known cinema movies. Nonetheless, he is very interested in black box and improv acting. When there is time, he likes to perform in front of a small audience and even includes the audience in his act to steer his own acting. When Sven is on stage, he draws the attention of the audience towards him and one can clearly see that he runs the show. He likes to experiment and play with his acting skills. Lately, Sven has been asked to perform for a motion capture shoot. He got the script 3 days before the shoot, with very limited information about the shoot, the scenario and the character. Sven did his best to be prepared but is not happy about it because he could have done better. Motion capture was new to him and he did not understand all the procedures, but he gave his best to perform. After the shoot day he reflected on his performance and the shoot procedure and agreed that he could have done better. Things should change to help Sven and others to perform better.

Actor 4: Ron, 22 years old.
Ron is a young actor who has just completed his first acting training. So far he has only had some minor parts in school, during acting training, and in a show-off play for his final acting class. He also once worked for a commercial spot on TV. Now, he went to an audition for motion capture shoots. He found the day there, and his tasks, very interesting and amusing. As he was selected to play a role in in-game cut scenes for a large computer game production, he agreed and was excited to perform. On the shoot day it took quite some time for him to understand what his job was and what was expected of him. Ron had to apply what he had learned from his acting education quickly and on the spot. Scenes, acts and emotions changed quickly, and somehow the scenes did not follow the order in the script. Ron had to repeat scenes and poses often to deliver the right movements and express the emotions clearly. He would have liked to be supported by some visual aid, to get into moods quicker, to have an orientation of where he should turn, and also to be able to use his imagination more. After the shoot day, he was tired and felt that he could not create a character and bring his touch and skills to the play.

Director 1: Marc, 28 years old.
Marc works as a producer for a game company. He was asked to take over the directing of the motion capture shoot that the company needs for their new flagship game. Marc has attended a motion capture shoot before, but this is his first time as director. He arrives at the shoot with the shoot list and the scenes in his mind. Marc is not familiar with the motion capture procedure. He also has no acting experience so far. What he has is the clear order to get all shots done today so that his company does not need to pay for an extra day. As shots have to be repeated and his vision of the scene does not work out right away, Marc gets impatient; somehow the actor does not get what he wants. A tool guiding him through the shoot day, helping him to get things under control and supporting him in explaining his vision would help him a lot.

Director 2: Sarah, 35 years old.
Sarah is an experienced actor and director. She has 8 years of acting experience and 5 years of directing experience. Sarah has also directed a few motion capture shoots before and knows what is needed. She is used to reacting quickly to changes in the shoot list and on the shoot floor. Sarah is able to make a connection with the actors, and one can feel that she is steering the shoot and controls what is going on. Working with inexperienced motion capture stuntmen and actors is not what she prefers, but it is not a problem for her. Nonetheless, it takes time for her to make clear what she wants and expects. Therefore, she would want a tool that allows her to explain to the actors better and faster what should happen in the scenes. Sarah also wants to get the best performance and emotions out of the actors. A tool supporting her with this task and allowing her to rearrange the scenery and events quickly would help her a lot.

The Motion Capture Operator: Sam, 24 years old.
Sam has worked for a motion capture studio for 3 years and takes care of the motion capture system and the preparations for a shoot. Sometimes he builds props for the shoots and arranges them to create the scenery. His job is to make sure that the equipment and the captures are working. The wishes of directors to change scenery and to modify setups are usually time consuming. He would like to have a way to allow for quicker and more impressive changes of scenery, to make his own job easier but also to allow directors to live out their inspiration. A happy customer, and options like this, bring good feedback and new contracts. This also secures his job.

4.4 Improved Motion Capture Process

To further understand the current state of practice in motion capture, we looked into the standard procedures for motion capture shoots. We observed motion capture shoots in a motion capture studio and compared the procedures to suggestions in the literature on how a motion capture procedure should be conducted [48]. The findings were then discussed with the motion capture studio staff. Furthermore, the identified personas clarified weaknesses in the current process. From these input sources we defined a table, shown in figure 4.2, that describes the steps from preproduction to motion capture shoot in a simplified way. The 'Current Procedure' column in figure 4.2 shows an ideal procedure described by the studio that we visited. A modified version of this column is shown in the 'Improved Procedure' column, which includes our observations and discussions as well as literature suggestions to describe an ideal motion capture procedure.
Sarah has directed a few motion capture shoots before as well and knows what is needed. She is used to react to changes in the shoot list and on the shoot floor quickly. Sarah is able to make a connection to the actors and one can feel that she is steering the shoot and con- trols what is going on. Working with inexperienced motion capture stuntmen and actors is not what she prefers but is not a problem for her. Nonetheless, it 36 Chapter 4. Approach, Prototype and Findings 4.5 From Vision to Prototype (Part II) 37

We identified that especially before the shoot day there were many variations from the 'Current Procedure' to the 'Improved Procedure', which resembles the ideal case. These variations mainly resulted from shot lists and scripts that were changed at short notice. Even rehearsals and preparation times were kept to a minimum or not planned at all. This also shows that motion capture procedures have to be dynamic and that actors need to be able to act with less preparation time, relying on their improvisation skills. Supporting the performance of actors to allow for more immersion was not considered in the current procedure. We think that this is an important part to consider in the improved procedure, and we therefore aim to address it through our prototypes.

Figure 4.2: Motion capture procedure improvements (improvements are marked yellow)

4.5 From Vision to Prototype (Part II)

The basic findings are based on interviews, questionnaires and a literature review on which technologies and research might be of use to approach the research goals. These findings have also been published in paper A. To get a deeper understanding of the users, we investigated acting for motion capture and the needs and demands on a good motion capture actor. We suggest ways to support actors by focusing on the process of conducting a motion capture shoot, based on established acting principles. These improvements do not necessarily need to be achieved through technology. This was then published in paper B.

After the 'Problem Identification' phase we moved on to designing ideas and scenarios for how we think a solution could look. These were then discussed with the stakeholders again. The outcomes altered our understanding and shaped the design ideas, which are described further below.

4.6 Initial Design Concepts

The main initial design concept, shown in figure 4.3, consisted of a head-worn visual component that allows the user to see the augmented reality in an immersive way, without blocking the user's vision and while allowing the user to move freely. We integrated sound and its position in 3D space to create a more immersive experience, complementary to addressing the visual aspect. Wind machines were thought of as a design suggestion to add immersion but have not been implemented yet.

Figure 4.3: Initial design concept

Figure 4.4: Extended design concept

However, a system limited to playback is not enough for a more evolved design. The personas of directors 1 and 2 contributed to the insight that a more dynamic design that includes the director is more usable for supporting the immersion of actors in their acting environment. In the current design, actors are passive users who explore and experience the augmented reality rather than actively steering it. A design where the director triggers events in the 3D world, such as wind, the sounds of explosions and thunderstorms, and visual effects, allows for more realistic and natural emotions. This would allow events to happen in a more realistic but also unexpected way. One initial design concept was to allow the director to shape scenarios and sceneries before and during the act. Figure 4.4 shows this slightly altered design idea with added ambient lighting or projected lighting effects.

4.6.1 Component Based Setups

A further step towards a more dynamic setup is a design that allows attachment and detachment of components consisting of combinations of visual effects, sound effects, and wind effects to the scenery, according to the available hardware, for instance wind machines, loudspeakers, projectors, and light sources. This design concept served the need in a motion capture business to react to changes in the acting scene and to support the directors in altering the acting environment according to their inspiration and needs.
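As an illustration, the attach/detach lifecycle of such component-based setups could look like the following sketch. All names here (`EffectComponent`, `SceneSetup`) are hypothetical and not part of the thesis prototypes; the point is only that the hardware present on a given shoot floor determines which components exist, and that a director's triggers fail gracefully when a component is absent.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the component-based setup idea from section 4.6.1.
# Effect components (wind, sound, light) can be attached to or detached
# from a scene at runtime, and the director triggers them by name.

@dataclass
class EffectComponent:
    name: str      # e.g. "wind", "thunder"
    hardware: str  # e.g. "wind machine", "loudspeaker"

    def trigger(self) -> str:
        # A real component would drive the physical device here.
        return f"{self.name} via {self.hardware}"

@dataclass
class SceneSetup:
    components: dict = field(default_factory=dict)

    def attach(self, component: EffectComponent) -> None:
        self.components[component.name] = component

    def detach(self, name: str) -> None:
        self.components.pop(name, None)

    def trigger(self, name: str) -> str:
        if name not in self.components:
            return f"no component '{name}' attached"
        return self.components[name].trigger()

scene = SceneSetup()
scene.attach(EffectComponent("wind", "wind machine"))
scene.attach(EffectComponent("thunder", "loudspeaker"))
print(scene.trigger("wind"))     # wind via wind machine
scene.detach("thunder")
print(scene.trigger("thunder"))  # no component 'thunder' attached
```

The design choice sketched here is that the director's interface stays the same regardless of which hardware happens to be set up for a particular shoot.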

4.7 From Vision to Prototype (Part III)

A brief reality check on the design ideas was performed by the stakeholders of the project. At this stage it was still an open issue whether the design concepts and scenarios would allow for more immersion. Exploring and designing technologies, as well as developing the prototypes, allowed us to create knowledge about the feasibility and the usefulness of the prototypes. This also served as a reality check during the development process and as a way of assessing the feasibility of our design concepts.

We started the development and exploration of tools and technologies that we thought would create a more immersive motion capture acting environment. This is further explained in the following sections.

4.8 Initial Prototypes

At first, we explored a low-cost setup to approach the initial design concepts and to get a better understanding of which available technologies might be of use.

4.8.1 Screens around an actor

We placed four computer screens at approximately eye height in different geometrical setups around an actor. The actor's movements were captured with a Microsoft Kinect [49]. The Unreal game engine [50] was used to display the digital content. An actor controlled the system through gestures. Figure 4.5 portrays the setup. The implementation was performed in a master thesis within our research [49].

Figure 4.5: Multiple screen setup

The investigations with this setup explored the options and capabilities of the Unreal Development Kit (UDK) [51], low-cost computer screens, and minimalistic motion capture equipment. We realized early that the Unreal engine did not offer the malleability we needed to create digital environments swiftly. Furthermore, placing computer screens around an actor revealed, as one could imagine, multiple drawbacks. One drawback was that long cables are needed for each of the screens. Setting up screens on a large shoot floor is not dynamic, and using cables with a length of 10 to over 20 m is not a practical solution for motion capture shoots. It was also clear that the Kinect worked only for motion capture experimentation. One of the Kinect's limitations was its depth sensor, with a range of approximately 80 cm to 4 m [52]. We discovered that the practical distance of the sensor was between 2 and 3 m; thus, a smaller area of operation. Finally, placing screens around an actor blocks the vision of the motion capture cameras and impedes optical motion capture shoots.
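The gap between the Kinect's specified and practically usable depth range can be made concrete with a small check. The constants mirror the ranges reported above; the function name is ours, for illustration only, and is not part of any Kinect SDK.

```python
# Illustrative sketch: deciding whether a tracked actor position lies
# inside the practically usable Kinect depth band. The specified range
# of roughly 0.8-4 m [52] shrank to about 2-3 m in our tests, so depths
# outside that band are treated as unreliable for capture.

SPEC_RANGE_M = (0.8, 4.0)        # depth range per sensor specification
PRACTICAL_RANGE_M = (2.0, 3.0)   # usable range observed in practice

def depth_usable(depth_m: float, practical: bool = True) -> bool:
    """Return True if a depth reading (in metres) is inside the chosen band."""
    lo, hi = PRACTICAL_RANGE_M if practical else SPEC_RANGE_M
    return lo <= depth_m <= hi

print(depth_usable(2.5))                   # True: inside the practical band
print(depth_usable(3.5))                   # False: in spec, but unreliable
print(depth_usable(3.5, practical=False))  # True: within the spec range
```

The roughly one-metre usable band is what limits the area of operation mentioned above: an actor moving across a large shoot floor leaves it almost immediately.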


4.8.2 Using a motion capture system

A further step in our development and exploration towards a more suitable solution was to use the facilities and technology of a motion capture studio, as shown in figure 4.6.

Figure 4.6: Motion capture studio used for testing

Here we used the marker-based motion capture system to track the position of an actor on the shoot floor. An actor was equipped with a frame of lightweight sport sunglasses that held three reflective markers. The position data was then sent into the Unreal engine and used to map the movements of an actor to a digital character in first-person perspective. As our test motion capture studio provided a projector screen placed in the shoot volume without interfering with the cameras, we used it to explore a digital world through walking, looking, jumping and crouching in real time without noticeable delay. The coding of this prototype was mainly performed by two master students working on a course project, supervised within our research.

This setup allowed an actor to explore a digital environment freely in a larger space without blocking the vision of any camera. However, the screen was static and actors needed to look at it to see the content. This was an essential flaw in this design. The primary goal of motion capture for computer games is to capture natural movements to enrich animations with those movements. Any unnatural turn of the head towards the screen would result in extra work for animators and solvers of motion capture data. So this initial prototype could only be a solution when actors constantly face the screen, for example when driving in a car. Even here the placement of the screen is important so as not to cause an unnatural movement when looking up, down, left or right at a screen. A solution could be to place screens or projection walls at locations where the cameras are not interfered with. Depending on the cameras and the size of the shoot volume this might become a challenge, and it furthermore only allows a small 'window' showing the digital world.
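The first-person mapping of section 4.8.2 — three reflective markers on a glasses frame driving a camera pose — can be sketched as follows. The marker layout (two temple markers and one front marker) and all names are assumptions for illustration, not the configuration actually used in the studio.

```python
import math

# Hypothetical sketch: deriving a first-person camera pose from three
# reflective markers on a glasses frame. Coordinates are (x, y, z) in
# the capture volume, with x/z as the floor plane and y as height.

def head_pose(left, right, front):
    """Return (position, yaw in degrees) from three marker positions."""
    # Head position: centroid of the three markers.
    pos = tuple(sum(c) / 3.0 for c in zip(left, right, front))
    # Forward direction on the floor plane: from the midpoint between
    # the temple markers towards the front (bridge) marker.
    mid = tuple((l + r) / 2.0 for l, r in zip(left, right))
    fwd_x, fwd_z = front[0] - mid[0], front[2] - mid[2]
    yaw = math.degrees(math.atan2(fwd_x, fwd_z))  # 0 deg = facing +z
    return pos, yaw

left = (-0.08, 1.70, 0.0)   # left temple marker
right = (0.08, 1.70, 0.0)   # right temple marker
front = (0.0, 1.70, 0.10)   # front bridge marker
pos, yaw = head_pose(left, right, front)
print(round(yaw, 1))  # 0.0: actor facing straight along +z
```

A real pipeline would stream such poses into the engine each frame; the sketch only shows how little geometry is needed to turn raw marker positions into a usable first-person view.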

4.8.3 Portable screen

Using a portable screen to show content to an actor while acting, for instance on a head-mounted display, could be a potential solution. However, this involves wearing equipment placed in front of an actor's face, which might become an issue for bodily demanding movements and interactions with objects, for instance the simple task of aiming a rifle. The design also requires extra hardware or a cable connection to show content on the screen. Hence, we did not investigate this concept further but focused on a different approach, as explained below.

4.8.4 Using a Pico projector

Through gathering experiences from earlier conducted research and created prototypes, as well as to bring our design concept closer to reality, we approached the task of providing visual acting support that complies with a motion capture environment in a different way. Instead of using VR technology, displays around the actor or a head-mounted screen, we started exploring the use of head-mounted laser projectors, so-called Pico projectors. Figure 4.7 shows the setup that we used to explore the usability of the technology.

Figure 4.7: Picture of the explorations with a head-worn pico projector

4.8.5 Design Decisions

From the problem definition phase, we gained a solid understanding of what can be placed on a motion capture shoot floor and what would potentially interfere with the capture of movements or with the motion capture procedure. We then discussed an idea with motion capture operators through a video sketch, which only showed the possible functionality of a prototype. From these discussions and the previous research, we looked at technologies usable within motion capture. The initial prototypes that we created and tested in the studio delivered a deeper understanding of which technologies are feasible as well as practical for motion capture shoots. Displays or screens around an actor had many drawbacks, especially in size, occlusion of markers, costs, and the time and space needed to set the system up. A portable screen or VR glasses mounted in front of an actor were not usable either, as they limit the movements and agility of the actors. We then explored using a pico projector mounted in different positions on the head. This allowed for an always-in-focus image and a lightweight setup. However, it was simply mounted on a wool cap and sat a bit off its optimal position, close to and in the middle of the eyes. Therefore, it had to be mounted in a proper way and made more stable. We discussed and tested different prototypes of a more rigid head-mount with researchers from the field of micro-optics. This led to the design of our proof-of-concept prototype.

4.9 Proof-of-Concept Prototype

After researching and discussing the options for building an immersive acting support within motion capture, we decided to explore and create a more evolved prototype for the visual representation of our system. In cooperation with the Koç University Micro Optics Lab [53], we built a projection system which is portable, lightweight and low-cost. This prototype serves as the proof of concept for this thesis and as a starting point for further research. The prototype and the use of the reflective material can be seen in figure 4.8 and are described in more detail in paper D.

Figure 4.8: Proof-of-concept hardware

4.9.1 Technological Categorization of our Prototype

Our research prototype can be placed within Milgram's virtuality continuum under the umbrella term mixed reality (MR) [1]. In Milgram's taxonomy the current proof-of-concept prototype lies somewhere in the grey zone of augmented virtuality and partially within the definitions of creating an artificial AR environment, as introduced in other research [54]. This is mainly because interactions between VR and the real world are only implemented in a simplistic way. Interactions in our current MR prototype are possible through inertial sensors and predefined animations. We do not consider the real world within the virtual world yet. Therefore, we consider the current prototype to lean more towards the definitions of augmented virtuality. The technology that we are using falls under class 3 (HMDs equipped with a see-through capability, superimposed onto directly viewed real-world scenes) and class 5 (completely graphic display environments) of Milgram's virtuality continuum [1].

Some other aspects of AR taxonomies also apply to our prototype. The general definitions of AR do not match in the sense of tracking real-world objects and augmenting them with digital content, but according to other research our prototype is in the range of an 'augmented performance', which enables users to carry out tasks in the real world in a new way [54]. Furthermore, our prototype matches the Hugues taxonomy categorization of 'augmented perception of reality'. This "functionality consists of highlighting the fact that AR constitutes a tool for assisting decision-making" [54]. Decision making in our prototype is left to the actors and directors. We allow using the projected environment, and the possible feelings triggered through it, as a tool of artistic freedom and decision making.

For our future developments we think about applying another term from the Hugues taxonomy, 'imagine an impossible reality', which involves associating virtual content with a real environment [54]. This could allow placing impostor 'virtual windows' or 'virtual tables' into a real-world scenery. Therefore, we would add an AR environment to our current prototype. Nonetheless, this would leave the overall categorization of the prototype within mixed reality.

4.9.2 Hardware

The hardware, shown in figure 4.9, went through multiple redesigns. The final prototype is more comfortable to wear, does not occlude the user's vision, supports different mobile devices, and carries all components on a head strap.

Figure 4.9: Proof-of-concept hardware

4.9.3 Software

For the software of our proof-of-concept prototype, we selected the game engine Unity [55] instead of the Unreal engine, which we had used in previous prototypes. The application was meant to run on a smartphone instead of a server, and Unity offers the mobile device support we needed for the prototype. Changing to a mobile platform allowed us to decouple the system from a server. We used the mobile phone's integrated sensors to create a head tracking system that works without motion capture cameras. The coding of the walking algorithms and the use of the phone's sensors to allow for different views in the game engine was performed by a student within his Bachelor thesis project. The virtual environment was created in the game engine's editor. We used the engine's functionality to address the phone's sensors and to distribute the final app to the phone.
The general definitions of AR do not match in the sense of tracking real world ob- jects and augmenting them with digital content but according to other research is in the range of an ’augmented performance’ which enables users to carry out tasks in the real world in a new way [54]. Furthermore, our prototype matches with the Hugues taxonomy categoriza- tion of ’augmented perception of reality’. This ”functionality consists of high- lighting the fact that AR constitutes a tool for assisting decision-making” [54]. Figure 4.9: Proof-of-concept hardware Decision making in our prototype is left to the actors and directors. We allow using the projected environment and the possible feelings triggered through it as a tool of artistic freedom and decision making. 4.9.3 Software For our future developments we think about applying another term from For the software of our proof-of-concept prototype, we selected the game en- Hugues taxonomy, imagine an impossible reality which involves associating gine Unity [55] instead of the Unreal engine which we used in previous proto- a real environment [54]. This could allow to imposter ’virtual windows’ or types. The game engine was meant to run on a smartphone instead of a server ’virtual tables’ into a real world scenery. Therefore, we would add an AR and the Unity game engine offers the support of mobile devices we needed environment to our current prototype. Nonetheless this would leave the overall for the prototype. Changing to a mobile platform allowed us to decouple the categorization of the prototype with mixed reality. system from a server. We used the mobile phone integrated sensors to create a head tracking system without using motion capture cameras. The coding of 4.9.2 Hardware the walking algorithms and the use of the phone’s sensors to allow for differ- ent views in the game engine was performed by a student within his Bachelor The hardware, shown in figure 4.9, went through multiple redesigns. The final thesis project. 
The virtual environment was created in the game engine’s edi- prototype was more comfortable to wear, does not occlude the user’s vision, tor. We used the engine’s functionality to address the phone’s sensors and to supports different mobile devices, and carries all components on a head strap. distribute the final app to the phone. 48 Chapter 4. Approach, Prototype and Findings 4.9 Proof-of-Concept Prototype 49
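The head tracking mentioned above is built on the phone's inertial sensors rather than motion capture cameras. The thesis does not detail the algorithm; one common way to estimate head orientation from a gyroscope and an accelerometer is a complementary filter, sketched here in platform-neutral Python as an illustration only (the actual prototype scripts would live in Unity, and all sensor values and constants below are invented):

```python
import math

def complementary_filter(pitch, gyro_rate, accel, dt, alpha=0.98):
    """Fuse a gyroscope rate (rad/s) with accelerometer tilt to estimate pitch.

    The gyro integrates smoothly but drifts over time; the accelerometer gives
    an absolute (gravity-based) pitch but is noisy. Weighting the gyro path
    with a high `alpha` suppresses both drift and noise.
    """
    ax, ay, az = accel
    # Tilt implied by gravity direction in the accelerometer reading.
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Blend integrated gyro estimate with the absolute accelerometer estimate.
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

# Hypothetical 50 Hz sample stream: phone held level with a tiny gyro bias.
pitch = 0.0
for _ in range(100):
    pitch = complementary_filter(pitch, gyro_rate=0.001,
                                 accel=(0.0, 0.0, 9.81), dt=0.02)
print(pitch)
```

Because the accelerometer term anchors the estimate to gravity, the small gyro bias in this example does not accumulate into unbounded drift, which is the usual motivation for this kind of fusion on phone-grade sensors.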

4.9.4 Use of Reflective Materials in Optical Motion Capture?

During our research, we encountered an obvious issue of our design: the light reflections from the projector and the retro-reflective material affected the optical motion capture system. We tested different retro-reflective materials within a motion capture studio and found that all reflective foils were recognised by the motion capture system. In some situations the cameras were not functional for a short time, as the retro-reflective foil returned more light than the cameras could handle. As a test environment, we used a motion capture studio with 32 Eagle 4 cameras and the Cortex software from Motion Analysis [56]. The foils that we tested were the Reflexite VC310 [57] and the 3M 4090 [58].

A4-sized samples of each foil were picked up by the motion capture cameras in an equal manner. Moving an A4-sized piece of the foils towards one camera led to a shutdown at about 2 m distance to the camera. We placed a larger 9 m x 0.775 m piece of the Reflexite VC310 foil on a wall of the shoot floor, behind the cameras. Cameras on the other side of the shoot floor, at about 8 m distance from the foil, shut down immediately when we ran the motion capture software. Other cameras that were angled to pick up the reflections from the foil but located at the very end of the shoot floor were still affected by the returned light and sporadically shut down as well. Figure 4.10 depicts this and shows the basic setup of the test environment.

Figure 4.10: Setup of the test environment

We found two solutions to the above explained problem:

1) Masking the foil in the motion capture software, so that it will not be considered.
2) Applying a notch filter to the retro-reflective foil.

The first solution is only possible when the reflective materials are not large enough to affect the cameras and when the materials will not be moved. Otherwise, re-masking would be necessary. The second solution worked surprisingly well in our quick tests. To avoid the foil reflecting infrared light that the motion capture cameras could pick up, a screen has to have a notch filter which absorbs light of infrared wavelengths but allows visible light to pass. Initially, we tested a see-through plastic coating on top of the retro-reflective material. We observed that the see-through plastic coating seems to act as a notch filter for infrared light. Nonetheless, this finding needs extended empirical testing and development. So far, we did not conduct a further analysis of this phenomenon, as we think that a plastic coating on the foil would be more beneficial.


4.9.5 User Experiences

We have constantly designed and developed our concept and evaluated the user experience of our designs through 'reality checks'; see the method section 2.4. The main focus was set on the development of a prototype showing digital content that is usable in motion capture. Hence, we have not yet performed any quantitative usability evaluations with motion capture or trained actors. Our state-of-practice and technology research has shown that there is no off-the-shelf solution to deliver a fully usable prototype for motion capture.

After the problem definition phase, we experienced that discussing design ideas with actors or motion capture operators was a delicate task. Actors expressed their excitement to use new technology but at the same time kept a distance to this new idea. Experienced motion capture actors expressed that for them it is not necessary to use technology, because they are trained and experienced enough to do a good job. When talking to motion capture operators, a similar attitude was seen. Driving technology further and using it was seen as interesting here as well. However, the fear of interrupting their motion capture procedure triggered them to find as many weaknesses of the ideas as possible. Both reactions from the stakeholders were helpful in finding ideas on what can or should be done. Design concepts, through the use of video sketching, as well as the initial prototypes, were then further demonstrated and discussed with motion capture operators. During these non-formal demonstration sessions, discussions about feasibility and usability in motion capture have shown how to further direct the development of our prototype. They have also shown that users were more open to bringing in new ideas and accepting new technology in the motion capture procedure. Therefore, we chose the approach of using the experiences and earlier collected data from interviews and discussions to design and develop a functional prototype first, and of using non-professional users to improve the prototype. Another reason for this decision was that access to professional actors during the design and implementation phase was limited. A usability evaluation with professional actors and a workshop to refine the design and use of the technology are planned to be held after reaching a field-testable prototype.

User experiences, so far, have been collected with users that have no professional acting or motion capture experience. Most of the users are inexperienced with augmented and virtual reality head-worn gear too. This allowed us to get initial feedback on what does not feel right or feels unnatural to the users. For instance, from tests of previous models of our proof-of-concept head-gear, we found that it was not comfortable to wear; the 3D printed parts were annoyingly touching the forehead of a user and were noticeable while using the head-gear. The parts touching the forehead and occluding the field of vision, though very small, drew the attention of the users away from the projected scenery towards the slightly uncomfortable and occluding parts of the prototype. This was clearly an issue that limited the experience and the opportunity to allow for immersion or the feeling of presence. Unresponsive behaviour of the phone's sensors, which we used to create a natural walking algorithm, was another issue that the testers pointed out.

With our current proof-of-concept implementation, a basic functionality of the visual prototype was built that we see, from our explorations and research in the problem identification phase, as usable in motion capture. With the current prototype, a user can explore a mixed reality with a lightweight and low-cost head-worn setup that allows for head movements along the x, y and z axes and walk-in-place movements as means of navigation. It is clear that further improvements are needed to achieve even better results. The main problems that we want to address in future explorations are described in the future work section of this thesis.

We are also aware that a system such as the one we are exploring and have built in our proof-of-concept has potential to be used in application areas other than supporting actors with their work. Entertainment, gaming, sports, arts, training, medical or other professional fields could be thought of. As the prototype gains features, it would be interesting to explore other areas.
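The walk-in-place navigation relies on detecting steps in the phone's noisy accelerometer signal, which testers experienced as unresponsive. The thesis does not give the student's algorithm; a minimal sketch of one common approach (an exponential low-pass filter followed by rising-edge threshold detection) is shown below in Python, with all thresholds and sample data invented for illustration:

```python
def count_steps(samples, alpha=0.3, threshold=1.5):
    """Count steps as upward threshold crossings of a low-pass-filtered signal.

    `samples` is a sequence of raw vertical acceleration values (gravity
    removed). The low-pass filter suppresses high-frequency sensor noise so
    that a single step produces a single threshold crossing.
    """
    steps = 0
    smoothed = 0.0
    was_below = True
    for raw in samples:
        smoothed = alpha * raw + (1.0 - alpha) * smoothed  # low-pass filter
        if was_below and smoothed > threshold:
            steps += 1            # rising edge: count one step
            was_below = False
        elif smoothed < threshold:
            was_below = True      # re-arm once the signal falls back down
    return steps

# Two simulated step impulses separated by quiet phases.
signal = [0.0] * 10 + [6.0] * 5 + [0.0] * 10 + [6.0] * 5 + [0.0] * 10
print(count_steps(signal))  # two bursts -> 2 steps
```

The trade-off in such filtering is latency: a heavier filter (smaller `alpha`) rejects more noise but delays step detection, which may contribute to the perceived unresponsiveness noted by the testers.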

Chapter 5

Conclusions

5.1 Conclusion

With this thesis, the goal was to build a basis to explore solutions towards supporting motion capture actors in their work of providing realistic and believable motions, by immersing the actors into their digital acting environment. To reach this goal, we first performed a background investigation on the needs and demands on motion capture actors. Furthermore, the challenges motion capture actors are facing have been stated. A literature research to look for available research and hardware helping to explore ways towards a solution was then performed. The investigations around the problem identification phase and to define personas, as well as the requirements towards a system solution, have been rounded up by an ethical investigation on possible issues when introducing virtual realities as work environments in motion capture. Through exploring different prototypes that we built, we altered our design ideas and understanding of the issues to overcome. This led to a prototype that was described in the proof-of-concept section and will serve as a basis for further tests and developments. The proof-of-concept implementation shows that we were able to provide a prototype that has the potential to be used in motion capture and possibly in other applications as well.

We believe that through research in the problem identification phase, through the design ideas and the built prototypes, we created a basis to explore ways of supporting motion capture actors and to explore an immersive motion capture environment. The proof-of-concept implementation also shows that providing a mixed reality environment that actors are supposed to act in is possible in real-time and while complying with the motion capture environment. This is even achieved through a lightweight and low-cost head-worn system that does not need any cable connection to a stationary computing unit, which might limit the actors' freedom of movement. Furthermore, the head-worn projector system does not cover the actors' field of view. The symptoms of nausea or motion sickness are believed to be less present than in other VR and AR devices. This lies in the setup of the prototype, as no shutter glasses or glasses fully covering the field of view are used. Moreover, only one projector is used in our prototype. Therefore, no combination of two images is needed. With the limited amount of tests that we performed so far, no signs of nausea or motion sickness have been experienced. For sure, this needs to be further investigated and tested to make a clearer statement.

The findings we presented in this thesis and the created proof-of-concept prototype founded a basis for further research and explorations, as well as an understanding of which requirements are of importance when implementing new technologies in a motion capture environment. With the research and the experiences provided in this thesis, we have also provided a basis and guideline to support the creation of further applications that are planned to be used in motion capture acting or similar environments.

5.2 Future Work

Our proof-of-concept implementation shows potential to be used in motion capture and possibly in other applications, but we already see some improvements that we believe will increase the experience and usability of the system. One is to provide a wider field of view through the attachment of a lens in front of the laser projector. Through the attachment of a lens, we hope to achieve a projection that covers a higher field of view. Immersion of actors into the projected digital environment is believed to be stronger through a higher field of view, but we also intend to extend our research to explore immersion and presence with our prototype.

Another issue that we want to improve is to provide a better way to simulate natural walking in 3D space. In our current proof-of-concept prototype, we use the inertial sensors of a mobile phone. We encountered, as others before us [59], that the accelerometer sensor of the mobile phone delivers data that is too noisy to be used for accurate measurements.

After some improvements of the proof-of-concept prototype, we will set the focus on researching the use of the technology and the responses of users to the technology. This also means evaluating the proof-of-concept implementation in a motion capture studio with professional motion capture actors. These user experience tests and evaluations will help to further investigate our concept and our efforts towards a more immersive motion capture environment.

Bibliography

[1] Paul Milgram and Fumio Kishino. A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems, 77(12):1321–1329, 1994.

[2] Richard Wages, Stefan M Grünvogel, and Benno Grützmacher. How realistic is realism? Considerations on the aesthetics of computer games. In Entertainment Computing–ICEC 2004, pages 216–225. Springer, 2004.

[3] Daniel Kade, Oguzhan Özcan, and Rikard Lindell. Towards Stanislavski-based principles for motion capture acting in animation and computer games. In Proceedings of CONFIA 2013, International Conference in Illustration and Animation, pages 277–292. IPCA, 2013.

[4] University of Southern California. Flatworld project. http://ict.usc.edu/projects/flatworld (last accessed: April 24, 2014).

[5] Hakan Urey, Kishore V Chellappan, Erdem Erden, and Phil Surman. State of the art in stereoscopic and autostereoscopic displays. Proceedings of the IEEE, 99(4):540–555, 2011.

[6] Jason Tauscher, Wyatt O Davis, Dean Brown, Matt Ellis, Yunfei Ma, Michael E Sherwood, David Bowman, Mark P Helsel, Sung Lee, and John Wyatt Coy. Evolution of MEMS scanning mirrors for laser projection in compact consumer electronics. In MOEMS-MEMS, pages 75940A–75940A. International Society for Optics and Photonics, 2010.

[7] Anthousis Andreadis, Alexander Hemery, Andronikos Antonakakis, Gabriel Gourdoglou, Pavlos Mauridis, Dimitrios Christopoulos, and John N Karigiannis. Real-time motion capture technology on a live theatrical performance with computer generated scenery. In Informatics (PCI), 2010 14th Panhellenic Conference on, pages 148–152. IEEE, 2010.

[8] John Gruzelier, Atsuko Inoue, Roger Smart, Anthony Steed, and Tony Steffert. Acting performance and flow state enhanced with sensory-motor rhythm neurofeedback comparing ecologically valid immersive VR and training screen scenarios. Neuroscience Letters, 480(2):112–116, 2010.

[9] Mel Slater, J Howell, A Steed, David-Paul Pertaub, and Maia Garau. Acting in virtual reality. In Proceedings of the Third International Conference on Collaborative Virtual Environments, pages 103–110. ACM, 2000.

[10] Susan Davis. Interactive drama using cyberspaces. Drama Education with Digital Technology, page 149, 2009.

[11] Rebecca Wotzko and John Carroll. Digital theatre and online narrative. Drama Education with Digital Technology, page 168, 2009.

[12] Janet Horowitz Murray. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Simon and Schuster, 1997.

[13] Steve Dixon. Digital Performance: A History of New Media in Theater, Dance, Performance Art, and Installation. MIT Press (MA), 2007.

[14] Joe Geigel and Marla Schweppe. Theatrical storytelling in a virtual space. In Proceedings of the 1st ACM Workshop on Story Representation, Mechanism and Context, pages 39–46. ACM, 2004.

[15] Mel Slater and Sylvia Wilbur. A framework for immersive virtual environments (FIVE): Speculations on the role of presence in virtual environments. Presence: Teleoperators and Virtual Environments, 6(6):603–616, 1997.

[16] R.M. Held and N.I. Durlach. Telepresence. Presence: Teleoperators and Virtual Environments, 1:109–112, 1992.

[17] Mel Slater and Martin Usoh. Body centred interaction in immersive virtual environments. Artificial Life and Virtual Reality, 1, 1994.

[18] Victor Brian Zordan and Jessica K Hodgins. Motion capture-driven simulations that hit and react. In Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 89–96. ACM, 2002.

[19] Ozan Cakmakci, Yonggang Ha, and Jannick P Rolland. A compact optical see-through head-worn display with occlusion support. In Proceedings of the 3rd IEEE/ACM International Symposium on Mixed and Augmented Reality, pages 16–25. IEEE Computer Society, 2004.

[20] Li Li, Bernard D Adelstein, and Stephen R Ellis. Perception of image motion during head movement. ACM Transactions on Applied Perception (TAP), 6(1):5, 2009.

[21] J Adam Jones, J Edward Swan II, Gurjot Singh, Eric Kolstad, and Stephen R Ellis. The effects of virtual reality, augmented reality, and motion parallax on egocentric depth perception. In Proceedings of the 5th Symposium on Applied Perception in Graphics and Visualization, pages 9–14. ACM, 2008.

[22] Arda D Yalcinkaya, Hakan Urey, Dean Brown, Tom Montague, and Randy Sprague. Two-axis electromagnetic microscanner for high resolution displays. Journal of Microelectromechanical Systems, 15(4):786–794, 2006.

[23] Kishore V Chellappan, Erdem Erden, and Hakan Urey. Laser-based displays: a review. Applied Optics, 49(25):F79–F98, 2010.

[24] Bob G Witmer and Michael J Singer. Measuring presence in virtual environments: A presence questionnaire. Presence: Teleoperators and Virtual Environments, 7(3):225–240, 1998.

[25] Hua Qin, Pei-Luen Patrick Rau, and Gavriel Salvendy. Measuring player immersion in the computer game narrative. International Journal of Human–Computer Interaction, 25(2):107–133, 2009.

[26] Charlene Jennett, Anna L Cox, Paul Cairns, Samira Dhoparee, Andrew Epps, Tim Tijs, and Alison Walton. Measuring and defining the experience of immersion in games. International Journal of Human-Computer Studies, 66(9):641–661, 2008.

[27] Joseph L Gabbard, Deborah Hix, and J Edward Swan. User-centered design and evaluation of virtual environments. IEEE Computer Graphics and Applications, 19(6):51–59, 1999.

[28] John P Chin, Virginia A Diehl, and Kent L Norman. Development of an instrument measuring user satisfaction of the human-computer interface.
Measuring player ronments (five): Speculations on the role of presence in virtual environ- immersion in the computer game narrative. Intl. Journal of Human– ments. Presence: Teleoperators and virtual environments, 6(6):603–616, Computer Interaction, 25(2):107–133, 2009. 1997. [26] Charlene Jennett, Anna L Cox, Paul Cairns, Samira Dhoparee, Andrew [16] R.M. Held and N.I. Durlach. Telepresence, presence: Teleoperators and Epps, Tim Tijs, and Alison Walton. Measuring and defining the experi- virtual environments. MIT Press, 1:109–112, 1992. ence of immersion in games. International journal of human-computer studies, 66(9):641–661, 2008. [17] Mel Slater and Martin Usoh. Body centred interaction in immersive vir- tual environments. Artificial life and virtual reality, 1, 1994. [27] Joseph L Gabbard, Deborah Hix, and J Edward Swan. User-centered design and evaluation of virtual environments. Computer Graphics and [18] Victor Brian Zordan and Jessica K Hodgins. Motion capture-driven Applications, IEEE, 19(6):51–59, 1999. simulations that hit and react. In Proceedings of the 2002 ACM SIG- GRAPH/Eurographics symposium on Computer animation, pages 89–96. [28] John P Chin, Virginia A Diehl, and Kent L Norman. Development of an ACM, 2002. instrument measuring user satisfaction of the human-computer interface. 60 Bibliography Bibliography 61

In Proceedings of the SIGCHI conference on Human factors in computing [39] Walter F Tichy. Should computer scientists experiment more? Computer, systems, pages 213–218. ACM, 1988. 31(5):32–40, 1998.

[29] Pieter Desmet. Measuring emotion: Development and application of an [40] Klaus Krippendorff. The semantic turn: A new foundation for design. crc instrument to measure emotional responses to products. In Funology, Press, 2005. pages 111–123. Springer, 2005. [41] OculusVR. Next-gen virtual reality, 2014. http://www.oculusvr.com/rift (last accessed: April 24, 2014). [30] Sten Jonsson¨ and Kari Lukka. Doing interventionist research in manage- ment accounting. Technical report, University of Gothenburg, Gothen- [42] Vuzix. Vuzix, 2014. http://www.vuzix.com/ (last accessed: April 24, burg Research Institute GRI, 2005. 2014). [31] Gordana Dodig Crnkovic. Constructive research and info-computational [43] EC Regan and KR Price. The frequency of occurrence and severity of knowledge generation. In Model-Based Reasoning in Science and Tech- side-effects of immersion virtual reality. Aviation, Space, and Environ- nology, pages 359–380. Springer, 2010. mental Medicine, 1994.

[32] Jonas Lowgren.¨ Applying design methodology to software development. [44] Susan Bruck and Paul Andrew Watters. Estimating cybersickness of sim- In Proceedings of the 1st conference on Designing interactive systems: ulated motion using the simulator sickness questionnaire (ssq): A con- processes, practices, methods, & techniques, pages 87–95. ACM, 1995. trolled study. In Computer Graphics, Imaging and Visualization, 2009. CGIV’09. Sixth International Conference on, pages 486–488. IEEE, [33] Rikard Lindell. Crafting interaction: The epistemology of modern pro- 2009. gramming. Personal and Ubiquitous Computing, pages 1–12, 2013. [45] Sue VG Cobb, Sarah Nichols, Amanda Ramsey, and John R Wilson. Vir- [34] Kristina Ho¨ok¨ and Jonas Lowgren.¨ Strong concepts: Intermediate- tual reality-induced symptoms and effects (vrise). Presence: teleopera- level knowledge in interaction design research. ACM Transactions on tors and virtual environments, 8(2):169–186, 1999. Computer-Human Interaction (TOCHI), 19(3):23, 2012. [46] Rahul Sukthankar. Towards ambient projection for intelligent environ- ments. In Computer Vision for Interactive and Intelligent Environment, [35] John Zimmerman, Jodi Forlizzi, and Shelley Evenson. Research through 2005, pages 162–172. IEEE, 2005. design as a method for interaction design research in hci. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages [47] Technical Illusions. Castar. http://technicalillusions.com/castar/, 2014. 493–502. ACM, 2007. [48] Midori Kitagawa and Brian Windsor. MoCap for artists: workflow and [36] Barbara B Kawulich. Participant observation as a data collection method. techniques for motion capture. Taylor & Francis US, 2008. In Forum Qualitative Sozialforschung/Forum: Qualitative Social Re- [49] Claudio Redavid. Virtual test environment for motion capture shoots, search, volume 6, 2005. 2012. [37] John Zimmerman. Video sketches: Exploring pervasive computing inter- [50] Unreal Engine. 
What is unreal engine 4?, 2014. action designs. Pervasive Computing, IEEE, 4(4):91–94, 2005. http://www.unrealengine.com/ (last accessed: October 5, 2014). [38] Jonas Lowgren.¨ Interaction design, research practices, and design re- [51] Unreal Engine. Unreal development kit, 2014. search on the digital materials. Available at webzone.k3.mah.se/k3jolo. https://www.unrealengine.com/products/udk/ (last accessed: Octo- Accessed May, 2007. ber 5, 2014). 60 Bibliography Bibliography 61

[52] Microsoft MSDN. Kinect sensor, 2012. http://msdn.microsoft.com/en-us/library/hh438998.aspx (last accessed: April 29, 2014).

[53] OML. Koç University Optical Microsystems Laboratory, 2014. http://mems.ku.edu.tr/ (last accessed: October 5, 2014).

[54] Olivier Hugues, Philippe Fuchs, and Olivier Nannipieri. New augmented reality taxonomy: Technologies and features of augmented environment. In Handbook of Augmented Reality, pages 47–63. Springer, 2011.

[55] Unity. Unity game engine, 2014. http://unity3d.com/ (last accessed: October 5, 2014).

[56] Motion Analysis. Cortex, 2014. http://www.motionanalysis.com/html/industrial/cortex.html (last accessed: October 5, 2014).

[57] Reflexite. Reflexite VC 310 Durabright, 2014. http://www.orafol.com/rs/europe/en/products/reflective-livery-films-product-details/items/reflexite-vc-310-durabright (last accessed: October 5, 2014).

[58] 3M. 3M Diamond Grade reflective sheeting 4090, 2014. http://solutions.3m.com (last accessed: October 5, 2014).

[59] Eladio Martin, Oriol Vinyals, Gerald Friedland, and Ruzena Bajcsy. Precise indoor localization using smart phones. In Proceedings of the International Conference on Multimedia, pages 787–790. ACM, 2010.

II

Included Papers
