
Architecture and implementation of the system for serious games in 3D

Master's Thesis

Aleksei Penzentcev

Brno, May 2015

Declaration

Hereby I declare that this thesis is my original authorial work, which I have worked out on my own. All sources, references and literature used or excerpted during its elaboration are properly cited and listed in complete reference to the due source.

Aleksei Penzentcev

Advisor: RNDr. Barbora Kozliková, Ph.D.


Acknowledgement

I would like to thank my supervisor RNDr. Barbora Kozliková, Ph.D. for the opportunity to participate in the project of the Department of Computer Graphics and Design, and for her consultations and comments during the work on this thesis.

I would also like to thank Mgr. Jiří Chmelík, Ph.D. for the technical support and for leading the parent project of my thesis, called Newron1, and all the people who participated in this project for the inspiring communication and their contribution to it.

1 http://www.newron.cz

Abstract

This thesis describes a way of designing the architecture for a serious game in the Unity 3D environment and the implementation of interaction with the game via the Kinect device. The architecture is designed using an event-based system, which makes it easier to add new components into the game, including support for other devices. The thesis describes the main approaches used for the creation of such a system. The mini-game “Tower of Hanoi” is implemented as an example of the usage of the event-based system.


Keywords

Unity 3D, game architecture, events, C#, game, game engines, Kinect, Tower of Hanoi, gesture recognition, patterns, singleton, observer, SortedList.


Contents

1 Introduction
2 Overview
2.1 Gathering of requirements
2.2 Total list of requirements
2.3 Planned system overview
3 Choosing of a game engine
3.1 Why Unity?
3.2 CryEngine (4th generation)
3.2.1 License
3.2.2 Features
3.3 Unreal Engine 4
3.3.1 Features
3.3.2 License
3.4 Ogre
3.4.1 Features
3.4.2 License
3.5 Unity 3d
3.5.1 License
3.5.2 Features
3.6 Project Anarchy
3.6.1 Features
3.6.2 License
3.7 Comparison of game engines
4 Devices
4.1 Kinect device
4.2 Other devices
4.2.1 Mouse/keyboard
4.2.2 Leap Motion
4.2.3 Touchscreen
4.2.4 Oculus Rift
5 General Unity
5.1 Common elements
5.1.1 MonoBehaviour
5.1.2 GameObject
5.1.3 Scripts
5.1.4 Prefabs
5.1.5 Unity Resources
5.1.6 Unity Scenes
5.1.7 Unity Packages
5.2 Unity Lifecycle
5.2.1 First scene load
5.2.2 Before first frame update
5.2.3 Update order
5.2.4 When quitting
5.2.5 When object is destroyed
5.3 Patterns implemented in Unity
5.4 Unity as a component-based engine
6 Game architecture
6.1 Architecture layers
6.1.1 The device driver level
6.1.2 The wrappers level
6.1.3 The common interface level
6.1.4 The game core
6.1.5 The mini-game level
6.2 Design patterns
6.2.1 Wrapper
6.2.2 Singleton
6.2.3 Observer
6.3 Event-based system
6.4 Event-based system implementation
6.4.1 CommandMonoBehaviour
6.4.2 PendingCommand
6.4.3 GameEvent
6.4.4 GameEventArgs
6.5 Basic classes
6.5.1 MonoBehaviourBase
6.5.2 Singleton
6.5.3 MouseControl
7 Kinect Interaction
7.1 Assets used
7.1.1 Kinect with MS-SDK
7.1.2 KinectExtras with MsSDK
7.2 Kinect Manager
7.3 Interaction Manager
7.4 Gesture Listener
7.5 Prefabs
8 Game Implementation
8.1 Basic scene
8.2 Mini-game example
8.2.1 The Tower of Hanoi description
8.2.2 Simple State Machine for The Tower of Hanoi
8.2.3 Simple State Machine in Hanoi Towers
8.3 Logging
9 Conclusion
10 Bibliography
11 Appendix I. Supported gestures
12 Appendix II. Prefabs settings

1 Introduction

The game industry is growing very fast now. The games market revenue in 2013 was $93 billion, with a prediction to reach $111 billion in 2015, according to Gartner research [1]. This fact gives rise to a growing interest in the game industry from students, indie developers and big companies. Game elements are coming into everyday life: more and more activities are using game thinking and game mechanics. Gamification is a fashionable trend, and an increase in productivity has been noticed in many areas after its implementation [2].

One of the areas where gamification can be useful is the rehabilitation of people after an injury and helping children with neurodevelopmental disorders (for example, autism). The game can motivate the first group of people to exercise their limbs, and it can help children with autism acquire the social interaction and communication skills with which they have problems. The design of such games is a task for psychologists. This thesis is focused on developing a game architecture which will allow implementing such games faster and more easily.

One of the tasks during rehabilitation is to ensure the required level of activity. Therefore, people should not play by using a mouse or keyboard, but by moving. Special devices will track their movements and gestures and translate them into manipulations in the game. One of the devices with such a capability is Kinect. It can track a person's position and performed gestures. This topic is described in more detail in Subchapter 4.1. Chapter 7 describes the way Kinect interaction is implemented in the Unity 3d environment used in the thesis. Using Kinect requires the person to move in order to play the game, which increases their level of mobility.

The idea of developing such a game has been suggested by psychologists from the Faculty of Psychology. The game should eventually be a set of different mini-games. Each mini-game will solve one rehabilitation task and activate specific areas of the brain during play. The goal of the thesis is to develop the architecture for a system which will allow adding more and more new mini-games in the future. This task can be solved by decreasing the coupling between the used components, which can be achieved by implementing an event-based system, as described in Subchapters 6.3 and 6.4. Chapter 5 is focused on the structure of and the way of working with Unity 3d, since it has been chosen as the game engine (the reasons for this, and the other considered alternatives, are described in Chapter 3).

The same event-based system is used to facilitate the possibility of adding support for new devices and their integration into the game. The structure for this is described in Subchapter 6.1.

Chapter 8 is focused on the common elements created for the game system and describes an example of a mini-game implementation using the developed event-based system. Implementing script interaction via events makes the logging system independent from the specific game implementation and allows different developers to create different loggers, which can work in parallel in the same game and log data differently. Developers need to know only the common interface for this. The logging is described in Subchapter 8.3.

Chapter 2 gives an overview of the project requirements, which are the basis for the decisions made during the work on the thesis.


2 Overview

2.1 Gathering of requirements

As was mentioned in the Introduction, the idea of the project was given by psychologists from the Faculty of Psychology. They can be considered the “product owner” in terms of Scrum [3]. They can be asked in case of any questions about the implementation of a concrete game or about new ideas for game creation. They evaluate the final result and whether it fits their needs. Therefore, it is important to properly understand their needs. Several meetings with representatives of the Faculty of Psychology have been organized for this purpose.

All requirements for the complete project can be divided into two parts: requirements for the whole system and requirements for concrete games. In this thesis we are interested in the first part, in order to design a proper architecture and choose the right technology to implement it.

2.2 Total list of requirements

The total list of requirements has been formed based on the meetings mentioned in Subchapter 2.1.

• The project is non-commercial;
• The project is open source;
• The target platform is only the personal computer (PC);
• The project should be scalable;
• Adding new mini-games should be easy and quick;
• There should be a possibility to develop each mini-game separately;
• The system design should allow adding new devices for interaction in the future;
• The main interaction device is Kinect. All other devices are optional;
• The main target group is children 4-7 years old, but there can also be others, including adults.


2.3 Planned system overview

General requirements for the whole project have already been described in this chapter. From them, the requirements for the application on the architecture level can be derived.

The way the application is going to be extended in the future requires having a standardized, independent interface to any component of the system. Independence in this context can be understood as the possibility to use a component in a separate game while planning to integrate it into the bigger project in the future, without much editing of the project code.

The principal scheme of the whole planned system is shown in Figure 2.1. This thesis is focused on the left square there, the PC application. But an understanding of the whole picture is needed for planning the whole application structure.

Figure 2.1: Simple scheme of the whole system


3 Choosing of a game engine

The list of gathered initial requirements gives us an overview of the features expected from the graphical and other subsystems of the game (physics, audio, artificial intelligence (AI) subsystems). As the point of the thesis and the project itself is not a reimplementation of all the subsystems a game usually has, a good choice is to use some already existing solution. For the purpose of making the right choice, the following list of special requirements was created.

The chosen engine should:

1. Have a low entry level;
2. Be able to produce a rendered image of reasonable quality;
3. Be able to run on a regular PC;
4. Be extendible;
5. Be available for a reasonable price/free license for educational/research purposes;
6. Have a community formed around it;
7. Support various modern features, e.g. dynamic water, real-time shadows.

These requirements led to the selection of the Unity game engine. There is a survey behind this choice, evaluating the possibilities and perspectives with respect to other game engines and combinations of engines. The results and conclusions about the features of each considered product are listed below.

3.1 Why Unity?

There are a lot of game and graphical engines on the market now, such as CryEngine, Unreal Development Kit (UDK), Unreal Engine itself, Unity 3d, Project Anarchy (by Havok), Ogre, etc. Unity 3d is one of the most popular game engines. In this chapter we will shortly describe why Unity was chosen for our project and what the other possibilities were.

Our list of possible engines also contains some “professional” game engines, or, better to say, engines for AAA-class games. This is a big advantage for them and, at the same time, a big disadvantage for our project: it gives us a lot of possibilities, but requires better computers to run the game. Let us shortly go through all of these engines and discuss their strong points and weaknesses as well as their license restrictions.

3.2 CryEngine (4th generation)

CryEngine2 provides one of the best rendering cores on the game market. Several well-known titles, such as Far Cry or Crysis, were developed with the previous versions of this game engine. The game Ryse: Son of Rome was the first product which used the new version, number 4.

CryEngine supports Microsoft Windows, Xbox 360, PlayStation 3, PlayStation 4, Wii U, Xbox One, Linux, iOS and Android. Games on this engine usually have very high requirements, but the standard of quality is on the same level.

CryEngine does not have as big a community of developers and users as its competitors. Most of its users are professionals in game development. The main development language is C++; Lua is used for scripting.

3.2.1 License

Since the first release of the 4th generation of CryEngine in 2013, it has been available for free for educational and non-commercial projects. Since May 2014, it is also available for commercial usage by indie developers for $9.90 per user per month with no royalty fee.

3.2.2 Features

CryEngine provides a number of advanced features in computer graphics, such as parametric skeletal animation, real-time dynamic global illumination, high quality 3D water rendering, dynamic soft shadows and many other features.

2 http://cryengine.com

The screenshot in Figure 3.1 demonstrates the quality of soft shadows rendered by CryEngine. The price for such quality is high system requirements. Games developed with CryEngine usually require a very powerful computer to run with high detail.

Figure 3.1: Soft shadows in the game Kingdom Come: Deliverance3

3.3 Unreal Engine 4

Unreal Engine 4 (UE4)4 from the Epic Games company was introduced in March 2014. It is an AAA game engine containing a lot of possibilities. It supports Microsoft Windows, Linux, Mac OS X, Xbox One, PlayStation 4, HTML5, iOS and Android.

It supports the C++ programming language. One of the main features of UE4 is Blueprint: a graphical interface for game programming through visual blocks, without writing code. UE4 has quite high system requirements for development. It is recommended to have a 64-bit operating system with 8 GB RAM, a quad-core processor and a DX11 compatible video card.

3 https://www.kingdomcomerpg.com/media
4 https://www.unrealengine.com

In September 2014, an educational license for universities and their students was introduced. It is free, and as soon as students want to sell products developed with UE4, they need to buy the full license.

3.3.1 Features

Unreal Engine 4 also supports DirectX 11 rendering features, including full-scene HDR reflections, thousands of dynamic lights per scene, artist-programmable tessellation and displacement, temporal anti-aliasing and IES lighting profiles.

Figure 3.2 shows a render from a demo project for Unreal Engine created to demonstrate realistic rendering.

Figure 3.2: Realistic rendering on Unreal Engine 45

3.3.2 License

UE4 is available for $19 per month plus 5% of gross revenue resulting from any commercial products built using UE4. The principle here is that you can buy a license for just 1 month, cancel it later, and still use the downloaded version legally, but without any updates. Epic Games canceled the monthly payments from March 2015 and left only the 5% royalties.

5 https://docs.unrealengine.com/latest/INT/Resources/Showcases/RealisticRendering/index.html

3.4 Ogre

Ogre6 is an open source, purely graphical engine. To develop a game using Ogre, we would need to add additional physics, sound and scripting engines.

3.4.1 Features

There is a bunch of features supported by Ogre [4]; here we will mention only some of them. The overlay system allows building menus using 2D or 3D objects; there is flexible fog control and there are post-processing effects. Ogre supports skeletal animation, including blending of multiple animations and variable bone weight skinning. There is a possibility to use progressive meshes (LOD).

Figure 3.3 and Figure 3.4 show what can be achieved with the Ogre engine by a skilled developer.

Figure 3.3: Screenshot from the game Zombie Driver Complete7

6 http://www.ogre3d.org
7 http://store.steampowered.com/app/220820

Figure 3.4: Screenshot from the game InSomnia8

It is a powerful engine using C++ as the development language.

3.4.2 License

Ogre is an open-source, free-to-use graphical engine. Since version 1.7.0 (2010) it uses the MIT License, which allows any usage for any purpose (including commercial) and making any modifications in the code without paying any fees or royalties.

3.5 Unity 3d

Unity9 is one of the most popular game engines. In the middle of 2013 it was used by more than two million developers worldwide. The latest version of the engine is Unity 5, released in March 2015. Unity 3d has the biggest list of supported platforms compared to other game engines [5]. It also has a web player plugin which allows running the game in the browser. Unity has some of the lowest system requirements for developers compared to its competitors: it can run on Windows XP SP2 with a graphics card made since 2004.

8 https://www.kickstarter.com/projects/1892480689/insomnia-an-rpg-set-in-a-brutal-dieselpunk-univers/description
9 https://unity3d.com

Compared to other engines, Unity allows using several languages to program games. For development, Unity supports:

 C#  JavaScript  Boo

All mentioned languages can be combined in one scene; even half of the scripts of one object can be written in one language while the other part uses a different one.

3.5.1 License

Unity uses two main types of licenses: free and Pro. The free version is limited in supported features. Also, in the case of more than $100,000 of income, the Pro version must be purchased.

3.5.2 Features

Most of the advanced features in Unity 4 were available only in the Pro version. Since Unity 5 was released, all graphical features are available also in the free version of the engine, but the Pro version additionally includes a profiler to measure performance and features for teamwork.

Features missing in the free version of Unity 4 include LOD Groups, soft shadows and render-to-texture effects. Creation of games using AI in a changing environment is harder in the free version of Unity 4 [6] because it does not provide support for dynamic navigation mesh generation.

Figure 3.5 demonstrates an example of what can be achieved using the Pro version:


Figure 3.5: Shadows in the game Raindrop10

3.6 Project Anarchy

Project Anarchy11 is a young competitor of the Unity engine. The development is supported only in C++; Lua is used as a scripting language. At first it was unavailable for the PC platform; now it costs around $500 to be able to run the games on the PC.

Figure 3.6 shows a stylized demo game made with Project Anarchy.

10 https://www.kickstarter.com/projects/1847847657/raindrop-0
11 https://www.projectanarchy.com

Figure 3.6: Scene from the demo game for the Project Anarchy engine

3.6.1 Features

Engine features common to all versions: physics engine, AI with high-speed navigation mesh generation and path following, render-to-texture effects, particle system, real-time shadows, low-level rendering access. Some of these features, such as navigation mesh generation and render-to-texture, are available in Unity 4 only in the Pro version.

The Vision core, physics and AI SDK are provided only as header files with no access to the source code, but the mobile shaders are provided with full code, as well as samples and tutorials.

3.6.2 License

Project Anarchy can be used for free with no royalty. Although mobile platforms are the main target for this engine, there is a possibility to port projects to the PC as well. This feature, called “PC Exporter”, costs $499 and allows porting a created project to run in the Win32 environment.

As for the free functionality, Project Anarchy could be evaluated as a better choice than, for example, Unity. But to release the game on the PC platform, the PC Exporter needs to be bought. It is almost $1000 cheaper than the Unity Pro license, but still requires some expenses.

3.7 Comparison of game engines

Table 3.1 shows the key cumulative factors which influenced the selection of an engine for the project. Big cons are marked as red, better pros are marked as green. Orange highlights acceptable points where a bigger effort will be required to deal with them.

| | CryEngine | Unreal Engine 4 | Ogre | Unity3d | Project Anarchy |
| Entry level | Very high | Middle | High | Low | Middle |
| Language | C++/Lua | C++/UnrealScript | C++ | C#, JavaScript, Boo | C++/Lua |
| Built-in AI system | Yes | Yes | No | No (in free version) | Yes |
| Community | Little | Big | Little | Very big | Medium |
| Price | $9.90/month | $19.90/month (free from March 2015) + 5% royalty | Free | Free; $1500 or $75/month (for Pro) | Free ($500 for PC exporter) |
| Requirements for PC | High | High | Low | Medium | Medium |
| Graphic quality | Very high | Very high | Medium | Medium | Medium |

Table 3.1: Comparison of game engines

Based on this table it is visible that Unity has no red squares, but has three green and three orange ones. The other competitors contain red points or fewer green ones.


4 Devices

4.1 Kinect device

Kinect is a motion sensing input device by Microsoft for the Xbox 360 and Xbox One video game consoles and for Windows PCs. Based around a webcam-style add-on peripheral, it enables users to control and interact with their console/computer without the need for any game controller, through a natural user interface using gestures and spoken commands.

Currently, two versions of Kinect are available. Microsoft launched Kinect version two for Xbox and Windows in 2014. The second version of this device is more powerful, has better recognition quality and is able to track the full skeleton of up to six people at the same time. In this thesis only Kinect version one is used, because only this version was available for development at the time.

Kinect v1 is able to track the full skeleton of two persons and the presence of six people in one frame. It has an RGB camera and it can measure depth as well. The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures video data in 3D under any ambient light conditions. The device has physical limitations for its working area, which will be described later in this chapter. It is very important to know them before the beginning of the development, at the planning stage; the Kinect sensor can be very sensitive, so planning a proper way to interact with users is necessary in order to provide them with a better user experience.

Kinect can work in the “near (seated) mode” and the “default mode” for depth ranges. In the near mode a part of the skeleton might be cut off, but it is still acceptable for the “seated mode”, where only the upper part of the skeleton is tracked. The main difference between the near and the default mode is the distance at which Kinect can recognize gestures. For example, the best range (“sweet spot”) for the near mode is from 0.8m to 2.5m. The physical limitation is a bit different: in the range from 0.4m to 3m the device cannot guarantee the quality of recognition, and the skeleton can even be invisible to it (in the case of the closest range). The ranges for the individual modes can be seen in Table 4.1.

| Characteristic | From | To |
| “Near mode” sweet spot | 0.8m | 2.5m |
| “Near mode” physical limits | 0.4m | 3m |
| “Default mode” sweet spot | 1.2m | 3.5m |
| “Default mode” physical limits | 0.8m | 4m |

Table 4.1: Ranges for individual Kinect modes

Another important characteristic of Kinect is the angle of view (field of view). In the horizontal direction the device can track in 57.5 degrees. In the vertical direction it is 43.5 degrees, but with a +/- 27 degree tilt range up and down.

The mentioned limitations should be taken into account and provided to the end users as a recommendation to be sure they have enough space for playing a certain game.

Kinect also contains a microphone and is able to recognize the direction of incoming sound. It can point at 10-degree increments within a 100-degree range. There is a sound threshold for the Kinect microphone at the level of a whisper (about 20 decibels), which prevents it from trying to recognize noise. So there should not be any need during the interaction for the user to say commands more quietly than that. This also brings an obvious limitation on the level of voice recognition which can be achieved with Kinect in a given environment.

The limitations strongly influence the requirements on the user interface and, as a result, on the game design itself. An interface element should be big enough not only to be visible for the end user standing 3 meters from the screen, but also to allow the user to interact with this element properly via gestures [7]. Gesture recognition in Kinect v1 is still far from being perfect. Especially in the case of mouse imitation, the natural human tremor should be taken into account. It will be very hard for a person to point at some small element on the screen. In the Human Interface Guidelines from Microsoft [8], the authors even mention that it is not recommended to simply map gestures to a touch interface. It is better to use special gestures instead.

In the Human Interface Guidelines from Microsoft [8], many more points can be found to consider in game design when using Kinect and to plan the proper way of user interaction to gain a better user experience. In this part only the basic principles were mentioned, which will lead to issues and deviations in cases when they are not solved or treated on the architecture level.

4.2 Other devices

One of the goals of this project on the architectural level is to have the possibility to easily add interaction with other devices into games, both already existing devices and new ones. Several existing devices with already known features should be planned for the integration first. Among these devices belong the following.

4.2.1 Mouse/keyboard

There is a possibility to create a game which would interact only with Kinect and ignore these classical devices. But users are used to them and can feel uncomfortable trying to use only Kinect for simple operations via gestures they have never performed.

A mouse and a keyboard are classical devices. Even though Kinect was determined to be the main device for the project, there should be the possibility to use them. This is important also because the mouse and keyboard are the main devices on the PC platform.


The interaction via mouse or keyboard is very precise. The user can point exactly at a single pixel when needed. The keyboard has the same ability: if a key is pressed, the message comes to the controller immediately and the program knows the key code belonging to the given key. In this thesis the mouse and the keyboard are used for testing purposes in case of Kinect unavailability, and as a main device. Therefore, there is always a way to interact with games via these two devices.

4.2.2 Leap Motion

Figure 4.1: Leap Motion device (on the table) and interaction with it12

Leap Motion13 is a quite new device which was released in 2012. It supports hand and finger motions as input. The hands move in the air all the time and do not touch anything; all manipulations have to be done from this position. The way of interaction with the device is shown in Figure 4.1.

The hands always need to be positioned above the device; otherwise it will not be able to recognize them. The device tracks only hands and fingers.

12 http://www.vjsmag.com/wp-content/uploads/2014/05/vjs-magazine-leap-motion-2.jpg
13 https://www.leapmotion.com

The device in general has some problems and limitations [9]. One of them is recognizing hand crossing: when one hand crosses the other, the device loses tracking. If the interaction is not planned with respect to this fact, it can bring unexpected reactions from the program. Another limitation appears when putting fingers together; the tracking is lost until the fingers are separated again.

Leap Motion will not be supported out of the box in the game, but later, architecturally, it will be possible to add a wrapper for the device into the project to be able to interact with it and enable sending the standard events which are defined in this thesis.

4.2.3 Touchscreen

Touchscreens are commonly used in cellphones and tablets, and over the last years they have become more and more popular. Companies have already started to produce laptops with them. The market of devices with touchscreens is growing very fast. With respect to this fact, the touchscreen has been considered as one of the possible ways to interact with the game.

The interaction via touchscreens has several features:

• It is “finger dependent”. The user is usually not able to touch an object which is smaller than the size of his or her finger. But fingers have different sizes from person to person. This means the game or any other application needs to have bigger elements with some space between them; otherwise the user may miss the touched object.

• Touchscreens are not precise. A “touch” is not the same as a mouse “click”. A mouse click has concrete coordinates and it is visible where it has happened. In the case of a touch, the interaction happens in some area and the coordinates provided by the system are an approximation of this touched area. This means that a game with touchscreen support should not count on very precise input, because it is impossible to reach this with touchscreens [10].

Despite these disadvantages, touchscreens are usually tracked in the same way as a mouse. The game designer should take care to eliminate the mentioned disadvantages.

Support for touchscreens is not implemented in this thesis, but the architecture gives the ability to add it later by mapping the touchscreen events to the given mouse events.

4.2.4 Oculus Rift

Figure 4.2: Oculus Rift Developer Kit 214

Oculus Rift15 (shown in Figure 4.2) is a virtual reality helmet. It has not been released to the market yet; currently only the Developer Kit versions are available.

14 https://www.oculus.com/order
15 https://www.oculus.com

The screen of the device is located very close to the eyes and fixed on the head. The device can track the movements of the head, which can be used to move the camera or other objects in the game.

Unity 3D supports the Oculus Rift out of the box. The integration into the games can be done by using the movement events and mouse events which are provided by the game core.


5 General Unity

5.1 Common elements

Unity basics are described in this chapter, briefly and at the level needed for understanding the terms used in the thesis and the reasons behind the applied architecture decisions. More details can be found in the official Unity tutorials [11].

5.1.1 MonoBehaviour

MonoBehaviour is the basic class in Unity. All new scripts are inherited from this class by default. MonoBehaviour provides the methods which implement the Unity lifecycle.

If a class is inherited from MonoBehaviour, it:

• can be attached to Unity3d objects as a component;
• provides the possibility to access variables, lists and arrays of the class in the Inspector pane of the Unity3d Editor;
• supports the standard Unity3d methods and events, such as Start(), OnGUI(), Update(), etc. A minimal example is shown below.

5.1.2 GameObject

GameObject is the main class for all existing game objects. Each object is a container: it can contain different elements (components) and their combinations, which bring special properties to it. Game objects are displayed in the hierarchy view in the Unity editor [12]. Every game object already contains the Transform component; it is not possible to create a game object without it. Transform is used very often. It defines the position, rotation angle and scale of the object.

5.1.3 Scripts

When a script is created and attached to an object, it is displayed in the Inspector as a component. Any public variables (if they are not marked differently) defined in the script will be available in the Inspector for the object. If any class (script) is attached to a game object, it is possible to get direct access to it via the script. In case there is a need to get access to a component attached to another object, the GetComponent() function can be used. But it is not a very fast function and it is recommended to cache its output [13], as in the following sketch.
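The class here is hypothetical (it assumes a Rigidbody on the same object); only the caching idiom matters:

```csharp
using UnityEngine;

public class PlayerMover : MonoBehaviour
{
    private Rigidbody _rigidbody; // cached reference

    void Awake()
    {
        // GetComponent() is comparatively slow, so it is called once
        // here and the result is cached instead of calling it every frame.
        _rigidbody = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        // The cached reference is reused in every physics step.
        _rigidbody.AddForce(Vector3.forward);
    }
}
```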

5.1.4 Prefabs

A prefab is one of the resource types intended for multiple usage. It can be inserted into any number of scenes and multiple times into one scene. When a prefab is added into the scene, its instance is created. All instances are links to the original prefab and are, in fact, its clones. If the prefab itself is changed, all its instances are changed accordingly.

5.1.5 Unity Resources

Unity operates not only with “prefabs” but also with “resources”. A resource can be a texture, audio or video, but it can also be a Unity object saved as a file. The main difference between resources and prefabs is the following: a resource is always loaded into the scene “as it is”, and changes in the editor do not influence the initial resource file and apply only to the particular object they were made on, while editing the settings of a prefab in the Unity editor changes the start settings of all objects created from this prefab.

By default, Unity looks for resources in a folder named “Resources”. If several folders with this name exist, each of them will be examined. The content of the “Resources” folders is always copied into the build of the game.
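For illustration, loading an asset placed under a “Resources” folder might look as follows (the asset path is hypothetical; Resources.Load is the standard Unity API):

```csharp
using UnityEngine;

public class ResourceExample : MonoBehaviour
{
    void Start()
    {
        // Loads Assets/Resources/Sounds/click.wav; the path is relative
        // to the "Resources" folder and written without the extension.
        AudioClip click = Resources.Load<AudioClip>("Sounds/click");
        if (click == null)
        {
            Debug.LogWarning("Resource Sounds/click was not found");
        }
    }
}
```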

5.1.6 Unity Scenes

A Unity game is in fact a set of scenes. Each scene can contain its own objects with special settings. Scenes are used to split the game into several pieces and operate them separately. Usually, the game is a set of scenes ordered in a special sequence designed by the author. Scenes can be used to create a main menu, individual levels, and anything else. The documentation suggests thinking of each unique Scene file as a unique level [12].

5.1.7 Unity Packages

Unity allows us to export all created items as one file (a package). Later, the content of this package can be imported into any Unity project. The export can be done automatically with all dependencies, which means, for example, that if a Scene is selected to be exported as a package with all dependencies, then all models, textures and other assets that appear in the scene will be exported as well. All assets in the Unity Asset Store are created in this way.

5.2 Unity Lifecycle

Unity calls certain event functions at certain moments. Every script inherited from MonoBehaviour can implement/override those functions. There is a possibility to disable calling some of these functions by unchecking the checkbox in the editor. But this will only prevent Start(), Awake(), Update(), FixedUpdate(), and OnGUI() from executing. If none of these functions are present, the checkbox is not displayed.

Each function belongs to a stage in the lifecycle. The most important ones are listed below.

5.2.1 First scene load

The following functions are called when a scene starts (once per object in the frame):

Awake() -- always called first for the object, right after the initialization of the prefab. (If a GameObject is inactive during start up, Awake() is not called until it is made active, or until a function in any script attached to it is called.)

OnEnable() -- (called only if the object is active) called right after the object is enabled. It is recommended to use this method to subscribe to desired events.


5.2.2 Before first frame update

Start() -- called before the rendering of the first frame, and only once for a given script. Start() may not be called on the same frame as Awake() if the script is not enabled at initialization time.

5.2.3 Update order

FixedUpdate() -- does not depend on Update(). It can be called multiple times per frame. All physics calculations take place right after FixedUpdate().

Update() -- called once per frame. It is the main function for frame updates.

LateUpdate() -- called once per frame after the Update function has finished. Any calculations that are performed in Update will have completed when LateUpdate begins.

5.2.4 When quitting

OnDisable() -- called when the behaviour becomes disabled or inactive. This is also called when the object is destroyed, and it can be used for any cleanup code. This method is a good choice to unsubscribe the object from any events it has been subscribed to.

5.2.5 When object is destroyed

OnDestroy() -- called in the last frame of the object's existence.
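The recommended pairing of subscribing in OnEnable() and unsubscribing in OnDisable() can be sketched as follows; a plain C# event is used here only for illustration (in the project, the GameEvent classes from Chapter 6 play this role):

```csharp
using System;
using UnityEngine;

public class ScoreDisplay : MonoBehaviour
{
    // Illustrative event; any publisher could raise it.
    public static event Action<int> OnScoreChanged;

    void OnEnable()
    {
        // Subscribe as soon as the object becomes active.
        OnScoreChanged += HandleScoreChanged;
    }

    void OnDisable()
    {
        // Unsubscribe on deactivation (also called before destruction),
        // so no dangling handler stays registered.
        OnScoreChanged -= HandleScoreChanged;
    }

    private void HandleScoreChanged(int newScore)
    {
        Debug.Log("Score is now " + newScore);
    }
}
```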

The simplified lifecycle scheme is shown in Figure 5.1. Only the basic functions which are used in or influence the project are shown there. Functions which can be disabled by the MonoBehaviour checkbox are marked bold. The complete scheme with all events can be seen in [12].


Figure 5.1: Execution order of event functions in Unity

5.3 Patterns implemented in Unity

In general software development it is very common to use design patterns, and game development is not an exception. Several patterns are used here as well. Some of them are the same as in application development, but some are different. Many general patterns are already integrated into Unity, and they have to be taken into consideration during the architecture design. The main implemented patterns which need to be mentioned here are:

• the Service Locator pattern, in concert with the Component pattern, in the GetComponent() method [14];
• the Update method, implemented [14] in several classes, including MonoBehaviour.

The Service Locator pattern allows finding any component attached to an object. It can be very useful, but in case of overuse it can decrease performance [15].

The Update method is mentioned many times in this thesis. With this pattern, each specific object can be responsible for updating its status to the system. The whole architecture described in this thesis is implemented based on it.

5.4 Unity as a component-based engine

Unity is an example of a component-based game engine. Its classical structure can be seen in [16]. Every object in the game is based on the main class called GameObject. It can contain a lot of base components, such as: camera, collider, renderer, rigidbody, transform, tag, etc. Each GameObject can be made unique by the composition of its components, as is usual in component-based engines. The GameObject class provides a common interface to access the properties of an object.

One of the advantages of a component-based engine is that every game object can be easily defined by the properties of its components. But this can lead to performance issues. It is very important to eliminate them, especially in the case of mobile development [17]. One possible optimization is described in Subchapter 6.5.1.


6 Game architecture

6.1 Architecture layers

Requirements for easy support of the project after release have been considered during the development of the application architecture. The application structure has been divided into several logical levels:

Level of the device driver – responsible for plugging the device into the computer. The software (drivers) can be released together with the application.

Bridge level – a software interface making a “bridge” between the API of the device driver and the common application interaction interface.

Level of the common interface – a standardized interface. The interaction with every supported device goes through it.

The game core – handles the signals which go through the common interface. Basically, the main logic to switch between mini-games is located here.

Mini-game level – responsible for the specific behavior of each concrete mini-game.

Each level is described in more detail below.

6.1.1 The device driver level

The Kinect SDK version 1.8 should be installed on the computer in order to run the game. There is a possibility to use other drivers for this purpose, but the Microsoft drivers are recommended.

6.1.2 The wrappers level

On this level, the C++ calls from the driver are wrapped into C# functions to integrate them into the game. Two assets from the Unity Asset Store are used for this purpose: Kinect with MS-SDK and KinectExtras with MsSDK from RF Solutions. Information related to both assets is provided in Subchapter 7.1.


In order to plug other devices into the game, similar packages should be created and used.

6.1.3 The common interface level

The translation of device-specific calls to the standardized interface happens here. Every mini-game and the game core should be able to work with this standardized interface. The interface can be extended on demand when new mini-games need to be added.

6.1.4 The game core

Switching between scenes and mini-games, as well as logging, happens on this level. The core only answers calls of some common interface functions. It does not transfer them to the mini-games; they work directly with the common interface.

6.1.5 The mini-game level

This level defines the specific logic of each mini-game and the reactions to user manipulations. It is the last level in the suggested model. Subscribing to the events needed for a concrete mini-game happens here; all other calls are ignored.

The common scheme in the context of working with the devices is shown in Figure 6.1.


Figure 6.1: Six-level scheme

6.2 Design patterns

Several design patterns are described in this subchapter. The main focus is on their application in the developed system and on the approach used for it.


6.2.1 Wrapper

There are a lot of wrappers implemented in the thesis: InteractionWrapper, KinectWrapper, BaseWrapper, KeyBoardWrapper; most of them are provided by the used assets. BaseWrapper is used only to start the initialization of KeyBoardWrapper, which is a singleton. InteractionWrapper and KinectWrapper are used to allow interaction with Kinect and the dll libraries created for it.

6.2.2 Singleton

Every wrapper should be present in the game only once. The Singleton pattern has been used for this purpose.

All wrappers are set to be initialized during the start of the first scene. This behavior has been achieved by creating the BaseWrapper class and adding it to the Camera object in the first scene. This class only calls the Instance property of all the wrappers, which are singletons. So, the BaseWrapper can be added many times to the scene or can be added to different scenes. It will not crash anything; it will just ensure that all the needed wrappers are initialized.

From the beginning, one of the possible usages of a singleton could be a Master Manager class responsible for all general activities in the game: initialization of wrappers, save/load of the game state, logging, etc. However, this approach would lead to strong coupling and a big dependency of the whole game on one class. It is a possible architectural approach, but it is not recommended in big and complex projects [18]. Overuse of singletons can prevent refactoring in the future. In case there is a decision to remove the singleton in the future, it will be very complicated, as every component in the game will depend on it.

6.2.3 Observer

The event-based system is implemented using the Observer pattern [19]. In fact, C# already contains this pattern implemented as events [20]. But for this thesis additional functionality is needed: objects should not react to an event immediately. They should store events somewhere and react to them before the call of the Update() method. The concrete aspects of the implementation are shown in Subchapter 6.4.

The difficulty of using standard C# events in the game comes from the following situation:

Imagine there is an ObjectA where MethodA() is called. This method consists of 3 steps and calls 3 other methods which implement these steps (MethodA1(), MethodA2(), MethodA3()). In the second method, MethodA2(), some event is published. Now the following happens: all subscribers start to handle the event and execute some logic for it. In this logic another event can be published; during the handling of that one, yet another event can be published, etc.

In this case two problems can occur. The first one is that such a big chain is very difficult to debug. But a bigger problem can happen if one of the events comes back to ObjectA and it tries to handle the event with some MethodB(). At this moment, MethodA() can still be doing something with ObjectA and may not have finished interacting with the object's properties. Then MethodB() will be executed on an object in an invalid state, which can lead to unpredictable behavior.

This kind of error is very hard to find. Trying to fix it can lead to controlling the sequence of method calls, and the structure can become very complex in this case.

The solution used to avoid such problem is the following:

Each object has a storage for the occurred events. When an event occurs, it is not executed immediately, but is stored in this storage. The execution happens during the call of the Update() method of MonoBehaviour, which is called every frame. The classes CommandMonoBehaviour and PendingCommand described in Subchapter 6.4 are used to implement this behavior.


6.3 Event-based system

The ground of the architecture is a “world air”, which contains all events happening in the game world. Each class can subscribe to any available event. Also, every class should not send information to a specific responsible class, but put it on the “world air”, unless it is information for “internal” usage (for example, “internal communication” between wheels and a car body). All sensitive information should be transferred to the “world air” to be available for other classes such as a Logger or a Score counter.

This approach allows us to eliminate strong connections between components and make them replaceable.

The event system supposes that not only the incoming data from devices go to the “world air”, but also other game events. However, in a situation where every mini-game is developed by a different person, it is almost impossible to guarantee a common approach without controlling every step of the developer. But the implementation of such control does not make sense.

To reduce the problem with a different approach, the main menu demonstrates the expected “best practices” in the usage of the system, to show how it works; at the same time it is still possible to work with mini-games using other architectural principles inside. But they still must answer the common calls.

Connected to each mini-game, the world-air scheme looks as shown in Figure 6.2.


Figure 6.2: World-air scheme

The player can be represented in the game as a concrete object, an alter ego, or as a set of scripts which allow interacting with the game world. The representation of the player in the game is counted as a Unit here. It is the main point of the interaction with the game. The representation interacts with the game world by moving there, putting events on the world air: where it came, what it achieved, etc. Manager classes are subscribed to these events: SoundManager, AchievementManager, EffectManager, ScoreManager, etc. They can process the information they get according to their needs. They are not limited in subscribing to any event in the game.

If the event-based system with the “world air” were not in place, the scheme would look as shown in Figure 6.3.


Figure 6.3: Connections between components without the event-based system

6.4 Event-based system implementation

The whole structure of the event-based system is shown in Figure 6.4. Each element is described in detail in this subchapter.


Figure 6.4: Complete class diagram of the event-based system

6.4.1 CommandMonoBehaviour

CommandMonoBehaviour is an abstract class. It is used to keep all occurred events until the Update() method of MonoBehaviour is called. This eliminates the possibility of the collisions described above, compared to using “normal” C# events.

Every object which will be a subscriber to any event should be inherited from this class. It will then collect all the events it is subscribed to and execute them during the Update() call from MonoBehaviour.

Pending commands are stored in CommandMonoBehaviour as a queue of PendingCommand objects. During the Update() execution, every command is executed in the same sequence as it arrived in the queue (the FIFO principle). A simplified sketch of this mechanism is shown below.
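The class names follow the thesis, but the bodies are reduced, illustrative versions of the queueing behavior, not the project's exact code:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Stores a callback together with the argument it should be invoked with.
public class PendingCommand
{
    private readonly Action<object> _callback;
    private readonly object _args;

    public PendingCommand(Action<object> callback, object args)
    {
        _callback = callback;
        _args = args;
    }

    public void Execute()
    {
        _callback(_args);
    }
}

// Base class for subscribers: incoming events are only queued here and
// are executed later, during the regular Update() call.
public abstract class CommandMonoBehaviour : MonoBehaviour
{
    private readonly Queue<PendingCommand> _pending = new Queue<PendingCommand>();

    public void EnqueueCommand(Action<object> callback, object args)
    {
        _pending.Enqueue(new PendingCommand(callback, args));
    }

    protected virtual void Update()
    {
        // FIFO: commands run in the same order in which they arrived.
        while (_pending.Count > 0)
        {
            _pending.Dequeue().Execute();
        }
    }
}
```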


The class diagram for CommandMonoBehaviour is shown in Figure 6.5. EventSubscriber represents there any class that subscribes to events.

Figure 6.5: CommandMonoBehaviour class diagram

6.4.2 PendingCommand

For a given EventArgs object, PendingCommand stores the callback and the arguments for it. It can call the method from the callback with the stored arguments when needed. The class diagram for the PendingCommand class is shown in Figure 6.6.

Figure 6.6: PendingCommand class diagram


6.4.3 GameEvent

This is the main class in the event-based system. It stores all the delegates/callbacks which need to be called and the IDs of the subscribers. It manages subscribing, unsubscribing and publishing.

GameEvent contains a storage for the list of callbacks, where for every unique subscriber ID the delegate to call is set. Delegates are stored as the type Action, which can store a pointer to a function with one argument. When an event is published by the Publish() function, all callbacks are called with the stored argument. The class diagram for the GameEvent class is shown in Figure 6.7.

Figure 6.7: GameEvent class diagram

To subscribe to an event, a unique ID needs to be provided. The counter for IDs is implemented as a static variable _counter in the CommandMonoBehaviour class. The counter is increased by 1 whenever an object inherited from CommandMonoBehaviour is initialized, and the newly initialized object is assigned the current value of _counter. The ID is stored in a variable of type long, which means 2^63 − 1 objects can be counted without collisions.

Several variants were considered for the implementation of the storage for the subscribers' callbacks:

1. Key-value dictionary;
2. List;
3. Sorted list.

The key-value dictionary gives the possibility to add and remove all elements in linear time (O(n), where n is the number of elements), but this time also includes a constant for computing hashes. This type of storage uses more memory.

The normal list allows adding new events into the queue in linear time without any constant. But all n elements must be examined in case of a need to delete one specific element. This means that O(n²) operations are needed to delete all elements from the list one by one.

The sorted list eliminates the quadratic time needed for deletion. A sorted list needs the same amount of operations to perform deletion and insertion: O(log₂ n). This means that no more than n·log₂ n operations are needed to delete all elements from a sorted list one by one. But the insertions need the same amount of operations. The total time for both operations is calculated in equation (1).

n·log₂ n + n·log₂ n = 2n·log₂ n (1)

The total time for the normal list is calculated as shown in equation (2).

n + n² (2)

The ratio of the amount of operations for the normal list and the sorted list is shown in equation (3).

(n + n²) / (2n·log₂ n) = (1 + n) / (2·log₂ n) (3)

As shown in (3), the total complexity of the insertion and deletion operations for the linear list is higher than for the sorted one. This is the reason why the sorted list was chosen as the type of storage for events. The sorted list does not give a big advantage if there are only a few subscribers in the system, but the advantage starts to be visible already after the third subscriber.

6.4.4 GameEventArgs

This class allows storing any kind of arguments which can be transferred via events. As the result, there is no need to create any other classes in the game for this purpose; templates (generics) allow using any of them. The class diagram for the GameEventArgs class is shown in Figure 6.8.

Figure 6.8: GameEventArgs class diagram
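The following is a condensed sketch of how GameEvent and GameEventArgs might fit together, using the SortedList storage discussed above. The real classes also route callbacks through the PendingCommand queues of the subscribers; that part is omitted here:

```csharp
using System;
using System.Collections.Generic;

// Generic container for any payload transferred via an event.
public class GameEventArgs<T>
{
    public T Value { get; private set; }

    public GameEventArgs(T value)
    {
        Value = value;
    }
}

// One event channel: callbacks are kept in a SortedList keyed by the
// unique subscriber ID assigned by CommandMonoBehaviour.
public class GameEvent<T>
{
    private readonly SortedList<long, Action<GameEventArgs<T>>> _callbacks =
        new SortedList<long, Action<GameEventArgs<T>>>();

    public void Subscribe(long subscriberId, Action<GameEventArgs<T>> callback)
    {
        _callbacks[subscriberId] = callback;
    }

    public void Unsubscribe(long subscriberId)
    {
        _callbacks.Remove(subscriberId);
    }

    public void Publish(GameEventArgs<T> args)
    {
        // Every registered callback receives the same argument object.
        foreach (var callback in _callbacks.Values)
        {
            callback(args);
        }
    }
}
```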

6.5 Basic classes

A few optimizations can be done in Unity. In order to make this easier, several basic classes are provided as a starting point for all subgames.

6.5.1 MonoBehaviourBase

This class encapsulates the Transform property. In Unity, getting it from the object takes a relatively long time, and the best practice is to cache it [21]. This can be done manually in every class used in the game, but the easier way is to implement it once in a base class and inherit from it.

The syntax of this class does not follow the C# coding guidelines [22]: the public property is named with lower case letters. It is done this way to avoid rewriting the existing files in different games to enable the caching of Transform; the caching is applied automatically after inheritance. The MonoBehaviourBase class is an inheritor of the standard MonoBehaviour, which means that all classes inherited from MonoBehaviourBase will also be able to react to Update() and the other events of the Unity lifecycle described in Subchapter 5.2. A sketch of such a class follows; the class diagram for MonoBehaviourBase is shown in Figure 6.9.
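A minimal sketch, assuming lazy caching of the component (illustrative, not the project's exact code):

```csharp
using UnityEngine;

public class MonoBehaviourBase : MonoBehaviour
{
    private Transform _cachedTransform;

    // Deliberately lower case: it hides MonoBehaviour.transform, so
    // existing scripts keep working unchanged after switching the base
    // class, but now read the component from a cache.
    public new Transform transform
    {
        get
        {
            if (_cachedTransform == null)
            {
                _cachedTransform = base.transform; // looked up only once
            }
            return _cachedTransform;
        }
    }
}
```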

Figure 6.9: MonoBehaviourBase class diagram

6.5.2 Singleton

This class allows easily making a singleton from a given class, if needed, without writing any additional code. The only thing needed is to inherit from it. This class itself is inherited from MonoBehaviour, which means that its inheritors will receive the Unity lifecycle events. A copy of this class called SingletonCommand was created to enable support of the event-based system for its inheritors. A common shape of such a generic singleton is sketched below.
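This is a widespread Unity pattern; the code is an assumption about the shape of the class, not the project's exact implementation:

```csharp
using UnityEngine;

// Usage: public class KeyBoardWrapper : Singleton<KeyBoardWrapper> { ... }
public class Singleton<T> : MonoBehaviour where T : MonoBehaviour
{
    private static T _instance;

    public static T Instance
    {
        get
        {
            if (_instance == null)
            {
                // Reuse an instance already present in the scene, if any.
                _instance = FindObjectOfType<T>();
                if (_instance == null)
                {
                    // Otherwise create a carrier object on the fly.
                    var holder = new GameObject(typeof(T).Name);
                    _instance = holder.AddComponent<T>();
                }
            }
            return _instance;
        }
    }
}
```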

6.5.3 MouseControl

This class is used as an emulator of mouse events. For this purpose it calls native Win32 functions inside. MouseControl contains public static methods which can be called from any other script, as sketched below.
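A minimal sketch built on the Win32 mouse_event function; the P/Invoke signature and flag values are standard, while the method shown is illustrative rather than the project's actual code:

```csharp
using System;
using System.Runtime.InteropServices;

public static class MouseControl
{
    private const uint MOUSEEVENTF_LEFTDOWN = 0x0002;
    private const uint MOUSEEVENTF_LEFTUP = 0x0004;

    // Native Win32 call used to inject mouse events.
    [DllImport("user32.dll")]
    private static extern void mouse_event(
        uint dwFlags, uint dx, uint dy, uint dwData, UIntPtr dwExtraInfo);

    // Callable from any other script: MouseControl.LeftClick();
    public static void LeftClick()
    {
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, UIntPtr.Zero);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, UIntPtr.Zero);
    }
}
```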


7 Kinect Interaction

7.1 Assets used

Two assets are used as the base for the implementation of the Kinect interaction: “Kinect with MS-SDK” and “KinectExtras with MsSDK”, published by RF Solutions. “Kinect with MS-SDK” is free and provides the possibility to use gesture recognition with Kinect. “KinectExtras with MsSDK” costs $10, but it can be used for free by universities, students and teachers.

7.1.1 Kinect with MS-SDK

KinectManager is the main class in this asset. It needs to be attached to any object in the scene. KinectManager has several settings which allow configuring it as needed. The used settings will be described later.

7.1.2 KinectExtras with MsSDK

The “KinectExtras with MsSDK” asset adds advanced functionality to “Kinect with MS-SDK”. It contains classes enabling face recognition and voice recognition. But the most important of them is the InteractionManager class, which can control the mouse cursor. It needs to be attached to any object in the scene.

In order to be compatible with the proposed architecture, several changes have been made. The original and modified structures are shown in Figure 7.1 and Figure 7.2. The changes themselves are described separately for each class.


Figure 7.1: Class diagram of Kinect with MsSDK assets


Figure 7.2: Current structure of classes

7.2 Kinect Manager

KinectManager is the class responsible for most of the Kinect interaction features. It uses KinectWrapper to interact with KinectWrapper.dll, which encapsulates the work with the Kinect drivers. The KinectManager class cooperates with the KinectGestures class to recognize detected gestures.

The class diagram for KinectManager is shown in Figure 7.3.


Figure 7.3: KinectManager class diagram

7.3 Interaction Manager

InteractionManager is the main class which has been added in the “KinectExtras with MsSDK” asset. It uses the InteractionWrapper class to interact with Kinect via InteractionWrapper.dll, also provided with the asset.

The original InteractionManager has been modified to be compliant with the event-based system. Before the modification, it directly used the MouseControl class to emulate mouse events happening as a reaction to gestures. After the changes, it publishes OnMouseEvent. One of its subscribers is the instance of the MouseEventsControl class; the MouseControl calls happen there. OnMouseEvent is defined in the InteractionEventAggregator class.

InteractionManager has been inherited from CommandMonoBehaviour to support the event-based system. The class diagram for InteractionManager is shown in Figure 7.4.

Figure 7.4: InteractionManager class diagram

InteractionManager also defines the custom cursor texture to be displayed in both states: “clicked” and “released”.

7.4 Gesture Listener

The KinectManager class contains the list of objects to which it will send messages about recognized gestures, detected users, etc. An object only needs to respond to the GestureListenerInterface, sketched after the following list.

The interface contains five methods:


1. void UserDetected(…) - invoked when a new user is detected and tracking starts.
2. void UserLost(…) - invoked when a user is lost. In this method the used resources can be freed.
3. void GestureInProgress(…) - invoked when a gesture is in progress.
4. bool GestureCompleted(…) - invoked when a gesture is completed. Returns true if the gesture detection must be restarted, false otherwise.
5. bool GestureCancelled(…) - invoked when a gesture is cancelled. Returns true if the gesture detection must be restarted, false otherwise.
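The parameter lists are abbreviated as (…) in the text above, so the parameters in this sketch are assumptions for illustration only; the asset's real interface carries more detailed gesture and joint data:

```csharp
// Sketch of GestureListenerInterface; parameter types are illustrative.
public interface GestureListenerInterface
{
    void UserDetected(uint userId, int userIndex);
    void UserLost(uint userId, int userIndex);
    void GestureInProgress(uint userId, int userIndex, string gesture, float progress);
    bool GestureCompleted(uint userId, int userIndex, string gesture);
    bool GestureCancelled(uint userId, int userIndex, string gesture);
}
```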

The class diagram of Gesture Listener used in the project is shown in Figure 7.5.

Figure 7.5: Gesture Listener class diagram

Gesture Listener is used as a connector between the gesture recognition features provided by the asset and the event-based system described in Chapter 6.3. The connection is implemented by publishing the OnGesture event from InteractionEventAggregator.
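A sketch of such a listener is shown below. The real GestureListenerInterface passes more parameters (user id, joint, screen position, etc.), so the simplified signatures here should be read as assumptions.

using UnityEngine;

// Sketch of a gesture listener; the asset's interface has richer signatures.
public class GestureListener : MonoBehaviour
{
    public void UserDetected(uint userId, int userIndex)
    {
        Debug.Log("User detected: " + userId);
    }

    public void UserLost(uint userId, int userIndex)
    {
        // Resources associated with the lost user can be freed here.
    }

    public void GestureInProgress(uint userId, int userIndex,
                                  string gesture, float progress)
    {
        // Progress feedback could be displayed here.
    }

    public bool GestureCompleted(uint userId, int userIndex, string gesture)
    {
        // Forward the recognized gesture to the event-based system.
        InteractionEventAggregator.PublishGesture(gesture);
        return true; // restart detection of this gesture
    }

    public bool GestureCancelled(uint userId, int userIndex, string gesture)
    {
        return true; // restart detection of this gesture
    }
}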

7.5 Prefabs

Several prefabs have been created to be used by the game cores:

• LoggerPrefab – GameObject with the Logger script attached.
• CursorPrefab – GameObject with textures to be used for the custom cursor. This prefab is attached to InteractionManager.
• KinectManagerPrefab – GameObject with all main gesture recognition scripts attached: KinectManager, InteractionManager, MouseEventsControl, GestureListener.

KinectManager allows us to set many parameters. For example, it can display the detected user map and skeleton, with the possibility to change the width of their display area. Appendix II contains Tables 12.1–12.4, which show the settings for KinectManagerPrefab, InteractionManager, CursorControlSwitch and GestureListener.


8 Game Implementation

8.1 Basic scene

The basic scene – the main interface – is, according to the requirements, designed as follows:

There are several points where the camera can stand. In front of each point, the available games are displayed. When the player clicks on one of them, the camera comes closer and the player can choose the level of difficulty or other configurable features of the concrete game.

The camera moves smoothly between points. The number of points can be changed flexibly in the future.

The number of points depends on the number of “parts of the brain” present in the game. In the beginning there are just a few of them, but more can be added later.

At each point, the player can see the mini-games available in that part. After choosing one, the camera moves closer to let the player choose the level of difficulty or other game-specific parameters. The number of mini-games is also flexible.

Each point is represented in the game scene by an empty invisible object. Its coordinates and direction define the coordinates and direction of the camera at this point. The sub-points are done in the same way. The scheme is shown in Figure 8.1.


Figure 8.1: Menu implementation scheme

The camera can move between points and sub-points in a circular manner. Returning to the previous level goes back to the same point as before. InterfaceTestScene demonstrates the approach which can be used to implement the described behavior. All manipulation with points can be done directly in the Unity editor by moving the objects which represent each point.

The camera responds to various ActionEvents.
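A sketch of this approach is shown below, assuming a MenuCamera script with the point transforms assigned in the Unity editor; the class name and smoothing factor are illustrative.

using UnityEngine;

// Sketch of smooth camera movement between menu points. Each point is an
// empty GameObject whose transform defines the camera target.
public class MenuCamera : MonoBehaviour
{
    public Transform[] points;       // assigned in the Unity editor
    public float smoothSpeed = 2.0f; // illustrative interpolation speed

    private int current = 0;

    // Called by the menu logic, e.g. as a reaction to an ActionEvent.
    public void MoveToPoint(int index)
    {
        current = Mathf.Clamp(index, 0, points.Length - 1);
    }

    void Update()
    {
        if (points == null || points.Length == 0) return;
        Transform target = points[current];
        float t = Time.deltaTime * smoothSpeed;
        transform.position = Vector3.Lerp(transform.position, target.position, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, target.rotation, t);
    }
}

Because the targets are ordinary scene objects, adding or moving a point requires no code changes.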

8.2 Mini-game example

8.2.1 The Tower of Hanoi description

One of the mini-games proposed for implementation is the Tower of Hanoi, a well-known mathematical game. This game is a good example of using the architectural approach described in Chapters 6 and 7.


The principle of this classical game is the following. There are 3 rods, the first of which holds a stack of discs in ascending order of size, with the smallest at the top. The goal is to move the whole pyramid of discs from one rod to another using the fewest moves.

The rules for moving discs are the following:

1. Only one disc can be moved at a time.
2. Only the upper disc of a rod can be moved from its top.
3. A bigger disc cannot be placed on top of a smaller one.

The classical realization contains 8 discs. The implemented mini-game allows setting any number of discs from 1 to 8. This must be chosen in the menu before the launch of the game.
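For n discs the minimal number of moves is 2^n − 1, so the classical 8-disc game requires at least 2^8 − 1 = 255 moves.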

8.2.2 Simple State Machine for the Tower of Hanoi

The realization of a state machine can generally be done in two main ways:

1. By using the state pattern [19]
2. By using a declarative approach

Overusing patterns can itself be considered an antipattern [18], and it is not recommended to implement every piece of functionality via some “classical” pattern. The required functionality can often be implemented differently and more simply, which leads to easier maintenance and makes the code more understandable.

As the behavior of a disc is simple and does not involve many different states, the declarative approach has been chosen.

To implement it, the following has been declared:

• Names of states are described via enumerations.
• The current state is always hidden.
• All state changes happen only via calls of state methods, where the logic of moving from one state to another can be executed. Each such method then calls SetState(newValue) and the state-specific logic.
• A method for changing the state can have parameters.
• SetState also contains the exit logic of the previous state and only then sets the new state (state = value).
• Event handlers do only what is possible in the current state.

There are several states. A transition between them is a transaction composed of atomic operations. This means that they always happen together, in the right order, and no other code can be executed between them. When changing the state from A to B, the following happens:

1) The exit code of state A is executed;
2) The state is changed from A to B;
3) The entry code of state B is executed.

The StateA method should be called to switch to state A. It executes the required logic and calls SetState(A). SetState(A) should not be called manually.
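A simplified sketch of this scheme is shown below, using a disc with the three states described in the next subsection; the entry/exit logic shown is illustrative.

using UnityEngine;

// Simplified sketch of the declarative state machine of a disc.
// The entry/exit logic (moving the disc up and down) is illustrative.
public class DiscScript : MonoBehaviour
{
    private enum State { Idle, Ready, Active }

    private State state = State.Idle; // the current state is hidden

    public void StateReady()
    {
        if (state != State.Idle) return; // allowed only from Idle
        SetState(State.Ready);
    }

    public void StateActive()
    {
        if (state != State.Ready) return; // allowed only from Ready
        SetState(State.Active);
        transform.Translate(Vector3.up); // entry logic: lift the disc
    }

    public void StateIdle()
    {
        if (state != State.Active) return; // allowed only from Active
        SetState(State.Idle);
    }

    // SetState executes the exit logic of the previous state and only
    // then assigns the new value.
    private void SetState(State newValue)
    {
        if (state == State.Active)
            transform.Translate(Vector3.down); // exit logic: lower the disc
        state = newValue;
    }
}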

8.2.3 Simple State Machine in the Tower of Hanoi

The mini-game is placed in one Unity scene and starts after the scene is loaded. The difficulty of the game should be set in the code which triggers the loading.

The Tower of Hanoi game uses 4 main scripts: DiscScript, Rod, HanoiEvents and TowerOfHanoiController.

TowerOfHanoiController is responsible for setting up the game at the beginning. It also watches the events in order to notice the event which finishes the game, and manages the return to the game menu with the result.

HanoiEvents is used as a centralized collection for the declaration of events which are present in the game. It contains only internal events, which should not be a part of the core. But this centralization allows, if needed, implementing an additional logger for local needs or any other class transferring events to the upper level. In case any logging is needed, it can be done via publishing the LogEvent defined in the core part.
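A sketch of such a centralized collection is shown below; the concrete event names and signatures are illustrative assumptions.

// Sketch of a centralized collection of the mini-game's internal events.
public static class HanoiEvents
{
    public delegate void DiscMovedHandler(int fromRod, int toRod);
    public static event DiscMovedHandler OnDiscMoved;

    public delegate void GameFinishedHandler(int movesUsed);
    public static event GameFinishedHandler OnGameFinished;

    public static void PublishDiscMoved(int fromRod, int toRod)
    {
        if (OnDiscMoved != null) OnDiscMoved(fromRod, toRod);
    }

    public static void PublishGameFinished(int movesUsed)
    {
        if (OnGameFinished != null) OnGameFinished(movesUsed);
    }
}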

The Rod script is responsible for tracking the number of discs on the rod. It can identify the disc which is on its top. The Rod has several states; switching between them is implemented via a simple state machine.

Each rod can be in 3 states:

1. Ready – the rod is ready to release the disc from the top.
2. Idle – the rod is ready to be selected.
3. Active – the rod is colored in the active color (black in the example). If a disc goes down, it will be added onto the stack of the active rod.

The path between them is shown in Figure 8.2.

Figure 8.2: Rod states

The Disc script knows its state and can manage its changes. A simple state machine is used for this purpose. Its states are described below:

1. Idle – the disc is on an idle rod.
2. Ready – the disc is ready to go up (it is located on the top of the active rod).
3. Active – the disc is up.


The path between them is shown in Figure 8.3.

Figure 8.3: Disc states

The game itself has two main states (stored in TowerOfHanoiController as a static variable):

1. Disc is up;
2. Disc is down.

These states define whether the states of the rods will switch between Ready and Idle or between Active and Idle.

8.3 Logging

The logging system is decentralized in the sense that the logger itself has no connection to a concrete game object or scene. It only reacts to specific events. This approach allows us to have several loggers implementing different rules, or settings for what to log depending on the needs. It also allows independent developers to create loggers without interacting with the game team, as long as all of them follow the conventions defined in the interface.

The analysis of log files is out of the scope of this thesis. It should be done by an external application or by a human. But the logs should be well organized in order to allow it.

For logging purposes, InteractionEventAggregator defines the LogEvent event. A mini-game developer can publish this event whenever the game needs to log any text information.
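A sketch of this pattern is shown below, reusing the aggregator sketch from Chapter 7 and assuming LogEvent carries a plain string.

using UnityEngine;

// Sketch of a decoupled logger: it only subscribes to LogEvent and has no
// reference to any concrete game object or scene.
public class Logger : MonoBehaviour
{
    void OnEnable()  { InteractionEventAggregator.LogEvent += Write; }
    void OnDisable() { InteractionEventAggregator.LogEvent -= Write; }

    private void Write(string message)
    {
        // A file or network appender could be used here instead.
        Debug.Log(System.DateTime.Now + ": " + message);
    }
}

A mini-game then logs simply by publishing the event, e.g. InteractionEventAggregator.PublishLog("Disc moved"), without knowing whether, or how many, loggers are listening.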


9 Conclusion

This thesis is a part of a larger project for the creation of a “serious game” initiated by psychologists from the Faculty of Psychology. The aim of the game is to help in the rehabilitation of people after injury and in the development of autistic children. The idea is that a person plays games specially developed so that achieving the goal of the game requires performing special actions (movements, gestures). These movements are tracked by special devices which can recognize them. One such device is Kinect. More devices can be supported by the game in the future.

The main goal of the thesis was to create a scalable architecture which allows easily adding new mini-games and support for new devices. The second goal was the realization of interaction with the game world via Kinect and the creation of the core for the implementation of the whole game system.

Available solutions on the market were reviewed in order to solve the mentioned tasks. The Unity 3d game engine was chosen for the implementation of the project. Unity has low system requirements for both the developer’s and the player’s computer. It also has a strong community and an Asset Store where a lot of components created by other developers are available. Some components from the Asset Store were used in the thesis.

One of the problems found during the work on the thesis was the need for low coupling between components, so that they can be modified more easily. This problem could be solved by using C# events to let game objects communicate with each other. But then another problem can appear: theoretically, an event can be processed by an object at any moment of its lifecycle, which can lead it into an invalid state. Both problems were successfully solved by creating the event-based system described in Subchapters 6.4 and 6.5.


The asset “Kinect with MS-SDK” from the Asset Store was taken as the core for the implementation of the Kinect interaction. Basic support of some gestures (they are listed in Appendix I) is already implemented there. The “KinectExtras with MsSDK” asset was used to support mouse cursor control by gestures. Both assets were modified to support the event-based system described above.

The Tower of Hanoi mini-game was implemented as an example of the usage of the event-based system together with Kinect interaction.

The system implemented during the work on the thesis allows implementing a complex, event-rich game. This approach requires a higher level of skill from the developer to understand the system and use it properly. But in return, components of such a game can be modified or added without editing the code of the game core, and the system can be extended by mini-games created by other developers. The source code of the event-based system is planned to be published on Github16.

16 https://github.com/

10 Bibliography

[1] Gartner, "Gartner Says Worldwide Video Game Market to Total $93 Billion in 2013," [Online]. Available: http://www.gartner.com/newsroom/id/2614915.

[2] G. Zichermann, The Gamification Revolution: How Leaders Leverage Game Mechanics to Crush the Competition, 2013.

[3] A. Stellman and J. Greene, Learning Agile: Understanding Scrum, XP, Lean, and Kanban, 2014.

[4] G. Junker, Pro Ogre 3D Programming, 2006.

[5] "Unity 3d supported platforms," [Online]. Available: https://unity3d.com/unity/multiplatform.

[6] A. S. Kyaw, C. Peters and T. N. Swe, Unity 4.x Game AI Programming, 2013.

[7] S. Kean, J. Hall and P. Perry, Meet the Kinect – An Introduction to Programming Natural User Interfaces, 2011.

[8] Microsoft, Kinect for Windows - Human Interface Guidelines v1.8, 2013.

[9] M. Spiegelmock, Leap Motion Development Essentials, 2013.

[10] Microsoft, "Windows Touch User Interface Guidelines," [Online]. Available: https://msdn.microsoft.com/en-us/library/dn742468.aspx.

[11] Unity Technologies, "Unity tutorials," [Online]. Available: https://unity3d.com/learn/tutorials/modules.

[12] Unity Technologies, "Unity documentation," [Online]. Available: http://docs.unity3d.com/.

[13] A. Thorn, Learn Unity for 2D Game Development (Technology in Action), 2013.

[14] R. Nystrom, Game Programming Patterns, 2014.

[15] S. Blackman, Beginning 3D Game Development with Unity 4, 2013.

[16] M. Dickheiser, Game Programming Gems 6, 2006.

[17] P. Chu, Learn Unity 4 for iOS Game Development, 2013.

[18] C. J. Neill and P. A. Laplante, Antipatterns: Identification, Refactoring, and Management, 2005.

[19] E. Gamma, R. Helm, R. Johnson and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, 1998.

[20] Microsoft, C# Language Specification, 2012.

[21] P. de Byl, Holistic Game Development with Unity, 2012.

[22] J. Richter, CLR via C# (4th Edition), 2012.


11 Appendix I. Supported gestures

List of gestures which can be recognized by the system:

RaiseRightHand / RaiseLeftHand – left or right hand is raised over the shoulder and stays so for at least 1.0 second.

Psi – both hands are raised over the shoulder and the user stays in this pose for 1.0 seconds.

Stop – both hands are below the waist.

Wave – right hand is waved left and then back right, or left hand is waved right and then back left.

SwipeRight – left hand swipes right.

SwipeLeft – right hand swipes left.

SwipeUp – swipe up with left or right hand.

SwipeDown – swipe down with left or right hand.

Jump – the hip center gets at least 10 cm above its last position within 1.5 seconds.

Squat – the hip center gets at least 10 cm below its last position within 1.5 seconds.

Pull – pull backward with left or right hand within 1.5 seconds.

Push – push/punch forward with left or right hand within 1.5 seconds.


12 Appendix II. Prefab settings

Name in Unity editor | Value in the Test Scene | Description
Two Users | Unchecked | Enable gesture recognition of the second player
Near Mode | Unchecked | Determines if the sensor is used in near mode
Compute User Map | Checked | Determines whether to receive and compute the user map
Compute Color Map | Unchecked | Determines whether to receive and compute the color map
Display User Map | Checked | Displays the computed user map in the right bottom corner
Display Color Map | Unchecked | Displays the computed color map in the right bottom corner (overlays the user map)
Display Skeleton Lines | Checked | Displays recognized bones over the user/color map
Display Maps Width Percent | 30 | Size of the user/color map display as a percentage of screen width
Sensor Height | 1 | How high off the ground the sensor is (in meters)
Sensor Angle | 15 | Angle of the sensor from horizontal
Min User Distance | 1 | Minimum distance at which the system tries to recognize gestures
Max User Distance | 0 | Maximum distance at which the system tries to recognize gestures (0 = infinity)
Detect Closest User | Checked | In single-user mode, if there are two persons in front of the Kinect, the gestures of the closest one are recognized
Ignore Inferred Joints | Checked | Determines whether to use only the tracked joints (and ignore the inferred ones)
Smoothing | Default | Selection of smoothing parameters
Use Bone Orientations Filter | Unchecked | Determines the usage of an additional filter
Use Clipped Legs Filter | Unchecked | Determines the usage of an additional filter
Use Bone Orientations Constraint | Checked | Determines the usage of an additional filter
Use Self Intersection Constraint | Unchecked | Determines the usage of an additional filter
Player 1 Avatars | 0 | Avatars to be controlled by Kinect for the first player
Player 2 Avatars | 0 | Avatars to be controlled by Kinect for the second player
Player 1 Calibration Pose | None | The pose used to start recognition of the first player (None means the system always tries to recognize gestures)
Player 2 Calibration Pose | None | The pose used to start recognition of the second player (None means the system always tries to recognize gestures)
Player 1 Gestures | 0 | Array of expected gestures from the first player to be tracked (0 = all gestures)
Player 2 Gestures | 0 | Array of expected gestures from the second player to be tracked (0 = all gestures)
Min Time Between Gestures | 0.7 | Before this time the system will not try to recognize another gesture
Gesture Listeners | 1; Element 0 – KinectManagerPrefab (GestureListener) | Gesture listeners for the Kinect Manager; a listener must implement the KinectGestures.GestureListenerInterface interface
Calibration Text | Info Text | Text object to display notifications for the user from the Kinect Manager
Hand Cursor 1 | Cursor Prefab | GUI texture to display the hand cursor for Player 1
Hand Cursor 2 | None | GUI texture to display the hand cursor for Player 2
Control Mouse Cursor | – | –

Table 12.1: Settings of KinectManagerPrefab and their values in the TestScene

Name in Unity editor | Value in the Test Scene | Description
Hand Cursor | CursorPrefab | Object to be displayed instead of the standard cursor
Grip Hand Texture | Cursor click | Texture displayed as a cursor when the grip gesture appears
Release Hand Texture | Cursor | Texture displayed as a cursor when the release gesture appears
Normal Hand Texture | Cursor | Texture displayed as a cursor when there is no gesture
Smooth Factor | 3 | –
Control Mouse Cursor | Unchecked | –
Debug Text | None | –

Table 12.2: Settings of InteractionManager and their values in the TestScene

Name in Unity editor | Value in the Test Scene | Description
Cursor Control On | Unchecked | Start value for the option of controlling the cursor by gestures
Target Gesture | Stop | Gesture used as a switch to turn gesture mouse control on/off

Table 12.3: Settings of CursorControlSwitch and their values in the TestScene

Name in Unity editor | Value in the Test Scene | Description
Gesture Info | Info Text2 | Text object to display notifications about recognized gestures

Table 12.4: Settings of GestureListener and their values in the TestScene
