
Licenciatura Thesis

An Investigation Into the Use of Synthetic Vision for NPC’s/Agents in Computer Games

Author

Enrique, Sebastian [email protected]

Director
Watt, Alan
University of Sheffield, United Kingdom
[email protected]

Co-Director
Mejail, Marta
Universidad de Buenos Aires, Argentina
[email protected]

Departamento de Computación Facultad de Ciencias Exactas y Naturales Universidad de Buenos Aires Argentina

September 2002

Abstract

The role and utility of synthetic vision in computer games is discussed. An implementation of a synthetic vision module based on two viewports rendered in real-time, one representing static information and the other dynamic, with false colouring being used for object identification, depth information and movement representation is presented. The utility of this synthetic vision module is demonstrated by using it as input to a simple rule-based AI module that controls agent behavior in a first-person shooter game.

To my parents, who sacrificed everything for my education.

Table of Contents

TABLE OF CONTENTS ...... 4

ACKNOWLEDGMENTS ...... 7

INTRODUCTION ...... 8

GENERAL OVERVIEW ...... 8
IN DEPTH PRE-ANALYSIS ...... 8

PREVIOUS WORK ...... 11

PROBLEM STATEMENT ...... 13

SYNTHETIC VISION MODEL ...... 14

STATIC VIEWPORT ...... 14
DEFINITIONS ...... 14
LEVEL GEOMETRY ...... 15
DEPTH ...... 16
DYNAMIC VIEWPORT ...... 17
BUFFERS ...... 18
REMARKS ...... 18

BRAIN MODULE: AI ...... 20

AI MODULE ...... 20
FPS DEFINITION ...... 20
MAIN NPC ...... 21
POWER-UPS ...... 23
BRONTO BEHAVIOUR ...... 25
BEHAVIOUR STATES SOLUTION ...... 26
DESTINATION CALCULATION ...... 27
BEZIER CURVE GENERATION ...... 27
WALK AROUND ...... 28
LOOKING FOR A SPECIFIC POWER-UP ...... 36

Game Applications Analysis - 5 An Investigation Into the use of Synthetic Vision for NPC’s/Agents in Computer Games Enrique, Sebastian – Licenciatura Thesis – DC – FCEyN – UBA - September 2002

LOOKING FOR ANY POWER-UP ...... 37
LOOKING QUICKLY FOR A SPECIFIC POWER-UP ...... 37
KNOWN PROBLEMS ...... 37
EXTENDED BEHAVIOUR WITH DYNAMIC REACTIONS ...... 43
DON’T WORRY ...... 44
AVOID ...... 44
INTERCEPT ...... 44
DEMONSTRATION ...... 44
LAST WORD ABOUT DYNAMIC AI ...... 44

GAME APPLICATIONS ANALYSIS ...... 45

ADVENTURES ...... 45
FIRST PERSON SHOOTERS ...... 45
THIRD PERSON ACTION GAMES ...... 45
ROLE PLAY GAMES ...... 45
REAL TIME STRATEGY ...... 46
FLIGHT SIMULATIONS ...... 46
OTHER SCENERIES ...... 46

CONCLUSIONS ...... 47

FUTURE WORK ...... 48

REFERENCES ...... 50

APPENDIX A – CD CONTENTS ...... 52

APPENDIX B – IMPLEMENTATION ...... 53

FLY3D_ENGINE CLASSES ...... 53
FLYBEZIERPATCH CLASS ...... 53
FLYBSPOBJECT CLASS ...... 53
FLYENGINE CLASS ...... 53
FLYFACE CLASS ...... 53
LIGHTS CLASSES ...... 54
SPRITE_LIGHT CLASS ...... 54
SVISION CLASSES ...... 54
AI CLASS ...... 54


SVOBJECT CLASS ...... 55
VISION CLASS ...... 55
VIEWPORT CLASSES ...... 55
VIEWPORT CLASS ...... 55
WALK CLASSES ...... 55
CAMERA CLASS ...... 56
CAMERA2 CLASS ...... 56
OBJECT CLASS ...... 56
PERSON CLASS ...... 56
POWERUP CLASS ...... 57

APPENDIX C – SOFTWARE USERS’ GUIDE ...... 58

SYSTEM REQUIREMENTS ...... 58
INSTALLING DIRECTX ...... 58
CONFIGURING FLY3D ...... 58
RUNNING FLY3D ...... 58
RUNNING SYNTHETIC VISION LEVELS ...... 59
MODIFYING SYNTHETIC VISION LEVELS PROPERTIES ...... 61

APPENDIX D – GLOSSARY ...... 63

APPENDIX E - LIST OF FIGURES ...... 64

APPENDIX F - LIST OF TABLES ...... 66


Acknowledgments

I will be forever grateful to Alan Watt, who agreed to guide me through the whole project from the very beginning. I will never forget all the help he offered me during my short trip to Sheffield, despite the hard time he was going through. Special thanks to his wife, Dionéa, who is an extremely kind person.

From Sheffield as well, I want to thank Manuel Sánchez and James Edge for being very friendly and easygoing. Special thanks to Steve Maddock for all the help and support he gave me, and for the “focusing” talk we had.

Very special thanks to Fabio Policarpo, for many reasons: he gave me access to the early full source code and subsequent versions of the engine; he helped me with each piece of code when I was stuck or simply did not know what to do; he even gave me a place in his offices at Niterói. Furthermore, all the people from Paralelo (Gilliard Lopes, Marcos, etc.) and Fabio’s friends were very kind to me. Passion for games can be smelled inside Paralelo’s offices.

I shall be forever indebted to Alan Cyment, Germán Batista and Javier Granada for making the presentation of this thesis in English possible.

Thanks to all the professors from the Computer Science Department who make a difference, like Gabriel Wainer, who teaches not only computer science but a work philosophy as well.

I must mention all of my university partners and friends, with whom I shared many long, hard, and fun days and nights of study. Especially Ezequiel Glinsky, an incredible teammate who was by my side at every academic step.

Thank you Cecilia for standing by my side all of these years.

I do not want to forget to thank Irene Loiseau, who initiated contact with Alan, and all the people from the ECI 2001 committee, who accepted my suggestion to invite Alan to Argentina to give a very nice, interesting, and successful course.

And, finally, very special thanks to Claudio Delrieux, who strengthened my passion for computer graphics. He was always ready to help me unconditionally before and during every stage of this thesis.


Introduction

General Overview

Today, 3D computer games usually use artificial intelligence (AI) for the non-player characters (NPC’s), taking information directly from the internal database. The AI controls the NPC’s movements and actions with knowledge of the entire world state, probably cheating if the developer does not put any constraint on this.

Since a large proportion of computer-controlled opponents or friends are human-like, it seems interesting and logical to give senses to those characters. This means that the characters could have complete or partial sensory systems, such as vision, hearing, touch, smell, and taste. They could then process the information sensed from those systems in a brain module, learn about the world, and act depending on the character’s personality, feelings and needs. The character becomes a synthetic character that lives in a virtual world.

The research field of synthetic characters or autonomous agents investigates the use of senses combined with personality in order to make characters’ behaviour more realistic, using cognitive memories and rule based systems, producing agents that seem to be alive and interacting in their own world, and maybe with some human interaction.

However, not much effort has been devoted to investigating the use of synthetic characters in real-time 3D computer games. In this thesis, we propose a vision system for NPC’s, i.e. a synthetic vision, and analyze how useful and feasible its usage might prove in the computer games industry.

We think that the use of synthetic vision together with complex brain modules could improve gameplay and make for better and more realistic NPC’s.

We should note that our efforts were focused on investigating the use of synthetic vision, and not on the AI that uses it. For that reason, we developed only a simple AI module in a 3D engine in which we implemented our vision approach.

In Depth Pre-Analysis

We can regard synthetic vision as a process that supplies an autonomous agent with a 2D view of his environment. The term synthetic vision is used because we bypass the classic computer vision problems. As Thalmann et al [Thal96] point out, we skip the problems of distance detection, pattern recognition and noisy images that would pertain to vision computations for real robots. Instead, computer vision issues are addressed in the following ways:

1) Depth perception – we can supply pixel depth as part of the synthetic vision of an autonomous agent’s vision. The actual position of objects in the agent’s field of view is then available by inverting the modeling and projection transform.

2) Object recognition – we can supply object function or identity as part of the synthetic vision system.

3) Motion determination – we can code the motion of object pixels into the synthetic vision viewport.

Thus the agent AI is supplied with a high-level vision system rather than an unprocessed view of the environment. As an example, instead of just rendering the agent’s view into a viewport then having an AI interpret the view, we instead render objects in a colour that reflects their function or identity (although there is nothing to prevent an implementation where the agent AI has to interpret depth – from binocular vision,

say, and also recognize objects). With object identity, depth and velocity presented, the synthetic vision becomes a plan of the world as seen from the agent’s viewpoint.

We can also consider synthetic vision in relation to a program that controls an autonomous agent by accessing the game database and the current state of play. Often a game database will be tagged with extra pre-calculated information so that an autonomous agent can give a game player an effective opponent. For instance, areas of the database may be tagged as good hiding places (they may be shadow areas), or pre-calculated journey paths from one database node to another may be stored. In using synthetic vision, we change the way the AI works, from a prepared, programmer-oriented behaviour to the possibility of novel, unpredictable behaviour.

A number of advantages accrue from allowing an autonomous agent to perceive his environment via a synthetic vision module. First, it may enable an AI architecture for an autonomous agent that is more ‘realistic’ and easier to build. Here we refer to an ‘on board’ AI for each autonomous agent. Such an AI can interpret what is seen by the character, and only what is seen. Isla and Blumberg [Isla02] refer to this as sensory honesty and point out that it “…forces a separation between the actual state of the world and the character’s view of the state of the world”. Thus the synthetic vision may render an object but not what is behind it.

Second, a number of common games operations can be controlled by synthetic vision. A synthetic vision can be used to implement local navigation tasks such as obstacle avoidance. Here the agent’s global path through a game level may be controlled by a high level module (such as A* path planning or game logic). The local navigation task may be to attempt to follow this path by taking local deviations where appropriate. Also, synthetic vision can be used to reduce collision checking. In a games engine this is normally carried out every frame by checking the player's bounding box against the polygons of the level or any other dynamic object. Clearly if there is free space ahead you do not need every-frame collision checking.

Third, easy agent-directed control of the synthetic vision module may be possible, for example, look around to resolve a query, or follow the path of a moving object. In the case of the former this is routinely handled as a rendering operation. A synthetic vision can also function as part of a method for implementing inter-agent behaviour.

Thus, the provision of synthetic vision reduces to a specialized rendering which means that the same technology developed for fast real-time rendering of complex scenes is exploited in the synthetic vision module. This means that real-time implementation is straightforward.

However, despite the ease of producing a synthetic vision, it seems to be only an occasionally employed model in computer games and virtual reality. Tu and Terzopoulos [TuTe94] made an early attempt at synthetic vision for artificial fishes. The emphasis of this work is a physics-based model and reactive behaviour such as obstacle avoidance, escaping and schooling. The fishes are equipped with a “cyclopean” vision system with a 300 degree field of view. In their system an object is “seen” if any part of it enters the view volume and is not fully occluded by another object. Terzopoulos et al [Terz96] followed this with a vision system that is less synthetic in that the fishes’ vision system is initially presented with retinal images which are binocular photorealistic renderings. Computer vision algorithms are then used to accomplish, for example, predator recognition. This work thus attempts to model, to some extent, the animal visual processes rather than bypassing them by rendering semantic information into the viewport.

In contrast, a simpler approach is to use false colouring in the rendering to represent semantic information. Blumberg [Blum97] uses this approach in a synthetic vision based on image motion energy that is used for obstacle avoidance and low-level navigation. A formula derived from image frames is used to steer the agent, which, in this case, is a virtual dog. Noser et al [Nose95] use false colouring to represent object identity and, in addition, introduce a dynamic octree to represent the visual memory of the agent. Kuffner and Latombe [Kuff99] also discuss the role of memory in perception-based navigation, with an agent planning a path based on its learned model of the world.


One of the main aspects that must be addressed for computer games is to make the synthetic vision fast enough. We achieve this, as discussed above, by making use of existing real-time rendering speed-ups as used to provide the game player with a view of the world. We propose that two viewports can be effectively used to provide two different kinds of semantic information. Both use false colouring to present a rendered view of the autonomous agent’s field of view. The first viewport represents static information and the second viewport represents dynamic information. Together the two viewports can be used to control agent behaviour. We discuss only the implementation of simple memoryless reactive behaviour, although far more complex behaviour is implementable by exploiting synthetic vision together with memory and learning. Our synthetic vision module is used to demonstrate low-level navigation, fast object recognition, fast dynamic object recognition and obstacle avoidance.


Previous Work

Beyond the brief mention of current research in the previous section, it is necessary to give a short description of each of the relevant works in the field.

We can think of the following two opposing approaches:

- Pure Synthetic Vision, also called Artificial Vision in [Nose95b]; quoting the same reference, it is “a process of recognizing the image of the real environment captured by a camera (…) is an important research topic in robotics and artificial intelligence.”

- No Vision At All, i.e., not using any vision system for our characters.

We can imagine a straight line with Pure Synthetic Vision at one extreme and No Vision At All at the other end. Each of the previously mentioned approaches falls somewhere in between.


Figure 1. Approaches: a graphical view.

Bruce Blumberg describes in [Blum97a] a synthetic vision based on motion energy that he uses in his autonomous characters for obstacle avoidance and low-level navigation. He renders the scene with false colouring, taking the information from a weighted formula that combines flow (pixel colours from the last frames) and mass (based on textures), dividing the image in half and taking differences in order to steer. A detailed implementation can be found in [Blum97b]. Related research on Synthetic Characters can be found in [Blum01].

James Kuffner presents in [Kuff99a] a false colouring approach that he uses for digital actor navigation with collision detection, implementing visual memory as well. Details can be found in [Kuff99b]. For a quick introduction and other related papers, you can navigate through [Kuff01].

Hansrudi Noser et al use synthetic vision in [Nose95a] for digital actor navigation. Vision is the only connection between the environment and the actors. Obstacle avoidance, as well as knowledge representation, learning and forgetting problems, are solved based on the actors’ vision system. A voxel-based memory is used for all these tasks, as well as for path searching. Their vision representation makes it difficult to quickly identify visible objects, which is one of our visual system’s goals. Even so, it would be very interesting to integrate their visual memory proposals with this thesis. The ideas mentioned are used in [Nose98], plus aural and tactile sensors, to make a simple tennis game simulation.

Kuffner’s and Noser’s vision works were the most influential ones during the making of this thesis.

Olivier Renault et al [Rena90] develop a 30×30 pixel synthetic vision for animation, using the front buffer for a normal rendering, the back buffer for object identification, and the z-buffer for distances. They propose high-level behaviours for going through a corridor while avoiding obstacles.

Rabie and Terzopoulos implement in [Rabi01] a stereoscopic vision system for artificial fish navigation and obstacle avoidance. Similar ideas are used by Tu and Terzopoulos in [TuTe94] to develop fish behaviours more deeply.


Craig W. Reynolds explains in [Reyn01] a set of behaviours for autonomous agents’ movement, such as ‘pursuit’, ‘evasion’, ‘wall following’ and ‘unaligned collision avoidance’, among others. This very interesting work could serve as a guide for developing complex AI modules for computer game NPC’s.

Damián Isla discusses in [Isla02] AI potential using synthetic characters.

John Laird discusses in [Lair00a] human-level AI in computer games. He proposes in [Lair01] design goals for autonomous synthetic characters. Basically, his research is focused on improving artificial intelligence for NPC’s in computer games; an example is the Quakebot [Lair00b] implementation. For more research from J. Laird, refer to [Lair02].

Most other investigations are based on the previously mentioned authors’ ideas.


Problem Statement

We will define our concept of Synthetic Vision as the visual system of a virtual character who lives in a 3D virtual world. This vision represents what the character sees of the world, the part of the world sensed by his eyes. Technically speaking, it takes the form of the scene rendered from his point of view.

However, in order to have a vision system useful for computer games, it is necessary to find vision representations from which we can extract enough information to make autonomous decisions.

The Pure Synthetic Vision approach is not useful today for computer games, since the amount of information that can be gathered in real time is very limited: little more than shape recognition and obstacle avoidance could be achieved. This is a research field in its own right; any Robot Vision literature will show all the problems related to it.

No Vision At All is what we do not want.

So, our task is to find a model that falls somewhere in between (Figure 2).


Figure 2. Aimed model place.


Synthetic Vision Model

Our goal was to create a special rendering from the character’s point of view, in order to produce a screen representation in a viewport, so that the AI system could eventually undertake:

- Obstacle avoidance.
- Low-level navigation.
- Fast object recognition.
- Fast dynamic object detection.

We must understand “fast” from a subjective, results-dependent point of view: “fast enough” to do an acceptable job in a real-time 3D computer game.

To reach these goals, we propose a synthetic vision approach that uses two viewports: the first represents static information, and the second dynamic information.

We will assume a 24-bit RGB model to represent colour per pixel.

In figure 3 there is an example of the normal rendered viewport and the corresponding desired static viewport.

Figure 3. Static viewport (left) obtained from the normal rendered viewport (right).

Static Viewport

The static synthetic vision is mainly useful for identifying objects, taking the form of a viewport with false colouring, similar to that described in [Kuff99].

Definitions

We will make the following definitions:

Object. An item with 3D shape and mass.

Class of Object. Grouping of objects with the same properties and nature.

In our context, for example, the health power-up located near the fountain on some imaginary game level is an object, whilst all the health power-ups of the game level constitute a class of object.

Each class of object has an associated colour id. That is, there exists a function c that maps classes of objects to colours. This is an injective function, since no two different classes of objects have the same colour.


So, given

CO : the set of classes of objects,

and

C = [0, 1] × [0, 1] × [0, 1]

the set of 3D vectors with real values between zero and one in each component. Each vector represents a colour in the RGB model: the first component corresponds to the quantity of red, the second to green, and the third to blue.

c : CO → C

is the mapping function from classes of objects to colours.

∀ co1, co2 ∈ CO : c(co1) = c(co2) ⇒ co1 = co2

The function is injective.

Since CO is a finite set and C is infinite, not all possible colours are used, and the function is neither surjective nor bijective.

However, we can create a mapping table in order to know which colour corresponds to each class of object. See an example in Table 1.

Class of Object      Colour
Health Power-Ups     (1, 1, 0)
Ammo Power-Ups       (0, 1, 1)
Enemies              (1, 0, 1)

Table 1. Classes of Objects and Colour Ids Mapping Table.
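The mapping c and its injectivity check can be sketched in a few lines. The dictionary below is a hypothetical encoding of Table 1; the key names are our own and not part of the thesis implementation:

```python
# Hypothetical encoding of Table 1: classes of objects -> RGB colour ids.
COLOUR_ID = {
    "health_powerup": (1.0, 1.0, 0.0),
    "ammo_powerup":   (0.0, 1.0, 1.0),
    "enemy":          (1.0, 0.0, 1.0),
}

def is_injective(mapping):
    # c is injective iff no two classes share the same colour id.
    return len(set(mapping.values())) == len(mapping)
```

Because the mapping is injective, reading a colour from the static buffer and inverting this table recovers the class of the object under a given pixel.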

Level Geometry

Typically, besides objects as defined in the previous section, 3D computer games distinguish between ‘normal objects’, or items, and level geometry. The definitions are:

Item. Any object, as defined in the previous section, that is not part of the level geometry.

Level Geometry. All the polygons that make up the structure of each game level, i.e., floor, walls, etc.

However, the level geometry so defined may not strictly be an object, because it is usually made of a shell of polygons that does not necessarily form a solid object with mass.

Despite what has just been said, we will divide the level geometry into three classes that will be part of the CO set of classes of objects:

- Floor: any polygon from the level geometry whose normalized normal has a Z component greater than or equal to 0.8.
- Ceiling: any polygon from the level geometry whose normalized normal has a Z component less than or equal to -0.8.
- Wall: every polygon from the level geometry that does not fit either of the two previous cases.
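This classification can be sketched as a small function. The thresholds follow the definitions above; the function name is our own, and the Z-up coordinate convention assumed here is the one the thesis uses:

```python
def classify_level_polygon(normal):
    # Normalize the polygon normal, then classify by its Z component
    # (Z points up in the coordinate system assumed by the thesis).
    nx, ny, nz = normal
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    nz /= length
    if nz >= 0.8:
        return "Floor"
    if nz <= -0.8:
        return "Ceiling"
    return "Wall"
```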


We assume a coordinate system where Z points up, X right, and Y forward (figure 4).

Figure 4. Coordinate system used: Z points up, Y forward, X right.

In Table 2, we give an extended mapping table that adds the level geometry classes of objects.

Class of Object      Colour
Health Power-Ups     (1, 1, 0)
Ammo Power-Ups       (0, 1, 1)
Enemies              (1, 0, 1)
Wall                 (0, 0, 1)
Floor                (0, 1, 0)
Ceiling              (1, 0, 0)

Table 2. Extended Classes of Objects and Colour Ids Mapping Table with Level Geometry.

Depth

Identification is not enough: we also need to know something about position. The variables that every 3D game engine manages, and for which we have a specific use, are:

Camera Angle. It is the angle of vision, from the character’s point of view, with which the scene is rendered.

Near Plane (NP). The near plane of the scene view frustum volume [Watt01].

Far Plane (FP). The far plane of the scene view frustum volume [Watt01].

We can gain knowledge about position by combining the previous variables with the depth information of each pixel rendered in the static viewport.

When the static viewport is rendered, a depth buffer is used to decide whether a given pixel needs to be drawn. When the rendering is finished, each pixel of the static viewport has a corresponding value in the depth buffer.

Let dx,y = DepthBuffer[x,y] be the depth value of the pixel at coordinates (x, y).

dx,y ∈ [0, 1]

In world coordinates, the perpendicular distance between the character and the object that the pixel belongs to is defined as:

P = (FP – NP) * dx,y + NP
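As a sketch, this conversion is a single linear mapping. It assumes the linear depth values defined above; the function and parameter names are our own:

```python
def pixel_distance(d, near_plane, far_plane):
    # P = (FP - NP) * d + NP, with d in [0, 1] read from the depth buffer.
    return (far_plane - near_plane) * d + near_plane
```

For example, with NP = 1 and FP = 100, a depth value of 0 maps to a distance of 1 and a depth value of 1 maps to a distance of 100.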


Dynamic Viewport

The dynamic synthetic vision is useful, as its name implies, for representing instantaneous movement information about the objects seen in the static viewport. Like the static one, it takes the form of a viewport with false colouring but, in place of object identification, colours represent the velocity vector of each object. As we use the RGB model, we will define how to obtain each pixel colour component. The dynamic viewport has the same size as the static one.

R, G, B ∈ [0, 1]

The Red, Green, and Blue colour components are real numbers between 0 and 1.

And given

Vmax ∈ ℝ+

the maximum velocity allowed in the system, a constant positive real number.

If Vx,y is the velocity vector of the object located at coordinates (x, y) at the static viewport,

R(x,y) = min(||Vx,y|| / Vmax, 1)

The red colour component is the minimum of 1 and the velocity magnitude of the object located at coordinates (x, y) of the static viewport divided by the maximum velocity allowed.

If D is the direction/vision vector of the agent, we can normalize both it and the velocity vector Vx,y:

DN = D / ||D||

VNx,y = Vx,y / ||Vx,y||

And then we obtain

c = VNx,y · DN = VNx,y,1 DN1 + VNx,y,2 DN2 + VNx,y,3 DN3

(where the superscript N denotes a normalized vector).

c ∈ [-1, 1]

c, the dot product, is the cosine of the angle between VNx,y, the normalized velocity vector of the object located at coordinates (x, y) of the static viewport, and DN, the normalized direction and vision vector of the non-player character. The angle ranges from 0 to 180º, so c is a real number between -1 and 1.

G(x,y) = c * 0.5 + 0.5

The green colour component maps the cosine c into the interval [0, 1]. A cosine value of zero produces a green colour component of 0.5.

s = (1 – c2)

s ∈ [0, 1]

s is the sine of the angle between the normalized velocity vector VNx,y of the object located at coordinates (x, y) of the static viewport and D, the direction and vision vector of the non-player character. It is calculated from the cosine of the same angle.


s is a real number between 0 and 1.

B(x,y) = s

The blue colour component is a direct mapping of the sine s into the interval [0, 1]. A cosine value of zero therefore produces a green colour component of 0.5 and a blue colour component of 1.

With these definitions, a fully static object will have the colour (0.0, 0.5, 1.0). Dynamic objects will have different colours depending on their movement direction and velocity.
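Putting the three components together, the per-pixel encoding can be sketched as follows. This is a hypothetical helper that assumes 3D vectors as tuples; the function and parameter names are our own:

```python
import math

def dynamic_pixel_colour(velocity, direction, v_max):
    # R encodes speed relative to the maximum velocity Vmax.
    speed = math.sqrt(sum(v * v for v in velocity))
    r = min(speed / v_max, 1.0)
    if speed == 0.0:
        # A fully static object gets the colour (0.0, 0.5, 1.0).
        return (0.0, 0.5, 1.0)
    # c is the cosine of the angle between the normalized velocity
    # and the agent's normalized direction/vision vector.
    d_len = math.sqrt(sum(d * d for d in direction))
    c = sum(v * d for v, d in zip(velocity, direction)) / (speed * d_len)
    g = c * 0.5 + 0.5                     # G maps cos from [-1, 1] to [0, 1]
    b = math.sqrt(max(0.0, 1.0 - c * c))  # B is the sine of the same angle
    return (r, g, b)
```

For instance, an object moving perpendicular to the agent's view direction at half the maximum velocity would be encoded as (0.5, 0.5, 1.0).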

Buffers

The three kinds of information defined so far (static, depth, and dynamic) can be kept in memory buffers for later use. Each element of the static and dynamic buffers contains three values, one per colour component; the depth buffer contains a single value. The size of each buffer is fixed, screen height times screen width, i.e. a two-dimensional matrix. So, given a viewport coordinate (x, y), you can obtain from the buffers the object id, its depth, and its dynamic information.
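A minimal sketch of such buffers follows; the class and method names are our own, not those of the Fly3D implementation:

```python
class SyntheticVisionBuffers:
    def __init__(self, width, height):
        # One entry per screen pixel: colour id, depth value in [0, 1],
        # and dynamic (velocity-encoding) colour.
        self.static = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
        self.depth = [[1.0] * width for _ in range(height)]
        self.dynamic = [[(0.0, 0.5, 1.0)] * width for _ in range(height)]

    def pixel(self, x, y):
        # Everything the AI can know about the pixel at (x, y).
        return self.static[y][x], self.depth[y][x], self.dynamic[y][x]
```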

Remarks

We say that our vision system is ‘perfect’ because the character does not experience any occlusions caused by lighting conditions.

To make the dynamic system more realistic, it is possible to add some noise in order to obtain less precise data. Why? Humans, through their eyes, can only make estimations (sometimes very good ones!), and it is improbable that they could determine the exact velocity of any given moving object.

Figure 5. From left to right, static viewport, normal rendered viewport, and dynamic viewport.

See Appendix B for details of our synthetic vision implementation over Fly3D [Fly01; Watt01; Watt02].

The expected dynamic viewport is shown together with the static and normal viewports in figure 5.


Brain Module: AI

Static and dynamic information about what the character is seeing at a given moment is useful only if there is a brain that knows how to interpret those features, decide, and act according to a given behaviour. Synthetic vision is a simple layer that abstracts the world through the character’s eyes and represents it in a useful way. The layer in charge of taking the information sensed by the synthetic vision module as input, processing it, and acting on it, is what we call the Brain Module, or the AI module.

AI Module

To demonstrate a possible use of the described synthetic vision, an artificial intelligence module was developed to give autonomous behaviour to an NPC within an FPS game.

Due to its simplicity, the developed module could not be used in commercial games without adding new features and behaviours and refining the implemented movements. It is intended only to show how our synthetic vision approach could be used, without deeply developed AI techniques.

FPS Definition

The main characteristics of FPS games are:

• The game takes place in a series of 3D modelled regions (interiors, exteriors, or a combination of both) called levels.

• The player must fulfil different tasks in each level in order to reach the following one. That is to say, every level has a general goal, reaching the next level, which is made up of specific goals such as: get the keys to open and walk through the door, kill the level’s boss, etc. It is also common to find sub-goals that are not necessary to achieve the general one, such as “getting (if possible) 100 gold units”.

• The player can acquire different weapons as he advances through the game. In general, he will have about 10 different kinds of weapons at his disposal, most of them with finite ammunition.

• The player has an energy or health level; as soon as it reaches 0, he dies.

• The player constantly has to face enemies (NPCs), who will try to attack and kill him, according to their characteristics, with weapons or in hand-to-hand combat.

• Some NPCs may collaborate with the player.

• The player can gather items, called power-ups, that increase health or weapon values, or give special powers for a limited time.

• The player sees the world as if through the eyes of the character he is personifying, i.e., a first-person view.

Browsing the Internet a little is recommended to become more familiar with this kind of game. Wolfenstein 3D [Wolf01] was one of the pioneers, whereas the Unreal [Unre01] and Quake [Quak01] series are more than good enough examples.


Next, the basic characteristics of the FPS implemented with our synthetic vision model will be described.

Main NPC

The FPS is inhabited by the main character, to whom we give autonomous life. His name is Bronto. He has two intrinsic properties: Health (H) and Weapon (W). Although he has a weapon property, in our implementation he cannot shoot.

As an invariant,

H  0, 0  H  100

Bronto’s health ranges between 0 and 100 (natural numbers). Initially,

Hini = 100

The first rule that we have is:

H = 0  Bronto dies

Something similar happens with the weapon property, which follows the same invariant:

W  0, 0  W  100

Bronto’s weapon value ranges between 0 and 100 (natural numbers). Initially,

Wini = 100

But in this case, W = 0 only means that Bronto has no ammunition.

In addition to not being able to shoot, Bronto cannot be shot by enemies. Given these two simplifications, we have decided that both health and weapon values diminish over time, linearly and discretely. That is to say, given:

+ t   0 as actual time. + tw0   0 as the time that weapon starts to diminish from.

W0   as weapon value at tw0 time. W0 = Wini initially. tdw   as the time interval in seconds of weapon decreasing.

Dw   as weapon decreasing factor.

We define that,

t = tw0  W(t) = W0

W takes the value W0 at the initial counting time tw0.

k  0, tw0 + k tdw  t < tw0 + (k+1) tdw  W(t) = max(W(tw0 + k tdw) – Dw, 0)

If t is greater than tw0, W decreases linearly in tdw intervals until reaching zero, and then stays at that value.
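The decay above admits a simple closed form, W(t) = max(W0 − Dw·⌊(t − tw0)/tdw⌋, 0), consistent with the worked example shown later in figure 6; a sketch under that reading:

```python
def decayed_value(t, t0, v0, td, d):
    """Piecewise-constant linear decay: value v0 at time t0, minus d for
    every completed interval of td seconds, never below zero. Mirrors the
    W(t)/H(t) definitions in the text (closed-form reading)."""
    if t < t0:
        raise ValueError("t must be >= t0")
    k = int((t - t0) // td)  # number of completed decay intervals
    return max(v0 - k * d, 0)
```

For example, with the figure 6 parameters (W0 = 100, tdw = 4, Dw = 4) the weapon value reaches zero exactly at t = 100.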

It is analogous for health:


+ th0   0 as the time that health starts to diminish from.

H0   as health value at th0 time. H0 = Hini initially. tdh   as the time interval in seconds of health decreasing.

Dh   as health decreasing factor.

t = th0  H(t) = H0

k  0, th0 + k tdh  t < th0 + (k+1) tdh  H(t) = max(H(th0 + k tdh) – Dh, 0)

[Chart: Health (H) and Ammo (W) versus time (sec).]

Figure 6. An example of the weapon and energy diminishing system: the game starts, then both properties decrease gradually. The weapon value reaches 0 at 100 seconds, whereas health does so at 170 seconds; at that moment Bronto dies.

Take as an example: t0 = 0, start of game; H0 = Hini = 100; tdh = 5 seconds; Dh = 3; W0 = Wini = 100; tdw = 4 seconds; Dw = 4.

The situation described in figure 6 results: Bronto’s weapon value diminishes gradually until it reaches 0 at 100 seconds, whereas the health value reaches 0 at 170 seconds; at that instant Bronto dies.

Power-Ups

The initial health and weapon values, as well as the diminishing system, have already been defined, and it has been specified that when health reaches zero Bronto dies. We now define a way of increasing Bronto’s properties: power-ups.


Our FPS has only two power-ups: Health (Energy) and Weapon (Ammunition, or Ammo). When Bronto passes over one of them, an event is triggered that increases the corresponding property by a fixed amount:

Hafter = Hbefore + Ah

Wafter = Wbefore + Aw

where

Ah ∈ ℕ is the increment to the NPC’s health property when he takes a health power-up, and

Aw ∈ ℕ is the increment to the NPC’s weapon property when he takes a weapon power-up.

Besides increasing the corresponding property value, taking a power-up sets a new tw0 or th0 (depending on the power-up taken) equal to the present time. In addition, W0 = Wafter or H0 = Hafter is also set.

This means that taking a power-up restarts the cycle by which Bronto’s property value is decreased.

Let’s take the same example as in the previous section: t0 = 0, start of game; H0 = Hini = 100; tdh = 5 seconds; Dh = 3; W0 = Wini = 100; tdw = 4 seconds; Dw = 4.

Let’s add now:

Aw = 10. Ah = 5.

And suppose that Bronto obtains:

• A weapon power-up at 37 seconds: its ammunition value increases from 64 to 74, establishing a time tw0 of 37, from which the weapon value is discounted every tdw seconds.

• Another weapon power-up at 43 seconds: its value increases from 70 to 80, establishing tw0 again, now at 43.

• A health power-up at 156 seconds: its energy value increases from 7 to 12, re-establishing time th0 to 156.
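The scenario above can be checked with a small second-by-second simulation. The parameter values follow the worked example; the cap at 100 on pickup is our reading of the 0 ≤ H, W ≤ 100 invariant, and the function names are illustrative:

```python
def simulate(events, horizon=200):
    """Simulate Bronto's health/weapon decay. `events` maps time (s) to
    'health' or 'weapon' pickups. Parameters follow the worked example:
    tdh=5, Dh=3, tdw=4, Dw=4, Ah=5, Aw=10.
    Returns (first time W hits 0, time of death)."""
    td_h, d_h, td_w, d_w, a_h, a_w = 5, 3, 4, 4, 5, 10
    h0 = w0 = 100
    th0 = tw0 = 0
    w_zero = death = None
    for t in range(horizon + 1):
        # closed-form decay from the most recent reset origin
        w = max(w0 - d_w * ((t - tw0) // td_w), 0)
        h = max(h0 - d_h * ((t - th0) // td_h), 0)
        kind = events.get(t)
        if kind == 'weapon':      # pickup resets the weapon decay origin
            w0, tw0 = min(w + a_w, 100), t
            w = w0
        elif kind == 'health':    # pickup resets the health decay origin
            h0, th0 = min(h + a_h, 100), t
            h = h0
        if w == 0 and w_zero is None:
            w_zero = t
        if h == 0:
            death = t
            break
    return w_zero, death
```

Running it with the three pickups above reproduces the figures quoted in the text: the weapon value first reaches zero at 123 seconds, and Bronto dies at 176 seconds.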


[Chart: Health (H) and Ammo (W) versus time (sec), with power-up pickups.]

Figure 7. The same example as figure 6, but this time Bronto takes power-ups: two ammo power-ups at 37 and 43 seconds, and one energy power-up at 156 seconds. The weapon value now reaches 0 at 123 seconds, whereas energy does so at 176 seconds, when Bronto dies.

Bronto’s weapon value first reaches zero at 123 seconds, and he dies at 176 seconds, when energy reaches zero. The situation is graphically represented in figure 7.

The whole process is described in figure 8 by means of a state diagram. When the game starts (Start Game event), initial values are established. While the game is running (Game Running state), several events can arise: those that decrease Bronto’s weapon and energy values, triggered when the system reaches a given time, and those that increase them, triggered when a power-up is taken. The event produced when Bronto’s energy reaches zero makes the transition to Bronto’s death.

[State diagram: the Start Game event sets tw0 = th0 = 0, H0 = Hini, W0 = Wini and enters the Game Running state; timed events at tw0 + k·tdw and th0 + k·tdh set W = W − Dw and H = H − Dh; power-up events reset tw0 or th0 to t and set W = W + Aw or H = H + Ah; the event H = 0 leads to the Bronto Dies state.]

Figure 8. Events that affect and are affected by Bronto’s health and weapon values during the game.

Bronto Behaviour

We now describe the behaviour Bronto assumes during the game, based on the values of his health and weapon properties.

Let:

Hut ∈ ℕ, the health upper threshold.
Hlt ∈ ℕ, the health lower threshold.
Wut ∈ ℕ, the weapon upper threshold.
Wlt ∈ ℕ, the weapon lower threshold.
H and W, the health and weapon property values as defined in previous sections.

So that:

0 < Hlt < Hut < 100 and 0 < Wlt < Wut < 100

Bronto will be in any of the following six possible states:

1. Walk Around (WA), when Hut ≤ H and Wut ≤ W. Hut and Wut denote thresholds above which Bronto has no specific objective; his behaviour reduces to walking without a fixed path or course.

2. Looking for Health (LH), when Hlt ≤ H < Hut and Wut ≤ W. When Bronto has enough ammunition (over Wut) and starts to feel a need for energy (its value is between Hlt and Hut), his objective is to pick up health power-ups.

3. Looking for Weapon (LW), when Hut ≤ H and Wlt ≤ W < Wut. When Bronto has enough health (over Hut) and starts to feel a need for ammo (its value is between Wlt and Wut), his objective is to pick up weapon power-ups.

4. Looking for Any Power-Up (LHW), when Hlt ≤ H < Hut and Wlt ≤ W < Wut. When Bronto feels a need for both health and weapon, his objective is to pick up any power-up.

5. Looking Quickly for Weapon (LQW), when Hlt ≤ H and W < Wlt. When Bronto feels an extreme need for ammo, because its value is under the lower threshold Wlt, his objective is to collect ammo power-ups as soon as possible.

6. Looking Quickly for Health (LQH), when H < Hlt. When Bronto feels an extreme need for health, because its value is under the lower threshold Hlt, the search for energy power-ups takes priority over any other behaviour, since if his health value reaches zero, he will die.
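The six states can be expressed as a simple classification over (H, W). The threshold values below are illustrative, since the thesis only constrains 0 < lt < ut < 100; for state 5 we read the weapon condition as W < Wlt, mirroring state 6:

```python
def bronto_state(h, w, h_lt=25, h_ut=50, w_lt=25, w_ut=50):
    """Map (H, W) to one of the six behaviour states. Threshold values
    are illustrative defaults, not values from the thesis."""
    if h < h_lt:
        return 'LQH'                      # extreme health need dominates
    if w < w_lt:
        return 'LQW'                      # extreme ammo need (and h >= h_lt)
    if h >= h_ut and w >= w_ut:
        return 'WA'                       # no specific objective
    if w >= w_ut:
        return 'LH'                       # h_lt <= h < h_ut
    if h >= h_ut:
        return 'LW'                       # w_lt <= w < w_ut
    return 'LHW'                          # both in the middle bands
```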

Bronto’s behaviour is represented by a reduced state diagram in figure 9. Even though it is possible to make a transition from any state to any other, only the usual, expected transitions are represented with arrows. That is to say, a change from Walk Around to Looking Quickly for Health would have to be caused by a steep drop in energy, passing from H ≥ Hut directly to H < Hlt.


[State diagram: transitions among the six behaviour states, driven by H crossing Hlt and Hut and W crossing Wlt and Wut.]

Figure 9. Reduced state diagram of Bronto’s behaviour. Only expected transitions between states are represented with arrows, even though it is possible to go from any state to any other.

Behaviour States Solution

Only the static viewport is used to resolve all the previously defined behaviour states. From it we can obtain the information of interest, such as the presence of power-ups and data for navigation and obstacle avoidance.

Basically, the process consists of analysing, on a per-frame basis, the information provided by the static viewport, depth included, and choosing a destination point in the viewport that corresponds to level geometry, specifically floor. The chosen coordinate is unprojected to obtain the world coordinates of the destination point. Then a Bezier curve is generated between Bronto’s current position and the destination; the generated curve is the path to follow.

Destination Calculation

Once the destination point has been chosen in (x, y) viewport coordinates, it is unprojected to obtain its world coordinates. To do that, in addition to the chosen coordinates, the current model matrix M and projection matrix P (both found in every 3D engine) are needed, together with the viewport V and the camera angle used to render it.

The process consists of applying the matrices in the inverse order of the projection process described in [Eber01].

The calculation is:


 2x V[0]   1  V[2]   2y V[1]  T  P.M 1. 1  V[3]     1     1 

The first three coordinates of T correspond to the (x, y, z) world coordinates of the point chosen at coordinates (x, y) in the static viewport.

If an OpenGL-based [Open01] engine is used, as Fly3D [Fly01] is, the operation reduces to calling the GLU function gluUnProject.
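A sketch of the unprojection, assuming the combined inverse (P·M)⁻¹ has already been computed by the engine; the final division by the homogeneous coordinate is what gluUnProject performs internally:

```python
def unproject(x, y, inv_pm, viewport):
    """Map viewport coordinates (x, y) to world coordinates by applying a
    precomputed 4x4 inverse of (P . M) (row-major, list of lists) to the
    NDC point built as in the formula above. Computing the inverse is left
    to the engine; gluUnProject wraps this whole computation."""
    v0, v1, v2, v3 = viewport
    ndc = [2.0 * (x - v0) / v2 - 1.0,   # x to normalized device coordinates
           2.0 * (y - v1) / v3 - 1.0,   # y likewise
           1.0,                          # depth value (far plane here)
           1.0]                          # homogeneous coordinate
    t = [sum(inv_pm[r][c] * ndc[c] for c in range(4)) for r in range(4)]
    return tuple(t[i] / t[3] for i in range(3))  # perspective divide
```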

Bezier Curve Generation

The Bezier curve is generated on the plane parallel to the XY plane on which Bronto is standing; therefore it is a 2D curve. It is not necessary to construct a 3D curve in this case because Bronto has only 4 degrees of freedom. A 3D curve must be calculated when dealing with objects with 6 degrees of freedom, such as spacecraft.

[Diagram: control points P0 to P3, initial position B, destination point T, and direction vector D.]

Figure 10. Bezier curve. Control points (Pi), initial position (B), destination point (T), and Bronto’s visual/direction vector (D) are represented.

Four control points are needed to define a Bezier curve of degree 3. The first and last control points coincide with the initial and final curve points; the other two control points do not touch the curve (except when successive control points are collinear). Refer to [Watt01] for more information about Bezier curves.

Discarding the Z coordinate in all cases, let:

B : Bronto’s current position in world coordinates.
T : The destination point in world coordinates.
D : Bronto’s direction/vision vector.

PathDistance = || T – B ||, the shortest distance between the current position and the destination.

PathDirection = ( T – B ) / PathDistance, the normalized vector from the current position to the destination.

Control points are established as:

P0 = B
P1 = B + D · (PathDistance / 3.0)
P2 = T – PathDirection · (PathDistance / 3.0)
P3 = T


With these control points, the curve’s initial tangent coincides with Bronto’s initial vision direction, and the curve’s final tangent coincides with the vector between the initial and destination points.

Used variables and curve control points are shown in figure 10.
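The control-point construction and the resulting cubic can be sketched as follows (2D tuples; the evaluation uses the standard Bernstein form, which the text does not spell out):

```python
import math

def bezier_path(b, t_pt, d):
    """Build the four control points of the degree-3 Bezier path described
    above. b: current position, t_pt: destination, d: unit direction
    vector, all as 2D tuples."""
    dist = math.hypot(t_pt[0] - b[0], t_pt[1] - b[1])
    pdir = ((t_pt[0] - b[0]) / dist, (t_pt[1] - b[1]) / dist)
    p0 = b
    p1 = (b[0] + d[0] * dist / 3.0, b[1] + d[1] * dist / 3.0)
    p2 = (t_pt[0] - pdir[0] * dist / 3.0, t_pt[1] - pdir[1] * dist / 3.0)
    p3 = t_pt
    return p0, p1, p2, p3

def bezier_eval(ctrl, u):
    """Evaluate the cubic Bezier at parameter u in [0, 1]."""
    p0, p1, p2, p3 = ctrl
    s = 1.0 - u
    return tuple(s**3 * p0[i] + 3*s*s*u * p1[i] + 3*s*u*u * p2[i]
                 + u**3 * p3[i] for i in range(2))
```

The curve starts at B, ends at T, and its initial tangent (P1 − P0) points along D, as the text requires.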

Walk Around

A simple heuristic has been developed for the Walk Around behaviour. Even though its results are not remarkable, they are good enough for demonstration purposes.

During the Walk Around state, Bronto chooses a new destination point, in synthetic vision viewport coordinates (x, y), whenever he is idle or has already covered a certain fraction of the current Bezier curve path. That fraction is called Cf:

Cf ∈ ℝ, 0.0 < Cf ≤ 100.0

Another parameter is Bronto’s half-width, Bbbr. In world coordinates it is determined as half the width of his bounding box; what is actually used, however, is an estimate of Bronto’s half-width measured in synthetic vision viewport pixels:

Bbbr ∈ ℕ, 0 < Bbbr < 80

This parameter becomes important under certain circumstances, for example when Bronto tries to go through corridors narrower than his width.

Two other constants determine the viewport margins where destination points cannot be chosen:

Bwaupd ∈ ℕ, 0 < Bwaupd < 80, the upper and lateral margins, in pixels, of the synthetic vision viewport; and

Bwalpd ∈ ℕ, 0 < Bwalpd < 80, the lower margin, in pixels, of the synthetic vision viewport.

Figure 11. The static viewport and the Walk Around margins. Pixels outside the bold rectangle cannot be chosen as destination points.

Figure 11 shows these margins inside the viewport.


Finally, the synthetic vision static viewport information is also used, defined as a matrix of 160 columns by 120 rows. The first matrix row (row number 0) corresponds to the lowest viewport line. The viewport’s information is accessed as:

vps[i][j], i, j ∈ ℕ₀, where 0 ≤ i < 160 and 0 ≤ j < 120, i being a column and j a row.

If a new destination must be chosen, the heuristic tries to find a rectangle whose lower side coincides with the static viewport’s bottom line, whose width is (2 · Bbbr + 1), whose minimum height is (Bwalpd + Bwaupd), and which is entirely coloured as floor. Every rectangle meeting these conditions is called a free way. See figure 12 for an example.
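The free way test itself reduces to a rectangle-of-floor check; a sketch against a toy vps grid (labels such as 'floor' stand in for the actual colour ids):

```python
def is_free_way(vps, x0, bbbr, bwaupd, bwalpd):
    """Check whether the rectangle of width 2*bbbr + 1 whose left edge is
    x0, anchored on the viewport's bottom line, is entirely floor up to
    the minimum height bwalpd + bwaupd. vps[x][y] holds a class label,
    with row 0 at the bottom of the viewport."""
    min_height = bwalpd + bwaupd
    return all(vps[x0 + i][j] == 'floor'
               for i in range(2 * bbbr + 1)
               for j in range(min_height))
```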

Figure 12. A free way inside the static viewport. The white area is floor, the grey area is wall. The bold rectangle shows the chosen free way; within it are the upper and lower margin lines, and the point that the heuristic will select as the new destination.

When a free way is found (the strategies used are explained in the Walk Around Algorithm section), the rectangle’s half-width coordinate is selected as the destination x coordinate, and the y coordinate is determined as the height of the tallest free way minus the upper margin (Bwaupd).

If the heuristic fails to find a free way, it tries to turn randomly left or right. If it cannot turn either way, no destination point is chosen.

Once Bronto has walked the whole chosen path and no new destination point has been set, he rotates 180º left or right. This can happen when the heuristic fails to find a free way anywhere from Cf to the end of the current path.

Walk Around Algorithm

This section presents the Walk Around pseudocode together with the free way search strategies. Details are explained, and examples shown, at each step where necessary.

Input:
vps[160][120] : A matrix with the content of each static viewport pixel. Row 0 corresponds to the viewport’s bottom line.
Bbbr : Bronto’s estimated half-width in viewport pixels.
Bwaupd : The satisfactory way’s lateral and upper margin in viewport pixels.
Bwalpd : The satisfactory way’s lower margin in viewport pixels.
Cf : The percentage of the walked path from which a new destination point must be selected.

Output:
A new destination point in (x, y) viewport coordinates, if one was found.

If less than Cf% of the current path has been walked, return.

// Initialize the fail flag for the satisfactory way search.
fail = false

// Initialize the new-destination-found flag. A true value is assumed initially.
newDestination = true

// freeway is a predicate that tells whether a potentially satisfactory free way exists. See figure 12 for an example.
freeway = ∃ x0, x1, yG ∈ ℕ₀ : 0 ≤ x0 < x1 < 160 ∧ (x1 − x0) = 2·Bbbr ∧ 0 ≤ yG < (120 − Bwaupd − Bwalpd) ∧ (∀ i, j ∈ ℕ₀, 0 ≤ i < (2·Bbbr + 1) ∧ 0 ≤ j < (yG + Bwaupd + Bwalpd) ⇒ vps[x0 + i][j] = ‘floor’)

// If no free way exists, the search is unsatisfactory: it fails.
If (freeway = false) then fail = true

// A free way is chosen if one of the existing ones is satisfactory according to the search strategies below; otherwise the fail flag is set.

If (freeway = true) then

// Strategy 1: If it exists, the central free way is chosen. A central free way is represented in figure 12.
If (x0 = 79 – Bbbr) makes the predicate true, then that x0 is chosen.

// Strategy 2: If it exists, the right-most free way of the viewport’s left half is chosen. This happens when no central free way exists and, in the row closest to the viewport’s bottom containing at least one non-floor pixel, there is a non-floor pixel closer to the left side than to the right side of the search rectangle. See the example in figure 13. If the precondition holds but no free way is found in the viewport’s left half, the strategy fails; see the example in figure 14.

If ( i, j, k  0, 0 < i < j  Bbbr ^ 0  k < (bwaupd + bwalpd) ^ ( w, s  0, (79 – Bbbr)  w < (79 + Bbbr)

^ 0  s < k ^ vps[w, s] = ‘floor) ^ ( t, u  0, (79 – Bbbr)  t  (79 – Bbbr + i) ^ vps[t, k] = ‘floor’ ^ (79 + Bbbr

- j) < u  (79 + Bbbr) ^ vps[u, k] = ‘floor’) ^ vps[79 + Bbbr - j, k]  ‘floor’) then Choose the maximum x0, with x0 < 79, that makes freeway true.

// Strategy 3: If it exists, the left-most free way of the viewport’s right half is chosen. This strategy is symmetric to strategy 2. It is used when no central free way exists and, in the row closest to the viewport’s bottom containing at least one non-floor pixel, there is a non-floor pixel closer to the right side than to the left side of the search rectangle. If the precondition holds but no free way is found in the viewport’s right half, the strategy fails.

If ( i, j, k  0, 0 < j < i  Bbbr ^ 0  k < (bwaupd + bwalpd) ^ ( w, s  0, (79 – Bbbr)  w < (79 + Bbbr)

^ 0  s < k ^ vps[w, s] = ‘floor’) ^ ( t, u  0, (79 – Bbbr)  t < (79 – Bbbr + i) ^ vps[t, k] = ‘floor’ ^ (79 + Bbbr

- j)  u  (79 + Bbbr) ^ vps[u, k] = ‘floor’) ^ vps[79 - Bbbr + i, k]  ‘floor’) then Choose the minimum x0, with x0 > 79, that makes freeway true.


Figure 13. Strategy 2 in action. In a) the precondition to employ this strategy is fulfilled: no central free way exists, and the row closest to the viewport’s bottom containing a non-floor pixel inside the search rectangle contains a non-floor pixel closer to the left side than to the right one, signalled with a double circle. The dotted line separates the search rectangle into halves. In b) the free way that the strategy finally chooses is shown, together with the new destination point.

Figure 14. Strategy 2 failing. The precondition to employ the strategy is fulfilled; however, no free way is found when the strategy looks for one to the left of the point signalled with a double circle.

// Strategy 4: fail is set to true when it is believed that going forward is not possible, due to a thin corridor or a blocking wall. This happens when no central free way exists and, in the row closest to the viewport’s bottom containing at least one non-floor pixel, the two non-floor pixels closest to the left and right sides of the search rectangle are at the same distance from those sides. A special case occurs when only one such pixel is found. See the examples in figure 15.

If ( i, k  0, 0 < i  Bbbr ^ 0  k < (bwaupd + bwalpd) ^ ( w, s  0, (79 – Bbbr)  w < (79 + Bbbr) ^ 0 

s < k ^ vps[w, s] = ‘floor’) ^ ( t, u  0, (79 – Bbbr)  t < (79 – Bbbr + i) ^ vps[t, k] = ‘floor’ ^ (79 + Bbbr - i) <

Game Applications Analysis - 31 An Investigation Into the use of Synthetic Vision for NPC’s/Agents in Computer Games Enrique, Sebastian – Licenciatura Thesis – DC – FCEyN – UBA - September 2002

u  (79 + Bbbr) ^ vps[u, k] = ‘floor’) ^ vps[79 + Bbbr - i, k]  ‘floor’) ^ vps[79 - Bbbr + i, k]  ‘floor’) then fail = true a) b) 160 160


Figure 15. Strategy 4 examples. In all cases, even though a free way may exist, the search fails if the central rectangle fulfils the strategy’s preconditions. In a) two non-floor pixels are found at the same distance from the left and right sides of the rectangle. In b) the same case is produced by a thin corridor in front of the character. In c) only one non-floor pixel is found at the rectangle’s centre. In d) there is a wall directly in front.

// If a way proved satisfactory, the destination point’s (x, y) coordinates in the static viewport are set.
If fail = false then take the maximum yG that makes the freeway predicate true with the chosen x0; x = x0 + Bbbr + 1; y = yG + Bwalpd.

// If no way proved satisfactory, turn strategies are attempted. The turn heuristics are explained in the following sections. Both algorithms (turn left and turn right) receive the coordinate variables x and y by reference, in order to set the corresponding values if a satisfactory turn is found. Both return true in that case, or false if no turn is possible.


If fail = true then
    // Choose randomly where to turn
    turnTo = random(left, right)

    // If the random selection is left, try to turn there.
    If turnTo = left then
        If not turnLeft(&x, &y) then
            // If turning left failed, try to turn right.
            If not turnRight(&x, &y) then
                // If turning right also failed, no new destination was found.
                newDestination = false

    // If the random selection is right, try to turn there.
    If turnTo = right then
        If not turnRight(&x, &y) then
            // If turning right failed, try to turn left.
            If not turnLeft(&x, &y) then
                // If turning left also failed, no new destination was found.
                newDestination = false

// If any of the strategies found a new destination point, set it.
If newDestination = true then
    Set (x, y) as the new destination

// If no new destination was found and 100% of the current path has been walked (the character is idle), rotate 180º.
If newDestination = false and Bronto is idle then
    Rotate randomly to the left or right 180º.

TurnLeft Algorithm

TurnLeft finds satisfactory coordinates for a left turn if, at the viewport’s Bwalpd height, there exists a ‘floor’ pixel such that every pixel to its right up to the viewport’s centre is floor, the Bwaupd pixels to its left are floor, and the pixel immediately to the left of those Bwaupd is not ‘floor’ or exceeds the viewport’s limits (see figure 16.a). This is what the ‘success’ predicate defined below expresses.

Input:
vps[160][120] : A matrix with the content of each static viewport pixel. Row 0 corresponds to the viewport’s bottom line.
Bwaupd : The satisfactory way’s lateral and upper margin in viewport pixels.
Bwalpd : The satisfactory way’s lower margin in viewport pixels.

Output:
Returns true or false depending on whether a satisfactory turn is found. (x, y) are the by-reference viewport coordinates of the destination of the satisfactory left turn found.

// The success predicate verifies whether there exists a satisfactory destination point to turn to.
success = ∃ k ∈ ℕ, Bwaupd < k < 79 ∧ (∀ w ∈ ℕ₀, k ≤ w < 79 ⇒ vps[w][Bwalpd] = ‘floor’) ∧ vps[k − 1][Bwalpd] ≠ ‘floor’

// If a satisfactory turn is found, the destination point’s coordinates are set.
If success = true then x = k; y = Bwalpd.

// Return true or false depending on the predicate’s value.
Return success
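A direct transcription of the success predicate as reconstructed above, returning the destination instead of writing through reference parameters:

```python
def turn_left(vps, bwaupd, bwalpd):
    """Find a satisfactory left-turn destination: a column k with
    bwaupd < k < 79 such that every pixel in columns k..78 of row bwalpd
    is floor and the pixel at k - 1 is not floor. Returns (k, bwalpd)
    or None. vps[x][y] uses toy class labels, row 0 at the bottom."""
    for k in range(bwaupd + 1, 79):
        if (all(vps[w][bwalpd] == 'floor' for w in range(k, 79))
                and vps[k - 1][bwalpd] != 'floor'):
            return (k, bwalpd)
    return None
```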


Figure 16. Turn strategy. Walk Around has failed as shown in figure 15.d. In a) a left turn was chosen; the destination point is signalled with a crossed circle, and the first non-floor point from the left with a double circle. In b) a right turn was chosen; every pixel to the right is floor, so the right margin is taken from the viewport’s right side. The crossed circle signals the new destination point.

TurnRight Algorithm

TurnRight is symmetric to TurnLeft; it finds satisfactory coordinates for a right turn if, at the viewport’s Bwalpd height, there exists a ‘floor’ pixel such that every pixel to its left down to the viewport’s centre is floor, the Bwaupd pixels to its right are floor, and the pixel immediately to the right of those Bwaupd is not ‘floor’ or exceeds the viewport’s limits (see figure 16.b). This is what the ‘success’ predicate defined below expresses.

Input:
vps[160][120] : A matrix with the content of each static viewport pixel. Row 0 corresponds to the viewport’s bottom line.
Bwaupd : The satisfactory way’s lateral and upper margin in viewport pixels.
Bwalpd : The satisfactory way’s lower margin in viewport pixels.

Output:
Returns true or false depending on whether a satisfactory turn is found. (x, y) are the by-reference viewport coordinates of the destination of the satisfactory right turn found.

// The success predicate verifies whether there exists a satisfactory destination point to turn to.
success = ∃ k ∈ ℕ₀, 79 < k < (159 − Bwaupd) ∧ (∀ w ∈ ℕ₀, 79 < w ≤ k ⇒ vps[w][Bwalpd] = ‘floor’) ∧ vps[k + 1][Bwalpd] ≠ ‘floor’

// If a satisfactory turn is found, the destination point’s coordinates are set.
If success = true then x = k; y = Bwalpd.

// Return true or false depending on the predicate’s value.
Return success


Looking for a Specific Power-Up

Since Bronto has no memory, the following approach is used when he has to look for a power-up: the needed power-up is simply searched for in the current static viewport; if one is found, he goes to the closest one, otherwise the Walk Around behaviour is used.

As in Walk Around, the Cf parameter indicates the fraction of the current path after which a new destination is selected:

Cf ∈ ℝ, 0.0 < Cf ≤ 100.0

The algorithm’s pseudocode can be summarized in the following steps:

1. If less than Cf% of the current path has been walked, return.
2. Create a list, objectlist, of the wanted-class power-ups seen in the static viewport at that moment.
3. If no wanted-class power-ups are seen, use Walk Around.
4. If at least one is found, get from objectlist the power-up p that is closest to Bronto.
5. Get from the static viewport a floor pixel d located in a straight vertical line under p, with approximately the same depth value.
6. If d does not exist, use Walk Around.
7. If it exists, set d as the new destination coordinates.

As can be inferred from step 5, the depth information provided by the static viewport is also needed.

Regarding step 2, creating a list of power-ups seen in the viewport is not a precise task if only the information provided by the defined viewports is used. The following questions arise:

• How can two or more objects of the same class be differentiated when they belong to the same colour area in the static viewport?

• How can it be known whether a single colour area belongs to one object or several?

Considering that power-ups identification is only useful to choose a destination point, it does not matter if it is estimated that there are more objects than there really are. What really matters is to know where is an object and which is the closest one.

The heuristic developed is as follows:

Let

δx, δy ∈ ℕ be the tolerances, in pixels, along the static viewport’s x and y axes, and δd ∈ ℝ, 0 ≤ δd ≤ 1, the depth-buffer tolerance.

Viewport pixels are scanned from left to right and from bottom to top, as follows:

If pixel p is a wanted-class power-up then
For each object o in objectlist
If p.x ∈ [o.x − δx, o.x + δx] and p.y ∈ [o.y − δy, o.y + δy] and p.depth ∈ [o.depth − δd, o.depth + δd] then pixel p is part of object o; stop comparing objects and continue with the next viewport pixel.
If p was not part of any object, add an object o’ to objectlist with: o’.x = p.x; o’.y = p.y; o’.depth = p.depth
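The scan above can be sketched as a small clustering routine. This is a hedged illustration (function name, tolerances and sample pixels are invented, not thesis code):

```python
# Illustrative sketch of the power-up clustering heuristic: merge a pixel
# into an existing object when its x, y and depth all fall within the
# tolerances; otherwise start a new object at that pixel.

def cluster_powerups(pixels, tol_x, tol_y, tol_d):
    """pixels: list of (x, y, depth) of wanted-class pixels, scanned
    left-to-right, bottom-to-top. Returns one (x, y, depth) per object."""
    objects = []
    for (px, py, pd) in pixels:
        for o in objects:
            if (abs(px - o[0]) <= tol_x and abs(py - o[1]) <= tol_y
                    and abs(pd - o[2]) <= tol_d):
                break                      # pixel belongs to this object
        else:
            objects.append((px, py, pd))   # no match: a new object starts here
    return objects

pixels = [(10, 5, 0.30), (11, 5, 0.31), (40, 8, 0.70)]
print(cluster_powerups(pixels, tol_x=3, tol_y=3, tol_d=0.05))
# the first two pixels merge into one object; the third stands alone
```

As the text notes, over-segmenting is harmless here: the list is only used to pick the closest object.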


Tolerance values that proved reasonable in the implementation were the following. Let PUr be the bounding-box radius of each power-up, and NP and FP the distances, in world coordinates, from the viewpoint to the near and far planes, respectively:

δx = δy = PUr · o.depth

δd = PUr · (NP / FP)

Regarding step 4, the object with the minimum depth value is selected from the object list created in step 2.

Finally, the pseudocode for step 5 is very simple:

Let o be the chosen destination object:

DestinationFound = False
ycoord = o.y − 1
While ycoord ≥ 0 and DestinationFound = False
    If depth[o.x, ycoord] ∈ [o.depth − δd’, o.depth + δd’] then DestinationFound = True
    Else ycoord = ycoord − 1
EndWhile
If DestinationFound = True then set x = o.x; y = ycoord as the new destination coordinates.

Note that the depth-buffer matrix is used here, together with an arbitrary tolerance value δd’.
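The downward scan of step 5 can be sketched as follows. This is an illustrative reconstruction: depth and floor_mask are assumed per-pixel buffers, and d_prime stands for the arbitrary tolerance δd’:

```python
# Hedged sketch of step 5: starting just below the chosen object o, walk
# down the static viewport column o.x until a 'floor' pixel with
# approximately the object's depth is found.

def floor_under(ox, oy, odepth, depth, floor_mask, d_prime):
    """Return (x, y) of a floor pixel under (ox, oy) at roughly depth
    odepth, or None if the scan reaches the bottom without a match."""
    ycoord = oy - 1
    while ycoord >= 0:
        if floor_mask[ycoord][ox] and abs(depth[ycoord][ox] - odepth) <= d_prime:
            return (ox, ycoord)            # new destination coordinates
        ycoord -= 1
    return None                            # no suitable floor: fall back to Walk Around

# toy buffers: a 5-row, 3-column viewport whose two bottom rows are floor
depth = [[0.5] * 3 for _ in range(5)]
floor_mask = [[True] * 3, [True] * 3, [False] * 3, [False] * 3, [False] * 3]
print(floor_under(1, 4, 0.5, depth, floor_mask, 0.05))  # → (1, 1)
```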

Looking for any Power-Up

Looking for any power-up works like looking for a specific power-up, but searching the static viewport for any colour that corresponds to any power-up. In the FPS defined in this thesis, the health and weapon colour ids are looked for. The power-up closest to Bronto is again chosen as the destination.

Looking Quickly for a Specific Power-Up

Looking quickly for a specific power-up is the same as the behaviour described for looking for a specific power-up. The only difference is that in this state Bronto runs instead of walking; i.e., his velocity is greater than in the other state.

Known Problems

The stated approach causes three distinct problems, described in detail below. The first concerns obstacles formed by level-geometry polygons and is called ‘The Higher Floor Problem’: something seen as ‘floor’ is actually too high to be reached, causing Bronto to get stuck. The second concerns the perspective used to render the viewports and the corrections that need to be applied to the ‘free way’ box so that Bronto can select more distant destination points; it is called ‘The Perspective Problem’. The last concerns obstacle avoidance while Bronto is looking for a power-up, since he chooses a point without checking whether a ‘free way’ to reach it exists; it is called ‘The Looking-For Problem’.

The Higher Floor Problem

Remember that, for each level-geometry polygon, the Z component of its normal is used to decide whether it will be represented as floor in the static viewport.


However, not all polygons that fit the ‘floor’ classification really belong to the floor, i.e., the plane the character is standing on. Suppose a box is part of the level geometry, high enough to keep Bronto from climbing it but short enough that its upper face can be seen by him. That face is then rendered in the static viewport as floor.

The proposed Walk Around heuristic runs into difficulties with those kinds of structures.

Figure 17. The Higher Floor Problem. Bronto in front of a box obstacle that is part of the level geometry. Left: the blue colour of the box sides, as well as the green colour of its upper face, is seen in the static viewport. The upper face and the ‘real’ floor cannot be differentiated. Right: the normal render.

For example, while Bronto is relatively far from the box, he sees the box sides in ‘wall’ colour in the static viewport (Figure 17). Since the viewpoint is above the box’s upper face (Bronto is taller than the box), as he gets closer the lateral faces disappear from his view, eventually leaving the upper face as the only visible part of the box. The upper face and the ‘real’ floor can then no longer be distinguished by colour alone (Figure 18).

Figure 18. The Higher Floor Problem. Bronto closer to the box of Figure 17. Walk Around does not notice height differences between polygons represented as floor in the static viewport. In this case, Bronto will get stuck trying to ‘go through’ the floor.

They could be differentiated by using the depth buffer or by unprojecting the image, in addition to the colours. Since Walk Around as defined above uses only colours, it will always find a free way and will keep choosing destination points straight ahead, leaving Bronto stuck trying to go through the box.

This problem is named ‘The Higher Floor Problem’ because it occurs with polygons that appear in the static viewport as floor, are higher than the real floor plane Bronto is standing on, and are lower than his viewpoint.

The Perspective Problem

Take a corridor of constant width along its whole length, directly in front of the observer. More static-viewport pixels represent the closer part of the corridor floor than the farther part; the effect is caused by perspective (Figure 19). It is analogous to a person’s view of a tennis court from the stands behind the service line.


Figure 19. The Perspective Problem. The static viewport while Bronto is going through a corridor. The corridor’s width is constant along its whole length; however, due to perspective, the number of floor pixels in the lower (nearer) part is higher than in the farther one.

Walk Around does not take perspective effects into account. It defines the free way as a rectangle in the static viewport with Bbbr as base length and bwaupd + bwalpd as minimum height, in which all pixels have the floor colour id. The three values mentioned are user-defined constants. A more precise free way would be a trapezoid with its base on the viewport’s bottom edge and its top side parallel to the base: the projection of a rectangle lying on the floor plane, as wide as Bronto and following the corridor’s direction. Figure 20 compares the current free way with the desired one in a possible situation.
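The scaling behind that trapezoid can be illustrated with a pinhole projection: a fixed world-space width spans fewer pixels the deeper it lies. This is only an illustrative sketch; the function name, focal length and widths are invented, not thesis values:

```python
# Illustrative only: the projected pixel width of a fixed world-space width
# (Bronto's footprint) at increasing depths under a pinhole camera model.

def projected_width_px(world_width, depth, focal_px):
    """Pixel span of world_width at the given depth, for focal length
    focal_px expressed in pixels."""
    return world_width * focal_px / depth

for depth in (1.0, 2.0, 4.0, 8.0):
    print(depth, projected_width_px(1.0, depth, 100.0))
# the footprint spans 100, 50, 25 and 12.5 pixels: a perspective-corrected
# free way therefore narrows with depth, forming a trapezoid
```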

Figure 20. The Perspective Problem. Bronto in front of a corridor in a possible situation. The current free way is shown with continuous lines, together with the destination point Walk Around will choose. The perspective-corrected free way is shown with dotted lines, together with the destination point that should be chosen. Note that the distance between the two destination points can be enormous.


Figure 21. The Perspective Problem. Bronto stuck trying to go forward between two columns very close to each other. Upper-Left: Static viewport. Upper-Right: Normal rendered scene. Lower: Rendering from a lateral camera.

Walk Around as defined makes Bronto’s vision short; i.e., he always chooses relatively close destination points. To go through a long corridor, he has to choose several destination points at short distances from his current position as he advances. With the perspective-corrected free way, he could traverse the whole corridor selecting a single destination point at the end of the passage.

The value of Bbbr therefore has to be small enough to let Bronto select farther destination points, but big enough to keep him from getting stuck trying to pass through places where he does not fit. Its value is a design trade-off and has to be tuned empirically.

In Figure 21, Bronto is shown trying to go forward between two columns that are very close to each other. The gap between them is smaller than Bbbr. Bronto will stay stuck until a state change sets him free.


Figure 22. The Looking-For Problem. Bronto stuck trying to go through the stairs to reach a power-up. Upper-Left: Static viewport. Upper-Right: Normal rendered scene. Lower: Scene rendered from a lateral camera.

Using the perspective-corrected free way would solve these issues. However, it could be prohibitive in computer games because of the computation time involved. It is desirable to develop heuristics that use less processing time while still producing satisfactory results.

Because of the influence of perspective in this situation, the problem has been named ‘The Perspective Problem’.

The Looking-For Problem

The heuristic used when Bronto is looking for an item takes into account neither obstacles nor Bronto’s width.

If an obstacle prevents Bronto from reaching the selected destination, he will ignore it and try to continue along the traced Bezier curve path, getting stuck. In Figure 22, for example, Bronto is looking for ammo; in the static viewport he sees a weapon power-up and sets a path to reach it. However, Bronto is under the stairs and, since he sees the power-up through the gaps between the steps, he will keep trying, unsuccessfully, to go forward.

This issue could be solved by checking the Bezier curve beforehand to see whether it intersects any obstacle. If it does, alternative destination points should be generated to avoid the obstacles; it is even possible to generate more than one Bezier curve, joined end to end with continuity, passing through the points needed to avoid the obstacles. Figure 23 illustrates the idea of avoiding an obstacle found on the original curve path.

Figure 23. The Looking-For Problem. In a), the path chosen by Bronto to the power-up (PU) goes through an obstacle. In b), a possible solution is shown: choosing intermediate points that do not cross the obstacle and constructing a composed Bezier curve to the final destination.

The method just mentioned would avoid obstacles perfectly if Bronto were a point. The curve could be traced without crossing any obstacle but, since Bronto has a certain width, he could get stuck if the curve passes between two objects whose distance apart is smaller than his width (Figure 24.a). Alternatively, the curve could run so close to a wall that part of Bronto’s body collides with it, causing an undesirable behaviour (Figure 24.b) until he gets free of the wall. Both problems occur with the proposed looking-for heuristics, which is why they have been named ‘The Looking-For Problem’.

An ideal solution would be, at every Bezier curve point (or while Bronto travels along his path), to take a line segment perpendicular to the curve’s tangent at the current position, centred on the curve and with a length equal to Bronto’s width, and check whether that segment intersects any obstacle. If it does, the path is corrected as mentioned above, generating intermediate points to join several Bezier curves that avoid the obstacles and reach the final destination.
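That clearance test can be sketched as follows. This is a hedged illustration, not the thesis implementation: the quadratic Bezier, the sampling density and the `blocked` obstacle predicate are all assumptions standing in for the engine’s actual curves and collision queries:

```python
import math

def bezier(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    x = (1 - t)**2 * p0[0] + 2 * (1 - t) * t * p1[0] + t**2 * p2[0]
    y = (1 - t)**2 * p0[1] + 2 * (1 - t) * t * p1[1] + t**2 * p2[1]
    return (x, y)

def path_is_clear(p0, p1, p2, width, blocked, samples=50):
    """Sample the curve; at each sample, take a segment perpendicular to the
    tangent, centred on the curve and as long as Bronto's width, and test
    its endpoints against the (assumed) obstacle predicate."""
    for i in range(samples + 1):
        t = i / samples
        x, y = bezier(p0, p1, p2, t)
        # tangent of a quadratic Bezier, then its unit normal
        tx = 2 * (1 - t) * (p1[0] - p0[0]) + 2 * t * (p2[0] - p1[0])
        ty = 2 * (1 - t) * (p1[1] - p0[1]) + 2 * t * (p2[1] - p1[1])
        norm = math.hypot(tx, ty) or 1.0
        nx, ny = -ty / norm, tx / norm
        half = width / 2.0
        if blocked(x + nx * half, y + ny * half) or blocked(x - nx * half, y - ny * half):
            return False                   # Bronto's body would clip an obstacle
    return True

# toy obstacle: a wall band around x = 5, below y = 3
blocked = lambda x, y: 4.5 <= x <= 5.5 and y < 3.0
print(path_is_clear((0, 0), (5, 5), (10, 0), width=1.0, blocked=blocked))  # → False
```

On a `False` result, intermediate points would be generated and the curve re-joined, as described above.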


Figure 24. The Looking-For Problem. In a), the path chosen by Bronto to the power-up (PU) does not go through any obstacle, but Bronto cannot pass between the two objects because of his width. In b), a similar case arises when the Bezier curve passes very close to walls.

Extended Behaviour with Dynamic Reactions

A very simple reactive, rule-based AI was developed to deal with the information provided by the dynamic viewport.

We define the following three states:

Intercept ⟺ Enemy Present ∧ H ≥ Hut ∧ W ≥ Wut

Avoid ⟺ Enemy Present ∧ (H < Hut ∨ W < Wut) ∧ cos(θ) < 0

Don’t Worry ⟺ Enemy Not Present ∨ (Enemy Present ∧ (H < Hut ∨ W < Wut) ∧ cos(θ) ≥ 0)

See the previous sections for the variables’ meanings. cos(θ) is mapped into the green component, so we can replace the conditions as:

cos(θ) < 0 ⟺ Enemy.Green < 0.5

cos(θ) ≥ 0 ⟺ Enemy.Green ≥ 0.5

When cos(θ) < 0, the enemy is facing Bronto; when cos(θ) ≥ 0, the enemy is going away.

Conceptually, when Bronto is full of health and weapon, i.e., above the upper thresholds, he is in perfect condition to chase the enemy and try to intercept it.

But when any of his properties is below the upper threshold and the enemy is facing him, Bronto needs to avoid it!

If no enemy is present, or any of Bronto’s properties is below the upper threshold while the enemy is going away from him, he does not worry about it.
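The three rules can be condensed into a small classifier. This is an illustrative sketch: the function name and threshold values are invented, and the green-component test follows the 0.5 cut-off described above:

```python
# Hedged sketch of the three dynamic states as a rule-based classifier.
# H, W are health and weapon levels, Hut, Wut their upper thresholds, and
# green is the enemy's green component from the dynamic viewport
# (>= 0.5 means the enemy is moving away). The default thresholds are
# invented, not thesis values.

def dynamic_state(enemy_present, H, W, green, Hut=75, Wut=75):
    if not enemy_present:
        return "dont_worry"
    facing = green < 0.5                   # cos(theta) < 0: enemy faces Bronto
    if H >= Hut and W >= Wut:
        return "intercept"                 # in perfect condition: chase
    if facing:
        return "avoid"                     # under-threshold and enemy approaching
    return "dont_worry"                    # enemy present but going away

print(dynamic_state(True, 90, 80, 0.2))    # → intercept
print(dynamic_state(True, 50, 80, 0.2))    # → avoid
print(dynamic_state(True, 50, 80, 0.7))    # → dont_worry
```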

The dynamic states diagram is represented in figure 25.



Figure 25. Dynamic States Diagram. Initially Bronto is in the Don’t Worry state; he can then make transitions between any states depending on the values of his properties, on whether an enemy appears in the static viewport, and on whether it is facing Bronto.

An enemy’s presence is determined by looking for its colour id inside the static viewport.

Don’t Worry

In this state no enemy colour id has been detected inside the static viewport, or all detected enemies are going away from Bronto: they are not in a dangerous position, and their green colour component is greater than or equal to 0.5. Bronto therefore continues with the static behaviour.

Avoid

In this state at least one enemy colour id has been detected inside the static viewport, it is approaching Bronto, and his Health or Weapon is below the upper threshold: the enemy has a dangerous attitude, and its green colour component is less than 0.5. Bronto therefore rotates 180°, this behaviour overriding the static one.

The same implementation as in looking for a power-up is used when looking for an enemy.

Intercept

In this state at least one enemy colour id has been detected inside the static viewport, it is approaching Bronto, and both his Health and Weapon are above or equal to the upper thresholds.

After the closest enemy’s position is determined, Bronto sets his destination point to the closest floor point at the enemy’s depth, in the same way as in the looking-for-a-power-up behaviour. This algorithm overrides any static behaviour. If no floor at the same depth is found for the closest object, Bronto remains in the current static behaviour.

Demonstration

The dynamic object used as the enemy in the demonstration is a bouncing flying metal ball travelling at constant speed.

It appears yellow in the static viewport.

Since the intercept state is implemented in the same way as looking for a power-up, it suffers from ‘The Looking-For Problem’ too.


Last Word about Dynamic AI

As can be seen, we are using only the green colour component of the information supplied by the dynamic viewport.

The red and blue components could be used to improve the behaviour (since the flying ball travels at constant velocity, we do not need the red one). For example, in a pursuit situation, Bronto could estimate a better interception point, avoidance manoeuvre, or firing target using the velocity magnitude and the sine, together with the cosine, of the moving object’s direction. We also think that, besides instantaneous information, data from a series of previous frames, together with learning and cognitive memory for Bronto, could be used to improve the AI modules.
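As a sketch of how the extra channels might be decoded: assuming the linear mapping of cos(θ) into the green channel extends analogously to sin(θ) in blue and a normalised speed in red (these exact encodings are assumptions here; the precise mappings are defined earlier in the thesis), heading and speed could be recovered as:

```python
import math

# Hedged, illustrative decoding of a moving object's motion from its
# dynamic-viewport colour. Assumed encodings: green = (cos(theta)+1)/2,
# blue = (sin(theta)+1)/2, red = speed / max_speed.

def decode_motion(red, green, blue, max_speed):
    cos_t = 2.0 * green - 1.0
    sin_t = 2.0 * blue - 1.0
    angle = math.atan2(sin_t, cos_t)       # heading relative to Bronto's view
    speed = red * max_speed                # red as a normalised speed
    return angle, speed

angle, speed = decode_motion(0.5, 0.0, 0.5, max_speed=10.0)
print(round(math.degrees(angle)), speed)   # → 180 5.0 (head-on, at half speed)
```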


Game Applications Analysis

This section analyses how the proposed synthetic vision could be useful in different game genres. Although our implementation is based on an FPS game, the proposed model can be used in any other game or situation that makes use of a 3D engine. Moreover, the implemented behaviours (explained in previous sections) are common to practically all computer game genres.

Adventures NPC’s have a fundamental role throughout the story in every adventure game. The player must interact and communicate constantly with them and become part of their lives. Each NPC also has unique characteristics that give them different personalities. In a completely 3D adventure, such as Gabriel Knight III [Gabr01], synthetic vision could be used for every NPC, supported by a strong AI module in which the attributes needed to define their personalities and behaviours would be assigned. Beyond a ‘normal’ behaviour, agents could be assigned specific goals that they must reach for the story to continue as the designer specified. If no goals are specified at design level, however, this kind of game is open to a realism never seen before, where each character could set his own goals on a per-day basis, ‘living his own life’. This would let designers focus on personality development, allowing non-linear stories and producing unpredictable interactions. The challenge will then be to provide the player with an attractive and addictive environment ruled by non-deterministic interactions. New open issues will surely appear in the course of building better NPC’s, extending the frontiers for adventure game designers.

First Person Shooters The demonstration of the proposed synthetic vision model was implemented in an FPS game. The AI module’s complexity will depend on the quality and realism one wants to give each NPC. In contrast with the adventure genre, where nearly every NPC needs different personality attributes, in FPS games personalities can be ‘reused’, since NPC’s are generally grouped into ‘races’ or types; for example, all Skaarj in Unreal [Unre01] have approximately the same behaviour. This is not to say that all NPC’s of the same race behave exactly alike, but their main characteristics are shared. It is also possible to experiment with variants in the vision representation; for example, the Predator [Pred01] has a set of different visions he can use, one of them detecting heat sources. The challenge in this kind of game will be in the NPC’s capacity to learn, improving their behaviour and fighting better using previous experience, and in the way their memories are constructed.

Third Person Action Games For vision purposes, there are practically no differences between FPS and third-person action games. Perhaps, given the characteristics of this genre’s levels (for example, Tomb Raider [Tomb01], where it is common to jump from one place to another to avoid precipices and falls), the AI module should implement an efficient and complex jumping action, always taking into account what the NPC senses through its synthetic vision.

Role Play Games Role-play games could take advantage of the vision system in a way similar to that described for adventures, while also sharing behaviours with FPS’s. Independent autonomous behaviours could be given to avatars or very important NPC’s, whereas the monsters the player fights from time to time, or less important NPC’s, could be assigned a less parameterized realistic behaviour like those used in FPS’s. As with adventure games, more autonomous, realistic and intelligent avatars open new design possibilities.

Real Time Strategy 3D real-time strategy games could make use of synthetic vision for ‘living’ units (such as men, animals, aliens, monsters) as well as for motorized ones. For example, a tank could feed the information provided by synthetic vision into an AI module that models the tank’s behaviour as if it were driven by a human. Perhaps, to have a well-balanced game, several restrictions must be added to the behaviours, but that need not make them less realistic.

Flight Simulations Flight simulations could use synthetic vision to make realistic calculations of, for example, missile trajectories and shots. A missile could have synthetic vision and use it for target pursuit, detecting heat sources through its vision representation. Anti-aircraft turrets could use the mechanism to detect aircraft trajectories and shoot at them, taking into account what they ‘see’ at that moment and in previous ones.

Other Sceneries It would be interesting to try other scenarios, such as racing games, where the vision would represent what the driver sees and the AI module would model his behaviour and actions based on it. It would also be interesting to give vision to all characters in a team game like soccer, where, depending on individual characteristics and mood, the AI module would make decisions in a distributed, individual and autonomous way to reach collective goals.


Conclusions

This thesis proposes a synthetic vision model to be used in computer games. Our approach yields more manageable information than pure vision systems, such as robot or computer vision, while avoiding giving the AI module unrestricted access to the game database (as happens with AI modules that do not use a vision system).

The proposed approach has two viewports, static and dynamic, in which objects are perfectly identified and depth, as well as discretized information about objects’ velocities, is represented using false-colouring techniques.

A simple rule-based artificial intelligence module was developed to demonstrate that our synthetic vision model can be incorporated into agents or NPC’s in computer games. The AI module was implemented in an FPS / third-person action game environment.

Demonstrating the use of the synthetic vision module was the main goal of the developed AI module; it was not intended to focus on complex techniques such as memory, learning and interaction. The heuristics created for the different agent behaviours, even though extremely simple, could be extended and improved to obtain more realistic behaviours.

Finally, the potential of synthetic vision and autonomous NPC’s in today’s computer game genres was briefly analyzed and discussed.

The long-term objective is to have completely autonomous characters living in virtual computer game worlds, with their own needs and desires, their own lives and realistic behaviours, introducing new game genres and gameplay.

Vision sense is just the first step.


Future Work

On the graphics side of the synthetic vision we can enumerate:

• Research ‘non-human’ representations for agents with larger vision capabilities, such as infrared vision, heat sensing, or any other kind of improvement or difference in the vision (like the vision systems found in Predator [Pred01] or Geordi La Forge’s visor [Star01] in Star Trek: The Next Generation [Star02]).

• Investigate different ways to represent lighting effects. For example, in a dark zone it should be harder to identify an object than in an illuminated one, where identification should be perfect.

• Investigate how to add some type of noise to the vision representation, governed by an index, call it ‘visual fatigue’, that deteriorates the vision due to (thinking of a game) fatigue, low health, or any other factor affecting the agent. A vision index could also vary when the agent passes from a brightly lit zone to one with very little illumination (the eye takes a certain time to get accustomed to the new environment) and vice versa.

In the artificial intelligence field, it is undoubtedly necessary to develop complex modules using the proposed synthetic vision that improve and widely extend the one presented in this thesis, applying these ideas not only in the FPS genre but in other game genres as well. Some ideas we can enumerate are:

• Memory. Keep a memory for each agent, for example in order to know where a needed power-up was if he saw it before. It could be useful to remember past events, state changes (an object present in the past but not now), etc.

• Learning. Have agents learn from what they see, for example to improve attack/evasion tactics, or to learn different paths from one place to another and choose the best one for the agent’s current situation.

• Interaction. Have agents interact with each other, identifying one another by means of vision. This is very useful for the adventure and RPG genres.

• Personality. Give each agent personality attributes so that, building on memory, learning and interaction, the agent grows and builds his own life, becoming a completely autonomous living agent inside a virtual world. This could be the expected and desired case for massively multiplayer online RPGs.

We should also mention that the following remain as future work:

• To integrate the proposed vision system into all feasible game genres.

• To make a whole game whose NPC’s use an AI module fed only with information sensed by the synthetic vision module.

• To extend the sensory system with the rest of the senses: hearing, touch, smell and taste.

• To research new game genres and gameplay that could be achieved with a world inhabited completely by autonomous characters.


References

[Blum01] Synthetic Characters Media Group Website: http://www.media.mit.edu/characters/. MIT.

[Blum97a] B. Blumberg. Go With the Flow: Synthetic Vision for Autonomous Animated Creatures. Proceedings of the First International Conference on Autonomous Agents (Agents'97), ACM Press 1997, pp. 538-539.

[Blum97b] Old Tricks, New Dogs: Ethology and Interactive Creatures. Bruce Mitchell Blumberg. February 1997. Ph.D Thesis. Massachusetts Institute of Technology.

[Eber01] David H. Eberly. 3D Game Engine Design. Chapter 3: The Graphics Pipeline. Morgan Kaufmann. 2001. ISBN 1-55860-593-2.

[Ente01] PadCenter level by Andreas Ente. E-mail: [email protected]. Website: http://padworld.exp-network.de/.

[Fly01] Fly3D Website: http://www.fly3d.com.br/. Paralelo Computaçao.

[Gabr01] Gabriel Knight III Website: http://www.sierrastudios.com/games/gk3/. Sierra Studios.

[Isla02] D. Isla, B. Blumberg. New Challenges for Character-Based AI for Games. AAAI Spring Symposium on AI and Interactive Entertainment, Palo Alto, CA, March 2002.

[Kuff99a] J.J. Kuffner, J.C. Latombe. Fast Synthetic Vision, Memory, and Learning Models for Virtual Humans. In Proc. CA’99: IEEE International Conference on Computer Animation, pp. 118-127, Geneva, Switzerland, May 1999.

[Kuff99b] James J. Kuffner. Autonomous Agents for Real-Time Animation. December 1999. Ph.D Thesis. Stanford University.

[Kuff01] James J. Kuffner Website: http://robotics.stanford.edu/~kuffner/.

[Lair00a] Human-level AI’s Killer Application: Interactive Computer Games. John E. Laird; Michael van Lent. AAAI, August 2000.

[Lair00b] Creating Human-like Synthetic Characters with Multiple Skill Levels: A Case Study using the Soar Quakebot. John E. Laird; John C. Duchi. AAAI 2000 Fall Symposium Series: Simulating Human Agents, November 2000.

[Lair01] Design Goals for Autonomous Synthetic Characters. John E. Laird.

[Lair02] John E. Laird AI & Computer Games Research Website: http://ai.eecs.umich.edu/people/laird/gamesresearch.html/. University of Michigan.

[Micr01] Microsoft DirectX Website: http://www.microsoft.com/directx/. Microsoft Corp.

[Nose95a] H. Noser, O. Renault, D. Thalmann, N. Magnenat Thalmann. Navigation for Digital Actors based on Synthetic Vision, Memory and Learning, Computers and Graphics, Pergamon Press, Vol.19, Nº1, 1995, pp. 7-19.

[Nose95b] Synthetic Vision and Audition for Digital Actors. Hansrudi Noser; Daniel Thalmann. Proc. Eurographics `95, Maastricht.

[Nose98] Sensor Based Synthetic Actors in a Tennis Game Simulation. Hansrudi Noser; Daniel Thalmann. The Visual Computer, Vol.14, No.4, pp.193-205, 1998.

[Open01] OpenGL Website: http://www.opengl.org/.


[Pred01] Predator. Directed by John McTiernan; with Arnold Schwarzenegger and Carl Weathers. Fox Movies, 1987.

[Quak01] Quake Website: http://www.quake.com/. Id Software.

[Rabi01] Motion, Stereo and Color Analysis for Dynamic Virtual Environments. Tamer F. Rabie; Demetri Terzopoulos.

[Rena90] A Vision-Based Approach to Behavioral Animation. Olivier Renault; Nadia Magnenat Thalmann; Daniel Thalmann. Journal of Visualization and Computer Animation, Vol.1, No1, 1990.

[Reyn01] Steering Behaviors For Autonomous Characters. Craig W. Reynolds. Website: http://www.red3d.com/cwr/steer/gdc99/.

[Star01] Geordi La Forge character information at Star Trek Website: http://www.startrek.com/library/individ.asp?ID=112463. Paramount Interactive.

[Star02] Star Trek Website: http://www.startrek.com/. Paramount Interactive.

[Terz96] D. Terzopoulos, T. Rabie, R. Grzeszczuk. Perception and learning in artificial animals. Artificial Life V: Proc. Fifth International Conference on the Synthesis and Simulation of Living Systems, Nara, Japan, May, 1996, pp. 313-320.

[Thal96] D. Thalmann, H. Noser, Z. Huang. How to Create a Virtual Life?. Interactive Animation, chapter 11, pp. 263-291, Springer-Verlag, 1996.

[Tomb01] Tomb Raider Website: http://www.tombraider.com/. Eidos Interactive.

[TuTe94] X. Tu and D. Terzopoulos. Artificial Fishes: Physics, Locomotion, Perception, Behavior. Proc. of ACM SIGGRAPH'94, Orlando, FL, July, 1994, in ACM Computer Graphics Proceedings, 1994, pp. 43-50.

[Unre01] Unreal Website: http://www.unreal.com/. Epic Games, 1998.

[Watt01] A. Watt, F. Policarpo. 3D Computer Games Technology, Volume I: Real-Time Rendering and Software. Addison Wesley. 2001. ISBN 0-201-61921-0.

[Watt02] A. Watt, F. Policarpo. 3D Games: Advanced Real-time Rendering and Animation, Addison-Wesley 2002 (in press).

[Wolf01] Wolfenstein 3D Website: http://www.idsoftware.com/games/wolfenstein/wolf3d/. Id Software.


Appendix A – CD Contents

Main CD contents are:

• Fly3D 2 Beta 8

The Fly3D engine 2.0 beta version 8, adapted to support the special renderings for the static and dynamic viewports. Modified and new levels and plug-ins can be found in subfolders.

Folder: “\flysdk2b8\”.

• Fly3D Plug-Ins

o Walk: the existing walk.dll was modified to support Bezier curve generation for the character path once the destination point has been selected by the AI module. Folder: “\flysdk2b8\plugin\walk\”

o Synthetic Vision: svision.dll was created. It contains the AI module and the generation and processing of the static and dynamic viewports. Folder: “\flysdk2b8\plugin\svision\”

• DirectX 8.1

Microsoft DirectX 8.1 API.

Folder: “\directx\”.

• Thesis

This document in PDF format.

Folder: “\thesis\”.

• Referenced papers

Some of the papers referenced in this thesis.

Folder: “\papers\”.

• ASAI 2002

Slideshow and paper published in ASAI 2002 proceedings, 31° JAIIO conference, Santa Fe, Argentina.

Folder: “\asai2002\”.


Appendix B – Implementation

The implementation of the synthetic vision and artificial intelligence modules described in this thesis was made using the Fly3D graphics engine [Fly01].

Some existing Fly3D plug-ins were reused, others were modified, and new ones were created. It was necessary to adapt some of the engine classes and methods in order to support the functionality required by the synthetic vision module.

Although the implemented features could be better modularized and optimized, we believe the implementation was good enough to demonstrate the utility of synthetic vision.

We will briefly comment on the important implementation details of each component.

Fly3D_engine classes

Modifications to the engine were absolutely necessary in order to support the rendering of the static and dynamic viewports with the corresponding false colouring approach. Although all of this goes into a single flyEngine.dll, ALL modules had to be recompiled because of their direct relationship with this library.

flyBezierPatch class

Two drawing modes were added to the ‘draw’ method: one corresponding to the static viewport, the other to the dynamic one.

flyBspObject class

‘colorid’ and ‘dyncolor’ properties were added, containing, for each BSP object, the corresponding colour for the static viewport and the calculated colour for the dynamic viewport.

The ‘draw’ method was modified to assign the colour ‘colorid’ to every mesh face.

flyEngine class

A ‘maxvel’ property was added. It indicates the maximum velocity allowed for any object, and is used when the velocity magnitude is mapped into the dynamic viewport.

The ‘draw_faces’ method was modified: if ‘debug’ mode is 256, it draws faces as for the static viewport; if ‘debug’ mode is 128, it draws them as for the dynamic one. Level geometry colours are assigned here.

The ‘step_objects’ method was modified to calculate and assign the corresponding colour of each object in the dynamic viewport.

flyFace class

‘colorid’ and ‘dyncolor’ properties were added, containing, for each face, the corresponding colour for the static viewport and the calculated colour for the dynamic viewport.
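The per-object colour update that ‘step_objects’ performs for the dynamic viewport can be sketched as follows. All names here are illustrative assumptions, not the actual Fly3D code; the thesis only states that the velocity magnitude is normalised by ‘maxvel’, so the single-channel encoding below is an assumption as well.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch of the per-frame dynamic-viewport colour computation:
// the object's speed, capped by the engine-wide 'maxvel', is normalised into
// [0, 1] and stored in a colour channel. Names and encoding are assumptions.
struct DynColor { float r, g, b; };

DynColor velocity_to_dyncolor(float vx, float vy, float vz, float maxvel) {
    float speed = std::sqrt(vx * vx + vy * vy + vz * vz);
    float t = std::min(speed / maxvel, 1.0f);  // normalise and clamp
    return DynColor{ t, 0.0f, 0.0f };
}
```

Any object moving at ‘maxvel’ or faster thus saturates the channel, which is why the property caps the velocities the dynamic viewport can distinguish.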

lights classes

sprite_light class

The sprite_light is drawn only if synthetic vision mode is not set. This was changed in the ‘draw’ method.

svision classes

This is a new plug-in that implements the process of saving the viewport buffers to memory, as well as all the code that controls the NPC’s behaviour as defined in this thesis.

Global properties are:

• ‘depthbuffer’, containing the last frame’s depth buffer.
• ‘dynamicbuffer’, containing the last frame’s dynamic viewport buffer.
• ‘staticbuffer’, containing the last frame’s static viewport buffer.

The NPC’s present state is shown on screen through the global method ‘draw_game_status’.

ai class

The NPC’s behaviour code is contained in this class.

The most important properties are:

• ‘bwalpd’, bottom margin in pixels.
• ‘bwaupd’, lateral and top margin in pixels.
• ‘cutfactor’, cut factor (percentage) of the current path.
• ‘hid’, health colour id.
• ‘hlt’, health lower threshold.
• ‘hut’, health upper threshold.
• ‘player’, reference to the person object controlled by this class.
• ‘radio’, Bbbr.
• ‘state’, the NPC’s current state.
• ‘wid’, weapon colour id.
• ‘wlt’, weapon lower threshold.
• ‘wut’, weapon upper threshold.

The most important methods are:

• ‘b_addobject’, used to add objects to a list when it is necessary to obtain the objects present in the static viewport.
• ‘b_getanynextobject’, gets the object closest to the person from an object list. It is used when both power-ups (weapon and health) are looked for.
• ‘b_getnextobject’, gets the object closest to the person that has a specific colour, from an object list.
• ‘b_lookboth’, heuristic used in the looking-for-any-power-up state.
• ‘b_lookfor’, heuristic used in the looking-for-a-specific-power-up state.
• ‘b_turnleft’, heuristic used to turn left in the walk-around state.
• ‘b_turnright’, heuristic used to turn right in the walk-around state.
• ‘b_walkaround’, heuristic used for the walk-around state.
• ‘step’, sets the current state on each frame and calls the corresponding heuristic.
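The rule-based state selection that ‘step’ performs each frame could look roughly like the sketch below. State names follow the thesis, but the exact rules and the hysteresis between the lower/upper thresholds (hlt/hut, wlt/wut) are illustrative assumptions, not the actual Fly3D code.

```cpp
// Hypothetical sketch of per-frame rule-based state selection. A power-up is
// sought once its value falls below the lower threshold and the search keeps
// going until the value recovers past the upper threshold (hysteresis).
enum class State { WalkAround, LookForHealth, LookForWeapon, LookForAny };

State choose_state(State cur, float health, float weapon,
                   float hlt, float hut, float wlt, float wut) {
    bool low_h = (cur == State::LookForHealth || cur == State::LookForAny)
                     ? health < hut : health < hlt;
    bool low_w = (cur == State::LookForWeapon || cur == State::LookForAny)
                     ? weapon < wut : weapon < wlt;
    if (low_h && low_w) return State::LookForAny;
    if (low_h) return State::LookForHealth;
    if (low_w) return State::LookForWeapon;
    return State::WalkAround;
}
```

The two thresholds per property prevent the NPC from flickering between walking around and looking for a power-up when a value hovers near a single cut-off.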

Dynamic behaviour is embedded into the static heuristics’ methods.

svobject class

This is the structure used for objects found in the looking-for strategies.

vision class

This class is used to save viewport data to memory. It contains the following properties:

• ‘player’, reference to the person object controlled by means of the synthetic vision.
• ‘vpstatic’, reference to the static viewport.
• ‘vpdynamic’, reference to the dynamic viewport.
• ‘width’, viewport width in pixels. The current implementation only supports a width of 160 pixels.
• ‘height’, viewport height in pixels. The current implementation only supports a height of 120 pixels.

‘step’ is the method that, on a per-frame basis, reads the static and dynamic viewport pixels from the colour buffer and saves them in main memory. It does the same for depth information, taking the data from the depth buffer (Z-buffer).

viewport classes

viewport class

The following properties were added:

• ‘coordmode’: if 0, percentual coordinates are used as in the old viewport (xf, yf, wxf, wyf); if 1, y-inverted pixel coordinates are used (xi, yi, sxi, syi); if 2, the demonstration’s special coordinates are used: a normal viewport centred, at xi pixels from the left for the static viewport and at xi pixels from the right for the dynamic one.
• ‘mode’: 0 normal; 1 static; 2 dynamic.
• ‘camangle’, viewport rendering angle.
• ‘xi’, the viewport is placed at xi pixels from the left.
• ‘yi’, the viewport is placed at yi pixels from the window’s top.
• ‘sxi’, viewport width in pixels.
• ‘syi’, viewport height in pixels.
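The depth values that the vision class saves come straight from the Z-buffer; if an eye-space distance is needed from them, the normalised value can be inverted using the near and far planes (the Nearplane/FarPlane globals listed in Appendix C). This is the standard perspective depth mapping, shown as a generic sketch rather than Fly3D’s actual code.

```cpp
// Generic sketch (not the actual Fly3D code): invert the standard perspective
// depth mapping to recover eye-space distance d from a normalised Z-buffer
// value z in [0, 1], given near plane np and far plane fp:
//   z = fp * (d - np) / (d * (fp - np))  =>  d = fp * np / (fp - z * (fp - np))
float depth_to_distance(float z, float np, float fp) {
    return (fp * np) / (fp - z * (fp - np));
}
```

Note that most of the Z-buffer’s precision sits near the near plane, so distances recovered for far objects are correspondingly coarse.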

The ‘draw’ method was modified to support the different viewport modes. Its task is to set the corresponding debug mode: 128 (dynamic) or 256 (static).

walk classes

The walk plug-in suffered several changes. The demonstration’s dynamic object (the ‘flying ball’) as well as the newly implemented cameras were added to it. The modifications were made basically in the ‘powerup’ and ‘person’ classes, in order to support the health and weapon power-ups as they were defined in this thesis. Other methods and properties were also added to the ‘person’ class, to allow the interpretation of the destination point selected in static viewport coordinates and its unprojection into world coordinates, so that a path can then be built between the current and the desired final position.

The NPC’s health and weapon property values are shown on screen through the ‘draw_game_status’ global method.

camera class

This is a new class for managing a camera that follows a target at a given distance, without colliding with any other objects. The ‘source’ property must refer to the camera target’s flyBspObject. The ‘displacement’ property holds the distance that the camera must keep from its target.

camera2 class

This is a new class for managing a camera that follows a target at a given distance. This camera collides. It has the following properties:

• ‘source’, a reference to the camera target flyBspObject.
• ‘dist’, minimum distance that the camera must keep from its target.
• ‘radius’, bounding box radius.
• ‘maxvel’, maximum velocity allowed for the camera when it needs to get close to its target, until it reaches a distance of ‘dist’ from it.

object class

This is a new class. It draws a moving object that changes direction when collisions are detected. It uses a constant velocity indicated by the inherited ‘vel’ property.

person class

The NPC is contained in this class. Once the destination point has been selected, the Bezier curve is calculated and drawn here. One of its tasks is to make the agent follow the curve path.

Added properties are:

• ‘mode’: 0 if normal walk mode is desired; 1 if the interactive destination point selection interface is desired (similar to the one found in the shop.dll provided with Fly3D); 2 if synthetic vision behaviour is desired.
• ‘svhealth’, current health value.
• ‘svinithealth’, initial health value.
• ‘svweapon’, current weapon value.
• ‘svinitweapon’, initial weapon value.
• ‘maxhealth’, maximum allowed health value.
• ‘maxweapon’, maximum allowed weapon value.
• ‘healthtime’, time interval for health decrease.
• ‘weapontime’, time interval for weapon decrease.
• ‘healthfactor’, health quantity to be subtracted each time the specified time interval elapses.
• ‘weaponfactor’, weapon quantity to be subtracted each time the specified time interval elapses.
• ‘healthpowerup’, health quantity added each time a health power-up is taken.
• ‘weaponpowerup’, weapon quantity added each time a weapon power-up is taken.
• ‘healthcurtime’, time since the last health power-up was taken or since the value was last decreased.
• ‘weaponcurtime’, time since the last weapon power-up was taken or since the value was last decreased.
• ‘movesprite’, spritelight object reference used to show the mouse cursor in modes 1 and 2.
• ‘targetsprite’, spritelight object reference used to show the destination point in modes 1 and 2.
• ‘spriteflag’, indicates whether movesprite must be drawn.
• ‘targetflag’, indicates whether targetsprite must be drawn.
• ‘characterflag’, indicates whether the character is idle or in movement.
• ‘walkpath’, Bezier curve for the character path.
• ‘path_dist’, estimated distance between Bezier curve control points.
• ‘path_len’, Bezier curve length.
• ‘path_factor’, percentage of the Bezier curve path already walked.
• ‘svx’, static viewport x coordinate of the selected destination point. It is –1 if no destination was selected.
• ‘svy’, static viewport y coordinate of the selected destination point. It is –1 if no destination was selected.
• ‘vpstatic’, static viewport reference.
• ‘lastpos’, character’s last position. It is used to know whether the character is stuck.
• ‘stucktime’, time used to determine whether the character is stuck at any position.
• ‘signo’, used to determine where to rotate once the system knows that the character is stuck.
• ‘drawpath’, indicates whether the Bezier curve must be drawn.
• ‘svwalkvel’, character’s walk velocity when the system is in mode 1 or 2.
• ‘svrunvel’, character’s run velocity when the system is in mode 1 or 2.
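The curve evaluation behind ‘walkpath’ is the standard cubic Bezier polynomial; a single scalar coordinate is shown below for brevity (a real implementation evaluates all three axes), and the control-point layout Fly3D actually builds from the start position, destination point, and direction vector is not reproduced here.

```cpp
// Standard cubic Bezier evaluation, as used to walk along a path such as
// 'walkpath'. p0..p3 are control-point coordinates and t in [0, 1] is the
// curve parameter (the role 'path_factor' plays above).
float bezier(float p0, float p1, float p2, float p3, float t) {
    float u = 1.0f - t;
    return u * u * u * p0
         + 3.0f * u * u * t * p1
         + 3.0f * u * t * t * p2
         + t * t * t * p3;
}
```

Sampling t at small increments and summing the segment lengths is one simple way to estimate quantities like ‘path_len’.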

Game Applications Analysis - 54 An Investigation Into the use of Synthetic Vision for NPC’s/Agents in Computer Games Enrique, Sebastian – Licenciatura Thesis – DC – FCEyN – UBA - September 2002

Added and modified methods are:

• ‘die’, the message shown when the character dies was modified.
• ‘draw’, modified to draw the 3D cursor and destination sprites, as well as the Bezier curve.
• ‘init’, the initializations necessary for synthetic vision were added.
• ‘move2’, if mode is 1, destination points selected by the user with the mouse are processed in order to create the character’s Bezier curve path. It implements an interface similar to the Fly3D shop.dll plug-in.
• ‘move3’, if mode is 2, destination points selected by the AI module are processed in order to create the Bezier curve path. When it does not need to generate a new curve, it makes the character continue walking over the current path. If it detects that the character is stuck, it makes him rotate 180°.
• ‘step’, calls the move method corresponding to the current mode on a per-frame basis. Furthermore, it manages the health and weapon properties, decreasing their values when the corresponding time has elapsed.

powerup class

The ‘get_mesh’ method was overridden so that it returns the power-up mesh only when it is visible from the current point of view (a power-up is invisible when it has been taken and its respawning time is not yet complete).

The ‘powerup_get’ method was modified to increment the health and weapon properties when synthetic vision mode is used. The values defined in the ‘person’ object are used.
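The stuck handling described above for ‘move3’ (comparing ‘lastpos’ against the current position over ‘stucktime’) can be sketched as follows; the names mirror the person-class properties listed earlier, but the movement tolerance and reset logic are assumptions, not the actual Fly3D code.

```cpp
// Illustrative sketch of move3's stuck test: if the character has not moved
// beyond a small tolerance for 'stucktime' milliseconds, he is considered
// stuck (move3 would then rotate him 180 degrees). The tolerance value is
// an assumption.
struct StuckDetector {
    float last_x = 0.0f, last_y = 0.0f, last_z = 0.0f;  // plays 'lastpos'
    float elapsed_ms = 0.0f;

    // Call once per frame; returns true when the character counts as stuck.
    bool update(float x, float y, float z, float dt_ms, float stucktime_ms) {
        float dx = x - last_x, dy = y - last_y, dz = z - last_z;
        if (dx * dx + dy * dy + dz * dz > 1e-4f) {  // moved: remember, reset
            last_x = x; last_y = y; last_z = z;
            elapsed_ms = 0.0f;
            return false;
        }
        elapsed_ms += dt_ms;
        return elapsed_ms >= stucktime_ms;
    }
};
```

Accumulating time rather than testing a single frame keeps momentary pauses (e.g. while a new curve is generated) from triggering the 180° turn.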


Appendix C – Software Users’ Guide

System Requirements

The Fly3D Engine version 2.0 requires Windows 9x/Me/2k/NT4 in order to run. A good 3D card supporting OpenGL is also required (NVIDIA recommended). Make sure you are using the latest drivers available for your video card, and that you have the latest Windows Service Packs and the latest DirectX installed. To run the engine under NT4 you must have Service Pack 5 or later installed.

For more information refer to [Fly01].

Installing DirectX

If you don’t have DirectX installed on your system, see Appendix A to find the software (DirectX 8.1) on the companion CD, or visit [Micr01].

Run the executable file for your operating system and follow the on-screen instructions.

Configuring Fly3D

The first thing you have to do after installing everything, in order to run Fly3D properly, is to configure it. Follow these steps:

1. Run flyConfig.exe, located in the Fly3D path. A window will open (Figure C-1).
2. If it is checked, uncheck the ‘Customized data and plugin folders’ option.
3. Select the desired full-screen resolution in the ‘Fullscreen Video Mode’ combo box.
4. Select the desired rendering mode in the ‘Rendering Modes’ combo box. The first time you run Fly3D on your machine, the best rendering mode will be selected automatically. A hardware-accelerated mode is always preferable to a software one.
5. Press the ‘Test’ button, then ‘Save’, and then ‘Exit’.

Later, if Fly3D is running too dark or too bright, increase or decrease the brightness value accordingly and repeat step 5.

The same applies if you want to change the full screen resolution or rendering mode.

Running Fly3D

Before running Fly3D for the first time, be sure that you meet the system requirements and have configured it properly.

To run Fly3D, execute the flyFrontEnd.exe file located in the Fly3D path.


Figure C-1. Fly3D Configuration window.

Running Synthetic Vision Levels

After running Fly3D, a menu will appear (Figure C-2). Choose ‘Single Player’ by pressing the ‘Enter’ key.

The level selection menu will appear (Figure C-3). Press the ‘down arrow’ key, then press ‘right arrow’ or ‘left arrow’ several times until you find the desired level. Then press ‘Enter’ to load it.

We provide four levels on the CD:

• Synthetic Vision 1: A custom level with a camera located behind Bronto, without collision detection (i.e., the camera will get into walls).
• Synthetic Vision 2: The same custom level, but this time using a camera with collision detection.
• Synthetic Vision 3: A Quake III Arena [Quak01] level converted to Fly3D, with custom sky and power-ups. The camera collides.
• Synthetic Vision 4: The PadCenter [Ente01] level with custom power-ups. The camera collides.


Figure C-2. Fly3D FrontEnd main menu.

Figure C-3. Fly3D FrontEnd level selection menu.


Once the selected level has loaded, first press the ‘B’ key to activate the Dynamic Viewport, and then ‘C’ to activate the Static Viewport. After both viewports have been activated (in that order), the AI module will start to control Bronto’s behaviour.

At any time during the simulation you can press the ‘V’ key to activate or deactivate the viewport that shows the scene normally rendered from Bronto’s viewpoint.

To exit from the level, press ‘ESC’.

After that, you can repeat the steps from the start of this section to run other levels.

If you want to quit Fly3D, press the ‘down arrow’ key until the ‘Quit’ option appears selected in the Fly3D FrontEnd main menu, and then press ‘Enter’.

Modifying Synthetic Vision Levels Properties

To edit some of the properties of the synthetic vision levels, execute flyEditor.exe from the Fly3D path and load any of the available synthetic vision levels:

• Synthetic Vision 1: sv08.fly.
• Synthetic Vision 2: sv09.fly.
• Synthetic Vision 3: sv10.fly.
• Synthetic Vision 4: sv12.fly.

Refer to the Fly3D literature [Watt01; Watt02] or website [Fly01] to learn how to use flyEditor.

Some of the interesting parameters that you can change and play with are:

Global Parameters

• Nearplane (NP)
• FarPlane (FP)
• Camangle (Camera Angle)
• Camera:
  o camera2: Third-person camera with collision detection.
  o 3rd_Person: Third-person camera without collision detection.
  o player: First-person camera.

SVision plug-in; “ai” class object properties

The colour id parameters in this class are used to identify the weapon and health power-ups.

• Health Upper Threshold (Hut)
• Health Lower Threshold (Hlt)
• Weapon Upper Threshold (Wut)
• Weapon Lower Threshold (Wlt)
• Health Color Id
• Weapon Color Id
• WA Upper Pixel Dist (bwaupd)
• WA Lower Pixel Dist (bwalpd)
• Cutfactor (Cf)


• Radio (Bbbr)

Walk plug-in; “person” class object properties

• svhealth (Hini)
• svweapon (Wini)
• healthtime (tdh)
• weapontime (tdw)
• healthfactor (Dh)
• weaponfactor (Dw)
• healthpowerup (Ah)
• weaponpowerup (Aw)
• drawpath (1 or 0: do/do not draw the Bezier curve)

Walk plug-in; “powerup” class object properties

Each object must have its type and corresponding colour id set.

 ColorId  Type (1: Ammo; -1: Health)  Spawntime (Time for power-up regeneration after taken)

You can add as many power-ups as you want.

Walk plug-in; “object” class object properties

This is the flying ball. The only colour id supported for the time being is (1.0, 1.0, 1.0).

• ColorId

You can add as many flying balls as you want.

Notes

• All time variables are in milliseconds.
• All percentage variables range between 0 and 1.
• Colour ids are floating point vectors. Not all possible colours work well, since some parts of the code use unsigned int colours instead of floats, and there are conversion precision errors. This will be fixed in future releases.
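The precision issue in the last note comes from round-tripping colour channels between floats in [0, 1] and 8-bit integers. A generic sketch (not the Fly3D code) of a round-trip that stays stable by rounding instead of truncating:

```cpp
// Generic sketch (not the Fly3D code) of a stable float <-> 8-bit colour
// channel round-trip. Adding 0.5 before the integer truncation rounds to the
// nearest byte, so a colour id survives repeated conversions unchanged.
unsigned char channel_to_byte(float c) {
    return static_cast<unsigned char>(c * 255.0f + 0.5f);
}

float byte_to_channel(unsigned char b) {
    return b / 255.0f;
}
```

With plain truncation, a channel such as 0.5 can drift down by one byte per conversion, which is consistent with the "conversion precision errors" noted above.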


Appendix D – Glossary

2D : Two Dimensions.

3D : Three Dimensions. Three Dimensional.

AI : Artificial Intelligence.

BSP : Binary Space Partition.

ECI : Escuela de Ciencias Informáticas.

FPS : First Person Shooter.

NPC : Non Player Character.

RGB : Red-Green-Blue.

RPG : Role-Playing Game.


Appendix E - List of Figures

Figure 1. Approaches: a graphical view. ...... 11
Figure 2. Aimed model place. ...... 13
Figure 3. Static viewport (left) obtained from the normal rendered viewport (right). ...... 14
Figure 4. Coordinate system used. ...... 16
Figure 5. From left to right: static viewport, normal rendered viewport, and dynamic viewport. ...... 18
Figure 6. An example of the weapon and energy diminishing system: the game starts, then both properties decrease gradually. Weapon reaches 0 at 100 seconds, whereas health does the same at 170 seconds. At that moment Bronto dies. ...... 22
Figure 8. Events that affect and are affected by Bronto’s health and weapon values during the game. ...... 25
Figure 9. Bronto’s behaviour reduced state diagram. Only expected transitions between states are represented with arrows, even though it is possible to go from one state to any other. ...... 26
Figure 10. Bezier curve. Control points (Pi), initial position (B), destination point (T), and Bronto’s visual/direction vector (D) are represented. ...... 27
Figure 11. Static viewport and the walk-around margins used. Pixels outside the bolder rectangle cannot be chosen as destination points. ...... 29
Figure 12. A free way inside the static viewport. The white area is floor, the grey area is wall. The bolded rectangle shows the chosen free way. Within that rectangle are the upper and lower margin lines, and the point that the heuristic will select as the new destination. ...... 30
Figure 13. Strategy 2 in action. In a) the precondition for employing this strategy is fulfilled: a central free way does not exist, and the row closest to the viewport’s bottom containing a pixel different from floor inside the searching rectangle contains a pixel different from floor closer to the left side than to the right one, signalled with a double circle. The dotted line separates the searching rectangle in halves. In b) the free way that the strategy will finally choose, as well as the new destination point, is shown. ...... 32
Figure 14. Strategy 2 when it fails. The precondition for employing this strategy is fulfilled; however, no free way is found when the strategy looks for one to the left of the point signalled with a double circle. ...... 32
Figure 15. Strategy 4 examples. In all cases, although a free way may exist, the search fails if the central rectangle fulfils the strategy preconditions. In a) two pixels different from floor at the same distance from the left and right sides of the rectangle are found. In b) the same case is produced when a thin corridor is in front of the character. In c) only one pixel different from floor is found at the rectangle’s centre. In d) there is a wall directly in front. ...... 33
Figure 16. Turn strategy. Walk-around has failed as shown in Figure 15.d. In a) a left turn was chosen, where the destination point is signalled as a crossed circle and the first point from the left different from floor is signalled as a double circle. In b) a right turn was chosen; every pixel to the right corresponds to floor, so the right margin will be taken from the right side of the viewport. The crossed circle signals the new destination point. ...... 35
Figure 17. The higher floor problem. Bronto in front of a box obstacle that is part of the level geometry. Left: the blue colour of the box sides, as well as the green colour of the upper box face, are seen in the static viewport. The upper face and the ‘real’ floor cannot be differentiated. Right: the normal render. ...... 38
Figure 18. The higher floor problem. Bronto closer to the box of Figure 17. Walk-around does not notice height differences between polygons represented as floor in the static viewport. In this case, Bronto will get stuck trying to ‘go through’ the floor. ...... 38
Figure 19. The perspective problem. The static viewport when Bronto is going through a corridor. The corridor’s width is constant along its whole length; however, due to perspective, the number of floor pixels in the lower part is higher than in the farther one. ...... 39
Figure 20. The perspective problem. Bronto in front of a corridor in a possible situation. The current free way is shown with continuous lines, together with the destination point that walk-around will choose. The free way with perspective correction is shown with dotted lines, together with the destination point that should be chosen. Note that the distance between the two destination points can be enormous. ...... 40
Figure 21. The perspective problem. Bronto stuck trying to go forward between two columns very close to each other. Upper-left: static viewport. Upper-right: normal rendered scene. Lower: rendering from a lateral camera. ...... 40
Figure 22. The looking-for problem. Bronto stuck trying to go through the stairs to reach a power-up. Upper-left: static viewport. Upper-right: normal rendered scene. Lower: scene rendered from a lateral camera. ...... 41
Figure 23. The looking-for problem. In a) the way chosen by Bronto to the power-up (PU) goes through an obstacle. In b) a possible solution is shown: choosing intermediate points that do not cross the obstacle, constructing a composed Bezier curve to the final destination. ...... 42
Figure 24. The looking-for problem. In a) the way chosen by Bronto to the power-up (PU) does not go through any obstacles, but Bronto will not pass between the two objects due to his width. In b) a similar case occurs when the Bezier curve passes very close to walls. ...... 42
Figure 25. Dynamic state diagram. Initially Bronto is in the Don’t Worry state; he can then make transitions between any states depending on the value of his properties, on whether an enemy appears in the static viewport, and on whether it is facing Bronto. ...... 43
Figure C-1. Fly3D Configuration window. ...... 59
Figure C-2. Fly3D FrontEnd main menu. ...... 60
Figure C-3. Fly3D FrontEnd level selection menu. ...... 60


Appendix F - List of Tables

Table 1. Classes of objects and colour ids mapping table. ...... 15
Table 2. Extended classes of objects and colour ids mapping table with level geometry. ...... 16

