Degree thesis (examensarbete) LITH-ITN-MT-EX--07/032--SE

EMO - A Computational Emotional State Module
Emotions and their influence on the behaviour of autonomous agents

Jimmy Esbjörnsson

2007-05-28

Department of Science and Technology (Institutionen för teknik och naturvetenskap)
Linköping University, SE-601 74 Norrköping, Sweden
LITH-ITN-MT-EX--07/032--SE

EMO - A Computational Emotional State Module
Emotions and their influence on the behaviour of autonomous agents

Degree project (examensarbete) in media technology carried out at Linköping Institute of Technology, Campus Norrköping

Jimmy Esbjörnsson

Supervisor (Handledare): Pierangelo Dell'Acqua
Examiner (Examinator): Pierangelo Dell'Acqua

Norrköping, 2007-05-28



Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

© Jimmy Esbjörnsson

EMO - A Computational Emotional State Module

Emotions and their influence on the behaviour of autonomous agents

Jimmy Esbjörnsson
VITA - Visual Information Technology and Applications
Linköping University

[email protected]

Abstract

Artificial intelligence (AI) is already a fundamental component of computer games. In this context, emotions are a growing part of simulating real life. The proposed emotional state module provides a way for game agents to select an action in real-time virtual environments. The module's function has been tested with the open-source strategy game ORTS. Extensive research on the subject of emotions is available in the literature, and different models of emotional systems are debated. This material has been reviewed, and a model has been constructed based on psychological studies by Picard, Minsky, Pinker and others. Especially the work of Picard has been very influential. This thesis proposes a new approach to the design of an interacting network, similar to a spreading activation system (Maes, 1991), of emotional states that keeps track of emotion intensities changing and interacting over time. The network of emotions can represent any number of persisting states, such as moods, emotions and drives. Any emotional signal can affect every state positively or negatively. A state's response to emotional signals is influenced by the other states represented in the network. The network is contained within an emotional state module. These interactions between emotions are not the focus of much research, nor is the representation model; the focus tends to be on the mechanisms eliciting emotions and on how to express them.

Key words: artificial intelligence (AI), emotions, feelings, mood, sigmoid function, response curve, behaviour, autonomous agents, real-time strategy game (RTS), open real-time strategy (ORTS)

Table of Contents

Glossary
1 Introduction
  1.1 Preface
  1.2 Audience
  1.3 Overview and Goals
  1.4 Achievements
  1.5 Acknowledgements
2 The Concept of Emotions
  2.1 A brain to simulate
  2.2 Emotional genes
  2.3 The purpose of emotions
  2.4 Why machines need emotions
  2.5 The classification of emotions and feelings
  2.6 Basic Emotions
  2.7 Appraisal Theory
    The OCC Model
    The Roseman Model
  2.8 Temperament and Mood
  2.9 Models with Multiple Emotion Layers
3 Architecture
  3.1 A model to build on
  3.2 The Emotional State Module
    Signal representation for emotions
    Mood of an emotional state
    Considerations
  3.3 The Decision Making Module
  3.4 EMO Integrated with ORTS through a middleware
  3.5 Configuration
4 Contribution, Future Work and Conclusions
  4.1 Contribution
  4.2 Future Work
  4.3 Related work
    Appraisal models
    The eliciting emotion model
    The project Oz model
    The Cathexis model
    The DER model
  4.4 Conclusion
5 References
6 Appendix
  A) The ORTS Project
    A.1 Background
    A.2 Getting started
    A.3 Brief overview of ORTS
  B) The Brain

Glossary

The content of this glossary is given as guidance to help you understand the findings in this thesis.

Affect is the scientific term used to describe an agent's externally displayed mood. This term does not regard at what point an emotion becomes "genuine" to the agent. (Affect is also a verb meaning "have an influence on", often confused with the word effect.)

Affective is sometimes used as an opposite to cognitive, but the word simply means "having to do with emotions".

Agent is an intelligent actor, which observes and acts upon an environment, real or virtual. An agent is specified in terms of a set of capabilities (see actions), knowledge (see beliefs), ambitions (see desires) and commitments (see goals).

Amygdala See appendix B.

Appraisal is the act of estimating the value of a perceived sensation.

Beliefs are the agent's understanding of the current state of the environment.

Cognitive refers to the mind's processing of information, applying knowledge and changing preferences.

Cortex See appendix B.

Desires are the ambitions related to personal standards (see personal standard) and aspirations. No universal desires exist – every agent can have a unique set of ambitions.

Emotion is a term that has no single universally accepted definition. Some authors define emotion as an intense state arising autonomically in the nervous system rather than through conscious effort; others define emotion as a cognitive process requiring conscious effort. In this thesis the term emotion is used mostly to refer to unconscious processes, but it sometimes also includes feelings when the usefulness of separating the two is vague.

Feelings are affective states of consciousness. Conscious feelings are states of consciousness and thus perceptions of an underlying brain activity. Feeling is a cognitive function, meaning we are making emotional judgements; feeling evaluates.

Goals are the things the agent is committed to achieve.

Hypothalamus See appendix B.

Hippocampus See appendix B.

Limbic system See appendix B.

Mood is a relatively lasting emotional or affective state. Moods differ from emotions in that moods are less specific, often less intense, less likely to be triggered by a particular stimulus or event, and last longer. Mood is a function of the complete set of emotions that constitutes someone's mental state at a particular time.

Needs are the universally shared physiological and psychological requirements for the well-being of all agents. Every need is expressed in several different personal desires.

Neocortex See appendix B.

Neuroendocrine cells are a specialized group of nerve cells (neurons) that produce hormones.

Personal Standards are the standards of good conduct and high morals that must prevail over the temptations of deceit and immorality.

R-complex See appendix B.

Sensation, as used in this thesis, refers to a non-cognitive evaluative sensation by appraisal that may or may not register in consciousness.

Time-invariant system is one whose output does not depend explicitly on time.

Time-variant system is one whose output has an explicit dependence on time.

Triune brain See appendix B.

Thalamus See appendix B.
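To make the time-invariant/time-variant distinction concrete, here is a minimal sketch. The function names, the choice of a sigmoid response curve and the decay factor are my own illustration, not taken from the thesis implementation:

```python
import math

def time_invariant_response(x):
    # A time-invariant system: the output depends only on the input x,
    # never on when the input is sampled. Here, a sigmoid response curve.
    return 1.0 / (1.0 + math.exp(-x))

def time_variant_response(x, t):
    # A time-variant system: the output has an explicit dependence on
    # the time t. The same input x yields a weaker response as t grows.
    return (1.0 / (1.0 + math.exp(-x))) * math.exp(-0.1 * t)
```

The emotional state module of Chapter 3 is of the second kind: its thresholds change dynamically, so the response to an identical stimulus depends on when it arrives.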

1 Introduction

“What are emotions? Emotions are the glue that holds the cells of the organism together in the material world, and in the spiritual world they're the glue that holds the classrooms and the society together. That's why they are so interesting, because they're on a material level – the molecules of emotion as I've studied them as a scientist – and they're in the spiritual realm as well.” – Candace Pert, EQ Today.

1.1 Preface

Artificial intelligence (AI) is already a fundamental component of computer games. At the surface level, what the player notices is the computer graphics, and therefore business people in the game industry care a great deal about new graphics effects that can help their sales pitch. For some time, little emphasis has been placed on advances in artificial intelligence. The reason is that AI is something the consumer will notice only while playing the game, not something one considers before buying it; but it has great potential to become the new focus of the industry once computer graphics has run out of steam and the benefits of each new generation of graphics board become less obvious. There is a difference between scientific AI and game AI, but the trend is to bring more and more from the scientific world into games. In this context, emotions are a growing part of simulating real life.

While I was working on this thesis I started to see more interesting things happening in the real-time strategy (RTS) game domain than in any other area of artificial intelligence in games. In “Company of Heroes” and “Faces of War” I saw AI that really reacts to the situation and provides a sense of chaos. We can expect to see more games where each unit has a unique personality and may not always follow orders. We will see more games with agents that assess the situation dynamically and react realistically without being scripted to react in a predefined way, where every agent must be managed and cared for because each unit is of a certain personality type. We have already seen previous attempts to achieve this, but they have been limited and not very successful because the dominating approach was scripted AI. The problem currently facing the introduction of a strong AI is to balance control between what the player expects to be carried out and the freedom of the agents' AI brain.

The AI trait currently just gets in the way of the user, as the AI will do things like force the agent to drop to a crawl or stop and return enemy fire when the player is trying to get him to sprint to safety. Sometimes the agent won't respond to movement commands just because the AI is not up for that type of obedience at the moment. The benefits that emotions can bring to the artificial domain are promising, but a lot of confusion will come with them. What is an emotion? To start with, we have a complex array of overlapping words in our language to describe them, and we probably have more than a hundred definitions of emotions from scientists to choose from. I picked this topic for my diploma work not only for my interest in games but mostly for my interest in psychology and Neuro-linguistic programming (NLP). This topic has given me the opportunity to dig into numerous books and articles that I would not have cared for otherwise.

1.2 Audience

My primary intended audience for this thesis is postgraduates or students in upper-level undergraduate computer science classes. This thesis assumes knowledge of artificial intelligence at university level, as well as robust practice in programming and design patterns. Some knowledge of psychology will be beneficial, although certain key concepts will be reviewed here. The secondary audience is researchers and students interested in writing their own software agents. Although many of the concepts in this thesis are familiar to experienced people, I believe that the appendix section covering ORTS will be of particular interest to those who want to get started developing their own AI in real-time strategy (RTS) games.

1.3 Overview and Goals

The hypothesis behind this thesis was that I would be able to express an emotional state of agents that can influence them to make healthy decisions about what behaviour to perform, and that this may increase the believability of RTS game agents. This hypothesis led to two goals. The primary goal and focus has been the construction of a computational module that can handle emotional stimuli in a novel way by using parametric signals representing sensations. The secondary goal has been to interface the emotional state module with ORTS, an open-source programming environment for RTS games, so that the module could be evaluated in a real-time system. ORTS is such an environment; it aspires to provide the key features found in high-quality RTS games. As a side benefit, I found results in my research that strongly indicate that there are in reality no purely rational thoughts in humans; instead, every thought is influenced by emotions.

1.4 Achievements

The model of this thesis extends an appraisal mechanism, such as the OCC model (Ortony et al., 1988), that classifies events, actions and objects, and outputs emotional states. The output of the appraisal is instead defined as an emotional signal. Emotional signals are defined by the name of the emotional state, its intensity value and three duration phases: each signal has a characteristic attack, sustain and decay phase. Each signal is then processed through a number of filter patterns that modify the signal according to the personality and current state of the agent.

The main achievements of this thesis are:

● a time-variant emotional response through dynamic thresholds

● a method for composing emotional states

● a network of interacting emotional states with both inhibitory and excitatory influence

● saturation levels for each emotional response, meaning they cannot reach extreme heights or depths

● a method to accumulate emotional response over time
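As a rough illustration of the signal representation described in section 1.4 and the achievements listed above, the following sketch models an emotional signal as a piecewise attack/sustain/decay envelope with a saturation level and accumulation over time. The names and the linear phase shapes are my own assumptions; the actual module (Chapter 3) need not look like this:

```python
def signal_intensity(t, peak, attack, sustain, decay, saturation=1.0):
    """Intensity of an emotional signal at time t since it was emitted.

    The signal rises linearly to its peak during the attack phase, holds
    the peak during the sustain phase, and falls back to zero during the
    decay phase. The saturation level caps the response so it cannot
    reach extreme heights.
    """
    peak = min(peak, saturation)
    if t < 0:
        return 0.0
    if t < attack:                          # rising phase
        return peak * (t / attack)
    if t < attack + sustain:                # hold at peak
        return peak
    if t < attack + sustain + decay:        # falling phase
        return peak * (1.0 - (t - attack - sustain) / decay)
    return 0.0

def accumulate(signals, t, saturation=1.0):
    # Overlapping signals (tuples of peak, attack, sustain, decay) sum
    # their intensities, but the combined response is still clamped by
    # the saturation level.
    return min(saturation, sum(signal_intensity(t, *s) for s in signals))
```

For example, two overlapping signals with peak 0.8 saturate at 1.0 rather than accumulating to 1.6.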

1.5 Acknowledgements

First of all I would like to thank the ORTS people, especially Timothy Furtak, for the updates and fixes they made when asked. Pierangelo Dell'Acqua, head of the AICG lab at ITN, has contributed to this thesis in many ways; without his support and guidance the end result of this thesis would not have reached its current level. He contributed office space and good equipment to work with. His tireless input of new ideas and questions was both challenging and led to new insights. Anja Johansson was instrumental in making the office space such a nice place to work in; she contributed a nice atmosphere and many good dialogues, as well as using and testing my implementation and giving feedback on my report. I am especially grateful to Emma Mattsson, my soul-mate, for her unfailing encouragement, love, support, and patience with my work obsession. This thesis is dedicated to her and to my parents, who gave me all the support I needed to get here in the first place.


2 The Concept of Emotions

“What are emotions? Emotions are human beings' warning systems as to what is really going on around them. Emotions are our most reliable indicators of how things are going in our lives. They are also like an internal gyroscope; emotions help keep us on the right track by making sure that we are led by more than cognition.” – Maurice Elias, EQ Today.

This chapter will highlight samples from the historical development of emotional studies toward today's cognitive and affective research.

2.1 A brain to simulate

The computational theory of the mind originates with the mathematician Alan Turing (1912-1954). His assertion was that the computer, when properly programmed, could rival the human brain. It founded the computer science and artificial intelligence of the coming decades. Turing used the body-mind interaction theory, in which beliefs are interpretations about the surrounding environment, incarnated as symbols in the mind. This is well illustrated by a quotation from the psychologist George Miller: "The crowning intellectual accomplishment of the brain is the real world. All the fundamental aspects of the real world of our experiences are adaptive interpretations of the really real world of physics." The symbols are the physical states of neurons or digital switches which symbolise things in the world, triggered by our senses. The symbols can also represent what they can do once they are triggered. The mathematical use of symbols came from Boole (1815-1864) and led to a greater understanding of the assumption that language is founded on logic. The use of symbols allows us to imagine that the mind combines one symbol representing one belief with another symbol representing another belief, and this can (if the logic holds true) give rise to a new symbol, representing yet another belief. This has led to the development of logic programming techniques, for example Prolog and later XSB. But how does the mind really work? MIT/Harvard professor Dr. Steven Pinker (1954- ) summed it up in five words during a TV-show interview1: “Brain cells fire in patterns.” And he explained further: “One pattern corresponds to one thought, one pattern causes another pattern, and that is what happens when we think.” The symbols are the abstract representation of these patterns.
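The symbol-combination idea above, where two belief symbols give rise to a new one, can be sketched as a tiny bottom-up rule loop. Real logic programming systems such as Prolog and XSB use far more sophisticated resolution strategies (and, in XSB's case, tabling); the belief symbols below are invented purely for illustration:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules of the form (premises, conclusion) until
    no new belief symbol can be derived (a fixpoint)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)   # a new symbol: a new belief
                changed = True
    return derived

# Invented example beliefs: from "rained" the mind derives "grass_wet",
# and combining that with "walked_outside" derives "shoes_wet".
rules = [({"rained"}, "grass_wet"),
         ({"grass_wet", "walked_outside"}, "shoes_wet")]
beliefs = forward_chain({"rained", "walked_outside"}, rules)
```

Starting from the belief symbols "rained" and "walked_outside", the loop derives "grass_wet" and then "shoes_wet", each a new symbol representing a new belief.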
Pinker belongs to the supporters of the evolutionary psychology that was more or less crystallised by Tooby & Cosmides (Barkow et al., 1995; Pinker, 1997) but had predecessors like Plutchik (1927-2006). “Emotions”, Plutchik says, “are best observed through an evolutionary perspective as adaptations triggered by the challenges of survival and reproduction that are part of every organism’s existence.” An evolutionary approach, Plutchik argues, can sort out the roles of emotion, impulse and action and, in a therapeutic setting, help people understand the circumstances in which emotions can sometimes fail in their adaptive tasks: for example, when one changes a light bulb it doesn't help to have a fear of heights. Evolutionary psychology has been labeled by Pinker as the new science of the

1 “The Colbert Report” http://www.comedycentral.com/shows/the_colbert_report/videos/celebrity_interviews/index.jhtml

human nature. The theory has a radical vision of cognition as a “bag of tricks” rather than a neat integrated system. This theory is gaining support from robotics (Clark, 2001) because it promises a way to generate adaptive behaviour in real time. The theory encourages abandoning the use of an inner store of symbols, the unified knowledge base (common in artificial intelligence models), in favour of a more neurologically realistic version. This version consists of multiple representation types, local memory banks, and components, cognitive subsystems, that communicate in a wide range of different ways, processing operations in parallel. Thinking is information processing or computation, but comparing it to the way a computer operates is not so simple, even if the thinking stuff that goes in to be processed were the same. The problem is also that one tends to explain things one doesn't understand about the human mind with equivalents from the computer world, and this is apparently not avoided in the science literature. To describe and understand the mind, we need to be less focused on silicon circuits. The difference to mark out for starters is that neurons are great for pattern matching; silicon chips are not. Computers work in serial — doing one thing at a time; brains work in parallel — doing millions of things at once (Pinker, 1997). Of course, today there is a whole new breed of microprocessors with multiple cores, causing a culture shock among developers (New Scientist 2594). Sony's PS3 has IBM's nine-core processor, and Intel has presented a prototype of a general-purpose processor equipped with 80 cores. But adding more cores to the chip is only half the battle; the second half is to figure out how to program it to divide the labour, because the computers still work serially. The tasks must be divided up in a logical way, and data from each operation combined at appropriate times.

The simple way is to avoid the problem and run applications separately on the different cores. Only a few tasks lend themselves well to parallel computer processing, like fluid dynamics and computer graphics rendering. But the most important difference between the human mind and computers is that symbols in the form of sensations in the brain cannot be stored (as far as we know), but symbols in the form of cybernetic information can be stored.

2.2 Emotional genes

Evolutionary psychology brings together two scientific revolutions: one is the cognitive revolution of the 1950s and 1960s, which explains the mechanics behind thought and emotions in terms of information and computation. Cognition is just a fancy word for the science that studies how the mind processes perceptions, remembers and thinks. The other revolution is that of evolutionary biology of the 1960s and 1970s, which explains the complex adaptive design of living entities in terms of Darwin's and Wallace's theories of natural selection. Combining the two gives a good toolkit: cognitive science helps us understand how a mind is possible and what kind of mind we have; evolutionary biology helps us understand why we have the kind of mind that we have. The theory proposes that many assumptions about the real world are built in by natural selection, which works by making good guesses on an incompletely described problem, stipulating that the mind is not quite the general problem solver we have come to believe. An example of such an inbuilt assumption is the visual system. Evolutionary psychology is therefore an extension of biology with a narrow focus on the mind. This evolution has worked its way by sloppily copying what already exists. We can assume that the

language of thought is a result of evolution. Inherited brain circuits have given us the ability to reason about space and force; during evolution these circuits were separated from the eyes and muscles, and references to the physical world were reduced to mere symbols, symbols that can represent more abstract concerns, like states, possessions, ideas and desires. The circuits have maintained their computational capabilities and can reason about states: being in one state at a time, shifting from one state to another, and overcoming entities with the opposite valence. This makes us able to reason about things like traffic lights, or to understand the cause of effects on manipulated objects: that a peeled banana was originally an uneaten banana. Even a chimpanzee understands what effect an eraser will have on a written page of text. But we are not only able to reason about our own bodily state; we can also reason about other people's states. It is hard to read other people's minds, even if we can make very good guesses from what they say, what is shown in their faces, what can be drawn from their behaviour, and what can be found out by looking at people's eyes. Even small toddlers are excellent readers of the language of thought; exceptionally bad at understanding the language of thought are those suffering from autism disorder. Facial expressions are useful in the process of thought reading because they are hard to fake. One could say that our emotions are "handcuffed" to the body because they grew out of evolutionary predecessors that were reflexes; modern emotions may exploit the involuntariness of older reflexes, like phobias: a snake is always scary to apes and toddlers, but a gun is not.

There is an apparent firewall in 20th-century literature between emotion – described as something bodily (irrational impulses of violence and passion) – and reason – described as the intellect living in the mind, working as a cool deliberator following the interests of self and society by keeping the emotions in check. The left brain versus the right brain; or the reptilian brain (i.e. the limbic system) – responsible for feeding, fighting, fleeing and sexual behaviour – versus the cerebral cortex: the prehistoric baggage versus the intelligence that propelled us from the animals. This is a whole mountain of “crap” talk, since no part of the nervous system is left untouched by evolution, and we are not condemned from the start to feel nothing more than what our remote ancestors were able to feel. Theories such as Neuro-linguistic programming (NLP) claim that the emotions are easy to reprogram. NLP has a very pragmatic approach, applied to focus on mental excellence and how to reproduce it by changing beliefs using words and behaviour. Richard Bandler, the co-inventor (with John Grinder) of NLP, notes that the act of just laughing will alter the state of consciousness by releasing chemicals into your blood. The hallmark of NLP is summed up by the following quotation from Henry Ford: "Whether you believe you can, or you can't, you are right." Very influential for NLP is Noam Chomsky, who with “Syntactic Structures” (1957) sparked the belief in cognitive subsystems in psychology.

The emotional repertoire varies widely even within a single species, for example among dog breeds. Even if the many visually distinguishable breeds are no older than two hundred years, dogs were bred for enhanced behavioural traits already several millennia ago, shown by a greater divergence in three different brain regions – frontal lobe, amygdala and hypothalamus – compared to wolves. It was evolutionarily beneficial for dogs to be exceptionally good at reading human signals (Björnerfeldt, 2007). Within the narrow gene pool of chimpanzees, the anatomical differences between Common Chimpanzees and Bonobos are slight, but in sexual and social behaviour there are marked differences. The Common Chimpanzee is more of a warmonger than the Bonobo, which is more of nature's version of a peace-loving hippie. An even smaller gene pool is that of humans; the differences between visually distinguishable groups are merely visual, and any other distinguishable traits in behaviour can often be related to cultural backgrounds. All species show genetic differences, but humans have small variations compared to other species; for example, there is more genetic difference between two chimpanzees in the same forest than between two humans from different continents (Pinker, 2002; Weber, 2006). Some parts of the human genome show a shallow and recent ancestry, but parts of it show a very deep ancestry. Something very unusual happened to the human genome 40,000 years ago (New Scientist 2593): a mutation, or perhaps a reintegration of genes from our archaic sister species2 into Homo Sapiens; the new genes were propelled by natural selection and made the human race very successful in cognitive adaptation. Cognitive adaptation means that adaptation can happen in real time instead of waiting for natural evolution to promote the fittest design.

The human species' excellence in cognitive adaptation gives it an obvious advantage over other species, allowing humans to manipulate the environment to their benefit through social cooperation among non-kin, leveraged by using a language for transferring knowledge symbols; especially humans' unique way of referring to symbols independently of their current emotional state and ambitions. The trade-off that humans have made for this excellence is to be the only mammal that cannot drink and breathe at the same time, and to spend a disproportionately long time in childhood, because the human adaptation is not primarily “fur adaptation” but “know-how adaptation”. These three parts of human excellence are called the cognitive niche theory, well depicted in the Devil's Dictionary by Ambrose Bierce (1911): “MAN, noun. An animal so lost in rapturous contemplation of what he thinks he is as to overlook

2 Neanderthal (in the Middle East and Europe, until 30,000? years ago) and Homo Erectus (on Java, Indonesia, until 25,000? years ago) in particular are interbreeding candidates, but there is also Homo Floresiensis (on Flores island, Indonesia, until 12,000 years ago) to be considered. These three species, along with Homo Sapiens, were the only hominid groups that came through a bottleneck 73,000 to 63,000 years ago (Weber, 2006).

what he indubitably ought to be. His chief occupation is extermination of other animals and his own species, which, however, multiplies with such insistent rapidity as to infest the whole habitable earth and Canada.” Still, there are lesser variations that make each human unique in appearance, skills and interests, and in physiology and biochemistry. Some of the genetic differences affect the way we react to drugs: some tremendously beneficial drugs are ineffective, or even dangerous, in some people. In turn, our mental configuration comes from our genetic program, but there is not a gene for every trait, nor anything suggesting that learning is unimportant; evolutionary psychology, however, gives a good motivation for why all humans would share universal needs and psychology, which would limit the possible variation of cultures. The behavioural differences that are associated with distinct groups of people would be a result of their spending an extended time in childhood learning the nuances of the surrounding culture. With culture we associate locally accumulated expertise, customs and social arrangements.

2.3 The purpose of emotions Evolution has been free to modify emotional behaviour; behaviour that is similar across many species has been preserved because it is well adapted for all of them. The emotion fear triggers an acute stress response that makes the body ready to react quickly. Blood is redirected from the stomach and skin, leaving butterflies and itchiness, and sent instead to the muscles. Rapid breathing takes in oxygen. Adrenaline releases fuel from the liver and helps the blood clot. Plutchik (1989) claimed that "cognition has developed in order to serve the emotions", by which he means that cognitive processes help the individual predict the emotional outcomes of its future behaviour, especially behaviour related to habitat selection. This correlation guarantees that the behaviour is relevant. But emotions are not the animal legacy of the lower levels of the triune brain (Pinker, 1997). According to the computational theory of mind, the lifeblood of the psyche is information, not foremost energy, and the emotions are designed to propagate our genes, not to make us happy or morally wiser. An emotional program working at its best may well endorse behaviour that is harmful to society or self-destructive. The cerebral cortex does not fight to control any reptilian brain, even if literature is littered with such examples; instead the cerebral cortex works in tandem with the emotional modules. Emotions are indispensable functions of the whole mind. This is not to say that evolution has had the time to adapt us to a modern lifestyle, which can be exemplified with this lovely quotation from the TV series "Scrubs", spoken by the character Dr. Cox: "I've been trained for many years to take any emotion I feel, push it down, and then let it out by drinking way too much and by yelling at the football players on my T.V. screen.
And I...I really thought I hit the jackpot when I finally met a woman who was as disturbed and closed-off as I am". Neuroscience places these emotional modules that colourize our experience in the amygdala, a small oval shape in each temporal lobe. The amygdala receives signals from the hippocampus, the cingulate gyrus and the brain stem, as well as from the cerebral cortex. Almost all sensory information is delivered to a particular set of nuclei in the amygdala, the basolateral nuclei (Dariush, 2001).

All sense signals, like the sight of a snake, are first sent to the thalamus where a preliminary screening takes place. The main processing is carried out in the cortex, where each of the senses is assigned a certain area. If an emotional reaction (due to a cognitive process) is called for, a message is sent to the amygdala. A breakthrough in the research occurred when measurements showed that a smaller part of the sense signal is also sent through a more direct and faster channel from the thalamus to the amygdala. The amygdala in turn sends the signal to almost every part of the brain, including the decision-making circuit in the frontal lobes. Damage to the amygdala is reported to cause flattened emotions, reduced fear and absence of hesitation (LeDoux, 1996).

Illustration based on LeDoux JE (1994) Emotion, Memory, and the Brain. Scientific American.

A historical note is that Charles Darwin (1899) is said to have tested this automatic response on himself. He visited a zoo, placed his face against the thick glass window of the puff adder's cage, and tried to ignore the inevitable strike against the glass. Yet when the snake attacked, Darwin found that he had jumped a metre back from the glass. The amygdala's coloration is a sort of colour-coded marker put on an event to signal its importance. According to Damasio (1995), this somatic marker connects an event to a "gut" reaction that leads to aversion if the event was a failure and attraction if it was a success. The marker is the emotional tone, the trace of the event, and connects it to a reaction. Even non-bodily events like imagination can activate this emotional tone, and therefore also the reaction. Bower's comprehensive survey from 1981 of his network model (Dariush, 2001) describes the mood congruence effect: the emotional state becomes a part of the experience, and in a particular emotional state it is easier to remember experiences that correspond to that state. The philosopher Francis Bacon (1561-1626), famous for the remark "Anger makes dull men witty, but it keeps them poor", and Eric Eich in his survey from 1980 reason similarly, describing state-dependent recollection (Dariush, 2001). An example of such recollection would be that one thinks about how lovely it would

be to eat one's fill, and then one remembers all the places where one has found food before; this would explain why some people think of McDonald's instead of an expensive Chinese restaurant in a time of desperate need for food.

Emotions have been programmed in deliberately by evolution, because only when the emotions are in control can they help contrive intricate plots for escape, revenge, ambition and courtship. As a demonstration that emotions sharpen the mind, consider Winston Churchill's observation: "There's no more exhilarating feeling than being shot at without result", a mind-focusing experience many of us share. A computer runs a program by executing a list of instructions until it runs out of instructions; living entities need a more flexible method of controlling execution. Intelligence is the pursuit of goals in the face of obstacles. Without goals the concept of intelligence is meaningless, and each goal is achieved by completing sub-goals. If our emotional reactions to an urgent goal were "successful", we are likely to remember the procedure by which we got out of the situation or into it. We are also likely to adopt a new long-term goal, that of staying out of the hazard in the first place, triggered by the sensation of relief. To support an argument against emotions, it would be interesting to study a mind presumably operating without them. A well-known example of such a mind is that of the character Spock, the Vulcan from the Star Trek series. He is stipulated to lack all emotions except "pon farr" (a term for the Vulcan mating cycle: every seventh year the adult Vulcan undergoes an extreme and erratic physical and psychological imbalance that makes him carry out the mating ritual). But the Vulcan's dominating feature, emotionlessness, is only portrayed as letting him stay in control and not lose his head in stressful situations. If he had no feelings, why would he be interested in exploring strange worlds, seeking out new civilisations and "boldly going where no one has gone before"? Something must drive him. A good guess would be intellectual curiosity, an ambition to find and solve problems, and solidarity with his allies.
What would Spock do if he were faced with an attacking alien? He would probably take evasive precautions because he fears being hurt (Pinker, 1997). If we agree that emotions are a cognitive function, we can say that Spock lacks feelings, like those suffering from alexithymia3, but still has emotions. In fact, the actor William Shatner, playing the role of

3 Alexithymia is a term coined to describe people who appear to have deficiencies in understanding, processing, or describing their emotions. Research indicates that alexithymia overlaps with Asperger's syndrome; studies have reported that 85% of people with ASD (Autism Spectrum Disorder) have alexithymia. This information was taken from Wikipedia (http://en.wikipedia.org/).

"Captain Kirk" in the Star Trek series, once said to Spock: "You're half human Spock, don't you have any god damn feelings? Sometimes a feeling, Mr. Spock, is all we humans have to go on. Will you try for one moment to feel, at least act like you've got a heart?" Spock also has ambitions that impel him to deploy his intellect in pursuit of certain goals instead of others. The highest goals in an AI program come from the software designer; in living entities the goals come from natural selection. The brain strives to put its owner in situations favourable for spreading its genes. Very few living entities probably understand the real purpose of sex, or that sex is the cause of offspring. Here is the reason why living entities have emotions: emotions are the mechanism that sets the highest-level goals. Each emotion triggers a cascading flow of thinking and acting upon sub-goals. In this hierarchy there can be no clear separation of thinking from feeling; thinking and feeling do not precede each other but co-stimulate each other. The emotions work as a guide for choosing behaviours and habitats, as a law of attraction that never stops. Knowledge is only beliefs and by itself holds no great meaning for us; our emotions towards these beliefs, on the other hand, have a huge impact and turn those beliefs into convictions. A living entity cannot pursue all its goals at once; it has to choose among them, and not, like the donkey, get stuck and starve to death between two haystacks. The entity has to commit itself to the goal best matched to the situation in which it can be achieved. "For everything there is a season, and a time for every matter under heaven..." (Ecclesiastes 3:1-8) When we decide to buy a car, how many facts do we need to know? Not a thousand, but perhaps half a dozen or so, and the same number is enough for most decisions. This is because most decisions are made emotionally anyway, especially if we need to decide hastily.
Even when we take our time, we are probably going to decide out of emotion anyway. The key to decision making is to be brief with logic, facts and reason; considering too many of them will only confuse us.

2.4 Why machines need emotions We can't simply ask whether or not emotions are useful to the agent without first finding out how emotions are generated and for what purpose. If computers, robots and software agents are going to have emotions, they need to be able to synthesise them. For it to be meaningful for a system to have emotions, the system should be able to handle them in a reasonable fashion. The level of complexity of an emotional system will depend on the task the agent needs to cope with. According to Picard (1997), the founder and director of the Affective Computing Research Group at MIT4, we have to identify which subset of the following list will suffice for our needs: 1. Emotional behaviour, behaviour that appears to arise from emotions 2. Fast hard-wired basic emotions, fast emotional responses to certain inputs 3. Cognitively generated emotions, generated by reasoning about situations, especially when they concern the agent's goals, standards, preferences and expectations 4. Emotional experience, cognitive and physiological, by subjective considerations

4 Massachusetts Institute of Technology

5. Body-mind interactions5, with their different parts: perception, memory, decision making, learning, concerns, goals, motivations, attention, planning Picard says that humans depend on all five parts; implementing all of them is undoubtedly a major undertaking. Her book (1997) lays the groundwork for giving machines the skills of emotional intelligence. The ability to feel pain or pleasure not only affects a living thing's ability to learn but also helps it select among all its goals. In computers, emotions could help agents deal with information overload and prioritize activities, but today synthesised emotions are almost exclusively used to entertain users. The strongest reason for incorporating emotions into machines and computers, according to Marvin Minsky, the famous cognitive scientist and co-founder of MIT's AI laboratory, is that it is simpler to model feelings than rational thoughts (New Scientist 2592). Just like Pinker, Minsky (2006) notes that humans have learned from their culture that emotions and thinking are different things, and that "emotion" has become a suitcase word for the different states of the mind we don't understand, sometimes also referred to as "the ghost in the machine". The trend in AI for several decades has been to make this distinction and to build architectures that can manage specific problems very well, like chess, but if such an architecture is placed in an unusual situation it was not designed for, or if merely an unknown element is introduced into the environment, it fails to manage the new situation. Humans, on the other hand, rarely fail to find a strategy for dealing with a new situation. Minsky states that emotions are not so mysterious. He notes that when someone gets very angry, some of that person's mental activities are switched off: the person abandons long-term goals and becomes less cautious and thoughtful, which makes the person more imposing and dangerous.
Consequently, Minsky doesn't agree with the traditional view that emotions are some extra colourisation of the thought process. Instead, he says, if we regard the emotional state as suppressing some of our usual mental activities, the mystery of emotions disappears, and we realise that there is no pure rational thought: everything we think is influenced by emotions, ambitions and biases. Emotions are only different ways of thinking, switches that activate or deactivate functions in the mind. Minsky also agrees with the evolutionary psychology view of the mind, in which the mind consists of several different kinds of components, and the different "ways of thinking" are what happens in the brain when a certain set of components is active. The emotional state, he says, switches these components on and off. The challenge for artificial intelligence is now to explore how to combine such different "ways of thinking." The pain from a thorn and hunger are emotionally similar because in both cases they represent a switch in focus. Japanese and Korean scientists have already earned a reputation for their research in robotics, and there are now enthusiastic predictions (Ingenjören, no. 3, 2007) that robotics will be responsible for more than five per cent of these countries' GDP within just a few years.

5 Alan Turing's paper "Computing Machinery and Intelligence" is one of the most frequently cited in modern philosophical literature, according to the Stanford Encyclopedia of Philosophy. It gave a fresh approach to the traditional mind-body problem by relating it to the mathematical concept of computability that he himself had introduced. The concept of symbols representing perception interpretations has already been discussed elsewhere in this thesis.

The prediction is that these robots will operate not within industry but in our homes, hospitals and everyday life, becoming as common as mobile phones are today. Hopes are high that robots will be social companions in our future society. The intention with the new breed of robots is to design robots that make us feel better, not robots that replace humans. One key aspect for this human-computer interaction to work is that the robots can communicate through emotions. Besides all that, there is the entertainment value of agents that appear to have feelings in computer games like "Black & White" and "The Sims", where the emotional and bodily state of the agents determines their behaviour. Even greater enhancements would be achieved if the agents understood other agents' behaviour and consequently adjusted their own behaviour in relation to the others. Another benefit one could reap is in simulations containing realistic virtual characters, regarding behaviour studies of pedestrians in emergencies or safety check-ups. In turn, looking at the behaviour of agents in the simulation can tell us how good the cognitive and emotional model is.

2.5 The classification of emotions and feelings So where has all this introductory talk on emotions and the language of thought brought us? It states that living entities don't come into this world as empty slates; instead they have a vast number of preconfigured competence modules running in parallel and exchanging information. One of these component modules, according to Pinker (1997), is the emotions, which help select high-level goals, both long-term and short-term. Emotions have evolved to deal with the anticipation of emergencies in unexpected situations. Emotions have two main purposes: to communicate intentions to other individuals and to stimulate behaviour that increases the individual's chances of survival. From this, Conte and Plutchik (1995) have argued that "emotions are present in all organisms." So far, little has been said concerning feelings. It is not unusual to make a distinction between feelings and emotions. Daniel Gil'Adi, for example, distinguishes them as follows: "What is the difference between feelings and emotions? E-Motion is preparing and anticipating action. Feelings are the internal expression of the emotion and can be differentiated from body sensations and states: 'feeling cold' or 'feeling depressed.' The emotions 'behind' the feeling of depression are sadness or anger" (EQ Today). This would mean that feelings are the cognitive part of emotions, evoked by thinking; the verbal report of an emotional state would be the feeling. Emotions have the power to transform the living entity. It is common to first think of emotions in terms of feeling states, but feelings are probably too ambiguous and unstable to serve as a basis for a general theory of emotions, even if Damasio (1995) has expressed more confidence in feelings than many of his colleagues.
Emotions are easy for us to link to reactive behaviour, and therefore easy to study in animals: avoidance when fearful, attacking when angry, and distress signals such as crying when sad. The debated question is still what worms on a hook and boiling lobsters actually feel (ABC News, 2005). Can we avoid projecting our own feelings onto animals in these cases? Though there is no consensus on the definition of emotion, there is general agreement that the emotional response in humans involves these four related sets of processes:

● neurophysiological processes (the autonomic nervous system and neuroendocrine activation through the hypothalamus), e.g. hormone cycles, sleep, diet, drugs

● motor- or behaviour-driven processes, e.g. facial expressions, crying, changes in posture and tone of voice, muscle tension

● motivational processes, e.g. emotions provoked by stimulation such as pain or hunger

● cognitive processes (the subjective awareness and verbal reporting of "feeling" states), e.g. feelings evoked by thinking So the word feeling refers to the subjective, cognitive domain of emotional response, while the word emotion refers to the neurophysiological and motor-driven domains. Emotion is an umbrella term which includes the situation, the perception of the situation and the sensational response, sometimes even the feeling related to the subjective evaluation of the situation. If we want to describe the emotion process we can essentially use three different models as a means of organizing data in the fields of emotion, personality and psychopathology (Conte & Plutchik, 1995). The three models are the structural, the sequential and the derivative.

The structural model: specifies a limited number of basic emotions and the relations among them. The basic components of the feelings are examined as variables in mathematical equations, and the emotions studied may be compounds and not only basic ones. There can be significant interaction between the variables studied despite the assumption that they are independent of one another. The metaphor of an artist's palette is suitable here: from a palette consisting of only three basic colours one creates almost innumerable variations and shades depending on how the portions of each colour are mixed. Basic emotions are discussed in detail in section 2.6.

The sequential model: is concerned with describing the complex sequence of events that generates an emotion, from the initial triggering to the final action. First, there are the stimulus events that initiate the chain of reactions we call emotions. Stimulus events alone are not enough to automatically produce emotional states; the events need to be interpreted before the next step in the chain can happen. According to this model: "certain stimuli are cognitively evaluated as dangerous, others as enemies and still others as friends. ... Based on such cognition, the subjective feelings of emotions occur." (Conte & Plutchik, 1995). The Appraisal Theory section gives more detail on this. The derivative model: specifies the relations between emotions and personality. When we observe an emotional reaction occurring frequently, we start to describe the individual via personality traits: someone who is often fearful is thought of as shy or timid; someone who is often angry or disgusted is described as hostile. Personality is related to the concepts of temperament and mood (see section 2.8).
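The sequential model's chain, stimulus followed by interpretation followed by emotion, can be sketched in a few lines. The stimulus categories and resulting emotions below are illustrative assumptions, not part of Conte and Plutchik's own formalism:

```python
# Sketch of the sequential model: stimulus -> interpretation -> emotion.
# The example stimuli and emotion labels are invented for illustration.

def interpret(stimulus: str) -> str:
    """Cognitively evaluate a raw stimulus into a meaning category."""
    meanings = {"snake": "danger", "rival": "enemy", "ally": "friend"}
    return meanings.get(stimulus, "neutral")

def emotion_for(meaning: str) -> str:
    """Map the interpreted meaning to a subjective feeling."""
    feelings = {"danger": "fear", "enemy": "anger", "friend": "joy"}
    return feelings.get(meaning, "indifference")

def sequential_appraisal(stimulus: str) -> str:
    # The stimulus alone is not enough: it must pass through
    # interpretation before an emotional state is produced.
    return emotion_for(interpret(stimulus))
```

The key property of the model is visible in the composition: removing the `interpret` step would wire stimuli directly to emotions, which is exactly what the model denies.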

2.6 Basic Emotions The concept of basic emotions, the emotions that are hard-wired into the living entity's brain, is fairly widely accepted. There have been numerous attempts to characterize the structure of emotions with a set of primitive variables (typically fewer than ten) from which all emotions can be built by

mixing the primitives (Conte & Plutchik, 1995; Ekman, 1999). For example, the emotions that Plutchik lists as basic are: anger, fear, sadness, joy, disgust, surprise, curiosity and acceptance. To visualize Plutchik's basic emotions you can imagine a pie divided into eight equal parts. Starting at "12:00" and going clockwise are the eight emotions in complementary pairs, where opposite pieces of the pie represent opposite emotions: acceptance and disgust, fear and anger, surprise and anticipation, and sadness and joy. He derived this list from psychologically grounded behavioural observations, and he also justifies the emotions on evolutionary grounds by relating each to behaviour that promotes the survival of the individual. The emotions are thus the underlying reason for behavioural changes. Other approaches propose different basic emotions; some consist of fewer emotions, as few as only ecstasy, terror and despair, while others have many more. The different lists reflect the different intentions of their inventors, depending on each researcher's area of focus: some lists focus on body chemicals, some on cultural similarities, and some on behaviour. Basic emotions are commonly used in models that describe facial expressions (Cañamero, 2000, 2003); the motivation is that facial expressions manifest a set of basic emotions that are universally recognisable. This field of study has produced some dubious results, and therefore serious critique has been raised: "Our own view is that the search for and postulation of basic emotions is not a profitable approach. There seems to be no objective way to decide which theorist's set of basic emotions might be the right one" (Ortony et al., 1988).
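The wheel of complementary pairs can be represented directly as an eight-slice circle where the opposite emotion sits four slices away. The ordering below is one arrangement consistent with the four opposite pairs listed above (the text's list names curiosity where the pairs use anticipation, so this sketch uses anticipation):

```python
# Plutchik's eight basic emotions arranged as a wheel; opposite
# slices (four positions apart) represent opposite emotions.

WHEEL = ["joy", "acceptance", "fear", "surprise",
         "sadness", "disgust", "anger", "anticipation"]

def opposite(emotion: str) -> str:
    """The emotion four slices away is the complementary opposite."""
    i = WHEEL.index(emotion)
    return WHEEL[(i + 4) % len(WHEEL)]
```

Adjacent slices on the actual wheel also blend into compound emotions (the palette metaphor from the structural model), but the opposite-pair lookup is the part the pie description above makes explicit.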

2.7 Appraisal Theory Emotions are described by some researchers (Plutchik; Ortony et al., 1988; Roseman, 1996) as states resulting from processes involving cognitive and non-cognitive evaluation of internal and external sensations in relation to the individual's beliefs and desires. This evaluation is known as appraisal. Cognition here denotes the general information-processing mechanism, while appraisal focuses on the evaluation of sensations (sensory information incoming from the senses6). The relation is that appraisal produces emotions by evaluating the sensation of an event along a number of characteristic dimensions. The appraisal dimensions are not fixed; several different sets of dimensions have been proposed in the literature. Mainly it is the number of dimensions that changes between the models, suggesting how complete they are. And since we need to implement the model in a computer, we realise that the more dimensions a model has, the more difficult it will be to implement. Cognitive expectation is also an important factor. For example, suppose you are playing a game of chance and do not expect to win; then you will not feel so upset if you lose. But if you had manipulated the game, you would probably be very disappointed and surprised if you lost. If we return to basic emotion theory, we would use appraisal to differentiate between emotional responses. The appraisal would be regarded as the sensory information processing, and the emotions

6 The senses are: sight, hearing, touch, smell, taste, balance, thermoception (heat and cold), nociception (physiological pain) and proprioception (body awareness).

would emerge from this process. In a time-invariant cognitive system the emotional response would always be the same, but in a time-variant cognitive system the emotional response would change. "A mere catalogue of typical precipitants for particular emotional reactions will have only limited use because the consequent emotions are the result of the attribution of meaning to the occasion and this meaning will determine the emotion" (Ortony et al., 1988). There also exists a fast, automatic, non-cognitive, unconscious appraisal system that gives rise to emotions (LeDoux, 1996), sometimes regarded as the pre-cognitive system; an example is the previously discussed amygdala circuit. Emotions are not only generated by reasoning but also by low-level, non-cognitive, bodily processes; for instance, the act of smiling can make you happier. This aspect can only be mapped into software agents metaphorically, but it is interesting for game agents interacting with a virtual environment. Finally, we will have a closer look at two of the more interesting appraisal theories and their usage.

The OCC Model The most popular model of cognitive appraisal is the OCC model, named after its inventors Ortony, Clore and Collins (Ortony et al., 1988). It says nothing about the dynamics of emotions or the interactions between emotions. The OCC model can be regarded as an attempted outline of how appraisal is made from cognitive expectations in relation to the agent's goals, beliefs and personal standards. In his book The 22 Non-Negotiable Laws of Wellness, Greg Anderson wrote: "Let us be about setting high standards for life, love, creativity, and wisdom. If our expectations in these areas are low, we are not likely to experience wellness. Setting high standards makes every day and every decade worth looking forward to." The OCC model accounts for how arousal and appraisal interact to produce emotions, and at the same time registers valenced reactions (positive or negative) to situations, agents and objects. The interaction between emotions plays no part in the cognitive appraisal process, but emotions influence the priority of goals and therefore have an indirect effect on the appraisal process. The OCC model derives its popularity from the detailed and structured way in which it dictates how 22 emotion types are spawned from events, actions and objects. The original model is not flawless, though. It has no memory of past events and therefore regards the same recurring event, action or object as time-invariant. The problem with this standpoint is that the desirability of the event, action or object is assumed not to change after its first occurrence, so the computed intensity of the emotional stimulus must remain the same. Also, there is no emotion of surprise in the OCC model; instead surprise is classified as a cognitive state.
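The core OCC idea of valenced reactions can be illustrated with a minimal sketch: an event's consequences are appraised against the agent's goals, and the sign of the resulting desirability selects a positive or negative emotion type. The goal weights, effect encoding and emotion names here are illustrative assumptions, not the OCC model's actual 22-type taxonomy:

```python
# Toy sketch of OCC-style valenced reactions to event consequences.
# An event is described by how it affects each goal (-1..+1), and
# goals carry an importance weight (0..1). Both are invented here.

def desirability(event_effects: dict, goals: dict) -> float:
    """Sum each affected goal's importance, signed by the effect."""
    return sum(goals.get(g, 0.0) * effect
               for g, effect in event_effects.items())

def event_emotion(event_effects: dict, goals: dict):
    d = desirability(event_effects, goals)
    if d > 0:
        return ("joy", d)        # desirable consequence of an event
    if d < 0:
        return ("distress", -d)  # undesirable consequence
    return ("indifference", 0.0)
```

Note that this sketch inherits the time-invariance flaw discussed above: the same event always yields the same intensity, since nothing here remembers past occurrences.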

The Roseman Model Another appraisal theory, more recent than the OCC model, was proposed by Roseman (1996). Roseman's model has proved well suited for computer implementations. It is much simpler than the OCC model and is grounded in studies of human appraisals. The model

proposes how 13 different emotions are generated. Roseman (Roseman, 1996; Ruebenstrunk) suggests the following six dimensions upon which the emotions can be built: 1. Unexpectedness: dealing with personal surprise. 2. Situational state: whether or not a person possesses a motivation to achieve a desired situational state. 3. Motivational state: whether the situation agrees with the motivational state of the person. 4. Probability: whether or not an outcome is certain. 5. Control potential: whether or not a person can deal with the outcome of an event. 6. Agency: emotions felt toward other people due to events seen as caused by them. How far Roseman's model can be proven empirically cannot be said. One weakness of the model, however, is that it does not tell how to compose different appraisals. For example, consider a situation in which one agent makes two different appraisals, as when the agent thinks someone has wronged it but at the same time recognises that it is not itself without blame; in this case the model does not clearly predict the resulting emotion. Compared to the OCC model (which does not even supply formalization rules for all of its defined emotions), the Roseman model is still interesting. The Cathexis agent architecture (Velásquez, Maes 1996) has extended the Roseman model to cope with blended emotions. Picard (1997) suggests that the simple structure of Roseman's model can be translated quickly into rules: "Overall, it shows promise for implementation in a computer, for both reasoning about emotion generation, and for generating emotions based on cognitive appraisals."
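Picard's observation that the model translates quickly into rules can be sketched as follows. This toy rule table covers only a handful of the 13 emotions, uses only four of the six dimensions, and simplifies Roseman's actual table; it is meant only to show the rule-based shape such an implementation takes:

```python
# Simplified rule translation of a few Roseman-style appraisals.
# The dimension encoding and emotion assignments are illustrative,
# not Roseman's exact published table.

def roseman_rules(unexpected: bool, motive_consistent: bool,
                  certain: bool, agency: str) -> str:
    """agency is one of 'circumstance', 'other', 'self'."""
    if unexpected:
        return "surprise"
    if motive_consistent:
        # motive-consistent outcomes split on certainty
        return "joy" if certain else "hope"
    # motive-inconsistent outcomes: blame assignment selects the emotion
    if agency == "other":
        return "anger"
    if agency == "self":
        return "regret"
    return "distress" if certain else "fear"
```

As the text notes, such a table cannot compose two simultaneous appraisals (e.g. agency 'other' and 'self' at once); each call yields exactly one emotion.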

2.8 Temperament and Mood Personality influences the emotional response we can expect from an individual. It is still debated whether the personality of a person depends on the environment the person is brought up in or is something set before birth; most likely it is a combination of both. Pinker (2002) shows that the temperament of adolescents and adults is predictable from their behaviour as babies. It is not obvious how temperament influences the emotions, but it is apparent that it does. Even parents of conjoined twins have reported that the twins have two different temperaments and two different personalities, even though they are biochemically and physically completely connected. Temperament apparently is a property of the nervous system (Picard, 1997). An analogy to temperament would be the shape of a bell. Bells with different shapes will not emit the same sound, even when struck with exactly the same force and hammer; one bell may need to be struck much harder to emit any sound at all. The bell's shape is analogous to temperament: both influence the response to a stimulus (Picard, 1997).

Mood is something that changes over relatively long time scales compared to emotions. The mood is always present, while emotions come and go. The current mood promotes some emotions over others. A good mood makes negative events seem trivial, even likely to be ignored unless the event is major; on the other hand, a bad mood can make a slightly negative event activate a powerful response. When modelling the generation of mood we have to decide what generates mood. It is not enough to describe mood with only one dimension, such as intensity; Picard (1997) suggests using valence and arousal as dimensions. The valence is negative if the event had a harmful outcome and positive if it had a beneficial consequence. By using two dimensions we can describe a bad mood with greater precision: for example, a bad mood due to anger has a high level of arousal, but a low level of arousal if it is due to grief. A good mood due to a peaceful mind is low in arousal, but high if it is due to winning the lottery. According to LeDoux (1998) there are a number of indirect channels through which the amygdala can influence cortical processing. An important set of these involves the arousal system of the brain: when arousal occurs, cells in the cortex become more sensitive. Arousal is important in all mental functions, and the brain needs just the right level of arousal to perform optimally in a situation.
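Picard's two-dimensional description of mood can be sketched as a small state object holding valence and arousal. The ranges, the weighting by which emotional events nudge the slow-moving mood, and the verbal labels are illustrative assumptions, not Picard's formulation:

```python
# Sketch of a two-dimensional mood state (Picard's valence/arousal
# dimensions). Ranges, update weight and labels are invented here.

class Mood:
    def __init__(self, valence=0.0, arousal=0.0):
        self.valence = valence   # -1 (harmful/bad) .. +1 (beneficial/good)
        self.arousal = arousal   #  0 (calm)        ..  1 (excited)

    def feel(self, valence_delta, arousal_delta, weight=0.2):
        """Mood drifts slowly toward each emotional event (small weight),
        reflecting its longer time scale compared to emotions."""
        self.valence = max(-1.0, min(1.0, self.valence + weight * valence_delta))
        self.arousal = max(0.0, min(1.0, self.arousal + weight * arousal_delta))

    def describe(self):
        v = "good" if self.valence >= 0 else "bad"
        a = "high" if self.arousal >= 0.5 else "low"
        return f"{v} mood, {a} arousal"
```

With two dimensions, angry and grieving bad moods become distinguishable (high versus low arousal at negative valence), which a single intensity value cannot express.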

2.9 Models with Multiple Emotion Layers

Figure 1: A layered model of emotions, with a low-level layer for signals, a medium-level layer for patterns and a high-level layer for concepts. The diagram shows the different components of the model: there is a transition from high-level cognitive mind symbols down to simpler affective physical patterns. Information flows both from high to low and from low to high in a system that can both express and recognize emotions.

A layered approach to representing emotions is well described by Sloman (1981) and Picard (1997). There are several representations one can use; the one used here has a low level for signals, a medium level for patterns, and a high level for concepts. A complete emotional system would need all three levels to be implemented in some way or another.

These layers can be described according to evolutionary age and function (Sloman, 1981, 2003), with the oldest, non-cognitive mechanisms in the lowest layer: those that are inherited within our nervous system. The top layer would be the only layer where the notion of "self" is significant and where we can hold concepts such as shame, guilt and embarrassment, which need to be taught. The low-level layer handles the reactive mechanism by mapping external stimuli to behaviours, enabling fast, predictable responses to environmental changes. Following this idea, the medium level takes care of the deliberative mechanism: the recognition and expression of expressive and non-expressive signals from the environment and from within the individual. This deliberation is the learned association between a recognised stimulus pattern and the situation where this pattern occurs (Sloman, 2001), and takes into consideration goals, beliefs, personal standards and expectations. The top-level layer works with a symbolic rule-based language, dealing with situation-based generation of emotions and with the effect that expectations have on emotions; this layer is able to reason about the internal emotional states. Information flows in both directions: not only are signals interpreted into symbols, but high-level expectations can influence what the low-level layer perceives, that is, how signals are processed. To keep track of the emotional state, the challenge is to imagine a structure that can hold an emotional state which seamlessly covers both representations mentioned above, the signal form and the symbolic form. The layer distinction also becomes clearly necessary if we bring up the concept of actors, who portray emotions that they do not truly have. The actor's concept of showing an emotion is a cognitive symbol that is propagated downward and converted into the physical signals that communicate the emotion. In that sense, the actor does have the emotions, at the signal level, while performing the behaviour that portrays the feeling.

3 Architecture

"My own brain is to me the most unaccountable of machinery -- always buzzing, humming, soaring, roaring, diving, and then buried in mud. And why? What's this passion for?" - Virginia Woolf

3.1 A model to build on

In chapter 2 I have gathered many perspectives on emotions, and while reviewing this material I was able to devise a model. Simply the fact that I had been exposed to the material for a long time revealed a pattern to me, one that has become more obvious during my work, manifesting itself in all the sources I have looked at. I would point out that I am not claiming this is the only conclusion possible, just that it is mine. Here I will present the keystones of my largely philosophical model. First, there are two primary patterns that invisibly shape our reaction to everything that happens:

● In the moment it is our current "State" – physical and emotional.

● In the long term it is our model of the "World view" – the shaper of meaning, emotion, and action. This is our personality.

These patterns can be represented by filters that decide subconsciously what we let in and focus on, and what we "choose" to ignore; more precisely, the model modifies our perception of reality. This is our appraisal of the world. So if we regard the striving to achieve a goal as getting from A to B on a map, then emotions are part of the map description for getting there, a guiding system for fulfilling our needs, like a never-ending law of attraction. If we have the right emotions we are motivated to go in the right direction toward the goal. There is not only one goal in the map of life, but many. We can divide the needs an agent has into two parts. First the personal needs, which probably are common to all living entities with a mind:

1. certainty in what to expect
2. variety, and not too much uncertainty
3. to feel important, to be special
4. connection and love

Then we have the spiritual needs, which probably are unique to the human mind:

5. growth and fulfilment, to improve and have career ambitions
6. to contribute to society

Emotions help us determine which goal is most important for the moment, that is, what an agent should focus on at the present time. With this model I have defined a philosophy that dictates what my model aspires to handle. A thing to remember here is that we don't actually need to know how the thought processes in humans work; we only need to make it appear as if we do.

Let us recall the brain's emotional computer, the amygdala. This unit has two paths in the case of a reaction to a fear stimulus: one that is slow and one that is fast.

● A detailed but slow-acting Thalamus–Cortex–Amygdala circuit.

● A simpler, faster Thalamus–Amygdala circuit bypassing the cortex.

The faster circuit allows us to respond rapidly when threatened. It ties in with a model of preconscious, precognitive emotions that includes the body states. The slower circuit allows for cognitive evaluation of the emotional situation or stimulus so that a thoughtful response can be made, which ties in with the classical, post-cognitive model. In this thesis I will not implement the slow path; it is left as a future improvement.

Figure 2: The Amygdala circuit

The term pattern recognition can be used to describe what the amygdala does to the incoming sense signals. Probably it is an evolutionary process that creates a genetic library of templates for these patterns. Perhaps the unborn child learns some more patterns during its time in the womb, sensing what is happening to the mother, and more patterns from its mother after birth. A corresponding pattern can be found for other emotions as well, even if they physically look different. These circuits will be implemented as simpler appraisals. William James said that "we are sad because we cry and not the other way around"; what this means is that we feel sadness because the emotional state is such that crying is a typical expression. The "emotion" is a label we give to a physical brain process, the level at which that psychological function is represented in the brain. My model will therefore deal with emotions not as conscious feelings but as computational functions of the nervous system. Thus, the emotions themselves are unconscious. For the physical brain mechanisms underlying emotions there would be measurable indicators, ways of collecting electric nerve signals, even if we don't know how to do it in reality; this is the idea behind my Emotional State Module (EMO).

3.2 The Emotional State Module

The construction of the emotional control system that I have implemented is based on Picard's (1997) description of affective signals. My implementation allows the affective signals to be modified by filters in their intensity and length, as well as allowing the signals to have an effect on the filters themselves during runtime. The responsibility of the emotional state module is to maintain a dynamic low-level representation of the emotional states. The term low-level is used here to indicate that emotions are ongoing signals, dynamic in the sense that the states are constantly changing as time progresses. There is no memory of the emotional stimulus associated with the states, other than that the emotional response depends on the previous emotional state and that each state holds a memory of the fading signals. By extending the model we could allow the signals to be associated with the stimuli in some way or another, but there is no great benefit, as the signal will fade and be cleared out.

Figure 3: An example of the filter influence process in the case of an anger signal. A similar diagram can be created for all the other signal types.

In this thesis I will only implement the low-level part of emotions. These low-level signals are the physiological signals that carry the affective information. By limiting myself to the signal representation I will not address the agent's cognitive ability to reason about the ongoing emotional states, or questions such as why the agent should do this or that. The emotional response is of a time-limited nature, and will fall below the level of perceptibility unless it is re-activated. The response can never be reduced below the zero level. Rapid activation causes the perceived intensity to increase until it reaches a saturation level, at which point the response no longer increases. Not all inputs will activate emotions; an unconscious emotion needs to reach a sufficient intensity to pass a threshold and become a conscious feeling. By sufficient I mean that there should not be any fixed threshold: the threshold depends on factors such as current mood, temperament and current cognitive expectancy. The emotional state module assumes that some mechanism elicits an emotional signal beforehand, in response to a sensation. Emotional signals are generated by an appraisal process, such as OCC (Ortony et al., 1988) or Roseman (1996), to describe a sensation that is cognitive (e.g. surprise caused by an object not being present where the agent thought it should be), physical (e.g. fatigue when climbing stairs, pain if health is lost) or motivational (e.g. fear if enemies are present or armour is lost). The creation of an emotional signal requires that the appraisal module decides the signal's attack, sustain and decay durations. These durations will be specific to the event and to the type of emotion intended. The signal is defined by the name of an emotion, an intensity value and a valence, which can be positive, negative or neutral. The sensation is then represented by an emotional signal consisting of three phases: attack, peak-sustain and decay. The signal is then fed into the emotional state module. The emotional state module enables customisation of the influence of each emotional signal. Any emotion signal can be affected positively or negatively by the previous emotional state, and any signal can affect any number of emotional states indirectly (e.g. a happiness impulse can increase the level of happiness and the positive value of temper, but also increase the threshold for negative emotions, like anger, by modifying the filters). The emotional state module is intended to work on three different time scales: a short one, a medium one and a long one.
The short time scale gives a way of selecting well-motivated behaviours in a decision module: attack, withdraw, explore, eat, play, and rest. This short time scale is not used as an emotional state but as a meaningful context for doing something, as it tells what the agent is feeling. Examples of such meaningful feelings are: afraid, angry, sad, happy, disgusted, bored, fatigued, surprised, relieved. Their purpose is to represent emotional awareness, not to represent the underlying emotional state. When a feeling is active, the intensity level of the corresponding emotion on the middle time scale could also be set up to be stimulated by backward propagation, as a sign of focused awareness of the feeling. The middle time scale is used to keep a consistent context of which emotions the agent has: fear, anger, sadness, happiness, disgust, boredom, fatigue, surprise, relief. The long time scale represents the long-lasting mood state, which I propose can be described using two dimensions, pleasure and arousal, possibly more. More details on the mood definition are given later. One can note that fatigue was placed among the emotions above, and fatigued among the feelings, even if the general tendency is to classify them as a drive. Fatigue in particular does not pose a problem here, but if we were to include the drive of hunger we would need some consideration. The general concept of a drive is that the intensity of the desire for it increases as time goes by, as with hunger (i.e. its level can be abruptly reduced by consummatory actions rather than by emotional impulses). But if the concept of the desire for this drive is inverted, so that the intensity decreases over time, we can handle drives the way we handle other emotions. To avoid misunderstandings we could rephrase the name of the drive, renaming hunger into satiation or energy. If we don't, we need to invert the intensity and decay functions. This is because my states have a valence that is either negative or positive, but the signals do not support negative intensity, both because of the parametric functions describing the signal phases and because it does not make neurobiological sense to distinguish negative and positive signals. If we did have negative signals we would have to consider the meaning of a negative state, even when the state has a positive valence; therefore, if we extend the support for negative signals, we have to make sure the state never shifts valence. Let me give an example where one of your friends has hurt you in some way and you became angry with him. After some time it turns out he did not hurt you intentionally, so you start feeling regret about your attitude toward him. In this case my model does not support the idea that you can simply send an inverse, negative signal to undo the previous anger. The problem is that the feeling of anger has already faded away; your notion of anger related to a person is in any case cognitive. I suggest that the appraisal in this case would generate emotion signals for relief and happiness, not a negative anger signal.
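To make the signal definition above concrete, the following is a minimal C++ sketch of how such a signal could be represented: an emotion name, an intensity, a valence, and the three phase durations decided by the appraisal module. All names are illustrative; this is not the actual EMO code.

```cpp
#include <string>

enum class Valence { Negative, Neutral, Positive };

// An emotional signal as described above: an emotion name, a peak
// intensity, a valence, and the durations of its three phases.
struct EmotionalSignal {
    std::string emotion;   // e.g. "anger", "happiness"
    double      intensity; // peak intensity, always >= 0
    Valence     valence;
    double      attack;    // seconds spent rising toward the peak
    double      sustain;   // seconds held at the peak
    double      decay;     // seconds spent fading back toward zero
};

// Which phase is the signal in, t seconds after it was emitted?
std::string phaseAt(const EmotionalSignal& s, double t) {
    if (t < s.attack)                       return "attack";
    if (t < s.attack + s.sustain)           return "sustain";
    if (t < s.attack + s.sustain + s.decay) return "decay";
    return "expired";
}
```

Once a signal has passed through its decay phase it is expired and can be cleared out, matching the observation above that there is no benefit in keeping faded signals around.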

Signal representation for emotions

I have followed the suggestion (Picard, 1997) to model an emotional signal on the characteristic waveform that is created by striking a bell. When a bell is struck once it emits a loud sound that quickly builds to a peak, and then the intensity decays until it is too faint to be heard. The input signals originate from sensations that are both cognitive and physical. Every emotional sensation decays over time, though an emotion such as hope could persist as long as the situation stays the same. Here I take the position that hope will decay even if the situation stays the same; however, a new cognitive appraisal of the same situation may produce new emotional signals increasing the level of hope. Several analogies can be made between the ringing bell and emotions. Emotions, like sounds, decay over time unless they are re-stimulated. When a bell is struck again before the sound of the previous strike has faded, it will in fact sound louder and louder the more times you strike it. Emotions have the same property: several repeated small stress-producing events, each too weak on its own to be noticed, may build up to a greater level of stress than a single larger stress-producing event would cause at the time it occurs. One problem with this analogy is of course that there is no such thing as an affective signal that we can measure the way we can measure a sound wave; however, it is convenient to reason about emotion intensity instead of talking about hormonal concentrations, neurotransmitters, nervous system activity, or anything else that cannot readily be measured. There is one more factor to consider, and that is personality. In the case of the bell we could conclude that the shape of the bell gives different characteristics to the sound it emits.
In the case of humans, personality is considered an attribute of the nervous system, a configuration due to circumstances in our environment and our genetic inheritance (Kagan, 2001). Picard (1997) suggests that this can be simulated with sigmoid functions7, which is the method I used; I will show later how it was done.

Figure 4: A sigmoid function curve

The response curve of the bell that I am going to use for the human response curve has been described mathematically (Picard, 1997) as:

y = a * e^(-b*t)

where y is the output intensity at time t, the parameter a controls the height of the response and b controls how fast it decays. For the attack phase of the signal we simply use the inverse of this function. The response function is actually a model of a galvanic skin response (GSR). GSR is a method of measuring the electrical resistance of the skin; the mapping of skin areas to internal organs is usually based on acupuncture points. The GSR is highly sensitive to emotions in some people, and that is why it is interesting to model, although one cannot identify the specific emotion being elicited from it. Fear, anger, the startle response and sexual feelings are among the emotions which may produce GSR responses similar to that of figure 5.
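The decay formula above, and one possible reading of "the inverse" for the attack phase (intensity rising from zero toward the peak), can be sketched as follows. This is an illustration of the maths, not the actual EMO implementation.

```cpp
#include <cmath>

// Decay phase of the bell-like response (Picard, 1997): y = a * e^(-b*t),
// where a is the peak height and b controls how fast the intensity fades.
double decayResponse(double a, double b, double t) {
    return a * std::exp(-b * t);
}

// Attack phase as the mirrored shape: intensity rising from 0 toward
// the peak a (one possible reading of "the inverse of this function").
double attackResponse(double a, double b, double t) {
    return a * (1.0 - std::exp(-b * t));
}
```

At t = 0 the decay response equals the peak a and the attack response equals zero; as t grows, the former fades toward zero and the latter saturates toward a.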

Figure 5: An approximated response curve of a bell, in its three phases. (Note that the image is not drawn to scale.)

The shape and timing of the human response curve to emotional signals are influenced by personality.

7 http://mathworld.wolfram.com/SigmoidFunction.html

Human emotions are time-variant. If you are in a happy mood one day and get scolded by your boss, you will probably not take it nearly as hard as if you get scolded the next day, when you are already in a bad mood. Consequently, the exact same input will generally not generate the exact same response; it depends on the current emotional states, even if the exact same appraisal was made. The emotional signals need to be filtered to prevent the states from running off into extreme heights or depths. Yet another property of emotions that we need to cover is that of activation. Most people can tolerate some level of anger-producing stimuli before they actually feel emotionally angry; on the other hand, if a person's mood is low they will tolerate a much lower level of anger-producing stimuli. The signal cannot have a linear effect on the feelings, as there are the previous state and the personality to consider. The influence from these factors can be represented using a sigmoidal non-linearity as a filter:

y = y0 + g / (1 + e^(-(x - x0) / s))

where x is the input signal, originating from within or from outside the agent, and the height of the curve is the output y. Very small values of x produce no output, and very high input values all give the same maximum height; in between the behaviour is more linear. Small values of s make the curve steeper – this would correspond to the personality of the individual – and we would expect this transition to be steeper for negative than for positive emotions, meaning that a signal's effect on a positive state will be more linear. The parameter s should be a variable that remains stable. The parameter x0 shifts the curve left or right; when it is pushed to the right, a stronger input is required to activate an output. It is the mood of the individual that determines whether the curve shifts right or left, effectively changing the threshold. The parameter g is the gain put on the curve, the overall amplitude.
This gain could be coupled to the arousal of the individual; it would correspond to the subjective intensity someone experiences. The parameter y0 shifts the curve up or down. The value of y0 can be given by cognitive expectations, such as raising or lowering the felt emotion through the expectancy of victory or defeat in some event. It is g, y0 and x0 that will change over time if we hook them up so that they are influenced by other components. In my tests I have kept the value of g fixed.
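The sigmoidal filter just described can be written directly from its formula. The function below is a straightforward transcription; the parameter roles in the comments restate the text above.

```cpp
#include <cmath>

// The sigmoidal non-linearity used as a filter:
//   y = y0 + g / (1 + e^(-(x - x0) / s))
// x  : input signal intensity
// g  : gain (overall amplitude), could be coupled to arousal
// x0 : horizontal shift (activation threshold), driven by mood
// s  : steepness, a stable personality parameter
// y0 : vertical shift, driven by cognitive expectations
double sigmoidFilter(double x, double g, double x0, double s, double y0) {
    return y0 + g / (1.0 + std::exp(-(x - x0) / s));
}
```

At x = x0 the output is y0 + g/2; far below the threshold it approaches y0, far above it saturates at y0 + g, and pushing x0 to the right weakens the response to the same input.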

Figure 6: When using sigmoid functions it is possible to support not only one transition region but several; this would correspond to going through several stages of an emotion.

Mood of an emotional state

As described earlier, the emotional state module is intended to work on three different time scales. The first is used for activating short-time feelings, the second represents medium-time emotional states, and the third represents long-time mood. The feelings' role is to activate behaviours. The mood has the role of manipulating how emotional signals are perceived. Each emotional state can be associated with filters that process an incoming signal before it is inserted into the state. A filter uses a sigmoid to adjust the peak of incoming signals in relation to the state of other components in the emotional module (Picard, 1997); this is where the effect of mood comes into play. Each emotional state can be associated with up to four filters, used to control the effect of the emotional signals on the emotion states. Three filters modify the durations of the phases of the emotional signal, one per phase: one for the attack, one for the sustain and one for the decay duration. The fourth filter modifies the intensity of the signal in relation to the value of other emotions. The signal is then summed with the previous emotion state, provided the amplitude of the emotional signal is higher than a certain threshold for that emotion state. The threshold varies depending on the level of other emotion intensities (i.e. the threshold for feeling angry will increase if the emotion state for happiness is high). Emotional signals that do not overcome the threshold are discarded. The threshold is either fixed to a value picked by the designer or modified in real time by the influences from other states. When modelling mood we have to decide what generates mood. A very simple model, for example, would be to let every signal influence the mood, even if the intensity of the signal is lower than the activation threshold for the emotion.
But, as discussed previously, one dimension is rarely enough to describe mood. I propose a different concept for calculating mood than the valence and arousal proposed by Picard (1997). Her suggestion links mood to personality, which I don't agree with. Instead I want to use the concept of pleasure/displeasure, rather than valence, to describe one of the dimensions of mood. The displeasure level varies according to the external environment: threats, pain and the pressure of accomplishing time-limited tasks will reduce the level of pleasure. I want arousal to describe how intense the outcome of labour was on the physical energy level, so that we can link the physical drives, such as hunger, thirst and fatigue, to a reduction of the arousal level. The previous concept of valence can be derived from my concepts of pleasure and arousal by taking the difference between them: if the displeasure level is lower than the arousal, the valence is positive; in the opposite case, with the displeasure level greater than the arousal, the valence is inclined to be negative. The agent will get tired and become inclined to fall asleep if the arousal level drops very low.
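The derivation of valence as the difference between the two proposed mood dimensions can be sketched as below. The struct, function names and the sleep threshold value are my own illustrative choices.

```cpp
// Mood in the two proposed dimensions. Valence is not stored but derived,
// as described above: positive when displeasure is below arousal.
struct MoodState {
    double displeasure; // raised by threats, pain, time pressure
    double arousal;     // lowered by drives such as hunger, thirst, fatigue
};

// Valence as the difference between the two dimensions:
// > 0 positive mood, < 0 negative mood.
double derivedValence(const MoodState& m) {
    return m.arousal - m.displeasure;
}

// Very low arousal makes the agent tired and inclined to fall asleep.
// The threshold value here is an arbitrary illustration.
bool inclinedToSleep(const MoodState& m, double sleepThreshold = 0.1) {
    return m.arousal < sleepThreshold;
}
```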

Considerations

There is no simple way of finding the values of the sigmoid response function parameters (g, x0, s and y0), or suitable values of a and b for the signal response curve. But it is easier to argue for parameters that have a qualitative connection to behaviour than to know the exact parameter values of any other model; the shape of the sigmoid curve will visually tell us more about the personality of an individual than any numeric values correlated to the concentration of some chemicals would. The diagram of a curve is much more visually informative. The good thing with a signal-based setup is that subtle changes can accumulate over time: even if the system receives inputs so small that they will not activate any behaviour, they will still, if we configure the filter network to do so, influence the mood, which in turn will change the threshold for the corresponding emotion. The emotional model proposed by Picard (1997) only showed how we could model mood, and was somewhat simple. I have extended it so that not only can we model mood, but we can have a sigmoid filter for every emotion, each with its own unique activation and saturation thresholds. This is a significant improvement over the previously proposed model. The different emotions also have inhibitory and excitatory influences on each other, which was not proposed in Picard's original model. The input of negative emotion signals will also contribute to a negative mood and shift all the sigmoid functions for the emotions: to the left for the negatively valenced emotions, making them easier to activate, and to the right for the positively valenced emotions, making them more difficult to activate.
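The mood-driven threshold shift described above can be sketched as a small helper that moves each emotion's x0 left or right depending on the emotion's valence. The scaling constant k is an assumption for illustration; the thesis only specifies the direction of the shift, not its magnitude.

```cpp
// Shift an emotion's activation threshold (the sigmoid's x0) with mood:
// a negative mood makes negatively valenced emotions easier to trigger
// (threshold moves left) and positively valenced ones harder (moves right).
double shiftedThreshold(double baseX0, double moodValence, bool negativeEmotion) {
    const double k = 0.2; // strength of the mood influence (assumed value)
    return negativeEmotion ? baseX0 + k * moodValence
                           : baseX0 - k * moodValence;
}
```

For example, with mood valence -1.0, anger's threshold drops below its base value while happiness's threshold rises above it, which is exactly the asymmetry described in the text.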

3.3 The Decision Making Module

The decision making module is responsible for shaping the destiny of the agent. A complete decision model should be able to reason about the following three steps:

1. What am I going to focus on?
2. Why should I do it? Why not? What does it mean to me?
3. What am I going to do?

And then commit to follow through on its decision. In my implementation I have made oversimplifications, in a way that predefines what the agent should focus on. The emotions should be an integral part of this decision. The agent is not allowed to reason about "why it should do it"; this would require a cognitive component. Therefore the only thing left for the agent to decide dynamically is "what to do", and the emotions are currently only used to drive a reactive behaviour. For a decision making module, I would like to see an implementation of a behavioural network, which I believe would work nicely by using the emotions both as goals and as conditions.

3.4 EMO Integrated with ORTS through a Middleware

Adding emotions to ORTS scripts would be easy enough by putting the relevant attributes in the object blueprints. The tricky part would be to find a good way of modifying other actions based on those parameters (e.g. automatically fleeing8). Using probability theory to ignore actions based on morale or fear would be simple enough, but one would need to modify all actions to include this check, and that is just tedious. One would also probably need to tweak constants to account for the potential of sending the same actions every cycle, because if an action is ordered frequently enough the order will be carried out anyhow. That is the downside of else-if chains; instead we need a very good decision making mechanism. ORTS is a software package under constant development; we should regard it as beta or alpha software. To reduce our dependencies on ORTS, and to avoid the need to change any of the current ORTS code to fit our needs, I decided to implement my client part in two layers. The bottom layer is a middleware interface to ORTS; this layer subscribes to the state view updates and uses my adapter toolkit to minimise the dependency on the ORTS environment. The second layer consists of the component-based architecture, meaning that any object is described by an identification marker, not by an instantiated class. This component-based system makes it easy to build the components independently of each other and hook them onto an identification marker; we can hook an arbitrary number of components onto any identification marker. The identification marker turned out to be a game object pointer in the case of ORTS, but it could just as well be replaced by any type of identification number. The ORTS interface is responsible for identifying new, vanished and dead objects and associating or disassociating components with those objects.
In practice the disassociation process frees up memory and the association allocates memory dynamically. Finally, the emotional state module was integrated as one of the components. The benefit of this component system is that the infrastructure of the agent is worked out for free. Any other module we want to add, for example a cognitive module implemented using XSB or any other first-order logic module, only needs a custom-built component interface to be able to communicate with the other modules that are already in place. Another nice property of these components is that they abstract away the dependency on ORTS. The components rely on the middleware to bridge the communication between the two parts. The middleware is responsible for updating the components at each view update and for propagating any actions downwards to ORTS.
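The handle-to-components association described above could look roughly like the sketch below: components are attached to and detached from an id handle, with no object class in sight. The class and member names are illustrative, not the actual middleware code.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <string>
#include <utility>

// A handle can be a game object pointer or any unique id number.
using Handle = std::uintptr_t;

// Base interface shared by every component, so the manager can treat
// them uniformly.
struct IComponent {
    virtual ~IComponent() = default;
};

// The emotional state module attached as one component among others.
struct EmotionalStateComponent : IComponent {
    double anger = 0.0;
};

// Attach/detach components on an id handle instead of an inheritance tree.
class ObjectManager {
public:
    void attach(Handle h, const std::string& type, std::unique_ptr<IComponent> c) {
        components_[h][type] = std::move(c); // association allocates
    }
    void detach(Handle h) { components_.erase(h); } // object vanished or died
    IComponent* get(Handle h, const std::string& type) {
        auto obj = components_.find(h);
        if (obj == components_.end()) return nullptr;
        auto c = obj->second.find(type);
        return c == obj->second.end() ? nullptr : c->second.get();
    }
private:
    std::map<Handle, std::map<std::string, std::unique_ptr<IComponent>>> components_;
};
```

When the ORTS interface reports a vanished object, `detach` drops all of its components in one call, which matches the memory behaviour described above.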

8 By first finding all enemy units within a certain radius through a scripted intersection with a circle, then checking whether the unit can see enemies, and finally moving towards the "least frightening" direction.

Figure 7: UML diagram for the component system. The StateEventHandler connects the ORTS client to my component system. The ComState is the Emotional State Module. The ObjectManager is simply a facade class for the component API.

If one relies on objects being part of an inheritance tree one can quickly run into problems when one wants different objects to have certain abilities, while others should not have the abilities that are forced upon them by the inheritance. In fact, static inheritance hierarchies do not stand up to the challenge of modern development; extending an already present architecture that is still under development demands a flexible strategy that can cope with frequent changes of the requirements. What we would like instead is a system where we can combine existing functionality into new objects and add new functionality to existing objects without having to refactor or reshuffle the inheritance hierarchy. A better solution is to use design patterns, a body of theory that contains a library of strategies for creating objects from component parts. Each component is designed to be self-maintained so that it can perform a particular task on its own. The object is built up by composing several components together. There can be an Entity component that places the object in the world, and a Render component that associates the object with a 3D mesh. In ORTS we do not need the Render component, because that function is handled by the ORTS engine. My middleware presents a strong alternative in the form of a component-based system. In theory, a non-programmer would be able to design and modify objects, without programming the interactions between the components, by using a data-driven solution (e.g. XML configuration). This leads to quicker turnaround time for testing a design change, and the programmer no longer has to spend time refactoring the system every time the architecture design changes. Unfortunately, this would require that some graphical design tool first be implemented.

Figure 8: A simple object built from components

All components are derived from a base interface that makes it easy to manage them uniformly in the manager, which keeps track of all the component interfaces belonging to handles. The manager is needed for organization because there is never any real object per se; there is actually no need to retain an object class, since all the components are associated with an id handle. The handle can be anything as long as it is unique; it can be a unique number or an object pointer. Each component type is an interface which can have many different implementations. What the component interface does is give a uniform interface to each component, force the components to implement the functionality that the interface promises, and expose only the functions we decide to let it expose; nothing is exposed by default. This is very convenient compared to traditional inheritance chains, as these components are shallow and have tight internal cohesion. The components need to communicate with each other.
This is done either indirectly, by sending messages, or directly, by asking the handle manager for an interface to the component and calling its methods. For example, the Render component would ask the Entity component where the object is placed before rendering the 3D models. Messages come in handy when one wants to announce something without knowing to whom it is addressed, as in the case of a broadcast. A message can be sent to one object or to all objects, to one component or to all components of an object. This works well because each component tells the handle manager which messages it is interested in subscribing to at initialization time (using the observer design pattern).
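The scheme described above can be sketched as follows. All names here (IComponent, HandleManager, MessageBus, and so on) are hypothetical illustrations of the idea, not the actual middleware API: components live in a manager keyed by handle, and indirect communication goes through topic subscriptions.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

using Handle = std::uint32_t;  // any unique value works: a number or a pointer

// Base interface every component derives from, so the manager can treat
// all components uniformly; nothing is exposed by default.
struct IComponent {
    virtual ~IComponent() = default;
};

struct EntityComponent : IComponent {
    double x = 0.0, y = 0.0;  // position of the object in the world
};

// There is no object class: an "object" is just the set of components
// registered under the same handle.
class HandleManager {
public:
    void add(Handle h, const std::string& type, std::shared_ptr<IComponent> c) {
        components_[{h, type}] = std::move(c);
    }
    std::shared_ptr<IComponent> get(Handle h, const std::string& type) const {
        auto it = components_.find({h, type});
        return it == components_.end() ? nullptr : it->second;
    }
private:
    std::map<std::pair<Handle, std::string>,
             std::shared_ptr<IComponent>> components_;
};

// Indirect communication: components subscribe to the message topics they are
// interested in at initialization time (observer pattern); a broadcast then
// reaches every subscriber without the sender knowing who they are.
class MessageBus {
public:
    using Callback = std::function<void(const std::string& payload)>;
    void subscribe(const std::string& topic, Callback cb) {
        subscribers_[topic].push_back(std::move(cb));
    }
    void publish(const std::string& topic, const std::string& payload) const {
        auto it = subscribers_.find(topic);
        if (it == subscribers_.end()) return;
        for (const auto& cb : it->second) cb(payload);
    }
private:
    std::map<std::string, std::vector<Callback>> subscribers_;
};
```

A Render component would, for instance, call `get(h, "Entity")` to ask where the object is, while a broadcast such as `publish("damage", ...)` reaches every component that subscribed to that topic.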

Figure 9: UML diagram for the classes in the clock system

The last thing worth mentioning about the middleware is that it incorporates its own real-time clock and timer functionality (see Figure 9). This clock runs independently of the ORTS notion of ticks; instead it is based on real milliseconds read from the computer's operating system clock. The clock gives a stable time value for each frame, because it is only allowed to advance once per frame. On top of the clock there are timers, which allow time to be sped up or slowed down relative to the clock; a timer can even be stopped for a while before it is let run again. The clock is essential for calculating the intensity of signals and emotions correctly. It also ensures that the emotions will not fade away while the application is being debugged, which would be the case if we used the operating system's clock directly.
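A minimal sketch of this clock/timer idea follows. The names (FrameClock, Timer) and the millisecond bookkeeping are illustrative assumptions; the point is that the clock is latched once per frame and that timers derive their own scalable, pausable time from it.

```cpp
// The clock advances once per frame from an externally supplied millisecond
// value, so every query within one frame sees the same time.
class FrameClock {
public:
    void beginFrame(long long os_ms) { now_ms_ = os_ms; }
    long long now() const { return now_ms_; }
private:
    long long now_ms_ = 0;
};

// A timer runs off the clock's per-frame delta: it can be scaled to run
// faster or slower than real time, and paused without losing its value.
class Timer {
public:
    explicit Timer(double scale = 1.0) : scale_(scale) {}
    void setScale(double s) { scale_ = s; }   // speed up / slow down time
    void pause()  { paused_ = true; }
    void resume() { paused_ = false; }
    void tick(long long clock_delta_ms) {
        if (!paused_)
            elapsed_ms_ += static_cast<long long>(clock_delta_ms * scale_);
    }
    long long elapsed() const { return elapsed_ms_; }
private:
    double scale_;
    bool paused_ = false;
    long long elapsed_ms_ = 0;
};
```

Because intensities are functions of elapsed time, pausing such a timer while stepping through a debugger is exactly what keeps the emotions from decaying away during a debug session.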

3.5 Configuration

I looked at a few different strategies for making the configuration of the modules data driven. The two options were XDS and XML. Even though XDS would have been preferable because of its binary format, I was never able to make the whole XDS toolkit platform independent: the source files were Windows specific, and porting the code to Linux would have been a project in itself. I therefore selected XML. A toolkit called TinyXML seemed to be the best available option; unfortunately, this integration is still on the to-do list.
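To make concrete what such a data-driven configuration might look like, here is a hypothetical XML fragment for composing an object from components. The element and attribute names are invented for illustration; no such schema exists in the implementation yet.

```xml
<!-- Hypothetical object definition: an object is just a list of components. -->
<object name="marine">
  <component type="Entity" x="10" y="20"/>
  <component type="Emotion" temperament="aggressive"/>
</object>
```

A designer could then add or remove components, or tune parameters such as temperament, without touching any C++ code.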


4 Contribution, Future Work and Conclusions

4.1 Contribution

If computers could recognise feelings in users and reason about how an agent's actions will affect the user, we would open the door to many new exciting possibilities. We can expect emotion synthesis to grow in importance as a way to improve software agents; computers and robots will gain better skills to interact with people, to entertain us and to understand us. Several emotional representation models already exist, and some are used within facial animation systems or to influence action selection within autonomous agent architectures. Emotions interact and influence each other, yet these interactions are not the focus of most emotion models. In this thesis this influence is the main focus. The proposed use of signals is a flexible way of representing any arbitrary emotional model, as it can handle physical and cognitive inputs at a low level while still letting us influence it with the concepts of temperament and mood. This flexibility is advantageous, since we know very little about exactly how real feelings function; it makes it easy to model our ideas and test them in a virtual environment. It is my firm belief that there is no single best model for all situations, and the system is therefore made flexible so that it can be adjusted to whatever situation may come up. The best use of this low-level representation is to monitor emotion intensities and blend states from appraisals. The dependency on an appraisal module actually simplifies the implementation of such a module: the appraisal module does not have to consider whether an emotion is a compound. In the example from Roseman's model, where it was possible to make two appraisals in the same situation, the emotional module will instead merge all appraisals into one emotional state automatically. In modern games the key feature is the agents' believability, and in game AI believability is more related to imitation than to rational reasoning or psychological plausibility. Game AI needs to run in real time, which my implementation has been shown to do.

4.2 Future Work

Unfortunately, the proposed low-level signal representation is not well suited for the high-level cognitive reasoning that is sometimes required to trigger an emotion. It would therefore be better to have a separate high-level representation to deal with this kind of reasoning; this is left to future work, and would arguably best be done using first-order logic. There is also a lot that could be done to differentiate physical and cognitive signals. This would be required if we want to distinguish mental health from physical health, as it is possible to feel physically bad and exhausted but still be happy about something. The signal representation that has been used does not require us to make this distinction, and in most cases it is questionable what the benefits would be: what difference is there, for example, between mental pain and physical pain (as in cases of torture)?

It would be interesting to see the emotion module integrated into other applications, for example a facial animation system, selecting different expressive signals in relation to the emotional state of the agent to produce emotional face and body animation. In this thesis I have avoided the area of facial animation entirely, to keep the focus narrow. It could also be interesting to study the effects of noisy signals, representing the unclear thoughts of an individual reasoning under uncertainty, originating from a high-level reasoning layer. The emotional signals in this thesis are rather simple. They could be extended with more details, such as their cause and whether an object, action or agent is the source of the signal, but this thesis argues that such information would be put to better use in other parts of an agent's intelligence. The sigmoid function has relatively few parameters that need to be fine-tuned; the question still unanswered is whether they control useful behaviours, and more extended tests need to be conducted to investigate the performance. For the decision-making module I would like to see an implementation of a behavioural network, which I believe would work nicely by using the emotions both as goals and as conditions.

4.3 Related work

There are many works related to this one. I will not mention every project relating to mine, only those that I have looked at more closely.

Appraisal models

An appraisal mechanism is used to derive emotional values from sensations (Plutchik, 1980; Ortony et al., 1988). The emotional values are produced for each situation and used to influence the decision processes and behaviours of the virtual agent. In my work the influencing mechanism is time-variant, whereas the appraisal models are time-invariant. They do not handle situations where two different appraisals can be made; the emotional network that I suggest resolves such ambiguous situations automatically. The classical appraisal models also do not tell you how to mix the states resulting from the appraisals, whereas mine does. My model therefore works as an extension to the appraisal models.

The eliciting emotion model

An eliciting emotion model (Cañamero, 2000, 2003) assigns a decay function to each emotion elicited with a value higher than a personality threshold. The model is based on emotions triggering changes in synthetic hormones, and as a result new emotions can arise. Emotions influence perception (e.g. a high level of endorphins reduces the perception of pain). Emotions are triggered by external as well as internal events or patterns; for example, fear is triggered if an enemy is nearby, which results in increased heart rate and lower body temperature, and a high level of endorphins can trigger an emotional state of happiness. Motivation intensities, and therefore behaviours, are influenced by emotions. This work does not support any interaction between emotions in its emotion representation, whereas my model does.
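One possible reading of the threshold-plus-decay scheme described above can be sketched as follows. The exponential decay and the function name are my assumptions; Cañamero's model does not prescribe this particular decay function.

```cpp
#include <cmath>

// An emotion elicited above the personality-specific threshold is assigned
// a decay over time; below the threshold it never becomes active at all.
double decayedIntensity(double elicited, double threshold,
                        double decay_rate, double t) {
    if (elicited <= threshold) return 0.0;        // never activated
    return elicited * std::exp(-decay_rate * t);  // fades as time passes
}
```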

The project Oz model

In project Oz (Reilly, 1996) the emotions are arranged in a hierarchy which separates good-mood from bad-mood emotions. To estimate the overall mood, the system sums all the positive emotion intensities using this formula:

Ip = log2( Σ 2^(Ip) ),   p ∈ {positive emotions}

This is repeated for the negative emotions to form In. If In < Ip, good-mood is set to Ip and bad-mood to zero; otherwise good-mood is set to zero and bad-mood to -In. They did it in this simple way because they wanted an unmistakable state for their character to communicate. These emotions influence the forming of new cognitive goals, for example revenge. Emotions also influence perception: if an agent sees two other agents bouncing around, it will perceive them as fighting if it is angry, but if it is happy it will interpret their behaviour as something else. The agents in Oz thus have social interactions that both influence and are influenced by emotions. All the rules and cognitive behaviours can be changed, but only by hard-coding. Hard-coding is flexible in itself, but when it comes to expanding the system with new emotions and behaviours it becomes off-putting. My model allows a mood defined by more than a single dimension, and the idea behind the component-based system is that its configuration will not require hard-coding, while extending it with new components is an easy task.
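The Oz mood computation described above is simple enough to write out directly. This sketch follows the formula and the In/Ip selection rule as stated in the text; the function names are mine.

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Combine a set of emotion intensities as I = log2(sum of 2^I_e), so one
// strong emotion dominates several weak ones (log-sum-exp in base 2).
double combine(const std::vector<double>& intensities) {
    double sum = 0.0;
    for (double i : intensities) sum += std::pow(2.0, i);
    return std::log2(sum);
}

// Returns {good_mood, bad_mood}; one of the two is always zero, giving the
// unmistakable single-valence state the Oz characters communicate.
std::pair<double, double> mood(const std::vector<double>& positive,
                               const std::vector<double>& negative) {
    double ip = combine(positive), in = combine(negative);
    if (in < ip) return {ip, 0.0};
    return {0.0, -in};
}
```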

The Cathexis model

Cathexis (Velásquez, 1996) is a connectionist model of emotion synthesis that tries to model four types of elicitors9 of emotions in humans, following a model proposed by Carroll Izard. The four elicitors in this model are:

● Neural. Background processes that are influenced by hormones, sleep, diet, drugs, etc.

● Sensorimotor. Facial expressions, body posture and muscle tensions that intensify the current emotional state.

● Motivational. Emotions provoked by stimulation such as pain or hunger, and emotions evoking each other.

● Cognitive. Feelings evoked by thinking; this was done using an adaptation of Roseman's theory.

The interesting thing about this model is that, compared to appraisal models, it does not use rules that vary from emotion to emotion, but has only one update rule. Many models of emotion synthesis employ activation thresholds for each emotion but do not cope with emotional saturation (Picard, 1997): there is a limit to how fast a heart can beat, and beyond a certain point higher hormone or neurotransmitter levels have no further effect, as the physical receptors have been saturated. The Cathexis model, however, does have a saturation level. The emotion intensity is a function of its decayed previous value, and its arbiters10 select the influences from other emotion intensities. As in the OCC model, the intensity is compared to an emotion-specific activation threshold before determining whether an emotion exists. Cathexis is similar to the behaviour network (Maes, 1991), but consists of a constellation of proto-specialists, each representing an emotion type. Only if the threshold is exceeded does the proto-specialist release its value to influence the behavioural network. The network has behaviours such as “run away”, “engage in fight”, etc. The behaviours consist not only of an expression but also of an experience (e.g. “feel happier”). The emotional behaviours compete for control; a combination of “anger” and “foe present” might release the “bite person” behaviour. This model has several features in common with my model: the emotional states are connected in a connectionist network, and my filter influences are selected by an arbiter.

9 Substances of biotic origin which induce a defence response
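The proto-specialist behaviour described above (decayed previous value, saturation ceiling, threshold-gated release) can be illustrated like this. The exact Cathexis update rule differs in detail; this class only demonstrates the idea, with names and parameters of my own choosing.

```cpp
#include <algorithm>

// One proto-specialist per emotion type: intensity is its decayed previous
// value plus new input, clamped at a saturation level, and the value is only
// released to the behaviour network while the activation threshold is exceeded.
class ProtoSpecialist {
public:
    ProtoSpecialist(double decay, double threshold, double saturation)
        : decay_(decay), threshold_(threshold), saturation_(saturation) {}

    void update(double input) {
        intensity_ = std::min(saturation_, intensity_ * decay_ + input);
    }
    // Zero unless the activation threshold is exceeded.
    double released() const {
        return intensity_ > threshold_ ? intensity_ : 0.0;
    }
private:
    double decay_, threshold_, saturation_;
    double intensity_ = 0.0;
};
```

The saturation clamp is the part most threshold-based models omit: no matter how strong the input, the released value can never exceed the "receptor" ceiling.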

The DER model

Another similar work is the DER (dynamic emotion representation) (Tanguy et al., 2005). It deals only with expressive facial animation, but it also uses the idea of three time scales. Its concept of mood differs from that of Picard (1997), though, and on the short time scale it talks about activating behaviours (facial expressions) directly, whereas I talk about activating feelings. There is also a fundamental difference: the DER does not rely on any appraisal module; it receives emotional impulses and on its own translates the pulses of intensity into attack, sustain and decay phases according to a library of predefined pulse responses. In my thesis the emotional signal is synthesised and designed by the appraisal process. Giving this responsibility to the appraisal is arguably more flexible and realistic, as the cognitive process can define any type of emotional signal with non-fixed durations.

4.4 Conclusion

The emotional module presented in this thesis supports the use of moods, feelings and emotions, which are the concurrent affective states of the “mind”. The usage of the emotional module is threefold. First, different emotional states enforce different emotional behaviours. Second, emotions influence agents' properties such as shooting accuracy and morale. Third, the agent has genuine reason to gesticulate and comment on the situation according to its emotional state, and perceives things the way a human plausibly could. This simply means that players will feel that the agents look and behave like living beings. The emotion module provides more varied emotional behaviour than most AI emotion models do, because the module provides a state that depends on a historical background of perceived events, which determines the momentary emotional state. This thesis is a simplification: real emotions affect the cognitive processes, but an intricate web of interacting signals would not be easy to debug and evaluate, so it is appropriate to keep cognitive appraisal and emotional state in separate modules; a more deliberate model would account for this interaction. Keeping the modules separate and only loosely connected removes a lot of the complexity in designing the agent's architecture. The field of emotion is an interesting and important area. Emotions are starting to be accepted as more important for motivating adaptive activities than was once thought, and not only for (but still mostly for) entertainment. To really enjoy the benefits of a history of perceived events and interactions, the system must be allowed to run for some time. This makes the emotional module ideal in RTS environments, but less so in a first-person shooter (FPS) game.

10 An arbiter is like a traffic officer at an intersection who decides which car may pass through next


5 References

ABC News (2005). Marc Lallanilla: “Do Boiling Lobsters Feel Pain? Research Indicates They Don't, But Not Everyone Agrees.” http://abcnews.go.com/Health/PainManagement/story?id=722163&page=1

Barkow, J., Cosmides, L., Tooby, J. (1995). “The Adapted Mind: Evolutionary Psychology and the Generation of Culture.” New York; Oxford: Oxford University Press.

Björnerfeldt, S. (2007). “Consequences of the Domestication of Man's Best Friend, the Dog.” http://publications.uu.se/abstract.xsql?dbid=7799

Bower, G. (1981). “Theories of Learning.” Englewood Cliffs: Prentice-Hall.

Cañamero, D. (2000). “I Show You How I Like You: Human-Robot Interaction through Emotional Expression and Tactile Stimulation.” http://www.daimi.au.dk/~chili/feelix/index.html

Cañamero, D. (2003). “Designing Emotions for Activity Selection in Autonomous Agents.” http://citeseer.ist.psu.edu/384344.html

Clark, A. (2001). “Mindware: An Introduction to the Philosophy of Cognitive Science.” New York; Oxford: Oxford University Press.

Conte, H., Plutchik, R. (1995). “Ego Defenses: Theory and Measurement.” New York: John Wiley & Sons.

Damasio, A. (1995). “Descartes' Error: Emotion, Reason, and the Human Brain.” New York: Avon Books.

Dariush, A. (2001). “Introduktion till kognitiv psykologi.” Lund: Studentlitteratur.

Darwin, C. (1899). “The Expression of Emotion in Man and Animals.” http://charles-darwin.classic-literature.co.uk/the-expression-of-emotion-in-man-and-animals/

Ekman, P. (1999). “Basic Emotions.” http://www.paulekman.com/pdfs/basic_emotions.pdf

EQ Today. “What Are Emotions?” Interviews with EQ experts. http://www.eqtoday.com/archive/emotions.html, Six Seconds.

Ingenjören 3 (2007). “Roboten blir din vän.” Sveriges Ingenjörer, Stockholm: Sörmlands Grafiska Quebecor.

Kagan, J. et al. (2001). “Infancy to Early Childhood: Genetic and Environmental Influences on Developmental Change.” New York: Oxford University Press.

LeDoux, J. (1998). “The Emotional Brain: The Mysterious Underpinnings of Emotional Life.” Weidenfeld & Nicolson.

Maes, P. (1991). “How to Do the Right Thing.” http://citeseer.ist.psu.edu/maes89how.html

Minsky, M. (2006). “The Emotion Machine.” Simon & Schuster.

New Scientist 2592 (24 February 2007). Amanda Gefter, “Interview: Veteran AI researcher Marvin Minsky thinks emotion is the key to success.”

New Scientist 2593 (3 March 2007). Dan Jones, “The Neanderthal Within.”

New Scientist 2594 (10 March 2007). Celeste Biever, “Chip Revolution Poses Problems for Programmers.”

Ortony, A., Clore, G. L., Collins, A. (1988). “The Cognitive Structure of Emotions.” Cambridge University Press.

Picard, R. W. (1997). “Affective Computing.” Cambridge, Massachusetts: MIT Press.

Pinker, S. (1997). “How the Mind Works.” New York: Norton.

Pinker, S. (2002). “The Blank Slate: The Modern Denial of Human Nature.” London: Lane.

Plutchik, R. (1989). “The Measurement of Emotions.” New York: Academic Press.

Reilly, N. (1996). “Believable Social and Emotional Agents.” http://www.cs.cmu.edu/afs/cs.cmu.edu/project/oz/web/oz.html

Roseman, I., Antoniou, A., Jose, P. (1996). “Appraisal Determinants of Emotions: Constructing a More Accurate and Comprehensive Theory.” Cognition and Emotion, 10(3), pp. 241-277.

Ruebenstrunk, G. (1998). “Emotional Computers: Computer Models of Emotions and Their Meaning for Emotion-Psychological Research.” http://www.ruebenstrunk.de/emeocomp/content.HTM

Sloman, A. (1981). “Why Robots Will Have Emotions.” http://www.cs.bham.ac.uk/research/projects/cogaff/0-INDEX81-95.html#36

Sloman, A. (2002). “How Many Separately Evolved Emotional Beasties Live Within Us?” http://citeseer.ist.psu.edu/sloman02how.html

Tanguy, E., Willis, P., Bryson, J. (2005). “A Dynamic Emotion Representation Model Within a Facial Animation System.” http://www.cs.bath.ac.uk/~jjb/ftp/TanguyHR06.pdf

Velásquez, J. (1996). “Modeling Emotions and Other Motivations in Synthetic Agents.” http://citeseer.ist.psu.edu/103027.html

Velásquez, J. (1997). “Cathexis: A Computational Model.” ACM. http://portal.acm.org/citation.cfm?id=267658.267808

Weber, G. (2006). “Toba Volcano: Through the Bottleneck.” http://www.andaman.org/BOOK/originals/Weber-Toba/ch5_bottleneck/textr5.htm

6 Appendix

A) The ORTS Project

My project utilizes ORTS11, an open real-time strategy game engine being developed at the University of Alberta.

A.1 Background

RTS games in particular offer a large number of AI research problems, more than any other game genre: hordes of interacting objects, an environment with imperfect information, and fast-paced micro-actions which have only incremental effect. Other game genres often offer better conditions for an AI, with turn-based perfect information where every action has global consequences on the environment. These are the areas that need to be investigated:

● Real-time planning in a dynamic adversarial environment, by minimax search in an abstract state space augmented by beliefs about the world.
● Decision-making under uncertainty: plausible hypotheses need to be formed and acted upon.
● Opponent modelling: learning opponents' tactics; approaches based on statistics are inadequate, and smart reasoning is needed to adapt fast.
● Spatial and temporal reasoning, currently largely ignored.
● Resource management: balancing investments in research against larger troops.
● Collaboration: coordinating actions through communication among parties, to concentrate an attack or intercept fleeing enemies.
● Path finding, taking unit formation and fuel consumption into account.

Game companies are not inclined to release communication protocols or to add AI interfaces to their games, which is required for researchers to connect their programs or have entire games played autonomously. Players who would like to add agents that can aid them in the game are short of options. Current RTS games rely on client-side game simulations and peer-to-peer networking for communicating player actions; the games merely hide some information from the other players. This approach saves bandwidth, since a synchronized client-side simulation requires only player commands to be transmitted to peers, which keeps the data flow low if only a few commands are generated each frame. However, it is prone to all sorts of hacks that reverse-engineer the message protocol and reveal “secret” game state information. This is the motivation behind the development of ORTS as a test-bed for real-time AI research. ORTS is an RTS game engine that has been developed since 2001 at the Department of Computing Science at the University of Alberta, currently in its third edition. Their ultimate goal is to build a system that can outperform human experts in this domain, not to increase the entertainment value of RTS games. Their current short-term goal is to set up a programming environment for conducting real-time experiments, which they are about to accomplish. The following is a list of the features offered right out of the box.

11 http://www.cs.ualberta.ca/~mburo/orts/

● Provides basic RTS functionality and can be extended fairly easily by users through its own scripting language, blueprints, which defines all game properties. Both the games and the user interface are described with this scripting language, which gives the ability to tailor ORTS games towards particular research needs. The scripting language allows changing settings and using the same executables for different games without a recompilation.

● ORTS implements a server-client architecture. Only the server knows the entire game state. In each game simulation tick it generates the entire view independently of the last cycle and sends individual views to the clients (players), which in turn can send back a single action for each game object under their control. See orts/apps/sampleai/ to learn how client software communicates with the server. Players do not know where enemy units are located or what opponents plan, unless they send out scouts to find out. Playing with incomplete information on the internet demands that information can be physically hidden from clients; it would be unwise to send the complete game state to all players. The downside of information hiding is that it rules out client-side simulation, placing a heavy demand on server-side computation power and bandwidth.

● Worlds are represented as layered rectangular playing fields populated by simple geometric shapes. In the latest AIIDE12 tournaments only the ground level has been used. The world is structured into tiles; each tile corner is set to some height, allowing tiles to slope in various ways. Terrain boundaries, represented by line segments, are automatically generated where two adjacent tiles are of different types or do not line up, such as ground-plateau transitions. The highly static nature of the ORTS terrain obstructions allows the results of visibility computations to be reused for any unit later looking out from a tile; determining visible tiles is done via boolean operations on a bitmap.

● Each object moves at a fixed maximal speed, can move in any direction, and the action being executed can be stopped at any time and replaced with some other action. Objects have no mass or acceleration, so there is no impulse conservation: when two units collide, they simply stop. Each unit has a limited circular radar-like vision, obstructed only by elevation in the terrain. Objects and vision areas are approximated by bounding squares. Objects are restricted to attacking objects within a given attack range. In ORTS each object has an associated numerical feature vector with the following components: (Object ID, Owner, Radius, Sight Range, Minimum Attack Range, Speed, Attack Value, Replicating, Hit Points, Moving, Position).

● A state is generated entirely independently from the last cycle. Execution time of the admission computation is linear in the number of objects.

12 Artificial Intelligence and Interactive Entertainment Conference, www.aiide.org, sponsored by the American Association for Artificial Intelligence, www.aaai.org

● The ORTS source code is mainly written in C++, with the exception of the game specifications and GUI customizations, for which a simple scripting language is used. The C++ code uses a number of libraries which are available for many platforms: SDL, SDL_net, Qt, OpenGL, GLUT and GLEW. A few sample games are provided with the distribution, along with a set of 3D models, game specifications and user interface scripts. The software is licensed under the GNU Public Licence and can be downloaded from http://www.cs.ualberta.ca/~mburo/orts/

There are few alternatives to ORTS if we are looking for free open-source software suited for conducting AI research in RTS games. The only notable one is Stratagus (http://en.wikipedia.org/wiki/Stratagus), formerly known as FreeCraft (which for obvious reasons had to change its name). Stratagus unfortunately uses client-side simulation and is therefore not as suited for real-time internet AI competitions. Other researchers focus on computer soccer, which pursues goals similar to those outlined for RTS games, but with fewer objects, no economy, simple terrain, more or less complete information, and no macro-management; all decisions are made per agent. The SOAR agent architecture13, in use since 1983, has gained some renown for its use in first-person shooter games. It was interfaced with ORTS through the SORTS project14 and did well, winning two out of three games in the AIIDE 2006 tournaments15. The architecture had the following features:

● Workers are guided by a mining manager and operate under a finite state machine (FSM)

● The pathfinder is a modified standard ORTS pathfinder

● Local obstacle avoidance

● If a miner exceeds estimated travel time, it requests a new path from the mining manager

● Mining manager learns which routes are bad

A.2 Getting started

The least complicated starting point for building your own agent architecture in ORTS is the game example orts/apps/sampleai; then consider the previous tournament entries. To use a scripting language one has to expose C functions to the script using a wrapper. ORTS has a scripting language of its own, called blueprints; an example of how C functions are exposed to it can be found in Game.C. An action queue is implemented in Action.C (see the SimpleActions/MoveActions classes); incoming client actions are added along with anything generated locally by the server. The simulation runs on the server, so if one needs to set values explicitly from the client, one has to create a new action to do it. Putting the shared variables in a game object would be the way to go. Global script objects are an option if one needs to pass some non-unit-specific data. For example, add a feed() action to a blueprint, add a stamina attribute, and create a polling action is_resting() that calls itself every frame and checks whether the object is moving; etc. It is very important to understand the client-server architecture. You will find an example of the client side in orts/apps/sampleai/src/SampleEventHandler.C; it shows how to receive update messages and how to send unit actions back to the server. The server repeats the following cycle every frame (see libs/kernel/src/Game.C):

1. wait for and receive client actions
2. shuffle the actions and execute them in turn
3. execute scheduled actions from the action queue
4. move objects
5. send view updates to the clients

The tournament games are defined in orts/tournament; the *.bp files describe all the object attributes and actions. The scripting reference at http://www.cs.ualberta.ca/~furtak/orts/ is somewhat out of date, but it might still be useful. The default game is defined in orts/testgame; look at it to find the scripted game specifications (*.bp files). Once the core libraries are compiled, compilation time only depends on the user application. Only ortsg requires a decent graphics card; orts and other non-graphics clients can be run on slow machines, as they require somewhat less than 10% of an Athlon 2400+'s computation power.

13 http://sitemaker.umich.edu/soar/home
14 http://winter.eecs.umich.edu/orts-wiki/index.php/Main_Page
15 http://www.cs.ualberta.ca/~mburo/orts/AIIDE06/
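The five-step server cycle above can be sketched in miniature as follows. Plain strings stand in for the real ORTS action and view types, and only the control flow mirrors the description; everything else (the Server struct, the log) is illustrative.

```cpp
#include <algorithm>
#include <random>
#include <string>
#include <vector>

// Toy model of one server frame: client actions are shuffled and executed,
// then the scheduled action queue runs, then views go out to the clients.
struct Server {
    std::vector<std::string> queued;  // scheduled actions (action queue)
    std::vector<std::string> log;     // what got executed, in order

    void frame(std::vector<std::string> client_actions, unsigned seed) {
        // 1-2. receive client actions, shuffle and execute them in turn
        std::shuffle(client_actions.begin(), client_actions.end(),
                     std::mt19937(seed));
        for (auto& a : client_actions) log.push_back("exec " + a);
        // 3. execute scheduled actions from the action queue
        for (auto& a : queued) log.push_back("queued " + a);
        queued.clear();
        // 4. move objects (elided)
        // 5. send view updates to the clients
        log.push_back("views sent");
    }
};
```

The shuffle in step 2 is what prevents any client from gaining an advantage through the order in which actions arrive at the server.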

Figure 10: Screenshot from ORTS showing the gfx­client, with terrain, shadows, 3d unit models, trees and GUI

A.3 Brief overview of ORTS

Since ORTS is not sufficiently documented, I provide a short explanation of the less well documented parts.

Actions (how to fire my gun)

The blueprints define the actions available to the units, for example in orts/tournament/terrans/unit.bp. Actions are sent to the server more or less automatically by calling set_action() with appropriate parameters. In the client's action message to the server, "move" is sent as the action for that object, so an object cannot move() and execute another action in the same cycle/tick. The closest thing to documentation explaining the scripting language is currently located at http://www.cs.ualberta.ca/~furtak/orts/. Objects have no default actions, only what is scripted; for example, "move" can be called on every object, but it is not a member function of that object per se. The only actions that matter in the first tournament game are "move", "mine" and "return_resources"; in the second tournament game they are "shoot" and "target". Some actions, such as mining or attacking, are invoked on the tool or weapon object: gob->component("tool")->set_action("mine", args); "weapon" and "pickaxe" are named sub-objects in the worker blueprint, so to call "attack" and "mine" you first access those components. The "resource_bin" is included within the worker object itself (akin to C++ inheritance), so "return_resources" is called directly. Other actions are on the unit itself: gob->set_action("move", args); When one needs to specify a reference to an object, the ID to pass is the one obtained via PlayerInfo::get_obj()/get_id(), which is relative to the player that owns the calling object. Before attacking one should check that the distance to the target is less than the fire range; line-of-sight computation is done with Game::can_see_obj(). Check that the points between the GameObjects are free from terrain and bases (see SquadCombatAI::compute_attack_graph() for an example), then call gob->component("weapon")->set_action("attack", args); As a side note, the direction the object is facing does not matter when shooting.

Damage
To know on the client side that an object has been damaged, one uses the "dir_dmg" flags. There seems to be no way of knowing which units are attacking which; one only gets the direction the damage is coming from (one of 32 directions), not which player did the damage. GameConst.H has the important ranges for attack/motion, and the "damage heading" is a variable defined in common.bp. The easiest way to find out when a unit is attacking someone is by checking the weapon's shooting attribute. One can maintain the ids of the attacked units by using STL maps. Whenever an attacked unit dies or vanishes one has to adjust the maps, and whenever an attacked unit moves one checks whether it is still in range and, if not, adjusts the maps accordingly. This kind of update is quite efficient. It should not be overly difficult to maintain one's own map between objects and their latest assigned targets, resetting it when the weapon stops shooting.
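A minimal sketch of the STL-map bookkeeping just described, with plain object ids standing in for GameObj pointers (the names are illustrative, not part of the ORTS API):

```cpp
#include <map>

using TargetMap = std::map<int, int>;  // attacker id -> target id

void assign_target(TargetMap& targets, int attacker, int target)
{
    targets[attacker] = target;
}

// Call this for every id in the 'vanished' and 'dead' lists of a view
// update: the unit can no longer attack, and no one may keep targeting it.
void on_unit_removed(TargetMap& targets, int id)
{
    targets.erase(id);  // drop it as an attacker
    for (auto it = targets.begin(); it != targets.end(); ) {
        if (it->second == id)
            it = targets.erase(it);  // drop every entry targeting it
        else
            ++it;
    }
}
```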

Game Objects
Objects have ids, but if an object goes out of sight and later returns, it gets a new id. The way to get hold of an id is PlayerInfo::get_obj()/get_id(); ids are relative to each player but consistent between server and client. The blueprints use GameObj::code() to encode the object type; this number is in no way unique, so do not use it as an identifier. Consequently, if one game object goes out of view and another object of the same type comes into view, there is no way of telling whether the two are the same object. There is some reference counting going on, so a GameObj might not be destroyed immediately, but you should keep track of the objects in the 'vanished' and 'dead' lists of the view update to avoid segmentation faults. To sort out which GameObj belongs to whom there are three ways: call get_int("owner") on the object; use the pointer in ServerObjectData (sod); or use game.get_objs(id), although this is a long route. The first works on the objects in the following lists: new_objs, changes.changed_objs, changes.vanished_objs, and changes.dead_objs.
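The first (get_int("owner")) approach amounts to filtering the view-update lists by owner. A self-contained sketch, where a plain struct with an id and owner field stands in for calling gob->get_int("owner") on a real GameObj:

```cpp
#include <vector>

// Minimal stand-in for GameObj; illustrative only.
struct Obj { int id; int owner; };

// Collect the ids of all objects owned by a given player, as one would
// do over new_objs and changes.changed_objs after a view update.
std::vector<int> objects_owned_by(const std::vector<Obj>& objs, int player)
{
    std::vector<int> mine;
    for (const Obj& o : objs)
        if (o.owner == player)
            mine.push_back(o.id);
    return mine;
}
```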

Grid Essentially ORTS has a reasonably fine grid overlaid on the world, and objects/boundaries block off those grid cells that they intersect.

GUI
Common GUI widgets are defined in orts/ortsg_std (also see orts/libs/sipath/gfxclient/src). To display more information you can have the GUI.C code look for specific attributes (hp, etc.) and draw something based on them. The 3D view mode is done with OpenGL and the 2D view mode with SDL.

Movement
Be careful to use tile coordinates and not grid coordinates, or you will get an assertion error. For an example of how to submit a move action in the client, see for instance orts/apps/sampleai/src/SampleEventHandler.C: gob->set_action("move", params); ORTS implements a simple straight-line, constant-speed motion model. The "move(x,y[,speed])" action sets an object in motion. Collision times are computed exactly. When two objects collide, the default action, which is also used in this competition, is to stop them both; additional effects such as taking damage can be scripted. To stop an object from moving one just tells it to move to its current location; not the perfect solution, but the best one available.

Finding out whether an agent is moving or not is trickier than one might expect. Assuming there is no network lag, there should not be more than one frame of delay between a move command being sent and any change in speed, although at the moment there may be no such distinction. "is_moving" may be a more reliable attribute than "speed", since it is conceivable that an object's speed drops to 0 due to external (scripted) effects. Better still is to access gob->cur_action.id and check whether the id has changed.
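The action-id approach suggested above can be sketched as a small per-object tracker. The field and function names here are illustrative; in the client one would feed it the value of gob->cur_action.id at every view frame:

```cpp
#include <map>

struct ActionTracker {
    std::map<int, int> last_action;  // object id -> last seen action id

    // Returns true when the object's current action differs from the
    // one recorded at the previous view frame (or is seen for the
    // first time).
    bool action_changed(int obj_id, int cur_action_id)
    {
        auto it = last_action.find(obj_id);
        bool changed = (it == last_action.end() || it->second != cur_action_id);
        last_action[obj_id] = cur_action_id;
        return changed;
    }
};
```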

Messages
When you want to send messages to inform other components that a subtask has been accomplished or has failed, you can do it in two ways: using a task_id or an obj_id. A single message per task cannot differentiate between individual objects' progress: a task may consist of sending a group of units somewhere, and in general the units arrive at different times. If the message is sent per task_id, the task counts as completed as soon as any object reaches the location; if it is sent per obj_id, a message is generated for each unit, so completion can be tracked until all the units get there.
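The per-object variant can be sketched as a task that only completes once every assigned unit has reported in. The names are illustrative; in the agent each arrival message would carry a task_id and an obj_id:

```cpp
#include <set>

struct Task {
    std::set<int> pending;  // obj ids that have not yet arrived

    void assign(int obj_id)      { pending.insert(obj_id); }
    void on_arrived(int obj_id)  { pending.erase(obj_id); }
    void on_obj_lost(int obj_id) { pending.erase(obj_id); }  // died or vanished

    // The task is done only when no unit is still outstanding.
    bool done() const            { return pending.empty(); }
};
```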

Mining
To check whether the miner is "close enough" to a resource to mine it, use a script that makes a distance check. The current rule in ORTS is that the distance needs to be less than or equal to 2 grid cells (see terrans/tools.bp). One can use the distance function in Game and round the result down. The amount of collected minerals is stored in global::player::minerals: one should be able to get the player object by calling PlayerInfo::global_obj("player") and from there read the value with get_int("minerals").
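The distance rule can be sketched as below. This assumes a plain Euclidean distance in grid units, rounded down as described; in the client one would use Game's own distance function rather than this stand-in:

```cpp
#include <cmath>

// Sketch of the mining range test: the rounded-down distance between
// worker and mineral patch must be at most 2 grid cells (the current
// ORTS rule, see terrans/tools.bp).
bool close_enough_to_mine(double wx, double wy, double mx, double my)
{
    const double dx = mx - wx;
    const double dy = my - wy;
    const int dist = static_cast<int>(std::floor(std::sqrt(dx * dx + dy * dy)));
    return dist <= 2;
}
```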

Path finding
The standard ORTS path finding algorithm uses a graph with a distance scale of 1:1. The path execution has options for doing some collision avoidance, but the path finding itself is just A* on the grid, with occupied cells being either not passable or given a high weight.
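For reference, A* on a grid of the kind described can be written compactly. This is an independent sketch, not the ORTS implementation: 4-connected cells, unit step cost, blocked cells treated as impassable, and the Manhattan distance as heuristic. It returns the shortest path length, or -1 if the goal is unreachable.

```cpp
#include <vector>
#include <queue>
#include <tuple>
#include <limits>
#include <cstdlib>

int astar_path_length(const std::vector<std::vector<int>>& blocked,
                      int sx, int sy, int gx, int gy)
{
    const int h = blocked.size(), w = blocked[0].size();
    const int INF = std::numeric_limits<int>::max();
    std::vector<std::vector<int>> g(h, std::vector<int>(w, INF));
    auto heur = [&](int x, int y) { return std::abs(x - gx) + std::abs(y - gy); };

    // Min-heap ordered by f = g + h.
    using Node = std::tuple<int, int, int>;  // (f, x, y)
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
    g[sy][sx] = 0;
    open.emplace(heur(sx, sy), sx, sy);

    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!open.empty()) {
        auto [f, x, y] = open.top(); open.pop();
        if (x == gx && y == gy) return g[y][x];
        if (f - heur(x, y) > g[y][x]) continue;  // stale queue entry
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h || blocked[ny][nx]) continue;
            if (g[y][x] + 1 < g[ny][nx]) {
                g[ny][nx] = g[y][x] + 1;
                open.emplace(g[ny][nx] + heur(nx, ny), nx, ny);
            }
        }
    }
    return -1;  // unreachable
}
```

A high-weight variant, as mentioned above, would simply replace the impassable test with a larger step cost for occupied cells.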

Physics
There is much left to be improved concerning the physics engine in ORTS. As far as the server/simulation side is concerned, objects are simple geometric shapes: circles, rectangles, and line segments. These objects exist on a fine sub-tile grid, with collisions computed exactly in 2D (albeit with no inertia). Extending the physics model is not currently a priority in ORTS. The server-side motion/collision code is fairly independent of the rest of the engine, but one will not be able to just rip it out and plug anything else in. It should not be too hard, however, to modify or extend it with, for example, rag-doll physics for client-side graphics. Developing a complete physics model of the entire world would be more challenging.
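As a worked example of what "collisions computed exactly" means for two circles moving at constant velocity: with relative position d and relative velocity v, the first contact time solves |d + vt| = r1 + r2, a quadratic in t. This is an independent sketch of that computation, not the ORTS code:

```cpp
#include <cmath>

// Smallest t >= 0 at which two constant-velocity circles touch, or -1
// if they never collide. Inputs are the relative position (c2 - c1),
// relative velocity (v2 - v1), and the two radii.
double collision_time(double px, double py,
                      double vx, double vy,
                      double r1, double r2)
{
    const double R = r1 + r2;
    // |p + v t|^2 = R^2  ->  (v.v) t^2 + 2 (p.v) t + (p.p - R^2) = 0
    const double a = vx * vx + vy * vy;
    const double b = 2.0 * (px * vx + py * vy);
    const double c = px * px + py * py - R * R;
    if (a == 0.0) return c <= 0.0 ? 0.0 : -1.0;   // no relative motion
    const double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;                   // paths never touch
    const double t = (-b - std::sqrt(disc)) / (2.0 * a);  // earlier root
    if (t >= 0.0) return t;
    return c <= 0.0 ? 0.0 : -1.0;  // already overlapping, or receding
}
```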

Scripts
The blueprints appear to contain C++ functions in them. Those are scripted functions: they are called by the scripts on the server, or sent as requests from the client to be executed server-side. The blueprints also contain things called classes. Blueprints are actually a superset of classes that can be made into objects. Classes are exactly the same as blueprints except that they cannot be instantiated as objects directly; they can be used only as components (i.e. sub-objects) or have their attributes copied over using "has" or "is". "has" and "is" can be used interchangeably.

Symmetric map
See Unsymmetrical map.

Time
Every time measurement, like weapon cool-down time, is specified in view frames. A client is only updated when a new view frame arrives (8 fps by default). The server runs its simulation regardless of whether it receives new commands from the clients.
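Converting between the frame-based clock and wall-clock time is then a simple division. The helper below is illustrative; the frame rate is a server setting, hence the parameter with the default of 8:

```cpp
// Convert a duration given in view frames (e.g. a weapon cool-down)
// to seconds, given the server's view-frame rate.
double frames_to_seconds(int frames, int fps = 8)
{
    return static_cast<double>(frames) / fps;
}
```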

Vision
Visibility occlusion is the main concern: units cannot see onto or around high ground, which affects weapon targeting and scouting. Practically, the Game class has a function to test for object-to-object visibility, and PlayerInfo has MapView members with information about each tile. Terrain altitude blocks the unit-to-unit line of sight even when fog of war is turned off; it is, however, possible to modify the weapon scripts not to test for that.
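The altitude-blocking idea can be sketched as a tile walk between the two units that fails if any intervening tile is higher than both endpoints. This is only an illustration using Bresenham's line algorithm over a heightmap; ORTS's own test (Game::can_see_obj()) is more exact:

```cpp
#include <vector>
#include <cstdlib>

// Hedged sketch of a tile-based line-of-sight test over an altitude
// map indexed as alt[y][x]. Not the ORTS implementation.
bool line_of_sight(const std::vector<std::vector<int>>& alt,
                   int x0, int y0, int x1, int y1)
{
    const int h0 = alt[y0][x0], h1 = alt[y1][x1];
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy, x = x0, y = y0;
    while (!(x == x1 && y == y1)) {
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x += sx; }
        if (e2 <= dx) { err += dx; y += sy; }
        if (x == x1 && y == y1) break;      // don't test the endpoint itself
        if (alt[y][x] > h0 && alt[y][x] > h1)
            return false;                   // a higher tile blocks the line
    }
    return true;
}
```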

Unsymmetrical map
ORTS does not have symmetric maps because the terrain is generated randomly. The base locations are actually symmetric, but asymmetric terrain can make a big difference. In general the games cannot be perfectly fair under this condition, as long as the terrain generation is not symmetric.

References of Appendix A
ORTS doxygen-generated documentation: http://www.cs.ualberta.ca/~mburo/orts/doxygen/html/
Scripting and 3D model loading documentation: http://www.cs.ualberta.ca/~furtak/orts/index.php
M. Buro, ORTS: A Hack-Free RTS Game Environment, Proceedings of the International Computers and Games Conference 2002, Edmonton, Canada. http://www.cs.ualberta.ca/~mburo/ps/orts.pdf
M. Buro and T. Furtak, RTS Games as Test-Bed for Real-Time AI Research, Invited Paper at the Workshop on Game AI, JCIS 2003. http://www.cs.ualberta.ca/~mburo/ps/ORTS-JCIS03.pdf
M. Buro and T. Furtak, RTS Games and Real-Time AI Research, Proceedings of the Behavior Representation in Modeling and Simulation Conference (BRIMS), Arlington VA, 2004. http://www.cs.ualberta.ca/~mburo/ps/BRIMS-04.pdf
M. Buro and T. Furtak, On the Development of a Free RTS Game Engine, GameOn'NA Conference, Montreal, 2005. http://www.cs.ualberta.ca/~mburo/ps/orts05.pdf


B) The Brain
The content of this section is taken from Wikipedia (http://en.wikipedia.org/).

Illustration from Laura Erlauer (2003) The Brain-Compatible Classroom

Cerebral Cortex The brain is delineated by the body's median plane into two regions, and can thus be described as being divided into left and right cerebral hemispheres. Each of these hemispheres has an outer layer of grey matter, the cerebral cortex (whose newest part is the neocortex), supported by an inner layer of white matter.

Cortex See Cerebral Cortex and Neocortex.

Limbic system includes the structures in the human brain involved in emotion, motivation, and emotional association with memory. The limbic system influences the formation of memory by integrating emotional states with stored memories of physical sensations.

Neocortex is the top layer of the cerebral hemispheres, 2-4 mm thick and made up of six layers. It is involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought, and, in humans, language.

R-complex is named after the reptilian brain: it is the oldest part of the brain, which higher mammals share with reptiles. It is responsible for rage, xenophobia, basic survival fight-or-flight responses, territoriality, social hierarchy, and the desire to follow leaders blindly.

Illustration from Laura Erlauer (2003) The Brain-Compatible Classroom

Triune brain is a model proposed by Paul D. MacLean to explain the traces of evolution present in the structure of the human brain. The triune brain consists of the R-complex, the limbic system, and the neocortex.

Amygdala refers to the almond-shaped groups of neurons located deep within the medial temporal lobes of the brain in complex vertebrates, including humans. Research shows that the amygdala performs a primary role in the processing and memory of emotional reactions, and it is considered part of the limbic system.

Hippocampus is a part of the brain located under the temporal lobe. Humans and other mammals have two of them, one on each side of the brain. It forms a part of the limbic system and plays a part in memory and spatial navigation.

Hypothalamus links the autonomic nervous system to the endocrine system; it is located below the thalamus. The hypothalamus controls body temperature, hunger, thirst, and circadian cycles. Its secretory neurons are linked to the limbic system, which is primarily involved in the control of emotions and sexual activity.

Thalamus Many different functions are linked to this organ, for example the auditory, somatic, visceral, gustatory, and visual sensory systems. Humans and other mammals have two of them, one on each side of the brain. The thalamus has long been thought of as a "relay" that simply forwards signals to the cerebral cortex, but newer research suggests that its function is more complicated.
