Complex Reflexive Agents as Models of Social Actors

Peter Dittrich(1,3) and Thomas Kron(2)

(1) University of Dortmund, Department of Computer Science, D-44221 Dortmund, Germany (2) University of Hagen, Department of , D-58084 Hagen, Germany (3) Friedrich-Schiller-University Jena, Department of Mathematics and Computer Science, and Jena Centre for Bioinformatics, D-07743 Jena, Germany

Abstract

The first part gives an overview of the socionics initiative, which was established by the German Research Foundation (DFG) about three years ago. In this initiative, eight projects cooperate in a tandem structure, with at least one partner from computer science and one from sociology in each "tandem project". The second part focuses on our own project, whose central metaphor is "the complex agent". We present the latest results from two lines of research we are following: (1) an architecture for building "realistic" agents that model social actors, and (2) models of learning and reflexive agents. In the second line of research we have developed a model that uses genetic programming (GP) as a learning mechanism, and a model of the "situation of double contingency" introduced by Luhmann as an explanation for the origin of social order, in which learning and reflexivity play an important role.

1 The Socionics Initiative

The socionics initiative was established by the German Research Foundation (DFG) about three years ago. In this initiative, eight projects cooperate in a tandem structure, with at least one partner from computer science and one from sociology in each "tandem project". Socionics aims, on the one hand, at developing computer technologies by employing paradigms of our social world; on the other hand, computer science techniques are used to develop and validate social theories. A third aspect of socionics is the study of hybrid systems, which consist of real social actors (e.g., humans) and artificial actors (e.g., software agents). See (Müller, Malsch, and Schulz-Schaeffer 1998) for an introductory text. The original focus of socionics was on the first and third aims, namely, to build technical systems with the aid of sociology. This motivation also led to the name "socionics", which was derived in a similar way to the name "bionics". Why is it beneficial to look at social systems in order to build technical systems? Social systems show how a huge number of autonomous actors can be integrated into a quite complex system that is able to perform a variety of simple and complex tasks. In addition, a social system can have properties that we also desire in technical systems, for example robustness, stability, adaptability, flexibility, creativity, and scalability. Recently, scalability has been a central concern of the socionics initiative. Although there is no common definition to which all eight socionics projects would agree, we can roughly say that a (social) system is scalable if its identity and performance do not degrade drastically when the number of components (e.g., members, actors, or agents) of the system is increased. This

view of scalability is also compatible with the view of scalability in computer science. Of course, it is not obvious whether social systems are scalable. We think that they are scalable to a certain extent, but how scalable they are is unknown and, in our opinion, an interesting research question. Today, in the socionics initiative, the transfer from social science to computer science and vice versa is about equally weighted. So, in addition to technical advances, we can also see advances in the development of social theories that incorporate techniques from computer science. We will now briefly describe the eight projects of the socionics initiative. Detailed information and recent publications can be found on their web sites (see Sect. 1.9).

1.1 Behavior in Social Contexts - A Socionical Approach of Model Creation and Theory Evaluation

Project manager: Prof. Dr. Rolf v. Lüde, Dr. Daniel Moldt, Prof. Dr. Rüdiger Valk

The central method applied in this project is Petri nets. Petri nets were developed in computer science as a formal, graphical, and executable technique for the specification and analysis of concurrent, discrete-event dynamical systems (Petri 1962). An elementary Petri net consists of places and transitions connected by arcs. A place can hold a token. During a synchronous, deterministic update step, tokens move through transitions from one place to another if the condition of the respective transition is fulfilled. A large variety of Petri nets exists. A variant developed by Valk (1998) allows Petri nets to become tokens of other Petri nets. These hierarchical Petri nets are used to build a multi-agent architecture for modeling social systems. The architecture has been applied to develop a middle-range theory of decision processes in organizations; a concrete simulation model has been built for a decision process in a university. A second focus of the project is the formalization and comparison of social theories using Petri nets as a formal tool. The advantage of Petri nets is that they can easily be designed using standard computer software and that the formal representation of a Petri net is equivalent to its graphical representation. This means that after a Petri net has been designed graphically on a computer, it can be simulated without any additional effort.
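The token-game semantics of an elementary net can be sketched in a few lines of Python. This is a generic illustration, not the project's hierarchical nets-within-nets architecture; the place names and the sequential firing loop are our own assumptions.

```python
# Minimal elementary Petri net: places hold at most one token.
# A transition is enabled when all its input places are marked and
# all its output places are empty (the elementary-net firing rule).

def enabled(marking, transition):
    pre, post = transition
    return all(marking[p] for p in pre) and not any(marking[p] for p in post)

def fire(marking, transition):
    pre, post = transition
    m = dict(marking)
    for p in pre:   # consume input tokens
        m[p] = 0
    for p in post:  # produce output tokens
        m[p] = 1
    return m

# Toy net: p1 --t1--> p2 --t2--> p3
marking = {"p1": 1, "p2": 0, "p3": 0}
t1 = (("p1",), ("p2",))
t2 = (("p2",), ("p3",))

for t in (t1, t2):
    if enabled(marking, t):
        marking = fire(marking, t)

print(marking)  # the token has moved from p1 via p2 to p3
```

In Valk's object nets, the tokens themselves would be nets of this kind rather than plain markers; the firing rule above is only the elementary base case.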

1.2 The Emergence and Modification of Social Structure in Groups of Intelligent, Multiply-Motivated, Emotional Agents

Project manager: Prof. Dr. Dietrich Dörner, Prof. Dr. Paul Levi

The starting point of this project is a model of a simulated autonomous agent called "PSI". The agent has been designed following psychological theory. Its control mechanism consists of various symbolic (rule-based) and sub-symbolic layers (e.g., artificial neural networks). Multiple goals, cognition, emotions, and self-observation are already formalized within PSI's architecture. PSI is now being extended by social motives in order to study their impact on cooperation and group structure.

1.3 INKA: Integration of Cooperative Agents in Complex Organizations

Project manager: Prof. Dr. Hans-Dieter Burkhard, Prof. Dr. Werner Rammert

The aim of the INKA project is to develop foundations for open agent-based systems that are able to cope with incoherence and heterogeneity in complex organizations. The real-world prototype application domain is a clinical planning system. The resulting multi-agent system should be able to schedule the various medical treatments in a hospital in cooperation with human users.

1.4 Conflict Handling and Structural Change: Social Theory as "Construction Manual" for Adaptive Multi-Agent Systems (ConStruct)

Project manager: Prof. Dr. Wilfried Brauer, Dr. Gerhard Weiß, Prof. Dr. Thomas Malsch

The primary goals of the project are the improvement of the design process and of the efficiency of multiagent systems with the help of sociologically founded models, and the construction of new architectures for very large adaptive "artificial societies" modelled after modern human societies. For this purpose, concepts of learning and conflict from distributed artificial intelligence (DAI) are brought together with concepts from sociology. The approach especially aims at the development of novel methods for the software design and programming of multiagent systems, contributing to the emerging field of Agent-Oriented Software Engineering (AOSE). Two different architectures are being developed, which arise from two competing sociological paradigms: (1) pragmatism and symbolic interactionism, and (2) Niklas Luhmann's theory of autopoietic systems (Luhmann 1984). From these contrary sociological views on the interrelation between conflict and structure, two competing models of artificial sociality are derived: (1) a model of self-determined change via the creative resolution of action conflicts and (2) a model of the "evolutionary differentiation of social systems as a differentiation of conflicts" (Luhmann 1984). These models, and the architectures based on them, are concretized and evaluated in real-world application scenarios. Systems for the semantic rating of web sites through competing agents and for the agent-based linkage of web pages have already been specified and are being implemented.

1.5 Models of Social Organization in DAI and Sociology: Analyzing the Habitus-Field Theory with a View to its Transferability to Software Architectures and Concepts of DAI

Project manager: Dr. Klaus Fischer, Prof. Dr. Jörg Siekmann, Dr. Michael Florian

The focus of this project is the robustness of multi-agent systems. The approach is inspired by sociological research on social order, e.g., the reproduction of structure in a society of actors despite the dynamics of interaction. The theoretical basis is the habitus-field theory of Pierre Bourdieu (Bourdieu 1987). The main concepts used in the approach are task delegation and social delegation, modes of delegation, and types of organizations. These concepts are applied to a concrete application domain, namely transportation systems, where transportation jobs are traded and delegated and social factors like trust play an important role.

1.6 Structural Differentiation in Multi-Agent-Systems

Project manager: Prof. Dr. Werner Dilger, Prof. Dr. Bernhard Giesen

1 Text copied and slightly modified from: http://wwwbrauer.in.tum.de/gruppen/kikog/projects/socionics/

The aim of the project is to build a multi-agent system that makes it possible to study the environmental conditions under which a symbolic communication system with coordinative functionality emerges. This means that in the model a symbol system can emerge through some kind of evolutionary process and that the emerged symbol system is used to coordinate action. Practically, communication and action are coupled in the multi-agent system such that agents first have to communicate successfully in order to achieve a successful coordinated action. Only successful coordinated actions yield a gain of a certain type of resource, which is required for an agent's reproduction; this in turn leads to an evolutionary (reproductive) advantage for communicating agents and possibly to the emergence of a communication system. The project is motivated by Maturana's theory of autopoietic systems (Varela, Maturana, and Uribe 1974; Maturana and Varela 1980) and by Luhmann's social systems theory (Luhmann 1984).

1.7 Tropos: Agent-Oriented Environments for Requirements Engineering in Strategic Networks

Project manager: Dr. Christiane Funken, Prof. Dr. Matthias Jarke, Prof. Dr. Gerhard Lakemeyer

The aim of the project is to develop an agent-oriented environment for life-cycle-accompanying requirements engineering in strategic networks, such as enterprises or research groups. In doing so, one tries to combine market mechanisms with the stability of hierarchical organizations. A special focus is on modeling distrust in agent networks and in hybrid networks of humans and software agents (Gans, Jarke, Kethers, Lakemeyer, Ellrich, Funken, and Meister 2001).

1.8 Investigations into the Dynamics of Social Systems on the Basis of Simulation with Complex Adaptive Agents

Project manager: Prof. Dr. Wolfgang Banzhaf, Prof. Dr. Uwe Schimank

The central theme of our project is the complex agent as a realistic model of a social actor. Currently we focus on abilities such as learning, predicting, and forming reflexive expectations. The second part of this paper describes some recent results of this project in more detail.

1.9 Socionics on the Web

Publications, technical report series:

http://www.sozionik-aktuell.de/

Overview of all socionics projects:

http://www.uni-konstanz.de/struktur/fuf/sozwiss/giesen/Sozionik/links.html

[Figure 1 is not reproduced here. It shows the three levels of the architecture: a top level (social character) with four parameters — homo economicus, homo sociologicus, emotional man, identity keeper; an intermediate level (character specification) with parameters such as money value, information, fear, hero, health value, police man, and the norms "Help!" and "Don't go away!"; and a bottom level (decision mechanism) with VET values and subjective probabilities for the actions ignore, help, and show readiness and the outcomes victim suffers, victim rescued, and I am hurt.]

Figure 1: Illustration of a bystander agent whose architecture is based on the social-character model. See Sect. 2 for details.

2 An Architecture for “Realistic” Social Actor Models

Now we will focus on our own project, whose central metaphor is "the complex agent". In the current stage of the project we follow two lines of research: (1) an architecture for building "realistic" agents for modeling social actors, called the "social-character model", and (2) models of learning and reflexive agents, which will be described in Sect. 3. The social-character model makes it possible to integrate different actor models, such as the homo sociologicus or the homo economicus (rational man), and to switch smoothly between these models. Figure 1 illustrates the social-character model applied to the bystander problem (also called the missing-hero problem). The bystander scenario consists of a victim who is in a dangerous situation, e.g., the victim is attacked by an attacker. In addition, a couple of actors, called bystanders, observe the situation. Although the bystanders are able to help, it has often been observed that they do not engage, or that there is a very long delay before someone helps. The aim of our model, sketched in Fig. 1, is to build for the bystander actor a computer agent model that integrates all important factors found in empirical studies of bystander situations. Because empirical studies have found a large number of factors, there will also be many parameters in the model. Thus, it is important to organize the parameters so that they can be handled easily. The architecture consists of at least three hierarchical levels. The bottom level consists of the decision mechanism, which defines the algorithm according to which

the agent selects an action. In principle, any decision mechanism can be chosen, e.g., fuzzy logic or artificial neural networks. Here, we have chosen value expectation theory (VET or WET), which is used in rational choice theory and methodological individualism, e.g., (Esser 2000). For the VET we have to define the possible actions (here: ignore, help, show readiness), the possible outcomes of an agent's action (here: victim suffers, victim rescued, I got hurt), and the subjective cost and payoff for each action and outcome, respectively. Additionally, we need subjective probabilities that say how probable a specific outcome is given a specific action. These probabilities are (of course) subjective estimates of the situation. How these estimates are generated by the cognitive apparatus of an agent is a problem in itself and cannot be discussed here; so far we have applied fuzzy logic to solve it. The probabilities are multiplied by the respective payoffs and costs, so that for each possible action we get an action value, which represents the expected gain when the action is performed. The action values are scaled by raising them to the power of γ (a constant that controls the influence of randomness in the action selection process). The resulting scaled values are normalized, so that we get a probability for each action to be selected. With γ we can control the influence of randomness between fully random behavior (γ = 0) and deterministic behavior (γ = ∞). The top level consists of only a few adjustable parameters (here, four) and defines the so-called "social character". Each of the four parameters represents a social actor type, namely: (1) the homo economicus, who always tries to be well informed in order to act optimally; (2) the homo sociologicus, who acts according to (social) norms; (3) the emotional man, whose action is dominated by emotions; and (4) the identity keeper, who acts to fulfill his identity.
Each of these actor types is theoretically well founded, e.g., (Schimank 2000). By simply changing the four parameters on the top level (between zero and one) we can switch smoothly between a homo sociologicus, a homo economicus, and so on. We can even create "mixed" characters. It is thus very easy for a user (e.g., a social scientist) to set up a simulation with a mixture of characters. The user does not necessarily need to know all the model's details, but he has to know the semantics of the four actor models, which is given by social theory. Of course, whether the model works depends on how the four top-level parameters are connected to the decision (bottom) level. Unfortunately, this is to a large extent problem dependent. Additional problem-dependent parameters are located at the intermediate level, the so-called character specification. Here every actor type gets additional parameters for refinement. To give an example: the homo sociologicus is refined by a set of norms according to which he acts, in our case the norm "Help!" and the norm "Do not go away!". Each norm is parameterized by one value between zero and one, which specifies the intensity of the respective norm. The parameters specified on the social-character and character-specification levels "moderate" the parameters and values on the decision level. This means that parameters and values on the decision level are increased or decreased (by summation or multiplication). How this moderation is done is problem dependent and has to be carefully designed by the modeler, so that the semantics of the high-level parameters is in accordance with social theory. This is important in order to make the agent model usable for social scientists.
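The decision mechanism and its moderation can be summarized in a short Python sketch. All numbers, the additive norm bonus, and the shift that keeps the γ-exponentiation well defined for negative expected values are illustrative assumptions, not the parameters of the actual bystander model.

```python
# Hypothetical sketch of the VET bottom level with one moderation rule.
ACTIONS = ["ignore", "help", "show_readiness"]
OUTCOMES = ["victim_suffers", "victim_rescued", "i_get_hurt"]

payoff = {"victim_suffers": -10.0, "victim_rescued": 20.0, "i_get_hurt": -20.0}
cost = {"ignore": 0.0, "help": -10.0, "show_readiness": -1.0}

# Subjective probabilities P(outcome | action) -- illustrative estimates;
# in the model they come from the agent's cognitive apparatus (fuzzy logic).
prob = {
    "ignore":         {"victim_suffers": 0.7, "victim_rescued": 0.2, "i_get_hurt": 0.1},
    "help":           {"victim_suffers": 0.3, "victim_rescued": 0.6, "i_get_hurt": 0.1},
    "show_readiness": {"victim_suffers": 0.5, "victim_rescued": 0.4, "i_get_hurt": 0.1},
}

def action_values(norm_help=0.0):
    """Expected gain per action. norm_help (0..1) illustrates how a
    homo-sociologicus norm intensity could moderate the decision level."""
    values = {a: cost[a] + sum(prob[a][o] * payoff[o] for o in OUTCOMES)
              for a in ACTIONS}
    values["help"] += 10.0 * norm_help  # hypothetical additive moderation
    return values

def selection_probabilities(values, gamma=1.0):
    # Expected values can be negative, so shift them before raising to the
    # power gamma (one possible convention, not specified in the text).
    low = min(values.values())
    scaled = {a: (v - low + 1e-6) ** gamma for a, v in values.items()}
    total = sum(scaled.values())
    return {a: s / total for a, s in scaled.items()}

values = action_values(norm_help=1.0)               # strong "Help!" norm
probs = selection_probabilities(values, gamma=2.0)  # an action would then
print(values)                                       # be drawn according
print(probs)                                        # to probs
```

With `gamma=0` all actions become equally likely (fully random behavior); with a strong "Help!" norm and `gamma=2` the probability mass concentrates on `help`.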

3 Models of Learning and Reflexive Agents

An important aspect of "realistic" agents as models of social actors is their ability to predict the behavior of their environment, to learn, and to take into account that their own actions are predicted by other actors (reflexivity). In this line of research we have developed a model which uses

genetic programming (GP) as a learning mechanism, and a model of the "situation of double contingency" introduced by Luhmann as an explanation for the origin of social order. The "genetic programming model" is published in (Dittrich, Kron, Kuck, and Banzhaf 2001); here we describe the "Luhmann model" in more detail.

3.1 Modeling the Problem of Double and Multi Contingency Following Luhmann

How is social order possible? This has been a fundamental question of sociology since its beginning. Several answers have been given over the last 150 years, such as: social order is generated by a powerful state, the Leviathan (Hobbes 1651), by an "invisible hand" (Smith 1776), or by norms (Durkheim 1893) which are legitimated by values positioned in a cultural system of society (Parsons 1937; Parsons 1971). A very prominent answer has been given by Luhmann (1984): following Parsons (1968), Luhmann identified "the problem of double contingency" as the central problem in producing social order. The problematic situation is this: two actors meet each other. How should they act if they want to solve the problem of contingency, that is, that nothing is either mandatory or impossible? As opposed to Parsons, Luhmann's answer does not lie in an existing consensus; thus Luhmann does not refer solely to the social dimension. His answer is oriented more along the dimension of time: the first step is that an actor, in this unclear situation, tentatively begins to act, e.g., with a glance or a gesture. Then every following step which refers to this first step is a contingency-reducing action, so that the actors are able to build up expectations. A system history evolves. Thus social structures, social order, or, in Luhmann's theory, autopoietic social systems of communication, are first of all structures of mutual expectations. That is, every actor expects that the other actor has expectations about his next act. In the "Luhmann model" we model and simulate the genesis of social order beginning with the situation of double contingency. We concentrate just on this specific aspect of order formation and do not consider other approaches, e.g., those which concentrate on the aspect of rationality, e.g., (Lepperhoff 2000), or which employ game theory, e.g., (Lomborg 1996).
We model and analyze the production of social structures, beginning with Luhmann's basic assumptions: a dialectical constellation, mutual inscrutableness, the necessity of expectation-expectation and expectation-certainty, and no external assumptions such as norms or values.

3.2 The “Luhmann Model”

The model consists of agents exchanging messages. In the basic model we restrict ourselves to two agents called A and B, or Ego and Alter, respectively. There are N different messages used and recognized by the agents. There is no a priori relationship between messages. Two agents

2 Parsons (1968), p. 436: "The crucial reference points for analyzing interaction are two: (1) That each actor is both acting agent and object of orientation both to himself and to the others; and (2) that, as acting agent, he orients to himself and to others, in all of the primary modes or aspects. The actor is knower and object of cognition, utilizer of instrumental means and himself a means, emotionally attached to others and an object of attachment, evaluator and object of evaluation, interpreter of symbols and himself a symbol."

3 Luhmann's assumption is that both actors are interested in solving this problem because of their anthropological basic necessity of "expectation-certainty", that is, they want to know what is going on in this situation.

interact only by exchanging messages. Messages are exchanged asynchronously: Agent A sends one message out of the N possible messages; after receiving this message, Agent B sends one message in turn; and so on. We do not distinguish action and communication here. Sending a message is identified with an action, and action selection is equivalent to choosing what kind of message should be sent. We can imagine that each agent displays the message he would like to send on a sign he holds. The message is just a number written on the sign. An action of an agent consists of changing the number on his sign after observing the sign of the other agent. What are the agent's motives influencing the selection of a specific action? Note that selecting and performing action i is equivalent to displaying number i on the sign. Here, we consider two fundamental motives: (1) expectation-expectation (EE): the agent wants to meet the expectations of the other agent; (2) expectation-certainty (EC): the reaction of the other agent following the agent's own action should be as predictable as possible. Note that the two motives can be contradictory. Agent A selects an action i in the following way: (1) determine how strongly action i is expected by Agent B (expectation-expectation); (2) determine how well the reaction of Agent B following action i can be predicted (expectation-certainty); (3) combine both values, e.g., by a weighted sum, in order to get the so-called action value. The action values are calculated for each possible action and are then used to select an action: the larger an action value, the more probably the corresponding action is selected. The impact of randomness can easily be varied in our model. The expectations are modeled explicitly by simple artificial neural networks. See (Dittrich, Kron, Kuck, and Banzhaf 2002) for a detailed model description.
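The selection scheme (1)-(3) can be sketched as follows. The published model stores expectations in small artificial neural networks; in this hypothetical sketch, plain frequency counts stand in for them, and the concrete EE and EC formulas are our own simplifications, so the code only illustrates the control flow of the dyadic interaction, not the actual learning dynamics.

```python
import random

N = 4        # number of distinct messages (numbers on the sign)
ALPHA = 0.5  # weight of expectation-expectation vs. expectation-certainty

class Agent:
    def __init__(self):
        # counts[i][j]: how often the other agent answered j to my message i
        # (initialized to 1 to avoid zero weights -- an assumption).
        self.counts = [[1] * N for _ in range(N)]

    def expectation_expectation(self, i):
        # Toy EE: the more often message i occurred in our shared history,
        # the more I assume the other expects it from me.
        total = sum(sum(row) for row in self.counts)
        return sum(self.counts[i]) / total

    def expectation_certainty(self, i):
        # Toy EC: peakedness of the empirical answer distribution after i.
        row = self.counts[i]
        return max(row) / sum(row)

    def select(self):
        # (3) combine both motives into an action value per message, then
        # draw a message with probability proportional to its value.
        values = [ALPHA * self.expectation_expectation(i)
                  + (1 - ALPHA) * self.expectation_certainty(i)
                  for i in range(N)]
        return random.choices(range(N), weights=values)[0]

    def observe(self, own_msg, answer):
        self.counts[own_msg][answer] += 1

ego, alter = Agent(), Agent()
msg = ego.select()                # Ego starts with a tentative first move
for _ in range(500):
    reply = alter.select()        # Alter answers in turn
    ego.observe(msg, reply)       # Ego updates his expectations
    msg = ego.select()
    alter.observe(reply, msg)     # Alter updates his expectations
```

Whether such a history settles into stable mutual expectations ("order") depends on the learning rule and the influence of randomness; this toy replaces both with the simplest choices that keep the loop runnable.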
Here, we only summarize the main results. We have shown that, loosely speaking, Luhmann was right for the dyadic situation: in nearly all of our simulations, across a large number of different parameter settings, order appeared. This is in a way trivial, but we think that it is important to show how this order appears and on which factors it depends. Our simulations suggest that the action selection mechanism (influence of randomness) and the type of the memory model are the most influential factors. But what happens in the generalized situation of many agents? Does the model scale up? Do we still get order? Here we are able to show that order does not necessarily appear, and especially not as easily as in the dyadic situation. In order to make the model scalable (that is, such that order still appears when the number of agents is increased), we had to change the mechanism according to which an agent builds up his expectation-expectation (namely, he has to expect that the others expect from him what the average agent is doing), and we had to introduce observations, so that an agent does not only learn from his own interactions but additionally observes the interactions of other agents. In this case we achieved "system level" order with an increasing number of agents. The third important result of the "Luhmann model" is that it exhibits the transition from an actor-oriented view to a system-level view, as probably envisioned by Luhmann. See (Dittrich, Kron, Kuck, and Banzhaf 2002) for more details.

Acknowledgement

We are grateful to Wolfgang Banzhaf, Gudrun Hilles, Christian Kuck, Uwe Schimank, and André Skusa. The project is funded by the German Research Foundation (DFG), grants Ba 1042/7-1 and Schi 553/1-1.

References

Bourdieu, P. (1987). What makes a social class? on the theoretical and practical existence of groups. Berkeley Journal of Sociology 22, 1–17.

Dittrich, P., T. Kron, C. Kuck, and W. Banzhaf (2001). Iterated mutual observation with genetic programming. Sozionik Aktuell 2. http://www.sozionik-aktuell.de.

Dittrich, P., T. Kron, C. Kuck, and W. Banzhaf (2002). On the formation of social order – modeling the problem of double and multi contingency following Luhmann. JASSS (in preparation).

Durkheim, E. (1893). De la division du travail social. Paris (1968).

Esser, H. (2000). Soziologie. Spezielle Grundlagen. Band 3. Frankfurt/Main: Campus.

Gans, G., M. Jarke, S. Kethers, G. Lakemeyer, L. Ellrich, C. Funken, and M. Meister (2001). Towards (dis)trust-based simulations of agent networks. In Proceedings of the 4th Workshop on Deception, Fraud, and Trust in Agent Societies, Montreal, pp. 49–60.

Hobbes, T. (1651). Leviathan. In W. Molesworth (Ed.), Collected English Works of Thomas Hobbes, No 3, (1966), Aalen.

Lepperhoff, N. (2000). Dreamscape: Simulation of the emergence of norms out of the state of nature using a computer-based rational choice model. Zeitschrift für Soziologie 29 (6), 463–.

Lomborg, B. (1996). Nucleus and shield: The evolution of social structure in the iterated prisoner’s dilemma. American Sociological Review 61 (2), 278–307.

Luhmann, N. (1984). Soziale Systeme. Frankfurt a.M.: Suhrkamp.

Maturana, H. R. and F. J. Varela (1980). Autopoiesis and Cognition. Dordrecht: Reidel.

Müller, H. J., T. Malsch, and I. Schulz-Schaeffer (1998). Socionics: Introduction and potential. Journal of Artificial Societies and Social Simulation 1 (3).

Parsons, T. (1937). The structure of social action. New York, NY.

Parsons, T. (1968). Interaction. In D. L. Sills (Ed.), International Encyclopedia of the Social Sciences, Volume 7, London, New York, pp. 429–441.

Parsons, T. (1971). The system of modern society. Englewood Cliffs.

Petri, C. A. (1962). Kommunikation mit Automaten. Ph. D. thesis, University of Bonn, Bonn.

Schimank, U. (2000). Handeln und Strukturen. Einführung in die akteurtheoretische Soziologie. Weinheim: Juventa.

Smith, A. (1776). The wealth of nations. New York, NY (1937).

Valk, R. (1998). Petri nets as token objects - an introduction to elementary object nets. In J. Desel and M. Silva (Eds.), Lecture Notes in Computer Science 1420, Berlin, pp. 1–25. Springer.

Varela, F. J., H. R. Maturana, and R. Uribe (1974). Autopoiesis: The organization of living systems. BioSystems 5 (4), 187–196.
