2007 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology - Workshops

An Improvement of Coleman Model of Trust: Using FNHMS to Manage Reputation in Multi-Agent Systems

Guangquan Xu 1, Zhiyong Feng 1, Gang Wang 2 and Huabei Wu 1
1 School of Computer Science and Technology, Tianjin University, 92 Weijin Road, Nankai District, Tianjin 300072, PR of China
2 28357 Research Institute of China Aerospace Science & Industry Corp., 69 Huangwei Road, Tianjin 300072, PR of China
E-mail: [email protected], [email protected], [email protected], [email protected]

Abstract

In his classical model of trust, Coleman argues that the decision of an actor to trust or not is a function of the expected gain and loss involved. Here we improve the Coleman model of trust. We believe that only if an agent is in the trustworthy group can the other agents trust it. In this paper we mainly devote ourselves to the management of agents according to their reputation: we take advantage of FNHMS (Fuzzy Non-Homogeneous Markov System) to model the agent system, defining three population parameters and two basic parameters from which the transition probabilities matrix is deduced, so that lastly we can simulate the change of the scale of trustworthy agents and obtain the whole distribution of agents over their reputation. Finally, our experimental results show that the theoretical results hold and that our improvement of the Coleman model of trust is fit and reasonable.

1. Introduction

Since the definition of trust and reputation has been addressed by many researchers, here we only introduce some important concluding remarks. As sociologist Diego Gambetta [1] says: “Trust is one of the most important social concepts that helps human agents to cope with their social environment and is present in all human interaction”. In fact, without trust (in the other agents, in the organization, in the infrastructures, etc.) there is no cooperation and ultimately there is no society.

Yet, we do not have in the cognitive and social sciences a shared or dominant, clear and convincing notion of trust. Every author working on trust provides his/her own definition, frequently not really general but aimed at being appropriate for a specific domain (commerce, politics, technology, etc.). Also, definitions aimed at being general (with some cross-domain validity) are usually either incomplete or redundant: they miss, or simply leave implicit and take for presupposed, important components, or they include something merely accidental and domain specific. Not only is there no shared and dominant definition, but there is even less a clear model of trust as mental attitude, as decision and action, and as a social relationship. What is needed is a general and principled theory of trust, of its cognitive and affective components, and of its social functions [2].

Generally, trust and reputation are widely regarded as concepts belonging to social science and cognitive science; therefore we had better study them from the perspective of the social and cognitive disciplines.

Section 2 presents the improvement of the Coleman model. Section 3 considers the management of the reputation of agents, in which a Fuzzy Non-Homogeneous Markov System is used to simulate the trustworthy group. Section 4 runs experiments to verify the results obtained in the previous parts. Finally, Section 5 presents the conclusions and future work.

2. The Methodology of the Improvement of the Coleman Model of Trust

Strategies and devices for trust building in MAS and virtual societies should take into account the fact that social trust is a very dynamic phenomenon, both in the mind of the agents and in the society; not only because it evolves in time and has a history, that is, A's trust in B depends on A's previous experience and learning with B itself or with other (similar) entities, but also because trust is influenced by trust in several rather complex ways [3].

We have found that the Coleman model can be altered through its parameters, including L, G (B − C) and p [3]. We will discuss these options below.

2.1. Changing trustor's perception of p

Classic probability theory tells us that the probability of receiving the gain prospected by the trustee is determined by the ratio of the amount of relevant favorable information to the total amount of relevant information held by the trustor:

$$p = \frac{\text{amount of relevant favorable information}}{\text{amount of relevant information}}$$

This implies that alter has at least three options that will increase the likelihood of becoming trusted by ego (a small numerical illustration is given after the list).
1. Add favorable information to the above equation. Here we must note that 'new' favorable information does not have to be new at all; e.g., making negative information less relevant reduces the denominator in the equation and thus increases p.
2. Redefine negative information into positive information. This strategy simultaneously redefines the trustor's perceived cost into a benefit.
3. Make the relevant favorable information more "realistic".
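As a purely illustrative arithmetic example (the numbers are assumed, not taken from the paper): suppose the trustor holds 10 relevant items of information about the trustee, 6 of which are favorable, so p = 6/10 = 0.6. Reclassifying one unfavorable item as favorable (option 2) yields p = 7/10 = 0.7, while getting one unfavorable item dismissed as irrelevant (the note under option 1) shrinks the denominator and yields p = 6/9 ≈ 0.67; either move raises the trustor's perceived chance of gain.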

2.2. Changing trustor's perception of L and G

Obviously, an evident strategy for the trustee is to provide the potential trustor with new or stronger insight that increases the trustor's perception of G or decreases the trustor's perception of L. Second, the trustee can redefine costs into benefits. For the trustor, the above-mentioned options for increasing the probability of giving trust to the trustee can now be turned into beliefs used in the estimation of the trustee's reputation. The next section describes the Fuzzy Non-Homogeneous Markov System and then uses it to simulate the agent system so as to manage the reputation of each agent.

3. Managing the Reputation of Agents

In this section we provide an application of the theory of FNHMS [4] to the socio-cognitive concepts of trust and reputation. In order to improve the Coleman model of trust, our application falls into the area of reputation management in agent systems, and it deals with simulating the change that happens in the development of the concept of 'trust'.

As we know, the transition probabilities matrix and the characteristics of the system determine each other, so the only thing left for us to do is to estimate the transition probabilities matrix. We believe that only a trustworthy agent can be trusted, and the management of trustworthy agents can be realized as follows: some conclusions derived from the Coleman model of trust are used as reasoning rules, FNHMS is used to simulate the agent group, and lastly we exercise an FIS (Fuzzy Inference System) to obtain the population distribution, through which we can classify agents with different reputations.

First, we need to define the set of states. Here we classify (for simplicity's reason) agents into different states according to their reputation, i.e. $S = \{1, 2, 3, 4, 5\}$, where 1 represents the least reputation that an agent has at time t and 2 the second least reputation; the former two states are both negative impressions in the trustor's eyes, while 3 represents a balanced impression and 4, 5 are in the opposite direction to 2, 1. Then we have the transition matrix as follows:

$$Q = \begin{bmatrix} q_{11} & q_{12} & q_{13} & q_{14} & q_{15} \\ q_{21} & q_{22} & q_{23} & q_{24} & q_{25} \\ q_{31} & q_{32} & q_{33} & q_{34} & q_{35} \\ q_{41} & q_{42} & q_{43} & q_{44} & q_{45} \\ q_{51} & q_{52} & q_{53} & q_{54} & q_{55} \end{bmatrix}$$

This section will study the dynamic evolution of the system of 100 agents and extract useful information concerning the transitions between states (a small illustrative sketch of such a one-step evolution is given at the end of this setup). We assume (for simplicity's reasons) that the factors (the population parameters) that influence the transition probabilities are the following:
1. p = chance of receiving gain (pp1). We assume that the chance of receiving gain takes values from 0 to 1.
2. G = potential gain (pp2), G > 0.
3. L = potential loss (pp3), L > 0.
The above three population parameters depend on two other basic parameters: the agent's communicability (bp1) and the available positive information (bp2).

Next we define fuzzy partitions $A^{(1)}$ and $A^{(2)}$ for the two basic parameters and $B^{(i)}, i = 1, 2, \ldots, 25$, for the outputs $p_{ij}$ ($1 \le i, j \le 5$).
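Before turning to the fuzzy rules, the following minimal Python sketch (purely illustrative: the transition probabilities and the time horizon are assumed numbers, not the ones inferred by the FNHMS) shows what a transition matrix of the above form does operationally: the population vector over the five reputation states is pushed forward one step at a time by N(t+1) = N(t)Q.

```python
import numpy as np

# Assumed, row-stochastic transition matrix over the five reputation states
# (each row sums to 1); the paper derives these entries by fuzzy inference.
Q = np.array([
    [0.6, 0.3, 0.1, 0.0, 0.0],
    [0.2, 0.4, 0.3, 0.1, 0.0],
    [0.0, 0.2, 0.4, 0.3, 0.1],
    [0.0, 0.0, 0.2, 0.5, 0.3],
    [0.0, 0.0, 0.1, 0.2, 0.7],
])

N = np.array([2, 8, 40, 30, 20], dtype=float)  # the initial structure N(0) used later in the paper
for t in range(50):                            # push the 100-agent population forward in time
    N = N @ Q
print(np.round(N, 2))                          # distribution of the agents over the five states
```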

Empirical knowledge tells us that positive information results in a growth of the transition probabilities from the lower-reputation states to the higher-reputation states. We then have reasoning rules such as the following:
1. When the agent's communicability (bp1) is big, then $p_{ij}$ ($5 \ge i > j \ge 1$) is small.
2. When the agent's communicability (bp1) is big, then $p_{ij}$ ($1 \le i < j \le 5$) is big.
3. When the agent's communicability (bp1) is small, then $p_{ij}$ ($1 \le i < j \le 5$) is small.
4. When the agent's communicability (bp1) is small, then $p_{ij}$ ($5 \ge i > j \ge 1$) is big, etc.

The grades of membership of the two basic parameters and of the output are given in Tables 1-3.

Table 1. The grades of membership of bp1
bp1      -1    -0.5   0     0.5   1
LOW       1     0.5   0     0     0
MED       0     0.25  1     0.25  0
HIGH      0     0     0     0.5   1

Table 2. The grades of membership of bp2
bp2      -1    -0.5   0     0.5   1
LITTLE    1     0.5   0     0     0
AVER      0     0.25  1     0.25  0
PLENTY    0     0     0     0.5   1

Table 3. The grades of membership of y
y         0.2   0.4   0.6   0.8   1
LOW       1     0.5   0     0     0
AVER      0     0.25  1     0.25  0
HIGH      0     0     0     0.5   1

Then we can get $Q_1 = (LOW \wedge LITTLE) \bullet LOW$, that is

$$Q_1 = \begin{bmatrix} 1 \\ 0.5 \\ 0 \\ 0 \\ 0 \end{bmatrix} \bullet \begin{bmatrix} 1 & 0.5 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0.5 & 0 & 0 & 0 \\ 0.5 & 0.25 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

Following the same mechanism, we can get $Q_2, \ldots, Q_9$.
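The composition just shown can be checked mechanically. The following Python sketch (numpy assumed) reproduces $Q_1$ under one common reading of the notation: '∧' as the element-wise minimum of the antecedent membership vectors from Tables 1-2 and '•' as the outer product with the consequent membership vector from Table 3.

```python
import numpy as np

# Membership grades over the five support points (Tables 1-3):
LOW_bp1    = np.array([1, 0.5, 0, 0, 0])   # LOW row of Table 1
LITTLE_bp2 = np.array([1, 0.5, 0, 0, 0])   # LITTLE row of Table 2
LOW_y      = np.array([1, 0.5, 0, 0, 0])   # LOW row of Table 3

# Q1 = (LOW ∧ LITTLE) • LOW, reading ∧ as element-wise min and • as the outer product.
Q1 = np.outer(np.minimum(LOW_bp1, LITTLE_bp2), LOW_y)
print(Q1)
# [[1.   0.5  0.   0.   0.  ]
#  [0.5  0.25 0.   0.   0.  ]
#  [0.   0.   0.   0.   0.  ]
#  [0.   0.   0.   0.   0.  ]
#  [0.   0.   0.   0.   0.  ]]   <- matches the Q1 given in the text
```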

Next, we assume that bp1 = {−1, −0.5, 0, 0.5, 1}; bp2 = {−1, −0.5, 0, 0.5, 1}; y = {0.2, 0.4, 0.6, 0.8, 1}. Similarly, we can get the population parameters pp2 and pp3. Then we assume the initial population structure is

$$N(0) = [2, 8, 40, 30, 20].$$

As far as the final structure is concerned, we can use the conclusion obtained in [4], i.e.

$$\lim_{t \to \infty} N(t) = N(\infty) = T\, e \left[ I - \left( I - \sum_{i=1}^{k} w_i Q_i \right) \left( I - \sum_{i=1}^{k} w_i Q_i \right)^{\#} \right],$$

where $(\cdot)^{\#}$ is the group inverse of the respective matrix [5]; $e = [1\ 1\ 1\ 1\ 1]$ and I is the 5×5 identity matrix; T = 100 is the total number of agents; $Q_i = [q_{ij}] = f_i(pp_1, pp_2, pp_3)$ ($1 \le j \le 5$) and $w_i$ is the weight of rule i ($1 \le i \le 9$); here we assume (for simplicity's reason) that $w_i = 1/9$ for $1 \le i \le 9$. Each rule corresponds to a matrix $Q_i$, and $k = d_1 \times d_2 = 3 \times 3 = 9$. We then obtain

$$N(\infty) = [100\ \ 100\ \ 100\ \ 100\ \ 100] \times \begin{bmatrix} 0.21875 & 0.25 & -0.0625 & -0.125 & 0 \\ -0.2578125 & 0 & -0.0390625 & 0.09375 & -0.03125 \\ 0.12275 & -0.125 & 0.09375 & 0.225 & -0.125 \\ -0.0390625 & 0.0335 & -0.078125 & 0.19375 & 0.25 \\ -0.030125 & 0 & 0.25 & 0.125 & 0.0566875 \end{bmatrix} = [1.45\ \ 15.85\ \ 16.40625\ \ 51.25\ \ 15.04375].$$

That is to say, when time $t \to \infty$, about 67% of the agents will be in states 4 and 5, i.e. most agents have high reputation values, which is reasonable according to our empirical knowledge.
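The key computational ingredient in the closed form above is the group generalized inverse and its role in finite Markov chains [5]. The Python sketch below illustrates that ingredient only, on an assumed row-stochastic 5-state transition matrix (not the fuzzy-inferred matrices of this section), using the classical identity that the Cesàro limit of $P^t$ equals $I - (I - P)(I - P)^{\#}$; the group inverse is computed via a full-rank factorization.

```python
import numpy as np

def group_inverse(A, tol=1e-12):
    """Group inverse A# via a full-rank factorization A = F G:
    if G F is nonsingular, then A# = F (G F)^{-2} G."""
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol))            # numerical rank of A
    F = U[:, :r] * s[:r]                # n x r factor, full column rank
    G = Vt[:r, :]                       # r x n factor, full row rank
    M_inv = np.linalg.inv(G @ F)        # (G F)^{-1}; it exists when A# exists
    return F @ M_inv @ M_inv @ G

# Assumed, illustrative row-stochastic transition matrix over the five
# reputation states, with a mild drift toward the higher states.
P = np.array([
    [0.5, 0.4, 0.1, 0.0, 0.0],
    [0.1, 0.4, 0.4, 0.1, 0.0],
    [0.0, 0.1, 0.4, 0.4, 0.1],
    [0.0, 0.0, 0.1, 0.5, 0.4],
    [0.0, 0.0, 0.0, 0.2, 0.8],
])

I5 = np.eye(5)
# Cesaro limit of P^t: I - (I - P)(I - P)^#  (Meyer [5]).
limit = I5 - (I5 - P) @ group_inverse(I5 - P)

N0 = np.array([2, 8, 40, 30, 20], dtype=float)   # the initial structure N(0)
N_inf = N0 @ limit                                # limiting structure of this closed system
print(np.round(N_inf, 2))

# For this regular chain the Cesaro limit coincides with lim P^t.
assert np.allclose(limit, np.linalg.matrix_power(P, 2000), atol=1e-6)
```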

4. Experimental Results

Our experiments are performed on an AMD 2400+ (1.67 GHz) machine with 512 MB of RAM running Windows XP SP2, with NetLogo as the simulation platform. Our experiments involve 100 agents with random initial reputations, and they are built upon the party model, which has been verified by many researchers. In the party model, we modify the tolerance to be the index of interest:

$$Index = \frac{pG}{pG + (1 - p)L}.$$

Once the index goes out of the scope of 0.5, an agent moves to another group, and agents keep moving until most of them arrive at appropriate positions; this means that we only need to set the index to 0.5.
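As a hypothetical sketch of this decision rule (the function names, the example numbers, and the reading of "out of the scope of 0.5" as the index dropping below the threshold are our assumptions, not taken from the paper):

```python
def index_of_interest(p: float, G: float, L: float) -> float:
    """Index = pG / (pG + (1 - p)L): the trustor's share of expected gain."""
    return p * G / (p * G + (1 - p) * L)

def should_move(p: float, G: float, L: float, threshold: float = 0.5) -> bool:
    # Tolerance set to 0.5 as in the experiments: an agent whose index falls
    # outside the acceptable range leaves its current group.
    return index_of_interest(p, G, L) < threshold

print(index_of_interest(0.6, 10.0, 5.0))   # 0.75 -> the agent stays
print(should_move(0.3, 5.0, 10.0))         # index ~ 0.18 -> True, the agent moves
```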

In this way, we obtain the experimental result N(∞) shown in Fig. 1.

Fig. 1. The final population structure of the agents in our experiments.

Obviously, from Fig. 1 we can read off N(∞) = [2, 17, 19, 46, 16], which is close to the theoretical result derived in Section 3. Furthermore, the red stands for the newly arrived agents in a group, while the blue stands for the original agents. Therefore our theoretical results are confirmed.

5. Conclusion and Future Work

In this paper we mainly devote ourselves to the management of agents according to their reputation; that is, we take advantage of a Fuzzy Non-Homogeneous Markov System to model the agent system, defining three population parameters and two basic parameters from which the transition probabilities matrix is deduced, so that lastly we can simulate the change of the scale of trustworthy agents and obtain the whole distribution of agents over their reputation. Obviously this is possible because trust and reputation are dynamic phenomena, as the preceding sections have discussed. Finally, our experimental results show that the theoretical results hold, and that our improvement of the Coleman model of trust is fit and reasonable.

There are still some important issues left for future work. One is that in our method symbolic knowledge must be supplied by the experts of the system, which is sometimes unavailable. Another stems from the Coleman model of trust itself. In effect, one may argue that Coleman's model of trust may not be equally applicable for modeling the placing of trust in cultures dissimilar to North American ones. As Henrich et al. point out in the related work [6, 7], not all cooperation patterns are rational, and therefore it is necessary to take into account the conversion of cooperation between different cultures. Lastly, our work has not compared the results with game-theoretical results, although game theory is traditionally regarded as the main computational framework.

6. Acknowledgements

The work has been supported by the Tianjin project "Research on Development Platform for Enterprise Information Integration", 04310891R.

7. References

[1] Gambetta, D. "Can We Trust Trust?" In D. Gambetta (ed.), Trust: Making and Breaking Cooperative Relationships, pp. 213-237. Oxford, UK: Basil Blackwell, 1988.
[2] Coleman, J. S. Foundations of Social Theory. Cambridge: Harvard University Press, 1990.
[3] Falcone, R. & Castelfranchi, C. "The socio-cognitive dynamics of trust: does trust create trust?" In Falcone, R., Singh, M. & Tan, Y. (eds.), Trust in Cyber-societies: Integrating the Human and Artificial Perspectives, LNAI 2246, Springer, pp. 55-72, 2001.
[4] Symeonaki, M.A., Stamou, G.B., Tzafestas, S.G. "Fuzzy non-homogeneous Markov systems", Applied Intelligence 17 (2), pp. 203-214, 2002.
[5] Meyer, C.D. "The role of the group generalized inverse in the theory of finite Markov chains", SIAM Review, vol. 17, pp. 443-464, 1975.
[6] Henrich, J. and Boyd, R. "The Evolution of Conformist Transmission and the Emergence of Between-Group Differences", Evolution and Human Behavior, 19, pp. 215-242, 1998.
[7] Henrich, J. and Gil-White, F. "The Evolution of Prestige: freely conferred deference as a mechanism for enhancing the benefits of cultural transmission", Evolution and Human Behavior, 22 (3), pp. 165-196, 2001.
