Presented at the 19th International Workshop on Security Protocols, Cambridge, UK, March 28-30, 2011 (to appear in LNCS, Springer Verlag)

Towards a Theory of Trust in Networks of Humans and Computers

Virgil Gligor and Jeannette M. Wing
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213
[email protected], [email protected]

Abstract. We argue that a general theory of trust in networks of humans and computers must be built on both a theory of behavioral trust and a theory of computational trust. This argument is motivated by the increased participation of people in social networking, crowdsourcing, human computation, and socio-economic protocols, e.g., protocols modeled by trust and gift-exchange games [3, 10, 11], norms-establishing contracts [1], and scams [6, 35, 33]. User participation in these protocols relies primarily on trust, since on-line verification of protocol compliance is often impractical; e.g., verification can lead to undecidable problems, co-NP-complete test procedures, and user inconvenience. Trust is captured by participant preferences (i.e., risk and betrayal aversion) and beliefs in the trustworthiness of other protocol participants [11, 10]. Both preferences and beliefs can be enhanced whenever protocol non-compliance leads to punishment of untrustworthy participants [11, 23]; i.e., it seems natural that betrayal aversion can be decreased and belief in trustworthiness increased by properly defined punishment [1]. We argue that a general theory of trust should focus on the establishment of new trust relations where none were possible before. This focus would help create new economic opportunities by increasing the pool of usable services, removing cooperation barriers among users, and, at the very least, taking advantage of "network effects." Hence a new theory of trust would also help focus security research in areas that promote trust-enhancement infrastructures in human and computer networks.
Finally, we argue that a general theory of trust should mirror, to the largest possible extent, human expectations and mental models of trust without relying on false metaphors and analogies with the physical world.

1 Introduction

Consider this fundamental question: How can I, a human, trust the information I receive through the Internet? This question's relevance has grown with the advent of socially intelligent computing, which includes social networking, crowdsourcing, and human computation. Socially intelligent computing recognizes the increasing opportunities for humans to work with each other, relying on input from both humans and computers, in order to solve problems and make decisions. When we read a Wikipedia page, how can we trust its contents? We need to trust the person who wrote the page, the computer that hosts the page, the channel over which the message containing the page contents is sent to the reader, and finally the computer that receives that message. We seek a general theory of trust for networks of humans and computers. Our main idea is that for a general theory we need to build on a theory of behavioral trust to complement and reinforce a theory of computational trust. Behavioral trust defines trust relations among people and organizations; computational trust, among devices, computers, and networks. Toward building a general theory of trust by combining ideas from behavioral trust and computational trust, we moreover argue that there is new economic value to be gained, raising new opportunities for technological innovation.

The True State of Affairs. Toward a general theory of trust, let's review the state of the art in computer science and in the social and economic sciences; it is not as rosy as we would like.
Over the past three decades, research on trust in computer networks focused on specific properties, e.g., authentication and access-control trust, in traditional distributed systems and networks [4, 22, 13], mobile ad-hoc networks [12, 31, 25, 36, 5, 32, 38, 27, 26, 18], and applications [19, 21]. Lack of a formal theory of trust has had visible consequences: definitions of trust are often ad hoc, and trust relations among different network components and applications are hidden or unknown. Often the trust definitions underlying the design of secure protocols are misunderstood by both network designers and users, and lead to unforeseen attacks [30]. Similarly, despite a vast body of work on trust in the social sciences [16, 24], we do not have a formal theory of trust among groups of humans or among social and business organizations. Instead, trust is defined by example in different areas of economics, sociology, and psychology, and no generally accepted theory of trust exists to date. Hence, we have neither a formal theory of trust for computers nor one for humans; and we certainly do not have a formal theory of trust for networks of humans and computers. Yet it is quite clear that such a theory is needed in light of the complex interactions in networks of humans and computers in the Internet. This paper's main contributions to the computer security community are: (1) asking our opening question of trust where humans are as much a part of the system as computers; (2) introducing behavioral trust as a seed toward answering the question; and (3) arguing for the new economic value introduced by a general theory of trust based on the combination of behavioral trust and computational trust.

2 Impact of a Theory of Trust

We anticipate that a new theory of trust will have significant impact on several important areas of network economics, security, and usability.

New Economic Value. A new theory of trust should explain the establishment of new trust relations where none existed before.
The expansion in the kinds and numbers of trust relations in human and computer networks clearly helps create new economic opportunities and value. New trust relations increase the pool of usable services, remove cooperation barriers among users, and, at the very least, take advantage of "network effects." Cloud computing is the most obvious example of new kinds and numbers of trust relations: people trust companies, e.g., Google, Facebook, and Amazon, with all sorts of personal data, and moreover people trust these companies' computing infrastructure to store and manage their data. New trust relations also help increase competition among network service providers, which spurs innovation, productivity, expanded markets, and ultimately economic development.

New Focus for Security Research. Much of the past research on trust establishment focused on (formal) derivation of new trust relations from old ones, e.g., trusted third-party services, transitive trust relations, and delegation. In contrast with prior research, we seek a theory of trust that explains how to create new trust relations that are not derived from old ones and that create new opportunities for cooperation among users and among services. For example, it should be possible to establish private, pair-wise trust relations between two untrusting parties that do not necessarily share a common trusted service, such as eBay, which enables reputation systems to work, or a common social network, which might enable recommendation systems to work. While helpful in many cases, trusted third parties create additional complexity and uncertainty, and sometimes become an attractive attack target (e.g., Google). Instead, we seek network infrastructures that enable unmediated trust relations, which take us beyond the realm of various (re)interpretations of the end-to-end argument at the application level [7].
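To make the contrast concrete, the classical derived-trust approach can be sketched as a closure computation over an existing trust relation. This is a minimal illustration, not a construction from the paper; the entity names are hypothetical:

```python
# Sketch: deriving new trust relations from old ones via transitive
# closure, as in classical trust establishment (trusted third parties,
# transitive trust, delegation). Entity names are hypothetical.

def transitive_trust(direct):
    """Return the transitive closure of a direct-trust relation:
    if A trusts B and B trusts C, derive that A trusts C."""
    derived = set(direct)
    changed = True
    while changed:
        changed = False
        for a, b in list(derived):
            for c, d in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return derived

# Alice does not trust Bob directly; trust is derived through a third party.
direct = {("Alice", "ThirdParty"), ("ThirdParty", "Bob")}
derived = transitive_trust(direct)
print(("Alice", "Bob") in derived)  # True: derived from an old relation
```

The point of the contrast is that every relation this procedure yields is a function of relations that already existed; an unmediated trust relation between two untrusting parties is precisely what such a derivation cannot produce.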
In this paper, we argue that network infrastructures that support the establishment of behavioral trust, which lower risk and betrayal aversion between untrusting parties and increase beliefs in trustworthiness between these parties, will spur the establishment of unmediated trust relations, and as a consequence create new economic value.

Usable Models of Trust. A new theory of trust should be useful for casual users, not just for network and application-service designers. To be useful, such a theory must be easily understandable by designers and users alike. And to be understandable, a theory of trust has to mirror, to a large extent, human expectations and mental models of trust. For example, users understand how to separate and protect physical and financial assets in everyday activity. Similarly, they would understand and expect computer systems and networks to enable them to separate information assets, be they local system services or financial data held by bank servers. Furthermore, a new theory of trust must not create false metaphors and analogies with the physical world. The email trust model is an example of false expectations: the widespread user expectation that electronic mail would mirror the trust model of physical mail (e.g., authenticity, confidentiality, non-repudiation of receipt, delivery in bounded time) has misled many unsuspecting users into accepting spam, misleading ads, and malware. In contrast, the trust model of eBay follows a well-established, traditional human trust example: it establishes trust relations based on reputation, and to counter inevitable protocol non-compliance and trust failures, it uses insurance-based recovery mechanisms.

[Figure 1 depicts sender Alice and receiver Bob connected by secure, private, available channels and a social network, with a trusted path and penetration-resistant interfaces.]

Fig. 1.
Simple Communication Model: Entities and Channels

We begin with a simple communication model (Section 3), primarily to state our assumptions. In addressing our opening motivating question, we focus on notions from computational trust (Section 4), which lead us naturally to introduce notions from behavioral trust (Section 5). We explore in Section 6 the implications of behavioral trust for the creation of novel computing and institutional infrastructures.
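As a concrete illustration of the reputation-plus-insurance trust model attributed to eBay above, consider the following minimal sketch. The scoring rule, names, and coverage figure are illustrative assumptions, not eBay's actual mechanism:

```python
# Sketch of an eBay-style reputation mechanism with insurance-based
# recovery. The scoring rule and coverage figure are illustrative
# assumptions, not eBay's actual algorithm.

from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    positive: int = 0
    negative: int = 0

    def feedback(self, compliant: bool) -> None:
        # Each transaction outcome updates the public reputation record.
        if compliant:
            self.positive += 1
        else:
            self.negative += 1

    @property
    def reputation(self) -> float:
        # Fraction of compliant transactions; 0.0 with no history.
        total = self.positive + self.negative
        return self.positive / total if total else 0.0

def uninsured_loss(loss_cents: int, coverage_pct: int = 90) -> int:
    """Insurance-based recovery: the trusting party bears only the
    fraction of a loss that the insurance does not cover."""
    return loss_cents * (100 - coverage_pct) // 100

seller = Participant("seller42")
for outcome in (True, True, True, False):  # one protocol non-compliance
    seller.feedback(outcome)
print(seller.reputation)      # 0.75
print(uninsured_loss(10_000)) # 1000: cents borne by the buyer of a 10_000-cent loss
```

The design mirrors the two mechanisms named in the text: reputation lets a party form a belief in another's trustworthiness before transacting, while insurance caps the cost of inevitable trust failures after the fact.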