TrustGuard: Countering Vulnerabilities in Reputation Management for Decentralized Overlay Networks

Mudhakar Srivatsa, Li Xiong, Ling Liu
College of Computing, Georgia Institute of Technology
{mudhakar, lxiong, [email protected]}

ABSTRACT

Reputation systems have been popular in estimating the trustworthiness and predicting the future behavior of nodes in a large-scale distributed system where nodes may transact with one another without prior knowledge or experience. One of the fundamental challenges in distributed reputation management is to understand vulnerabilities and develop mechanisms that can minimize the potential damage to a system by malicious nodes. In this paper, we identify three vulnerabilities that are detrimental to decentralized reputation management and propose TrustGuard, a safeguard framework for providing a highly dependable and yet efficient reputation system. First, we provide a dependable trust model and a set of formal methods to handle strategic malicious nodes that continuously change their behavior to gain unfair advantages in the system. Second, a transaction-based reputation system must cope with the vulnerability that malicious nodes may misuse the system by flooding feedbacks with fake transactions. Third, but not least, we identify the importance of filtering out dishonest feedbacks when computing the reputation-based trust of a node, including the feedbacks filed by malicious nodes through collusion. Our experiments show that, compared with existing reputation systems, our framework is highly dependable and effective in countering malicious nodes with respect to strategic oscillating behavior, flooding of malevolent feedbacks through fake transactions, and dishonest feedbacks.

Categories and Subject Descriptors

C.2.4 [Distributed Systems]: Distributed Applications; C.4 [Performance of Systems]: Security, Reliability – Reputation Management, Overlay Networks

General Terms

Security, Performance, Reliability

Copyright is held by the International World Wide Web Conference Committee (IW3C2). Distribution of these papers is limited to classroom use, and personal use by others.
WWW 2005, May 10-14, 2005, Chiba, Japan.
ACM 1-59593-046-9/05/0005.
This research is partially supported by NSF CNS CCR, NSF ITR, DoE SciDAC, a CERCS Research Grant, an IBM Faculty Award, an IBM SUR Grant, and an HP Equipment Grant.

1. INTRODUCTION

A variety of electronic markets and online communities have reputation systems built in, such as eBay, Amazon, Yahoo! Auction, Edeal, Slashdot, and Entrepreneur. Recent works [4, 1, 3, 11, 19] suggested reputation-based trust systems as an effective way for nodes to identify and avoid malicious nodes in order to minimize the threat and protect the system from possible misuse and abuse by malicious nodes in decentralized overlay networks. Such systems typically assign each node a trust value based on the transactions it has performed with others and the feedbacks it has received. For example, XRep [4] provides a protocol complementing the current Gnutella protocol by allowing peers to keep track of and share information about the reputation of other peers and resources. EigenTrust [11] presents an algorithm similar to PageRank [15] that computes a trust value by assuming trust is transitive, and demonstrated its benefits in addressing fake file downloads in a peer-to-peer file sharing network.

However, few of the reputation management works so far have focused on the vulnerabilities of a reputation system itself. One of the detrimental vulnerabilities is that a malicious node may strategically alter its behavior in a way that benefits itself, such as starting to behave maliciously after it attains a high reputation. Another widely recognized vulnerability is the shilling attack [12], where malicious nodes submit dishonest feedback and collude with each other to boost their own ratings or bad-mouth non-malicious nodes. Last, but not least, malicious nodes can flood numerous fake feedbacks through fake transactions in a transaction-based feedback system.

With these issues in mind, we present TrustGuard, a highly dependable reputation-based trust building framework. The paper has a number of unique contributions. First, we introduce a highly dependable trust model to effectively handle strategic oscillations by malicious nodes (Section 3). Second, we propose a feedback admission control mechanism to ensure that only transactions with secure proofs can be used to file feedbacks (Section 4). Third, we propose feedback credibility based algorithms for effectively filtering out dishonest feedbacks (Section 5). We also present a set of simulation-based experiments, showing the effectiveness of the TrustGuard approach in guarding against each of the above vulnerabilities with minimal overhead. We conclude the paper with a brief overview of the related work (Section 7) and a conclusion (Section 8).

2. TRUSTGUARD: AN OVERVIEW

2.1 System Architecture

We first present a high-level overview of the TrustGuard framework. Figure 1 shows a sketch of the decentralized architecture of the dependable reputation management system. The callout shows that each node has a transaction manager, a trust evaluation engine, and a feedback data storage service.

Figure 1: TrustGuard's Architecture

Whenever a node n wants to transact with another node m, it calls the Trust Evaluation Engine to perform a trust evaluation of node m. It collects feedback about node m from the network through the overlay protocol and aggregates it into a trust value. Such computation is guarded by the strategic oscillation guard and the dishonest feedback filters. The Transaction Manager consists of four components. The trust-based node selection component uses the trust value output by the trust evaluation engine to make trust decisions before calling the transaction execution component. Before performing a transaction, the transaction proof exchange component is responsible for generating and exchanging transaction proofs. Once the transaction is completed, the feedbacks are manually entered by the transacting users. The transacting nodes then route these feedbacks to designated nodes on the overlay network for storage through a decentralized overlay protocol (e.g., a DHT-based protocol). The designated nodes then invoke their data storage service and admit a feedback only if it passes the feedback admission control, where fake transactions are detected. The feedback storage service is also responsible for securely storing reputation and trust data on the overlay network, including maintaining replicas of feedbacks and trust values. We build the TrustGuard storage service on top of PeerTrust [19]. Although we implement the TrustGuard framework using a decentralized [...] be identified through some digital certification based mechanism. Readers may refer to [2, 18, 7] for a detailed discussion of security issues in overlay networks.

2.2 Problem Statement and Solution Approach

The TrustGuard framework is equipped with several important safeguard components. In the rest of the paper, we focus on the following three types of vulnerabilities, analyze the potential threats, and describe countermeasures against such vulnerabilities using TrustGuard.

Strategic Oscillation Guard. Most existing reputation systems, such as eBay's, use a combination of average feedbacks and the number of transactions performed by a node as indicators of its trust value. Our experiments show that using a simple average does not guard the reputation system against oscillating behavior or dishonest feedbacks. For example, a bad node may behave non-maliciously until it attains a good reputation (reflected in its trust value) and then behave maliciously. Or it could oscillate between building and milking reputation. A dependable reputation system should be able to penalize malicious nodes for such dynamic and strategic behavioral changes. In TrustGuard, we promote the incorporation of the reputation history and behavior fluctuations of nodes into the estimation of their trustworthiness. We use adaptive parameters to allow different weighting functions to be applied to current reputation, reputation history, and reputation fluctuations.

Fake Transaction Detection. In a typical transaction-based feedback system, after each transaction, the two participating nodes have an opportunity to submit feedbacks about each other. This brings two vulnerabilities. First, a malicious node may flood numerous ratings of another node with fake transactions. Second, a malicious node may submit dishonest feedback about a transaction. A dependable trust model should be equipped with mechanisms to handle malicious manipulation of feedbacks, to guard the system against such fake transactions, and to differentiate dishonest feedbacks from honest ones. In the TrustGuard approach, we propose to bind a feedback to a transaction through transaction proofs. In other words, a feedback between nodes n and m on a given transaction is stored if and only if n and m indeed transacted with each other.

Dishonest Feedback Filter. While the fake transaction detection guarantees that a feedback is associated with a real transaction, a malicious node may submit dishonest feedbacks in order to boost the ratings of other malicious nodes or bad-mouth non-malicious nodes. The situation is made much worse when a group of malicious nodes makes collusive attempts to manipulate the ratings. In this paper, we build a dishonest feedback filter to differentiate dishonest feedbacks from honest ones. The filter essentially assigns
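To make the idea behind the strategic oscillation guard concrete: the actual trust metric and its adaptive weighting parameters are defined in Section 3. The sketch below only illustrates the principle of combining current reputation, reputation history, and a fluctuation penalty; the function name, the fixed weights, and the history and fluctuation estimators here are illustrative assumptions, not the paper's formulas.

```python
def guarded_trust(current, history, alpha=0.5, beta=0.3, gamma=0.2):
    """Illustrative oscillation-resistant trust estimate (not the
    paper's metric).

    current : most recent reputation value in [0, 1]
    history : list of past reputation values, oldest first
    alpha, beta, gamma : weights on current reputation, reputation
        history, and reputation fluctuation; adaptive in TrustGuard,
        fixed here for illustration.
    """
    if not history:
        return alpha * current
    hist_avg = sum(history) / len(history)
    # Penalize behavior fluctuation: a node that milks a previously
    # built-up reputation shows a large drop from its historical average.
    fluctuation = max(0.0, hist_avg - current)
    return alpha * current + beta * hist_avg - gamma * fluctuation
```

Under this weighting, a node that built history [0.9, 0.9, 0.9] and then drops to 0.2 is penalized below what a simple average of its ratings would suggest, which is the behavior the oscillation guard is after.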
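The fake transaction detection of Section 2.2 binds a feedback to a transaction through exchanged transaction proofs, so that a feedback from n about m is admitted only if n and m indeed transacted. The paper assumes a digital-certification-based mechanism; purely as a stand-in, the sketch below uses an HMAC over the transaction descriptor as the "proof", and the function names and feedback fields are our own illustrative choices.

```python
import hashlib
import hmac

def make_proof(secret: bytes, n: str, m: str, txn_id: str) -> str:
    """Produce a transaction proof binding nodes n and m to txn_id.
    HMAC stands in for the digital-signature scheme the paper assumes."""
    msg = f"{n}|{m}|{txn_id}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def admit_feedback(secret: bytes, feedback: dict) -> bool:
    """Feedback admission control sketch: store a feedback from
    feedback['n'] about feedback['m'] only if it carries a valid
    proof that the two nodes transacted."""
    expected = make_proof(secret, feedback["n"], feedback["m"],
                          feedback["txn_id"])
    return hmac.compare_digest(expected, feedback["proof"])
```

A feedback flooded without a real transaction has no valid proof, so the designated storage node's admission control rejects it before it ever influences a trust value.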
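Section 2.1 routes feedbacks to designated nodes on the overlay for storage through a DHT-style protocol, with replicas maintained for dependability. The placement sketch below conveys the idea of deterministically mapping the subject of a feedback to its storage nodes; the hashing scheme is an illustrative assumption, not the trust data location scheme TrustGuard inherits from PeerTrust [19].

```python
import hashlib

def designated_nodes(subject_id: str, overlay_nodes: list, replicas: int = 3):
    """Map the node a feedback is about to the overlay nodes that
    store it, plus replicas, mimicking DHT-style placement.

    subject_id    : identifier of the node the feedback concerns
    overlay_nodes : current membership list of the overlay
    replicas      : number of copies to maintain (illustrative default)
    """
    # Hash the subject id onto the ring of overlay nodes; any node can
    # recompute this, so readers and writers agree on the placement.
    h = int(hashlib.sha1(subject_id.encode()).hexdigest(), 16)
    start = h % len(overlay_nodes)
    return [overlay_nodes[(start + i) % len(overlay_nodes)]
            for i in range(min(replicas, len(overlay_nodes)))]
```

Because the mapping is deterministic, the transacting nodes, the designated storage nodes, and later readers of the trust data all independently arrive at the same placement without coordination.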
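The dishonest feedback filter described in Section 2.2 is developed in Section 5 using feedback credibility based algorithms. As a rough sketch of credibility-weighted aggregation only: how each rater's credibility factor is actually derived is the subject of Section 5, and the function below, its signature, and its inputs are illustrative assumptions.

```python
def filtered_rating(feedbacks, credibility):
    """Credibility-weighted aggregation sketch (not the paper's
    algorithm).

    feedbacks   : list of (rater_id, score) pairs about one node
    credibility : dict mapping rater_id to a weight in [0, 1];
        unknown raters default to zero weight
    """
    num = sum(credibility.get(r, 0.0) * score for r, score in feedbacks)
    den = sum(credibility.get(r, 0.0) for r, _ in feedbacks)
    # With no credible feedback, fall back to a neutral zero rating.
    return num / den if den > 0 else 0.0
```

The point of the weighting is that a clique of colluding raters with low credibility contributes little to the aggregate, so boosting and bad-mouthing are both damped.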