Peer-to-Peer Systems and the Grid

Jon Crowcroft, Tim Moreton, Ian Pratt, Andrew Twigg
University of Cambridge Computer Laboratory, JJ Thomson Avenue, Cambridge, UK
[email protected]

In this chapter, we survey recent work on peer-to-peer systems, and venture some opinions about its relevance for the Grid. We try to bring some historical perspective and structure to the area, and highlight techniques and open research issues in the peer-to-peer arena which may be relevant to Grid computing.

1 Introduction

Peer-to-peer systems are distributed Internet applications in which the resources of a large number of autonomous participants are harnessed in order to carry out the system's function. In many cases, peers form self-organising networks that are layered over the top of conventional Internet protocols and have no centralised structure.

The assumptions on which peer-to-peer computing has grown are wildly different from those underlying Grid computing. Peer-to-peer systems have evolved to support resource sharing in an environment characterised by users potentially numbering millions, most with homogeneous desktop systems and low-bandwidth, intermittent connections to the Internet. As such, the emphasis has been on global fault-tolerance and massive scalability. Grid systems, on the other hand, have arisen from collaboration between generally smaller, better-connected groups of users with a more diverse set of resources to share.

Though these differences have led to distinct sets of requirements and applications, the long-term objectives of peer-to-peer and Grid systems may not be disjoint. In the short term, research emerging from this field will be applicable to many of the challenges faced by the next-generation Open Grid Services Architecture (OGSA) [50], in areas as diverse as resource discovery, scalable load balancing of computation, and highly available storage and data distribution systems.

This chapter is arranged as follows.
After investigating the history of peer-to-peer computing and putting the topic in context, we look at the notion of peer-to-peer middleware along with several broad application areas: storage, computation, and searching. We proceed by investigating the relationship between peer-to-peer and Grid computing, and conclude by looking at possible future developments which build on peer-to-peer and Grid computing. Our view is that the future of peer-to-peer and Grid is exciting, but choices need to be made carefully to avoid the pitfalls of the past.

Figure 1: Comparing peer-to-peer and Grid computing at a high level

Peer-to-Peer
- Embodies the global sharing of resources for a specific task
- Homogeneous resources within a mutually-distrustful environment
- Often created for specialist uses (e.g. file sharing, number factorization, searching for alien life)

Grid
- Flexible, high-performance, high-availability access to all significant resources (software, supercomputers, sensor 'nets', colleagues, data archives)
- Heterogeneous, specialized resources within a trusted environment
- On-demand creation of general-purpose, powerful, virtual computing systems
- Computing as a utility rather than a commodity

2 A Brief History

The topic of peer-to-peer networking has divided research circles in two: on the one hand, there is the traditional distributed computing community, who tend to view the plethora of young technologies as upstarts with little regard for, or memory of, the past – we will see that there is evidence to support this view in some cases. On the other hand, there is an emergent community of people who regard the interest as an opportunity to revisit results from the past, with the chance of gaining widespread practical experience with very large scale distributed algorithms.

One of the oldest uses of the term "peer-to-peer computing" is in IBM's Systems Network Architecture documents on LU6.2 Transactions, over 25 years ago.
The term, which we shall use interchangeably with p2p, came to the fore very publicly with the rise and fall of Napster [29]. Although there are prior systems in this evolutionary phase of distributed computing (e.g. Eternity [4]), we choose to limit the scope of this survey to the period from "Napster 'til Now"1.

2.1 Beyond the Client-Server Model

The consensus seems to be that a peer-to-peer system can be contrasted with the traditional twenty-five or more year old client-server systems2. Client-server systems are asymmetric; we usually assume that the server is a much more powerful, better-connected machine. The server is distinguished as running over some longer period of time and looking after storage and computational resources for some number of clients. As such, the server emerges as a single bottleneck for performance and reliability. Server sites may make use of a number of techniques to mitigate these problems, such as replication, load balancing and request routing, so that one conceptual server is made up of many distinct machines. At some point along the evolution of this thinking, it is a natural step to include the clients' resources in the system – the mutual benefits in a large system are clear. The performance gap between desktop and server machines is narrowing, and the spread of broadband is dramatically improving end clients' connectivity. Thus peer-to-peer systems emerge out of client-server systems by removing the asymmetry in rôles: a client is also a server, and allows access to its resources by other systems. Clients, now really peers, contribute their own resources in exchange for their own use of the service.

1 i.e. 1998 – 2003.
2 It can be argued that transient Grid services and their orchestration blur this divide, and that the fragmentation and cross-integration of enterprises' infrastructures is one motivation for OGSA. The symmetry of the peer-to-peer model, however, contrasts more strongly.
Work (be it message passing, computation, storage, or searching) is partitioned in some (usually distributed) manner between all peers, so that each peer consumes its own resources on behalf of others (acting as a server), but may ask other peers to do the same for it (acting as a client). Just as in the real world, a fully cooperative model such as this may break down if peers are not provided with incentives to participate. We look into trust, reputation, and economically grounded approaches in Section 4.3.

A claim sometimes made about peer-to-peer systems is that they no longer have any distinguished node, and thus are highly fault-tolerant and have very good performance and scaling properties. We will see that this claim has some truth to it, although there are plenty of peer-to-peer systems that retain some level of distinguished node, and also plenty that have performance limitations. In fact, the fault-tolerance claims are hardly borne out at all in the early instantiations of the peer-to-peer movement: initial availability figures in Napster, Gnutella [37] and Freenet [12] do not compare favourably with even the most humble of web sites! However, second and later generation systems may indeed provide the claimed functionality and performance gains; Pastry [39], Chord [42] and CAN [35] show very promising results, and more recent work building applications and services over these systems shows great potential gains.

One can look at differences and similarities between classical client-server and modern peer-to-peer systems on another axis: statefulness. Despite successes with stateless servers, many Web servers offer wider functionality by using cookies, script-driven repositories and Web Services to keep state over various transactions with a client.
In a peer-to-peer system, since a peer rarely knows directly which node is able to fulfil its request, each keeps track of a soft-state set of neighbours (in some sense) in order to pass requests, messages or state around the network. While such use of soft state is a long-recognised technique of Grid computing infrastructure (e.g. [50]), Grid services themselves are inherently stateful during their (explicitly-managed) lifetimes.

Yet another viewpoint from which one can dissect these systems is that of the use of intermediaries. In the Web (and client-server file systems such as NFS and AFS) we use caches to improve average latency and to reduce network load, but typically these are arranged statically. We will see in peer-to-peer systems how a dynamic partitioning of work between cooperative peers allows excellent locality-oriented load balancing. Taking the example of caching, content distribution systems such as PAST [40] and Pasta [28] aim to distribute data in proportion to demand for it, on peers close in the network to that demand. Systems for Grid services may in time extend the idea to seamlessly and dynamically provisioning distributed computation close to a data source, in proportion to the time and parallelization demands of the job.

The classical distributed systems community would claim that many of these ideas were present in the early work on fault-tolerant systems in the 1970s. For example, the Xerox Network System's name service, Grapevine [27], included many of the same traits as the systems mentioned here. Other systems that could easily be construed as architecturally true peer-to-peer systems include Net News (NNTP is certainly not client-server) and the Web's inter-cache protocol, ICP. The Domain Name System also includes zone transfers and other mechanisms which are not part of its normal client-server resolver behaviour. IP itself is built out of a set of autonomous routers running a peer-to-peer protocol.
For example, the routing protocols OSPF and BGP are peer-to-peer rather than client-server, and were designed that way for exactly the reasons given above.
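The scalable lookup offered by the second-generation systems mentioned above rests on consistent hashing: node addresses and data keys are hashed onto the same circular identifier space, and each key is owned by the first node at or after it on the ring, so that a node joining or leaving remaps only the keys in its own arc. Chord uses exactly this geometry; Pastry and CAN use related but differently shaped identifier spaces. The following minimal sketch illustrates the idea; the names are ours for illustration and are not drawn from any of the cited systems.

```python
# Illustrative sketch of consistent hashing, the idea underlying
# Chord-style structured overlays. Names here are hypothetical.
import hashlib
from bisect import bisect_left

RING_BITS = 32  # identifiers live on a ring of size 2**RING_BITS


def ring_id(name: str) -> int:
    """Hash a node address or data key onto the identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** RING_BITS)


class Ring:
    """A key is owned by its successor: the first node whose
    identifier is >= the key's identifier, wrapping past zero."""

    def __init__(self, nodes):
        self.ids = sorted(ring_id(n) for n in nodes)
        self.node_by_id = {ring_id(n): n for n in nodes}

    def successor(self, key: str) -> str:
        idx = bisect_left(self.ids, ring_id(key))
        if idx == len(self.ids):
            idx = 0  # wrap around the ring
        return self.node_by_id[self.ids[idx]]
```

Because a departing node hands only its own arc of the ring to its successor, all other key-to-node assignments are untouched; this is one source of the dynamic, locality-oriented load balancing discussed above.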