
Peer-to-Peer, Part 1 – Advanced Topics

Prof. Dr.-Ing. Ralf Steinmetz, Dr.-Ing. Oliver Heckmann
TU Darmstadt – Technical University of Darmstadt
Dept. of Electrical Engineering and Information Technology
KOM – Multimedia Communications Lab
Merckstr. 25, D-64283 Darmstadt, Germany
{steinmetz, heckmann}@KOM.tu-darmstadt.de
Tel. +49-6151-16-5188, Fax +49-6151-16-6152
www.httc.de, www.kom.tu-darmstadt.de

Overview – Part 1

1 Scope and Relevance of P2P in Distributed Systems and Networking
1.1 Motivation
1.2 Evolution of Internet Computing Paradigms
1.3 Success of P2P Networking in Distributed Systems
1.4 P2P Application & Service Domains
2 Specification of Peer-to-Peer
2.1 An Early Definition of P2P
2.2 Nine Characteristics of “Pure” P2P Systems
2.3 P2P Networks are Overlay Networks
2.4 Overlay Structures
3 P2P Applications and Systems
3.1 P2P Applications and Systems: 1st Generation
3.2 P2P Applications and Systems: 2nd Generation
3.3 Some 2nd Generation Applications and Systems Beyond File Sharing
3.4 Some Applications and Systems: 3rd Generation
4 Properties of P2P Network Graphs
4.1 Some Metrics
4.2 Clustering
4.3 Average Path Length
4.4 Small World Phenomenon
4.5 Power Law Phenomenon


Overview – Part 2

5 Mechanisms for Unstructured P2P Networks
5.1 Broadcast
5.2 Expanding Ring
5.3 Random Walk
5.4 Bloom Filters
6 Mechanisms for Structured P2P Networks
6.1 DHT – Distributed Hash Tables
6.2 DHT: Usage
6.3 Chord, a DHT Example
6.4 Pastry, a Prefix-based DHT
6.5 Tapestry, a Suffix-based DHT
6.6
6.7 Content Addressable Network (CAN)
6.8 Semantics-based Search Techniques
6.9 Topology: a Summary
7 Case Study: Omicron – a Hybrid Overlay Design
7.1 Design Mechanisms: Overlay Structure
7.2 De Bruijn Networks
7.3 Design Mechanisms: Clusters
7.4 Design Mechanisms: Roles
8 Accounting for P2P Networks
8.1 Introduction and Overview
8.2 KOM Token-based Accounting System
9 GRID Computing
10 Research: Some Major Issues in P2P Networking
11 Annex: Some References

1 Scope and Relevance of P2P in Distributed Systems and Networking

P2P (Peer-to-Peer): a distributed systems and communications paradigm

Distributed systems definition (very general):
"A distributed system is a collection of individual computing devices that can communicate with each other."

Lecture focus:
• systems with loosely-coupled, autonomous devices
• devices have their own semi-independent agenda
• (at least) limited coordination and cooperation needed

1.1 Motivation

One of the newest buzzwords in networking is Peer-to-Peer (P2P). Is it only hype?
• initially 40 million users within 2 years
• integrated into commercial systems, e.g. the Microsoft P2P SDK
  • Advanced Networking Pack for Windows XP, http://www.microsoft.com/windowsxp/p2p
• open source, e.g. JXTA (Sun) with protocols & services
• strong presence at international networking conferences

[Logos of several P2P systems, copied from the respective web pages]

Motivation (3)

P2P traffic is the major traffic source, since at least 2003 (Sandvine Study 2003).
Overall Internet traffic is more than ~50% P2P traffic:
• in Europe (France, Germany, …): eDonkey/eMule predominant, e.g. France Telecom
• in the USA: KaZaA/FastTrack predominant

see N. B. Azzouna & F. Guillemin: Analysis of ADSL traffic on an IP backbone link, IEEE Globecom 2003
• Results:
  HTTP: 14.6 %    eDonkey: 37.5 %
  FTP: 2.1 %      KaZaA: 7.8 %
  NNTP: 1.9 %     Napster: 3.8 %
  Other: 31.8 %   …: 0.3 %
• Sum P2P: 49.6 % + large part of “Other”

Dominant P2P Applications

Today,
• … seems to be the most successful P2P application
• KaZaA is more and more irrelevant
• eDonkey has largely been replaced by eMule (using an extended but compatible protocol)

1.2 Evolution of Internet Computing Paradigms

1st generation (since the beginning of the Internet):
• permanent IP addresses, always connected
• static domain name system (DNS) mapping
• limited specialized applications; protocols: Telnet, FTP, Gopher, …
⇒ world-wide access

2nd generation (since the 90ies):
• WWW & graphical browsers
• dynamic IP addresses / NAT / firewalls
• heterogeneous applications, asymmetric server-based services
• protocol: HTTP, …

3rd generation (since 2000):
• more collaboration and personalized applications
• powerful edge devices (peers), instant networking
• protocols/applications:
  • Napster, Gnutella
  • eMule/eDonkey/MLDonkey, FastTrack (KaZaA), Freenet, …
  • Chord, …
⇒ world-wide peering

1.3 Success of P2P Networking

Some reasons for the success of P2P applications:

File sharing: highly attractive and cheap content
• users share their content with other users ⇒ attractive content
• copyrights are usually not respected (problem!) ⇒ cheap content

Unused resources at the edges
• assume, e.g., an SME with 100 desktop computers:
  • storage space: 100 x 150 GB = 15 TB spare storage space
  • processing power: 100 x 2.5 GHz x 5 ops/cycle = 1.25 trillion ops/sec spare processing power

Publishing: exploding amount of data
• 2 x 10^18 bytes are produced per year
• 3 x 10^12 bytes are published per year
• search engines like Google only index 1.3 x 10^8 websites
• see Gong: JXTA: A Network Programming Environment, IEEE Internet Computing 2001

P2P applications/protocols tailored to users’ needs
• Napster’s success depended to a great extent on its ease of use

Success of P2P Networking (2)

New services at the edge of the network
• P2P overlay networks make it relatively easy to deploy new services

Group collaboration superior for business processes
• business processes grow organically, are non-uniform and highly dynamic
• largely manual, ad-hoc, iterative and document-intensive work
• often distributed, not centralized
• no single person/organisation understands the entire process from beginning to end

Cost effectiveness
• reduces centralized management resources
• optimizes computing, storage and communication resources
• rapid deployment

File Sharing: music, video and other data
• Napster, Gnutella, FastTrack (KaZaA, …), eDonkey, eMule, BitTorrent, eXeem, etc.

Distributed Storage / Distributed File Sharing
• (anonymous) publication
• Freenet, PAST, OceanStore, etc.

Collaboration
• P2P groupware: Groove
• P2P content generation
• P2P instant messaging
• online games

Distributed Computing – GRID
• P2P CPU cycle sharing
• GRID computing, …, distributed simulation
• SETI@home: search for extraterrestrial intelligence
• Popular Power: former effort to battle the influenza virus

Security and Reliability
• Resilient Overlay Networks (RON)
• Secure Overlay Services (SOS)

Multicast
• Narada


2 Specification of Peer-to-Peer

2.1 An Early Definition of P2P

Definition of P2P networking (Clay Shirky):
• "Peer-to-peer (P2P) is a class of applications that takes advantage of resources - storage, cycles, human presence - available at the edges of the Internet. Because accessing these decentralized resources means operating in an environment of unstable connectivity and unpredictable IP addresses, peer-to-peer nodes must operate outside the DNS and have significant or total autonomy from central servers."


An Early Definition of P2P (2)

Litmus test for a P2P application (see Andy Oram: Peer-To-Peer – Harnessing the Power of Disruptive Technologies, O’Reilly 2001):
1. Does it treat variable connectivity as the norm?
   • e.g. does it support dial-up users with variable IP addresses?
2. Does it give the nodes at the edges of the network significant autonomy?
   • e.g. is storage/processing done by autonomous end-systems?
If the answer to both is yes, then the application is P2P; otherwise not.

Detailed Characteristics (1)

Resources (location, sharing):
1. relevant resources located at nodes (“peers”) at the edges of a network
2. peers share their resources
3. resource locations
   • widely distributed
   • most often largely replicated

Detailed Characteristics (2)

Networking:
4. variable connectivity is the norm
   • support of dial-up users with variable IP addresses
   • operating outside the domain name system (DNS)
   • often operating behind firewalls or NAT gateways

2.2 Nine Characteristics of “Pure” P2P Systems

Peer-to-Peer: 9 Properties
1. relevant resources located at nodes (“peers”) at the edges of a network
2. peers share their resources
3. resource locations
   • widely distributed
   • most often largely replicated
4. variable connectivity is the norm
5. combined client and server functionality
6. direct interaction (provision of services, e.g. file transfer) between peers (= “peer to peer”)
7. peers have significant autonomy and mostly similar rights
8. no central control or centralized usage/provisioning of a service
9. self-organizing system

Detailed Characteristics (3)

Interaction of peers:
5. combined client and server functionality
   • “SERVer + cliENT = SERVENT”
6. direct interaction (provision of services, e.g. file transfer) between peers (= “peer to peer”)
   • services provided by end systems
   • minimal demands on the underlying infrastructure

Detailed Characteristics (4)

Management:
7. peers have significant autonomy and mostly similar rights
8. no central control or centralized usage/provisioning of a service
9. self-organizing system

2.3 P2P Networks are Overlay Networks

Peers are identified by a PeerID. The overlay spans services across several TCP/IP underlay networks and traverses firewalls and NAT gateways, e.g. via HTTP and relay peers.
[Picture adapted from Traversat et al., Project JXTA virtual network: services A, B, C connected by the overlay across multiple TCP/IP networks, a firewall + NAT, and a relay]

Overlay Networks

An overlay network is a network built on top of one or more existing networks. It
• adds an additional layer of indirection/virtualization
  • abstraction
• defines how nodes interact
• provides services (service model)
• needs its own addressing, routing, …

Overlay Networks: Advantages

Make use of the existing environment by adding a new layer:
• do not have to deploy new equipment
• do not have to modify existing software/protocols
• e.g. adding IP on top of Ethernet does not require modifying the Ethernet protocol or driver
Allow for bootstrapping
Not all nodes must support it
[Diagram: peers connected by an overlay network on top of several TCP/IP underlay networks; adapted from Traversat et al., Project JXTA virtual network]

Overlay Networks (2)

• IP networks form an overlay network over the underlying telecom infrastructure
• P2P networks form an overlay network on top of the Internet (IP network)
• Both
  • are based on the end-to-end principle
    • as much intelligence as possible at the end nodes
  • emphasize fault-tolerance, politically and technically
  • introduce their own addressing scheme, e.g. usernames, IP addresses

Overlay Networks: Disadvantages

Overhead
• adds another layer to the networking stack
• additional packet headers, processing
Complexity
• more layers of functionality ⇒ more possible unintended interactions between layers
• layering does not eliminate complexity, it only manages it
• misleading behaviour: e.g. corruption drops on wireless links are interpreted as congestion drops by TCP
Redundancy
• features may be available at various layers
Some restricted functionality
• some features a “lower layer” does not provide cannot be added on top
• e.g. real-time capabilities (for QoS)

2.4 Overlay Structures

Client-Server
1. The server is the central entity and only provider of service and content
   ⇒ network managed by the server
2. Server as the higher-performance system
3. Clients as the lower-performance system
Example: WWW

Peer-to-Peer
1. Resources are shared between the peers
2. Resources can be accessed directly from other peers
3. A peer is provider and requestor (servent concept)

Unstructured P2P:

Centralized P2P
1. All features of peer-to-peer included
2. A central entity is necessary to provide the service
3. The central entity is some kind of index/group database
Example: Napster

Pure P2P
1. All features of peer-to-peer included
2. Any terminal entity can be removed without loss of functionality
3. ⇒ no central entities
Examples: Gnutella 0.4, Freenet

Hybrid P2P
1. All features of peer-to-peer included
2. Any terminal entity can be removed without loss of functionality
3. ⇒ dynamic central entities
Examples: Gnutella 0.6, FastTrack, eDonkey

Structured P2P:

DHT-based
1. All features of peer-to-peer included
2. Any terminal entity can be removed without loss of functionality
3. ⇒ no central entities
4. Connections in the overlay are “fixed”
Examples: Chord, CAN


from R. Schollmeier and J. Eberspächer, TU München

Overlay Structures (3)

The taxonomy maps to the generations: centralized P2P (e.g. Napster) = 1st generation; pure and hybrid unstructured P2P (e.g. Gnutella, Freenet, FastTrack, eDonkey) = 2nd generation; structured, DHT-based P2P (e.g. Chord, CAN) = 3rd generation.

Requirements for Overlay Networks

Efficiency
• ratio of productive vs. total resource consumption
Scalability (expandability, enhancements)
• ease with which the topology graph can be extended to larger sizes
  • incrementally
  • partially
  • exponentially
Adaptability
• ease with which a system or component can be modified to fit the problem area
Stability
• preserving the/an overlay structure when the network changes (e.g. nodes join/leave, network grows)

Requirements for Overlay Networks (2)

Fault tolerance
• resilience of the connectivity when failures are encountered
• e.g. arbitrary departure of peers
Heterogeneity
• considering variations in behaviour and physical capabilities
Fairness
• evenly distributing the workload across nodes
Security
• ability of a system to manage, protect and distribute sensitive information
Privacy
• degree to which a system or component allows for (or supports) anonymous transactions

Design Constraints for Search/Lookup: Characteristics of the Overlay P2P Network
• topology (structure): tightly structured (e.g. Chord) vs. loosely structured (e.g. Freenet)
• autonomy: hierarchical (e.g. Napster, KaZaA) vs. pure P2P (e.g. Gnutella)
• determinism (search): deterministic vs. probabilistic (cf. random walks, Bloom filters)

Requirements for Overlay Networks: Trade-offs

• Time – Space
  • e.g. local information vs. complete replication of indices
• Security – Privacy
  • e.g. fully logged operations vs. totally untraceable operations
• Efficiency – Completeness
  • e.g. exact key-based matching vs. partial matching
• Scope – Network load
  • e.g. requests with TTL (time to live) vs. exhaustive search
• Efficiency – Autonomy
  • e.g. hierarchical vs. pure P2P overlays
• Reliability – Low maintenance overhead
  • e.g. deterministic vs. probabilistic operations

3 P2P Applications and Systems

Overview
• 1st generation: mostly basic file sharing
• 2nd generation: more sophisticated file sharing applications
• 3rd generation: advanced concepts (e.g. distributed hash tables), P2P applications beyond file sharing

3.1 P2P Applications and Systems: 1st Generation

I. File Sharing with Central Server (like Napster)

1st Generation, Overview
I. File Sharing with Central Server
II. Completely Decentralized File Sharing
III. Anonymous File Sharing
IV. Processing Sharing

History (see e.g. http://www.napster.com/)
• 1999-2001: first famous P2P file-sharing tool
  • free content access, violation of the copyright of artists
• 2001-2003: users abandoned the service after the introduction of filters
  • April 2001: OpenNap: approx. 45,000 users, 80 (interconnected) directory servers, more than 50 TB of data
• 2004: Napster 2
  • music store, no P2P network anymore
  • download music tracks with a subscription model


Napster

Centralized elements
• central server (file index, addresses of peers, search engine)
Decentralized elements
• decentralised storage (content at the edges)
• file transfer between clients (decentral)
  • out of necessity, not as a design goal
[Diagram: clients C send search messages to the central Napster server S; downloads take place directly between clients]

II. File Sharing: Decentralized (like Gnutella 0.4)

see e.g.
• http://dss.clip2.com/GnutellaProtocol04.pdf
• http://gnutella.wego.com/
• http://www.gnutella.co.uk/
Name: GNU (GNU’s Not Unix) + Nutella
Gnutella version 0.4 has been replaced by version 0.6, but is widely referenced and discussed in the scientific literature.
[Diagram: peers P exchange messages and download directly from each other, without a central server]
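The Napster-style split described above (centralized index, decentral transfer) can be sketched as a toy in-memory index. This is an illustrative assumption, not Napster's actual protocol: the class, method and peer names are made up, and only the lookup step is modeled.

```python
# Minimal sketch of a Napster-style central index (illustrative only):
# the server maps file names to the peers sharing them; the download
# itself then happens directly between the clients.

class CentralIndex:
    def __init__(self):
        self.files = {}  # filename -> set of peer addresses

    def register(self, peer, filenames):
        # A peer announces the files it shares.
        for name in filenames:
            self.files.setdefault(name, set()).add(peer)

    def search(self, keyword):
        # Only the search goes through the server; transfers are decentral.
        return {name: sorted(peers)
                for name, peers in self.files.items()
                if keyword.lower() in name.lower()}

index = CentralIndex()
index.register("peer-a:6699", ["song.mp3", "live-set.mp3"])
index.register("peer-b:6699", ["song.mp3"])
print(index.search("song"))  # both peers offer song.mp3
```

The single dictionary is also the single point of failure the slides mention: removing the server removes the search capability of the whole network.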


Gnutella: Protocol 0.4 – Characteristics

• Each node keeps a database of known and connected nodes
• Each message is identified by a pseudo-globally-unique ID (GUID)
  • e.g. 128-bit random ID, consisting of time code + network MAC address + random number
• Some messages are flooded: PING, QUERY
• Answers to these messages are routed back to the origin based on the GUID of the requesting message: PONG, QUERY HIT, PUSH

Phases of Protocol 0.4
1. Connecting
   • bootstrapping is not part of the protocol specification
   • any active peer may work as entry point
   • host caches provide lists of entry points to the network
2. Search
   • PING message: actively discover hosts on the network
   • PONG message: answer to the PING message; includes information about one connected Gnutella servent
   • QUERY message: searching the distributed network
   • QUERY HIT message: response to a QUERY message; can contain several matching files of one servent
3. Data transfer
   • HTTP is used to transfer files (HTTP GET)
   • PUSH message: to circumvent firewalls
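The GUID construction just described (time code + network MAC address + random number) can be sketched as follows. The exact field layout below is an assumption for illustration, not the Gnutella 0.4 wire format; the protocol only needs pseudo-global uniqueness so that duplicate messages can be recognized and dropped.

```python
import os
import time
import uuid

# Illustrative 128-bit GUID: 6 bytes time code, 6 bytes MAC address,
# 4 random bytes. The split into fields is an assumed layout.

def make_guid() -> bytes:
    time_code = int(time.time() * 1000) & 0xFFFFFFFFFFFF  # 6-byte ms timestamp
    mac = uuid.getnode().to_bytes(6, "big")               # network MAC address
    rand = os.urandom(4)                                  # random tail
    return time_code.to_bytes(6, "big") + mac + rand      # 16 bytes = 128 bit

guid = make_guid()
print(len(guid) * 8)  # -> 128
```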

Gnutella: Protocol 0.4 – Characteristics (2)

• Message broadcast for node discovery and search requests
  • flooding: messages are forwarded to all connected nodes
  • nodes recognize messages they have already forwarded (by their GUID) and do not forward them twice
• Hop limit by TTL
  • originally TTL = 7

Phase 1: Gnutella – Connecting

Note: In order to connect to a Gnutella network, a servent must initially know (at least) one member node of the network and connect to it
• these first member nodes must be found by other means (Web, …)
• nowadays host caches are usually used
[Diagram: PING messages flooded to all connected nodes; PONG answers routed back]

Phase 1: Gnutella – Connecting (2)

The servent connects to a number of nodes
• it got PONG messages from them
• Gnutella connection (TCP on a specified port)
⇒ thus it becomes part of the Gnutella network

Phase 2: Gnutella – Searching (2)

2. The node also checks whether it can answer this QUERY
   • e.g. if available files match the search criteria
3. QUERY HIT message
   • information is passed back the same way the QUERY took (no flooding)
   • contains the IP address of the sender and information about one or more files that match the search criteria

Phase 2: Gnutella – Searching

Flooding: a QUERY message is distributed to all connected nodes
1. A node that receives a QUERY message
   • increases the HOP count field of the message and
   • forwards it to all nodes except the one it received it from (= flooding)
     • if HOP <= TTL (time to live)
     • and a QUERY message with this GUID was not received before

Phase 3: Gnutella – Data Transfer

• The actual data transfer is not part of the Gnutella protocol
• The downloading peer sets up an HTTP connection; HTTP GET is used
• Special case: servent with the file located behind a firewall/NAT gateway
  • the downloading servent cannot initiate a TCP/HTTP connection
  • it can instead send the PUSH message, asking the other servent to initiate a TCP/HTTP connection to it and then transfer (push) the file via that connection
  • does not work if both servents are behind firewalls
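The flooding rules above (forward to all neighbours except the sender, drop duplicates by GUID, honour the TTL) can be sketched as a small simulation. The `Node` class and its fields are illustrative assumptions, not the wire protocol; the `hits` list merely stands in for QUERY HIT replies routed back to the origin.

```python
# Toy simulation of Gnutella-0.4-style QUERY flooding.

class Node:
    def __init__(self, name, files=()):
        self.name, self.files = name, set(files)
        self.neighbours = []
        self.seen = set()   # GUIDs already handled (duplicate suppression)
        self.hits = []      # stands in for QUERY HIT replies

    def query(self, guid, keyword, sender=None, hops=0, ttl=7):
        if guid in self.seen or hops > ttl:
            return          # duplicate or TTL exceeded: do not forward
        self.seen.add(guid)
        if any(keyword in f for f in self.files):
            self.hits.append(keyword)
        for n in self.neighbours:       # flood to all except the sender
            if n is not sender:
                n.query(guid, keyword, sender=self, hops=hops + 1, ttl=ttl)

a, b, c = Node("a"), Node("b", ["song.mp3"]), Node("c", ["song.mp3"])
a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
a.query(guid=1, keyword="song")
print([n.name for n in (a, b, c) if n.hits])  # -> ['b', 'c']
```

Re-running `a.query(guid=1, ...)` changes nothing: every node already has GUID 1 in its `seen` set, which is exactly the duplicate suppression the protocol relies on.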

Gnutella 0.4: Scalability Issues

Gnutella in its original version (V0.4) suffers from a range of scalability issues due to
• the fully decentralised approach and
• the flooding of messages
⇒ a reason for the breakdown of the Gnutella network in August 2000

Gnutella 0.4: Scalability Issues (3)

• Low-bandwidth peers easily use up all their available bandwidth just for passing on PING and QUERY messages of other users
  ⇒ this leaves no bandwidth for up- and downloads
• A TTL (time to live) of 4 hops for the PING messages leads to a known topology of roughly 8000 nodes
  • the TTL in the original Gnutella client was 7 (not 4)

Frequency distribution of Gnutella messages (e.g. Portmann et al.):
PING: 14.8 %, PONG: 26.9 %, QUERY: 54.8 %, QUERY HIT: 2.8 %, PUSH: 0.7 %
⇒ 41.7 % of the messages just for network discovery (PING + PONG)
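The relation between TTL and discovery horizon can be checked with a back-of-the-envelope flooding model: ignoring cycles, a node of degree d whose message is forwarded for ttl hops reaches about the sum of d·(d-1)^k nodes for k = 0 … ttl-1. The degree of 10 used below is an assumed value (the slides do not state a degree), chosen because it reproduces the ~8000-node horizon quoted for TTL = 4.

```python
# Back-of-the-envelope flooding reach in a cycle-free overlay of
# uniform degree d: d nodes at hop 1, d*(d-1) at hop 2, and so on.

def flood_reach(degree: int, ttl: int) -> int:
    return sum(degree * (degree - 1) ** k for k in range(ttl))

print(flood_reach(degree=10, ttl=4))  # -> 8200, roughly the 8000 nodes above
print(flood_reach(degree=10, ttl=7))  # TTL 7 already reaches millions
```

The exponential growth in the second call is the quantitative core of the scalability problem: each extra TTL hop multiplies the message load on the network.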

Gnutella 0.4: Scalability Issues (2)

Low-bandwidth peers easily use up all their available bandwidth for management traffic (not up-/downloads):

Average bandwidth usage (kbps) per node for peer discovery:
PING rate    500 nodes   4000 nodes   8000 nodes
1/min        2.5         36.8         127.0
1/sec        151.0       2211.0       7617.3

Average bandwidth usage (kbps) per node for search messages:
Search rate  500 nodes   4000 nodes   8000 nodes
1/min        4.8         68.2         194.9
1/sec        288         4090.4       11694.5

Gnutella 0.4: How Scalability Issues are Tackled

Mechanisms to overcome the performance problems and to cope with the load of low-bandwidth peers:
• PING/PONG optimization
  • see e.g. C. Rohrs and V. Falco: LimeWire: Ping Pong Scheme, April 2001, http://www.limewire.com/index.jsp/pingpong
• optimization of connection parameters (number of hops etc.)
• dropping of low-bandwidth connections
  • moves modem/ISDN users to the edge of the Gnutella network
• file hashes to identify files (similar to eDonkey)
• chosen solution: hierarchy (V0.6)
  • ultra-peers / superpeers (like in FastTrack): super nodes similar to KaZaA

Gnutella 0.4 Problem: Overlay Topology Design

The overlay topology ignores the underlying IP network: a search for a german movie may be flooded across distant IP routes.
[Diagram: Gnutella nodes holding german, british and japanese movies; Gnutella connections vs. actual IP routes; from R. Schollmeier and J. Eberspächer, TU München]

Gnutella 0.6: Hybrid P2P – Hierarchical Overlay Network

Gnutella 0.6 is a typical 2nd generation P2P system.
[Diagram: hierarchical overlay network with superpeers]

Gnutella 0.4 Problem: Overlay Topology Design (2)

• only 2-5% of the connections are within the same autonomous system
• this overlay network has a better topology:
[Diagram: nodes with german and japanese movies clustered within their autonomous systems]

Gnutella 0.4 Problem: Free Riders

Study results (since e.g. Adar/Huberman 2000):
• 70% of the Gnutella users share no files
• 90% answer no queries

Free riders
• selfish individuals that opt out of a voluntary contribution to the community social welfare
  • i.e. not sharing any files but downloading from others … and getting away with it
• exist in big anonymous communities

Solutions
• incentives for sharing
• micropayment
  • see MojoNation
  • but: how to verify? quality of the content?
• only accept connections from / forward messages from servents that share content

III. File Sharing: Anonymous (like Freenet)

see e.g. http://freenet.sourceforge.net/
Principle:
• decentralized storage
• documents are
  • stored on several machines
  • cut into several pieces
  • encrypted
  • a document’s pieces are copied at several machines
Characteristics:
• provides anonymity for users: the author of a document cannot be identified
• provides plausible security wrt. node operators, as they cannot read the content on their disks
• removes any single point of failure or control
• prohibits censorship of documents
Alternative: GNUnet (http://www.gnu.org/software/gnunet/)

3.2 P2P Applications and Systems: 2nd Generation

I. Decentralised File Sharing with Distributed Servers (like eDonkey2000)
II. Decentralized File Sharing with Super Nodes (like KaZaA)
III. File Sharing with Charging (like MojoNation)
IV. Cooperative File Sharing (like BitTorrent)

Performance improvements over 1st generation systems:
• the 2nd generation hybrid architectures exploit the heterogeneity of the peers
  • decentralized by networks with supernodes / distributed servers
• incentives for sharing (battling free riders)

IV. Processing Sharing with Central Server (like SETI@home)

SETI = Search for ExtraTerrestrial Intelligence (see e.g. http://setiathome.ssl.berkeley.edu)
• first famous P2P network for sharing processing power
  • uses unused processing power of desktop computers
  • for the massively parallel analysis of radio signals
• architecture similar to Napster
• client = screensaver (using idle CPU cycles)
• about 50 GB of data per day coming in from the Arecibo telescope
• distributed via a central server to millions of processing clients
• started 1998; until October 2000: 4 x 10^20 floating point operations performed (largest computation ever performed)
• similar idea: GRID computing

I. Decentralized File Sharing with Distributed Servers (like eDonkey/eMule/mlDonkey)

For example: eDonkey, see e.g.
• http://www.edonkey2000.com/
• http://www.emule-project.net/
• http://savannah.gnu.org/projects/mldonkey/
Characteristics and results of measurements:
• the eDonkey file-sharing protocol was the most successful/used file-sharing protocol in e.g. Germany & France in 2003 [see sandvine.org]
• responsible for 52% of the generated P2P file-sharing traffic in Germany; KaZaA only for 44%

The eDonkey Network

Principle:
• DISTRIBUTED SERVER(s) are set up and RUN BY POWER-USERS
  • each server manages file indices
  • searches are directed to the server
• the CLIENT application connects to one random server using a TCP connection and stays connected
  • new servers send their port + IP to other servers (UDP)
  • servers send server lists (other servers they know) to the clients
  • server lists can also be downloaded on various websites
  • servers exchange their server lists with other servers
  • thus, it is nearly impossible to shut down all servers
• clients can also extend their search by sending UDP messages to additional servers

The eDonkey Network (3)

Additionally, files are identified by unique MD4 (Message-Digest Algorithm 4, RFC 1186) file hashes and not by file names; this helps in
• resuming a download from a different source
• downloading the same file from multiple sources at the same time
• verification that the file has been correctly downloaded
[Diagram: clients C and servers S; search via TCP, extended search and server lists via UDP, downloads directly between clients]

The eDonkey Network (2)

Files shared in the network are (pseudo-)uniquely identified by a 16-byte MD4 hash.
Therefore, the SEARCH consists of two steps:
1. Full text search at the connected server (TCP), or extended search with UDP to other known servers
   • the search yields the hashes of the matching files
2. “Query Sources”: later, query servers for clients offering a file with a certain hash
   • download from these sources

The eDonkey Network (4)

• see Heckmann et al.: The eDonkey File-Sharing Network. In: Workshop on Algorithms and Protocols for Efficient Peer-to-Peer Applications, Informatik 2004
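The two-step, hash-based search above can be sketched as follows. eDonkey uses MD4 file hashes; since MD4 is often unavailable in modern OpenSSL builds, SHA-256 stands in here, and the `Server` structure is an illustrative assumption, not the eDonkey protocol. The principle shown is the same: content, not file names, identifies a file.

```python
import hashlib

def file_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()  # stand-in for the MD4 file hash

class Server:
    def __init__(self):
        self.index = {}  # hash -> {"names": set, "sources": set}

    def publish(self, client, name, data):
        h = file_hash(data)
        entry = self.index.setdefault(h, {"names": set(), "sources": set()})
        entry["names"].add(name)
        entry["sources"].add(client)
        return h

    def full_text_search(self, keyword):
        # Step 1: full text search yields the hashes of matching files.
        return [h for h, e in self.index.items()
                if any(keyword in n for n in e["names"])]

    def query_sources(self, h):
        # Step 2: query clients offering the file with a certain hash.
        return sorted(self.index[h]["sources"])

s = Server()
s.publish("client-1", "movie.avi", b"same bytes")
s.publish("client-2", "film_de.avi", b"same bytes")  # same content, other name

(h,) = s.full_text_search("movie")
print(s.query_sources(h))  # both clients serve it: renaming does not matter
```

Because both publishes hash to the same key, a download can draw on both sources simultaneously and verify the result against the hash, which is exactly what the slide lists as the benefits of hash-based identification.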

eDonkey: User Behaviour

File size distribution
• average file size 217 MB (an indication that many videos are shared)

The most frequent keywords:
mp3, 2003, german, love, porn, pdf, matrix, mpeg, anal, porno, charlie, dragon, hulk, keygen, nero, preteen, avi, divx, xxx, gay, young, furious, terminator, video, microsoft, remix, adobe, amateur, angels, zip, mpg, cd, teen, schrottwerkzeug, girl, deutsch, nude, pussy, album, bruce, studio, office, futurama, buffy, rape, jpg, sex, windows, dvd, spanish, fast, crack, girls, fuck, lolita, soundtrack, movie, pirates, iso, tatu, book, dvdrip

eDonkey: User Behaviour (2)

Shared files per user
• 57.8 files shared on average

The most frequent download files (identified in the network by their MD4 hashes) included, e.g.:
• Terminator 3
• Gangs of New York
• Hulk
• Bulletproof Monk
• Natürlich Blond 2
• 3 Engel für Charlie
• 2 Fast 2 Furious
• Werner – Gekotzt wird später
• Verschwende deine Jugend
• Midnight Club
• Adam und Eva
[Table of the corresponding file hashes omitted; the hash values are garbled in the source]

II. Decentralized File Sharing with Super Nodes (like KaZaA/FastTrack)

• Developer: FastTrack
• Clients: KaZaA (Grokster, GiFT/OpenFT)
• see e.g. www.fasttrack.nu, www.kazaa.com, www.grokster.com, gift.sourceforge.net
• most successful P2P network in the USA 2002/03
• SUPERNODES to reduce communication overhead

Numbers are from 10/2002:

P2P system   #users     #files     terabytes   #downloads (from download.com)
FastTrack    2.6 mio.   472 mio.   3550        4 mio.
eDonkey      230,000    28 mio.    650-2600    600,000
Gnutella     120,000    13 mio.    105         ca. 525,000

III. File Sharing with Charging (like MojoNation)

see e.g. www.mojonation.net
• extends Freenet
• defines electronic money (“mojos”)
  • mojos are earned by offering resources to the MojoNation network: storage space, processing power, bandwidth
  • mojos can be spent on searching and downloading
System properties:
• architecture: neither completely central nor decentralised
• the MojoNation project failed and was stopped
• we introduce a working technical foundation for similar incentive schemes in the last part of the lecture

Decentralized File Sharing with SuperNodes (like KaZaA/Fasttrack) (2)
• Super nodes
  • peers with high-performance network connections
  • take the role of central server and proxy for simple peers
  • answer search messages for all peers (reduction of comm. load)
  • one or more super nodes can be removed without problems
• Peers
  • connected only to some super nodes
  • send IP address and file names only to super peers
• Less ambitious than MojoNation: no virtual currency
• Additionally, the communication between nodes is encrypted
• Figure: super peers and clients; searches go via the super peers,
  downloads run directly between clients

IV. Cooperative File Sharing (like BitTorrent)
• file is split into chunks (typically 256 kB), tit-for-tat exchange strategy
• Involved nodes
  • tracker: non-content-sharing node; actively tracks all seeders and leechers
  • seeders: have complete copies of the desired content
  • leechers: incomplete copies of the desired content; leechers try to
    download missing chunks
• new nodes download random chunks first, afterwards nodes download the
  rarest chunks first
• tit-for-tat: if you give me, I give you (being optimistic about unknown nodes)
• attempt to reach Pareto efficiency: no one can get faster download speeds
  without hurting someone else's download speed

3.3 Some 2nd Generation Applications and Systems Beyond File Sharing
I. Media Streaming (like Mercora)
II. Free Internet Telephony (like Skype)
III. Support for Scientific Collaboration (like S2S)
IV. Support for Group Collaboration (like Groove)

I. Media Streaming (like Mercora)
• P2P network offering multiple services (www.mercora.com)
  • Media Streaming
  • Broadcast radio
  • Sharing pictures, etc.
  • Instant Messaging (IM)
  • Private networks of friends
  • Collaborative work
• Alternative to File Sharing
  • Published media are not downloaded, but rather streamed
  • No full VCR-like control, but radio-like
  • Music can not be "copied"

Further Information
• Hundreds of users
• Stable implementation
• Conforms with legal requirements
• No technology information available
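Returning to IV. Cooperative File Sharing above: the chunk-selection policy (new nodes fetch a random chunk first, everyone else prefers the rarest chunk in the swarm) can be sketched in a few lines. This is a toy illustration only; the helper `pick_piece` and its arguments are our own, not actual BitTorrent client logic.

```python
import random
from collections import Counter

def pick_piece(have, peers_have, bootstrap=False):
    """Pick the next chunk to request, following the slide's policy.

    have        -- set of chunk indices this leecher already holds
    peers_have  -- list of sets, one per connected peer
    bootstrap   -- new nodes pick a random chunk first
    """
    # chunks we still need that at least one peer can serve
    candidates = [c for s in peers_have for c in s if c not in have]
    if not candidates:
        return None
    if bootstrap:
        # new nodes download random chunks first
        return random.choice(candidates)
    # otherwise prefer the rarest chunk in the swarm
    counts = Counter(candidates)
    return min(counts, key=counts.get)

# toy swarm: 3 peers sharing a 5-chunk file
peers = [{0, 1, 2}, {0, 1, 3}, {0, 4}]
print(pick_piece({0}, peers))  # one of the rare chunks 2, 3 or 4
```

Chunk 1 is held by two peers, chunks 2, 3 and 4 by one peer each, so a rarest-first leecher asks for one of the latter.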

Media Streaming (like Mercora) (2)

From the www.mercora.com FAQ:
• "Mercora is a person-to-person network that enables you to find, communicate
  and share interests with friends and family. Mercora has built a framework for
  sharing digital content using peer-to-peer technologies to directly connect you
  with your friends on the network. You do all these things through the simple
  interface of the Mercora applications."
• Is broadcasting music on the Mercora network legal?
  • Yes. Mercora has obtained the necessary licenses so that you can broadcast
    music on the Mercora Network legally.
  • Specifically, Mercora enables the webcasting of music according to the
    Digital Millennium Copyright Act, 17 U.S.C. § 114 (requires Adobe Acrobat
    to read). …
• What can I broadcast on the Mercora Network?
  • You can broadcast any music that you own legally. These recordings must
    originate from an authorized source (either created originally by the artist
    or record label that owns the copyright), and are not unlawful copies that
    have been downloaded illegally or obtained from an unauthorized third party.
• Can I broadcast music that is ripped from CDs or downloaded from an online
  music store?
  • Yes. Music that is ripped from CDs that you purchased is considered an
    authorized source, and so is music bought from online music stores like iTunes.

II. Free Internet Telephony (like Skype)
• Offered Services (www.skype.com)
  • IP Telephony features
  • File sharing
  • Instant Messaging
• Features
  • KaZaA technology
  • High media quality
  • Encrypted media delivery
  • Support for teleconferences
  • Multi-platform
• Further Information
  • Very popular
  • Low-cost IP-Telephony business model
  • SkypeOut extension to call regular phone numbers (not free)
  • Great business potential if combined with free WIFIs

III. Support for Scientific Collaboration (like S2S)
Science-to-Science (S2S)
• Scientific research program
• Supported by the German Research Network (DFN)
• Managed and implemented by Neofonie GmbH
• s2s.neofonie.de
Goal
• Serve the needs of scientific research
• Promote the exchange of knowledge and information
It can be used to
• Share "Knowledge", publications, data and workflows
• Index shared information
• Provide information to anyone connected
Features
• Based on JXTA technology (JXTA is an open-source P2P framework)
• Pilot phase running since 2004 at G-WiN, the scientific Gigabit network
  run by DFN

IV. Support for Group Collaboration (like Groove)
From the Groove website:
• "groove virtual office is software that allows teams of people to work
  together over a network as if they were in the same physical location."
• share conversations, files, projects, meetings, ...

Groove (of Groove Networks Inc.)
• provides extensive support for collaboration on a group-project
• acts like an ad-hoc virtual VPN (virtual private network)
• depends on relationships rooted in real-world interaction
• permanent security configuration
  • authentication
  • data privacy
  • data integrity

Searching in S2S
• "JXTA 1.0 Search" algorithm
  • Unstructured overlay network formed
  • based on hubs (super-peers)
  • hubs maintain indexes of published items
  • requests match fields with published XML documents
• Published items
  • use of metadata
  • described with Dublin Core metadata schema
  • automatic description of published items
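The hub-side matching of search requests against published metadata can be illustrated with a toy index. The `match` helper and the records below are hypothetical, not the actual JXTA Search API; the real system matches fields of published XML documents described with Dublin Core.

```python
# Hypothetical sketch of a hub's index: published items carry Dublin Core
# metadata; a request is matched field by field against the index.
published = [
    {"dc:title": "P2P Overlays", "dc:creator": "A. Author", "dc:type": "publication"},
    {"dc:title": "Grid Workflows", "dc:creator": "B. Author", "dc:type": "dataset"},
]

def match(request, items):
    """Return the items whose metadata contains every requested field value."""
    return [item for item in items
            if all(item.get(field) == value for field, value in request.items())]

print(match({"dc:type": "publication"}, published))  # only the first item
```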

Groove (2)
• Functionalities
  • "workspaces" (shared space) aggregate documents and messages related
    to a collaboration activity
  • to synchronize documents and incremental parts of the "workspace"
    confidentially through "delta messages"
  • data encrypted on hard disks as well
  • offline peers obtain data through "relay servers" at a later time
  • to offer further group-applications (chat, calendar, file archive, ...)

3.4 Some Applications and Systems: 3rd Generation
• 3rd generation P2P applications and systems are currently emerging
• employ advanced mechanisms
  • like distributed hash tables (DHTs), e.g. Kademlia
  • or Bloom filters
  • e.g. Pond (an OceanStore prototype)

Groove (3)
• Establishment of a group

4 Properties of P2P Network Graphs
Ideal network characteristics (general)
• Small network diameter (worst-case distance)
• Small average distance/path length
• Limited and small vertex/node degree
• High connectivity (and high fault tolerance)
• Support/allow load balancing of traffic
• Scalability (e.g. O(log n))
• Symmetry
Hard to obtain in reality

Ideal Network Characteristics (2)
• Challenges
  • Decentralized coordination & control
• P2P environment and context description
  • Overlay network on top of the underlying TCP/UDP-IP infrastructure
  • Seems to be no random graph
  • What does the network look like? Is there a way to describe the network?

Some Metrics (2)
• Clustering coefficient CC_i of a specific node i:
  (number of links between all neighbours of the node among themselves) /
  (number of possible links between all those neighbours)
• Clustering factor CC of a network:
  average clustering factor over all nodes of the network

4.1 Some Metrics
• N: total amount of nodes, also known as the size of a network
• Degree k_i: total number of edges the node i is attached to
• Av. degree: average of all k_i of a network
• Degree distribution P(k): probability distribution of k over the network
• Distance d_ij: number of edges along the shortest path between nodes i and j
• Diameter D: max. distance between any 2 of ALL nodes of a network
• Average path length L: mean distance over all nodes of a network

4.2 Clustering
• Clustering of one node i is measured by the Clustering Coefficient CC_i
• compares the number of links among a node's immediate neighbours to the
  maximum number of edges they might have between them
• K is the amount of the node's neighbours (directly connected)
• the K neighbours can have at most K(K-1)/2 edges between them
• CC_i = (actual number of edges among the K neighbours of the node) / (K(K-1)/2)
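A minimal sketch of the two clustering metrics just defined, with adjacency stored as a plain dict of neighbour sets (the function names are ours):

```python
from itertools import combinations

def clustering_coefficient(adj, i):
    """CC_i = 2 * (edges among i's neighbours) / (K * (K - 1))."""
    neigh = adj[i]
    k = len(neigh)
    if k < 2:
        return 0.0
    # count the edges that actually exist between i's neighbours
    links = sum(1 for a, b in combinations(neigh, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

def network_clustering(adj):
    """Clustering factor CC of the network: average CC_i over all nodes."""
    return sum(clustering_coefficient(adj, i) for i in adj) / len(adj)

# node 0 has K = 4 neighbours with 3 edges among them (out of at most 6)
adj = {0: {1, 2, 3, 4},
       1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {0, 3}}
print(clustering_coefficient(adj, 0))  # 0.5
```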

Clustering Example
• node i with K = 4 neighbours
• at most K(K-1)/2 = 6 edges between the 4 neighbours
• N = 3 of these 6 edges actually exist
• CC_i = 3/6 = 0,5

4.3 Average Path Length
How to find the path length L (with only a local knowledge of the topology)?
1. choose a node i
2. calculate the mean distance to the rest of the nodes
3. choose another, not yet selected node ...
4. average over all nodes
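The four steps above can be sketched with one breadth-first search per node (a global-knowledge shortcut rather than the local procedure the slide hints at). Applied to a regular ring of 50 nodes with K = 4, it gives L ≈ 6.63:

```python
from collections import deque

def average_path_length(adj):
    """Mean shortest-path distance over all node pairs (one BFS per node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

# regular ring of 50 nodes, each linked to its 2 nearest neighbours per side (K = 4)
n = 50
ring = {i: {(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n} for i in range(n)}
print(round(average_path_length(ring), 2))  # 6.63
```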

Clustering Example (2)
• CC of one node: with N = 3 edges among K = 4 neighbours,
  CC_i = (2 · 3) / (4 · (4 - 1)) = 0,5
• CC of the whole network: average CC over all nodes
• Example: regular graph, all nodes have the same CC,
  i.e. CC_network = CC_i = 0,5

Example of Calculating the Average Path Length
• Example: regular graph (ring) with 50 nodes and CC = 0,5
• Closest node: L = 1
• Most distant node on the opposite side, located at 180 degr.:
  ca. half of 25 nodes = 12.5 hops away
• average node located at 90 degr.: ca. half of 12.5 = 6.25
• approx. L = 6.25 (exactly L = 6,63)

4.4 Small World Phenomenon
Watts / Strogatz Process

Graphs seem (at first glance) to be established randomly
• But, characteristics are not "random"
Hence,
• Which are the properties?
• Search for a "simple" construction principle

Figure: Regular graph - slightly "rewired" graph - Random graph

Process (by Watts and Strogatz), with a given rewiring probability:
• randomly select edges (edge by edge) and
• randomly "rewire" them

What is the effect of this process?
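One possible reading of the process, as a sketch (undirected edges, each visited once; edge representation and function name are our own):

```python
import random

def rewire(edges, n, p, rng=random):
    """One Watts/Strogatz-style pass: visit every edge and, with probability
    p, reconnect one endpoint to a uniformly chosen node, skipping moves
    that would create a self-loop or duplicate edge. Edges are (u, v), u < v."""
    result = set(edges)
    for u, v in list(result):
        if rng.random() < p:
            w = rng.randrange(n)
            new = (min(u, w), max(u, w))
            if w != u and new not in result:
                result.discard((u, v))
                result.add(new)
    return result

# regular ring lattice: 20 nodes, each joined to its 2 neighbours per side
n = 20
lattice = {(min(i, j), max(i, j))
           for i in range(n)
           for j in ((i + 1) % n, (i + 2) % n)}

random.seed(3)
rewired = rewire(lattice, n, p=0.1)
print(len(lattice), len(rewired))  # 40 40 (the edge count is preserved)
```

Sweeping p from 0 to 1 and measuring the clustering coefficient and the average path length reproduces the effect discussed on the following slides.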


Small World Phenomenon

                         Regular Graph   Slightly rewired graph   Random graph
Clustering Coefficient         ?                   ?                    ?
Path Length                    ?                   ?                    ?

Small World Phenomenon (2)

                         Regular Graph   Slightly rewired graph   Random graph
Clustering Coefficient       high                high                  low
Path Length                  high                low                   low


Result (of the Watts/Strogatz process)
• "rewiring" a very few edges to be randomly reconnected
• Effect:
  • the clustering remains high
  • But the path length (better look-up times) is dramatically reduced
• Figure: clustering coefficient and path length (red/blue curves) over the
  rewiring probability

4.5 Power Law Phenomenon
• Statistics resulting from the Watts/Strogatz graphs do not match those of
  real-world small world graphs
  • certain network graphs (topologies) exhibit a power-law distribution of
    edges to the nodes
  • e.g. power grids, web pages, P2P networks
  • the Watts/Strogatz model does not
• Barabási: 2 techniques result in power law distributions
  • Dynamic growth
    • constructs small world graphs dynamically, rather than rewiring a
      graph in place as with Watts/Strogatz
  • Preferential attachment
    • wiring preferentially to the most connected nodes rather than randomly

Small World Graphs / Networks
• Slightly rewired graphs = small world
• Noticed properties:
  1. Clustered sparseness (clustering)
     • Small World Networks comprise few edges
     • Network set-up by many interconnected clusters
     • how "CLIQUISH" a graph is
  2. Small Diameter (path length)
     • minimal distance between the most distant peers is small
     • average path length GROWS LOGARITHMICALLY with the size of the network
     • path length RATHER SMALL

Power Law: Distribution of Node Degree
• P(k): probability that a randomly selected node has exactly k edges
  • i.e. the spread of node degrees over the network
• E.g. regular lattice
  • all nodes have the same node degree k
  • i.e. P(k) is a delta function
• E.g. random network
  • Poisson distribution of node degrees
  • i.e. for any degree k >> mean degree, P(k) tends to be 0
  • But, does not apply in reality!
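The random-network case can be checked numerically: sampling a random graph G(n, p) gives an approximately Poisson degree distribution, and P(k) for k far above the mean is practically 0. The parameters below are illustrative, not from the slides:

```python
import math
import random

random.seed(7)
n, p = 1000, 0.008            # random graph G(n, p); mean degree ~ (n - 1) * p ≈ 8

# draw the graph and record node degrees
deg = [0] * n
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            deg[i] += 1
            deg[j] += 1

mean = sum(deg) / n

def poisson(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# empirical P(k) vs Poisson P(k), around and far above the mean degree
for k in (4, 8, 16, 30):
    print(k, deg.count(k) / n, round(poisson(k, mean), 6))
```

For k = 30 (far above the mean of roughly 8), both the empirical and the Poisson probability are essentially zero, matching the k >> mean-degree remark above.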

Power Law: Distribution of Node Degree (2)
• Social networks
  • e.g. amount of called persons vs. amount of calling persons (log-log scale)
  • Figure from William Aiello
• Why do "Power Law Networks" exist?
  • e.g. Gnutella
    • nodes with many attached peers respond to many PING requests
    • these nodes are preferred wrt setting up new connections
    • host caches also prefer to comprise highly interconnected nodes

Power Law - Building Process
• Building process / growing process
  • networks continuously grow
  • a new node is preferably attached to well interconnected nodes

Power Law: Distribution of Node Degree (3)
• Construction principle
  1. start with a small number of disconnected nodes
  2. dynamically wire new nodes of fixed order into the graph,
     one at a time with 2 edges per new node
  3. continue until all nodes are added
  • each edge is wired to an existing node preferentially, with probability
    proportional to the number of edges the existing node already has
• Resulting distribution of edges to nodes is "power-law"
  • high connectivity is unlikely,
  • but occurs more often than predicted by random networks
• Power law distribution of node degrees: P(k) ∝ k^(-γ)

Growing Process (figure from William Aiello)
1. three unwired nodes
2. adding nodes, one at a time with 2 edges per new node
3. new edges wired to destination nodes preferentially, with probability
   proportional to the amount of edges already attached to the target node
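The construction principle above can be sketched as follows. The helper names are ours; keeping a flat list of edge endpoints and sampling from it uniformly is a common implementation trick for degree-proportional selection:

```python
import random
from collections import Counter

random.seed(42)

def grow(n_final, m=2, rng=random):
    """Grow a graph by preferential attachment: each new node is wired with
    m edges to existing nodes, chosen with probability proportional to
    their current degree."""
    edges = []
    # every endpoint occurrence in `targets` is one "vote", so uniform
    # sampling from this list is sampling proportionally to degree; the
    # three initial (still unwired) nodes get one vote each so they can
    # be picked at all
    targets = [0, 1, 2]
    for new in range(3, n_final):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))
    return edges

edges = grow(1000)
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

# heavy tail: most nodes keep the minimum degree m, a few hubs grow very large
print("max degree:", max(deg.values()))
print("nodes with degree 2:", sum(1 for d in deg.values() if d == 2))
```

Plotting the degree histogram on a log-log scale shows the straight-line signature of P(k) ∝ k^(-γ).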

Scale Free Networks
• Power laws are scale free: if k is rescaled (multiplied by a constant c),
  then P(k) is still proportional to k^(-γ),
  since P(ck) ∝ (ck)^(-γ) = c^(-γ) · k^(-γ)
• Properties of the power-law distribution
  • Short diameter
  • Uneven link / load distribution
  • Supports heterogeneity
  • Fragile to attacks at high-degree nodes
  • New nodes enter the network by attaching to already popular nodes
    (rich nodes get richer)

Growing Process (2)
• Construction principle
  1. three unwired nodes
  2. adding nodes, one at a time with 2 edges per new node
  3. new edges wired to destination nodes preferentially, with probability
     proportional to the amount of edges already attached to the target node
• P(k) ∝ k^(-γ)

Use of Scale Free Networks for P2P
• Fault tolerance
  • robust against random faults (and attacks)
  • uncertainty with faults at highly connected nodes
• Optimization of search mechanisms
  • random walk, high degree search, ...

Growing Process (3)
• Construction principle
  1. three unwired nodes
  2. adding nodes, one at a time with 2 edges per new node
  3. new edges wired to destination nodes preferentially, with probability
     proportional to the amount of edges already attached to the target node