318 JOURNAL OF INTERNET ENGINEERING, VOL. 5, NO. 1, JUNE 2012

A Survey of P2P Traffic Management Approaches: Best Practices and Future Directions

R. Dunaytsev, D. Moltchanov, Y. Koucheryavy, O. Strandberg, and H. Flinck

Abstract—Over the last decade, we have witnessed the tremendous growth of peer-to-peer (P2P) file sharing traffic. Built as overlays on top of the existing Internet infrastructure, P2P applications have little or no knowledge of the underlying network topology, generating huge amounts of "unwanted" inter-domain traffic. Bandwidth-hungry P2P applications can easily overload inter-domain links, disrupting the performance of other applications using the same network resources. This forces Internet service providers (ISPs) either to continuously invest in infrastructure upgrades in order to support the quality of service (QoS) expected by customers or to use special techniques when handling P2P traffic. In this paper, we discuss the best practices and approaches developed so far to deal with P2P file sharing traffic, identifying those that may provide long-term benefits for both ISPs and users.

Index Terms—File sharing, peer-to-peer, quality of service, traffic management.

Manuscript received May 15, 2012. Roman Dunaytsev is with the Space Internetworking Center (SPICE), Electrical and Computer Engineering Department, Democritus University of Thrace, Greece (corresponding author's phone: +30-6995119041; fax: +30-2541079554; e-mail: [email protected]). Dmitri Moltchanov and Yevgeni Koucheryavy are with the Department of Communications Engineering, Tampere University of Technology, Finland (e-mail: [email protected], [email protected]). Ove Strandberg and Hannu Flinck are with Nokia Siemens Networks, Espoo, Finland (e-mail: [email protected], [email protected]).

I. INTRODUCTION

For a long time, the reference model of data exchange in the Internet was the client/server model, resulting in the so-called "downstream paradigm": the vast majority of data is sent to the user, with low traffic load in the opposite direction. As a consequence, many communication technologies and networks have been designed and deployed with this asymmetry in mind. The best-known examples are the Asymmetric Digital Subscriber Line (ADSL) and Data Over Cable Service Interface Specification (DOCSIS) technologies. For instance, ADSL2 and ADSL2+ provide a downstream rate of up to 24 Mbps and an upstream rate of up to 1 Mbps [1]. In fact, bandwidth asymmetry, with high data rates in the downstream direction and low-rate upstream links, fits well an environment of dedicated servers and conventional users.

Everything changed with the arrival of Napster and other peer-to-peer (P2P) file sharing systems in the late 1990s and early 2000s. In such systems, all participants (although to different extents) act as both content providers and content requestors, thus transmitting and receiving approximately equal amounts of data. Hence, uplink and downlink data flows tend to be symmetric [2]. Since then, P2P networks have experienced tremendous growth, and for several years P2P file sharing traffic used to be the dominant type of traffic in the Internet [3] [4] [5]. The situation has changed dramatically in recent years with the increasing deployment of multimedia applications and services such as Flash video, IPTV, online games, etc. In 2010, global Internet video traffic surpassed global P2P traffic [6]. According to recent studies [6] [7] [8], P2P traffic is growing in volume but declining as a percentage of overall Internet traffic. However, the prevalence of real-time entertainment traffic (Flash, YouTube, Netflix, Hulu, etc.) together with a decreasing share of P2P file sharing traffic is usually the result of cheap and fast Internet access and is more typical of mature broadband markets, while many emerging broadband markets are still in a phase in which P2P file sharing accounts for a large (or even dominant) portion of global traffic [8]. In any case, P2P file sharing is still fairly popular among users and continues to be one of the biggest consumers of network resources. For instance, P2P file sharing traffic is expected to reach 8 exabytes per month by 2015, at a compound annual growth rate (CAGR) of 15% from 2010 to 2015 [6].

From the early days of P2P file sharing systems, P2P traffic and its impact on the performance of other applications running on the same network has attracted the attention of the academic community and Internet service providers (ISPs), and it continues to be a hot research topic (e.g., see [9] and references therein). What makes this type of traffic so special? Let us consider the most distinctive features of P2P file sharing from the ISP's point of view.

1) P2P file sharing applications are bandwidth-hungry: Network applications (and hence traffic sources) can be classified into two basic types: constant bit rate (CBR) and variable bit rate (VBR). CBR applications generate data traffic at a constant rate and require a certain bandwidth allocation in order to operate successfully and support the desired quality of service (QoS); at the same time, allocating bandwidth above this requirement does not improve user satisfaction. VBR applications generate data traffic at a variable rate and are typically designed to quickly discover and utilize the available bandwidth. In general, the more bandwidth, the better the user-perceived QoS. However, in order to scale to a large number of clients accessing the system simultaneously, both CBR and VBR servers usually limit the maximum data rate per user. As a result, this effectively places an upper bound on the amount of data that can be transmitted per unit time and thus on the bandwidth used by an individual user.

As reported in [7] [8] [10], BitTorrent is the most popular P2P file sharing system today. In contrast to client/server systems, BitTorrent is more robust and scalable: as more users interested in downloading the same content join an overlay network, called a torrent or a swarm, the download rate achieved by all the peers increases [11]. With BitTorrent, when multiple users are downloading the same file at the same time, they upload pieces of the file to each other. Instead of relying on a single server, this mechanism distributes the cost of storing and sharing large amounts of data across peers and allows combining the upload capacities of multiple peers into the compound download rate observed by a user [12]. As a rule, if there are enough active peers, the maximum achievable throughput of a user is mostly limited either by congestion somewhere in the ISP's network (if any) or by the last-mile bandwidth of the user. In well-provisioned networks, this results in higher data rates (e.g., up to 80 Mbps over a 100 Mbps Ethernet link) and potentially larger amounts of data sent per unit time, compared to traditional client/server applications and services such as the World Wide Web and IPTV. In addition, since P2P file sharing requires very little human intervention once initiated, some users tend to run P2P programs 24/7, generating huge amounts of traffic.
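The effect of aggregating many peers' upload capacities can be captured with a back-of-the-envelope model: the compound download rate grows with the sum of the shares offered by remote peers until the local last-mile link saturates. The sketch below is illustrative only; the peer counts and per-peer rates are invented numbers, not measurements, and real BitTorrent throughput also depends on choking, piece availability, and TCP dynamics.

```python
# Back-of-the-envelope model of BitTorrent's "compound download rate":
# each uploading peer contributes a slice of its upstream capacity, and
# the downloader is capped by its own last-mile downlink. All figures
# here are invented for illustration.

def compound_download_rate(peer_upload_shares_mbps, last_mile_mbps):
    """Aggregate rate observed by a downloader (Mbps), assuming the
    shares offered by remote peers simply add up until the local
    downlink saturates."""
    return min(sum(peer_upload_shares_mbps), last_mile_mbps)

# 40 peers, each dedicating ~0.5 Mbps of upstream to this downloader:
swarm = [0.5] * 40
print(compound_download_rate(swarm, last_mile_mbps=100))  # 20.0
print(compound_download_rate(swarm, last_mile_mbps=10))   # 10 (last mile caps it)
```

Note how the same swarm delivers very different rates depending on the downloader's access link, which is why well-provisioned networks see the highest P2P rates.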
High-speed P2P traffic interferes with other traffic on the same network, degrading the performance of delay-sensitive applications such as multimedia streaming, online games, and VoIP. Poor application performance during congestion causes low customer satisfaction and aggravates subscriber churn, leading to a decline in service revenues. In turn, this forces ISPs either to continuously invest in infrastructure upgrades in order to support the QoS expected by customers or to use special policies when handling P2P traffic.

2) P2P file sharing applications are topology-unaware: P2P file sharing applications establish overlays on top of the Internet infrastructure with little or no knowledge of the underlying network topology (see Fig. 1). In P2P networks, data are often available in many equivalent replicas on different hosts. However, the lack of topology information leads P2P applications to choose peers at random from a set of candidates. For example, it is common for a peer that wants to download some data (music, movies, software, etc.) to choose sources randomly, possibly picking one located on the other side of the world (see Fig. 2). Such random selection ignores many peers that are topologically closer and could therefore provide better performance in terms of throughput and latency. Moreover, this leads to inefficient utilization of network resources, significant amounts of costly inter-ISP traffic, and congested inter-domain links.

Fig. 2. A user in Greece downloads the latest version of Ubuntu. Most of the peers are from other countries and even the other side of the globe. Does it mean that no one is seeding this distro in Greece? Of course not! It is just a consequence of the random peer selection.

3) P2P file sharing brings no additional profit to ISPs, only additional costs and legal headaches: Nowadays, some ISPs tend to charge content providers and content delivery networks (CDNs) additional fees for either carrying high-bandwidth content over their networks or providing QoS guarantees for premium traffic. Meanwhile, end users are typically billed for Internet access on a flat-rate basis, so ISPs do not generate any additional revenue from delivering P2P traffic to/from their customers. High-speed P2P data transfers may also increase the cost of traffic exchange with other ISPs. The vast majority of ISPs (except a small group of Tier 1 ISPs that rely completely on settlement-free peering) are charged for traffic transit according to the 95th percentile billing model, which works as follows. Transit providers poll customer interface ports at regular intervals (typically, every 5 minutes) throughout the billing cycle.