Embracing Packet Loss in Congestion Control

Barath Raghavan, John McCullough, and Alex C. Snoeren
University of California, San Diego
{barath,jmccullo,snoeren}@cs.ucsd.edu

ABSTRACT

Loss avoidance has long been central to the Internet architecture. Ordinarily, dropped packets represent wasted resources. In this paper, we study whether the benefits of a network architecture that embraces—rather than avoids—widespread packet loss outweigh the potential loss in efficiency. We propose an alternative approach to Internet congestion control called decongestion control. In a departure from traditional approaches, end hosts strive to transmit packets faster than the network can deliver them, leveraging end-to-end erasure coding and in-network fairness enforcement. We argue that such an approach may decrease the complexity of routers while increasing stability, robustness, and tolerance for misbehavior. While a number of important design issues remain open, we show that our approach not only avoids congestion collapse, but delivers near optimal steady-state goodput for a variety of traffic demands in three different backbone topologies.

1. INTRODUCTION

Packet loss is taboo; to an Internet architect, it immediately signifies an inefficient design likely to exhibit instability and poor performance. In this paper, we argue that such an implication is not fundamental. In particular, there exist design points that provide many desirable properties—including near optimal performance—while suffering high loss rates. We focus specifically on congestion control, where researchers have long clung to the belief that loss avoidance is central to high throughput. Starting with Jacobson's initial TCP congestion control algorithm [19], the entire tradition of end-to-end congestion control has attempted to optimize network performance by tempering transmission rates in response to loss [18]. We argue that by removing the unnecessary yoke of loss avoidance from congestion control protocols, they can become less complex yet simultaneously more efficient, stable, and robust.

Of course, there are a number of very good reasons to avoid loss in today's networks. Many of these stem from the fact that loss is often a symptom of overflowing router buffers in the network, which can also lead to high latency, jitter, and poor fairness. Hence, one should contemplate congestion control protocols that thrive in the face of loss solely in the absence of queues—or at least only in the presence of very short ones. Yet queues are not an end in and of themselves; they exist largely to smooth bursty traffic arrivals and facilitate fair resource sharing. If these goals can be met—or obviated—through other means, then router buffers may be removed, or at least considerably shortened.

In this paper, we propose an alternative approach to congestion control that we call decongestion control [4]. As opposed to previous schemes that temper transmission rates to arrive at an efficient, loss-free allocation of bandwidth between flows, we do not limit an end host's sending rate. All hosts send faster than the bottleneck link capacity, which ensures flows use all the available bandwidth along their paths. Our architecture separates efficiency and fairness;¹ end hosts manage efficiency while fairness is enforced by the routers. We depart from previous architectures based upon network-enforced fairness by allowing overloaded routers to drop packets rather than queuing them, with the hope of decreasing the complexity and amount of resources required to enforce fairness. Our model suggests that efficiency can be maintained without requiring senders to decrease their transmission rates in the face of congestion.

Such an approach (sometimes referred to as a firehose [48]) is often dismissed in the literature due to the potential for congestion collapse—a condition in which the network is saturated with packets but total end-to-end goodput is low. However, congestion collapse occurs only under two conditions: if receivers are unable to deal with high loss (so-called classical congestion collapse), or if the network topology is such that packet drops occur deep in the network, thereby consuming network resources that could be fruitfully consumed by other flows [18]. The first concern can be addressed by applying efficient erasure coding [32, 35]. It is unknown whether the second condition arises frequently in practice; it occurs rarely in the backbone topologies we study.
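The first condition above is, at bottom, a coding-overhead calculation: if the path drops a fraction p of packets, a sender using an erasure code must transmit enough extra packets that the receiver still collects a decodable set. The sketch below is our illustration, not code from the paper; it assumes an idealized rateless/MDS code (any k received symbols reconstruct the original k) and independent losses, and the function name and parameters are ours.

```python
import math

def packets_to_send(k: int, loss_rate: float, delta: float = 1e-3) -> int:
    """Smallest n such that P[at least k of n packets survive] >= 1 - delta,
    assuming independent per-packet loss and an ideal erasure code in which
    any k received symbols suffice to decode."""
    n = k
    while True:
        # P[decode failure] = P[fewer than k packets survive]
        p_fail = sum(
            math.comb(n, i) * (1 - loss_rate) ** i * loss_rate ** (n - i)
            for i in range(k)
        )
        if p_fail <= delta:
            return n
        n += 1

# At a 30% path loss rate, delivering k = 100 useful packets takes only
# slightly more than the k / (1 - p) = 143 transmissions a fluid model predicts.
print(packets_to_send(100, 0.30))
```

For large blocks the overhead n/k tends to 1/(1 - p), so the goodput a receiver extracts approaches the raw delivery rate of the path; this is what makes high loss survivable rather than collapse-inducing.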
Hence, we argue that packet loss may not need to be avoided, and that the potential exists to operate future networks at 100% utilization. Operating in this largely uncharted regime, where loss is frequent but inconsequential, raises a number of new design challenges, but also presents tremendous opportunity. At first blush, we have identified several potential advantages over traditional schemes:

Efficiency. Sending packets faster than the bottleneck capacity ensures utilization of all available network resources between source and destination. With appropriate use of erasure codes, almost all delivered packets will be useful.

Simplicity. Because coding renders packet drops (and reordering) inconsequential, it may be possible to simplify the design of routers and dispense with the need for expensive, power-hungry fast line-card memory.

Stability. Decongestion transforms a sender's main task from adjusting transmission rate to ensuring an appropriate encoding. Unlike the former, however, one can design a protocol that adjusts the latter without impacting other flows.

Robustness. Existing congestion control protocols are susceptible to a variety of sender misbehaviors, many of which cannot be mitigated by router fairness enforcement. Because end points are already forced to cope with high levels of loss and reordering in steady state, decongestion is inherently more tolerant.

Given our substantial departure from conventional congestion control, we do not attempt to exhaustively address all of the details needed for a complete protocol in this paper. Instead, we make the following concrete contributions:

• We present a rigorous study of the potential for congestion collapse in a number of realistic backbone topologies under a variety of conditions. In particular, we confront the key concern raised by our approach: By saturating network links, decongestion control is in danger of wasting network resources on dead packets—packets that are transmitted over several hops, only to be dropped before arriving at their destinations [18, 26].

• We demonstrate that while abundant, dead packets frequently do not impact network-wide goodput when fairness is enforced by routers. We introduce the concept of zombie packets, a far more rare, restricted class of dead packets, and show that they are the key cause of congestion collapse.

• We propose a decongestion control algorithm that eliminates zombie packets, and implement a prototype called Opal. We evaluate several basic properties by comparing its performance to TCP in simulation and identify key areas of future work, including improving the efficiency of very short flows.

To be clear, we are not claiming that existing approaches to end-to-end congestion control are ill-considered. Rather, we assert that a far wider design space exists than researchers have previously explored, and evaluate the pros and cons of one particular alternative that may prove superior to existing approaches while simultaneously alleviating a number of long-standing vexing issues.

¹For simplicity we focus initially on max-min flow fairness [20].

2. MOTIVATION

Decongestion control is optimistic: senders always attempt to over-drive network links. Should available capacity increase at any router due to, for example, the completion of a flow, the remaining flows instantaneously take advantage of the freed link resources. To translate increased throughput into increased goodput, senders encode flows using an erasure coding scheme appropriate for the path loss rate experienced by the receiver.

2.1 Simplicity

Much of the complexity in today's routers stems from the elaborate cross-connects and buffering schemes necessary to ensure the loss-free forwarding at line rate that TCP requires. In addition, TCP's sensitivity to packet reordering complicates parallelizing router switch fabrics. Adding traditional fairness enforcement or policing mechanisms to high-speed routers even further complicates matters [34, 42, 47, 48]. Lifting the requirements for loss-free, in-order forwarding may enable significantly simpler and cheaper router designs. Decongestion control requires only a fair dropping policy to enforce inter-flow fairness. A number of efficient schemes have been proposed in the literature [22, 41], and it may be possible to further reduce the implementation complexity in core routers [48].
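To make "only a fair dropping policy" concrete, the toy queue below sketches a stateless fair dropper in the spirit of CHOKe; the class name, the half-full congestion threshold, and the overall simplification are ours for illustration and are not the schemes cited above. The mechanism: compare each arrival against one randomly chosen queued packet, and if both belong to the same flow, drop both, so a flow that occupies more of the queue is proportionally more likely to lose packets.

```python
import random
from collections import deque

class FairDropQueue:
    """Toy CHOKe-style fair dropper (illustrative sketch, not a router design).
    No per-flow state: heavy flows occupy more queue slots and are therefore
    more likely to match the random comparison and be dropped."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pkts = deque()               # entries are (flow_id, payload)

    def enqueue(self, flow_id, payload) -> bool:
        # Once the queue looks congested, run the random matching test.
        if len(self.pkts) > self.capacity // 2:
            i = random.randrange(len(self.pkts))
            if self.pkts[i][0] == flow_id:
                del self.pkts[i]          # drop the matched queued packet...
                return False              # ...and the arrival itself
        if len(self.pkts) >= self.capacity:
            return False                  # queue full: plain drop-tail
        self.pkts.append((flow_id, payload))
        return True

    def dequeue(self):
        return self.pkts.popleft() if self.pkts else None
```

A flow sending at twice its fair share holds roughly twice the queue slots, so it suffers the matched drop roughly twice as often; published schemes refine this rough proportionality into much closer approximations of max-min fairness.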
In addition to their inherent complexity, a significant portion of the heat, board space, and cost of high-end routers is due to the need for large, high-speed RAM for packet buffers. Decongestion, however, needs almost no buffering. Theoretically, this is not surprising; previous work has shown that erasure coding can reduce the need for queuing in the network. In particular, for networks with large numbers of flows, coding schemes can provide similar goodput by replacing router buffers with coding windows of similar sizes [6]. In decongestion control, router buffers exist only to ensure that output links are driven to capacity: if the offered load is roughly equal to the router's output link capacity, small queues are needed to absorb the variation in arrival rates. We show in Section 6.4, however, that if the offered load is paced or sufficiently large, we can virtually eliminate buffering. Moreover, small queues help to bound delay and jitter for end-to-end traffic.

In addition to simplifying the routers themselves, decongestion control may also ease the task of configuring and managing them. Today, traffic engineers expend a great deal of effort to avoid overload during peak traffic times or during planned or unplanned maintenance. If a link fails, traffic is diverted along a backup path; as a result, the backup…
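The buffering claim above is easy to see in miniature. The slotted-time simulation below is our illustration, with arbitrary parameters rather than anything from the paper's Section 6.4; both sources offer exactly the link capacity on average and differ only in burstiness.

```python
import random

def peak_backlog(arrivals, slots=100_000, service=1):
    """Serve `service` packets per slot from an unbounded FIFO and
    report the largest backlog ever observed."""
    backlog, peak = 0, 0
    for _ in range(slots):
        backlog = max(0, backlog + arrivals() - service)
        peak = max(peak, backlog)
    return peak

# Paced source: exactly one packet per slot (offered load = capacity).
print("paced  peak backlog:", peak_backlog(lambda: 1))
# Bursty source: the same average load, delivered in bursts of 10.
print("bursty peak backlog:", peak_backlog(lambda: 10 if random.random() < 0.1 else 0))
```

The paced source never accumulates a backlog at all, while the bursty source typically drives it into the hundreds; a decongestion router would simply drop that excess, trading a little additional (coded-over) loss for near-empty, cheap buffers.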