Embracing Packet Loss in Congestion Control

Barath Raghavan, John McCullough, and Alex C. Snoeren
University of California, San Diego
{barath, jmccullo, snoeren}@cs.ucsd.edu

ABSTRACT

Loss avoidance has long been central to the Internet architecture. Ordinarily, dropped packets represent wasted resources. In this paper, we study whether the benefits of a network architecture that embraces—rather than avoids—widespread packet loss outweigh the potential loss in efficiency. We propose an alternative approach to Internet congestion control called decongestion control. In a departure from traditional approaches, end hosts strive to transmit packets faster than the network can deliver them, leveraging end-to-end erasure coding and in-network fairness enforcement. We argue that such an approach may decrease the complexity of routers while increasing stability, robustness, and tolerance for misbehavior. While a number of important design issues remain open, we show that our approach not only avoids congestion collapse, but delivers near optimal steady-state goodput for a variety of traffic demands in three different backbone topologies.

1. INTRODUCTION

Packet loss is taboo; to an Internet architect, it immediately signifies an inefficient design likely to exhibit instability and poor performance. In this paper, we argue that such an implication is not fundamental. In particular, there exist design points that provide many desirable properties—including near optimal performance—while suffering high loss rates. We focus specifically on congestion control, where researchers have long clung to the belief that loss avoidance is central to high throughput. Starting with Jacobson's initial TCP congestion control algorithm [19], the entire tradition of end-to-end congestion control has attempted to optimize network performance by tempering transmission rates in response to loss [18]. We argue that by removing the unnecessary yoke of loss avoidance from congestion control protocols, they can become less complex yet simultaneously more efficient, stable, and robust.

Of course, there are a number of very good reasons to avoid loss in today's networks. Many of these stem from the fact that loss is often a symptom of overflowing router buffers in the network, which can also lead to high latency, jitter, and poor fairness. Hence, one should contemplate congestion control protocols that thrive in the face of loss solely in the absence of queues—or at least only in the presence of very short ones. Yet queues are not an end in and of themselves; they exist largely to smooth bursty traffic arrivals and facilitate fair resource sharing. If these goals can be met—or obviated—through other means, then router buffers may be removed, or at least considerably shortened.

In this paper, we propose an alternative approach to congestion control that we call decongestion control [4]. As opposed to previous schemes that temper transmission rates to arrive at an efficient, loss-free allocation of bandwidth between flows, we do not limit an end host's sending rate. All hosts send faster than the bottleneck link capacity, which ensures flows use all the available bandwidth along their paths. Our architecture separates efficiency and fairness; end hosts manage efficiency while fairness is enforced by the routers.¹ We depart from previous architectures based upon network-enforced fairness by allowing overloaded routers to drop packets rather than queuing them, with the hope of decreasing the complexity and amount of resources required to enforce fairness. Our model suggests that efficiency can be maintained without requiring senders to decrease their transmission rates in the face of congestion.

¹ For simplicity we focus initially on max-min flow fairness [20].
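The max-min fairness named in the footnote can be made concrete with a small amount of code. The following is a minimal sketch in Python, not taken from the paper; the function name and the example numbers are illustrative. It computes max-min fair shares of a single bottleneck link by progressive filling, which is the allocation a fair-dropping router would ideally leave each flow with.

    # Illustrative sketch (not from the paper): max-min fair shares on a
    # single bottleneck link, computed by progressive filling.
    def max_min_shares(capacity, demands):
        """Return the max-min fair share of `capacity` for each demand.

        Repeatedly split the remaining capacity evenly among flows that are
        still unsatisfied; any flow demanding no more than the even split is
        capped at its demand and removed from consideration.
        """
        shares = [0.0] * len(demands)
        remaining = float(capacity)
        active = list(range(len(demands)))
        while active and remaining > 1e-12:
            split = remaining / len(active)
            still_active = []
            for i in active:
                want = demands[i] - shares[i]
                if want <= split:
                    shares[i] += want            # demand fully satisfied
                    remaining -= want
                else:
                    still_active.append(i)
            if len(still_active) == len(active):
                # Every remaining flow can absorb a full even split.
                for i in still_active:
                    shares[i] += split
                remaining = 0.0
            active = still_active
        return shares

    # A 10 Gb/s bottleneck shared by two decongestion senders offering
    # 8 Gb/s each and one modest sender offering 1 Gb/s:
    print(max_min_shares(10.0, [8.0, 8.0, 1.0]))   # -> [4.5, 4.5, 1.0]

Under decongestion control every sender's offered load exceeds its share, so a router enforcing this allocation simply drops each flow's excess rather than queuing it.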
Such an approach (sometimes referred to as a firehose [48]) is often dismissed in the literature due to the potential for congestion collapse—a condition in which the network is saturated with packets but total end-to-end goodput is low. However, congestion collapse occurs only under two conditions: if receivers are unable to deal with high loss (so-called classical congestion collapse), or if the network topology is such that packet drops occur deep in the network, thereby consuming network resources that could be fruitfully consumed by other flows [18]. The first concern can be addressed by applying efficient erasure coding [32, 35]. It is unknown whether the second condition arises frequently in practice; it occurs rarely in the backbone topologies we study.

Hence, we argue that packet loss may not need to be avoided, and that the potential exists to operate future networks at 100% utilization. Operating in this largely uncharted regime, where loss is frequent but inconsequential, raises a number of new design challenges, but also presents tremendous opportunity. At first blush, we have identified several potential advantages over traditional schemes:

Efficiency. Sending packets faster than the bottleneck capacity ensures utilization of all available network resources between source and destination. With appropriate use of erasure codes, almost all delivered packets will be useful.

Simplicity. Because coding renders packet drops (and reordering) inconsequential, it may be possible to simplify the design of routers and dispense with the need for expensive, power-hungry fast line-card memory.

Stability. Decongestion transforms a sender's main task from adjusting transmission rate to ensuring an appropriate encoding. Unlike the former, however, one can design a protocol that adjusts the latter without impacting other flows.

Robustness. Existing congestion control protocols are susceptible to a variety of sender misbehaviors, many of which cannot be mitigated by router fairness enforcement. Because end points are already forced to cope with high levels of loss and reordering in steady state, decongestion is inherently more tolerant.

Given our substantial departure from conventional congestion control, we do not attempt to exhaustively address all of the details needed for a complete protocol in this paper. Instead, we make the following concrete contributions:

• We present a rigorous study of the potential for congestion collapse in a number of realistic backbone topologies under a variety of conditions. In particular, we confront the key concern raised by our approach: by saturating network links, decongestion control is in danger of wasting network resources on dead packets—packets that are transmitted over several hops, only to be dropped before arriving at their destinations [18, 26].

• We demonstrate that while abundant, dead packets frequently do not impact network-wide goodput when fairness is enforced by routers. We introduce the concept of zombie packets, a far rarer, restricted class of dead packets, and show that they are the key cause of congestion collapse.

• We propose a decongestion control algorithm that eliminates zombie packets, and implement a prototype called Opal. We evaluate several basic properties by comparing its performance to TCP in simulation and identify key areas of future work, including improving the efficiency of very short flows.

To be clear, we are not claiming that existing approaches to end-to-end congestion control are ill-considered. Rather, we assert that a far wider design space exists than researchers have previously explored, and evaluate the pros and cons of one particular alternative. We believe it may prove superior to existing approaches while simultaneously alleviating a number of long-standing vexing issues.

2. MOTIVATION

Decongestion control is optimistic: senders always attempt to over-drive network links. Should available capacity increase at any router due to, for example, the completion of a flow, the remaining flows instantaneously take advantage of the freed link resources. To translate increased throughput into increased goodput, senders encode flows using an erasure coding scheme appropriate for the path loss rate experienced by the receiver.
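The coupling between path loss and coding rate can be illustrated with a back-of-the-envelope sketch. The Python below is not from the paper: it assumes an idealized rateless code that decodes once roughly k(1 + overhead) symbols have been received, whereas the specific codes the paper cites [32, 35] differ in detail.

    # Illustrative sketch (not from the paper): how much a sender must
    # over-encode for a given path loss rate.
    import math

    def coding_plan(k, loss_rate, overhead=0.05):
        """Return (symbols_to_send, useful_fraction_of_delivered).

        symbols_to_send: coded symbols to emit so that, in expectation,
        enough survive the path loss for the receiver to decode k packets.
        useful_fraction_of_delivered: of the symbols that do arrive, the
        fraction that is useful data; this depends only on the code's
        overhead, not on how many packets the network drops.
        """
        if not 0.0 <= loss_rate < 1.0:
            raise ValueError("loss_rate must be in [0, 1)")
        received_needed = k * (1.0 + overhead)
        symbols_to_send = math.ceil(received_needed / (1.0 - loss_rate))
        useful_fraction = k / received_needed        # = 1 / (1 + overhead)
        return symbols_to_send, useful_fraction

    # A sender that deliberately overdrives its bottleneck and observes 60%
    # path loss must emit about 2.6x as many symbols, yet roughly 95% of the
    # bytes that are actually delivered remain useful data:
    print(coding_plan(1000, 0.60))                   # -> (2625, 0.952...)

This is the sense in which the Efficiency paragraph above can claim that almost all delivered packets will be useful: drops waste raw transmissions, but barely dent the goodput carried by the packets that survive.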
2.1 Simplicity

Much of the complexity in today's routers stems from the elaborate cross-connects and buffering schemes necessary to ensure the loss-free forwarding at line rate that TCP requires. In addition, TCP's sensitivity to packet reordering complicates parallelizing router switch fabrics. Adding traditional fairness enforcement or policing mechanisms to high-speed routers even further complicates matters [34, 42, 47, 48]. Lifting the requirements for loss-free, in-order forwarding may enable significantly simpler and cheaper router designs. Decongestion control requires only a fair dropping policy to enforce inter-flow fairness. A number of efficient schemes have been proposed in the literature [22, 41], and it may be possible to further reduce the implementation complexity in core routers [48].

In addition to their inherent complexity, a significant portion of the heat, board space, and cost of high-end routers is due to the need for large, high-speed RAM for packet buffers. Decongestion, however, needs almost no buffering. Theoretically, this is not surprising; previous work has shown that erasure coding can reduce the need for queuing in the network. In particular, for networks with large numbers of flows, coding schemes can provide similar goodput by replacing router buffers with coding windows of similar sizes [6]. In decongestion control, router buffers exist only to ensure that output links are driven to capacity: if the offered load is roughly equal to the router's output link capacity, small queues are needed to absorb the variation in arrival rates. We show in Section 6.4, however, that if the offered load is paced or sufficiently large, we can virtually eliminate buffering. Moreover, small queues help to bound delay and jitter for end-to-end traffic.

In addition to simplifying the routers themselves, decongestion control may also ease the task of configuring and managing them. Today, traffic engineers expend a great deal of effort to avoid overload during peak traffic times or during planned or unplanned maintenance. If a link fails, traffic is diverted along a backup path; as a result, the backup
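Returning to the fair dropping requirement at the start of Section 2.1: the excerpt cites efficient schemes [22, 41] without spelling one out, so the toy below is a stand-in of our own devising, a minimal sketch in the spirit of longest-queue-drop buffer management rather than the paper's mechanism. When the buffer is full, it evicts a packet of whichever flow currently occupies the most buffer slots.

    # Toy fair-dropping buffer (illustrative, not from the paper): when full,
    # drop from the flow that currently holds the most buffered packets.
    from collections import deque, defaultdict

    class LongestQueueDrop:
        """FIFO output buffer with a longest-queue-drop eviction policy.

        Flows that overdrive the output link accumulate the most buffered
        packets and are therefore the ones that lose packets when the buffer
        fills, so buffer occupancy, and hence delivered bandwidth, evens out
        across backlogged flows without any explicit rate limiting.
        """

        def __init__(self, capacity=64):
            self.capacity = capacity
            self.buffer = deque()                # FIFO of flow ids
            self.occupancy = defaultdict(int)    # buffered packets per flow

        def enqueue(self, flow_id):
            """Admit a packet from flow_id; return False if it is dropped."""
            if len(self.buffer) >= self.capacity:
                heaviest = max(self.occupancy, key=self.occupancy.get)
                if self.occupancy[flow_id] >= self.occupancy[heaviest]:
                    return False                 # arrival is from the heaviest flow
                self.buffer.remove(heaviest)     # evict one packet of that flow
                self.occupancy[heaviest] -= 1
            self.buffer.append(flow_id)
            self.occupancy[flow_id] += 1
            return True

        def dequeue(self):
            """Transmit the packet at the head of the buffer, if any."""
            if not self.buffer:
                return None
            flow_id = self.buffer.popleft()
            self.occupancy[flow_id] -= 1
            return flow_id

    # Flow "a" overdrives the link at 9 packets per tick, flow "b" offers 1,
    # and the output link drains 2 packets per tick.
    q = LongestQueueDrop(capacity=64)
    delivered = defaultdict(int)
    for _ in range(10000):
        for _ in range(9):
            q.enqueue("a")
        q.enqueue("b")
        for _ in range(2):
            pkt = q.dequeue()
            if pkt is not None:
                delivered[pkt] += 1
    print(dict(delivered))   # deliveries end up near 1:1 despite the 9:1 offered load

A line-rate router would need a cheaper approximation of this decision; the point of the sketch is only how little machinery the fairness half of the architecture demands once queuing for loss avoidance is off the table.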
