The Adverse Impact of the TCP Congestion-Control Mechanism in Heterogeneous Computing Systems


In Proceedings of the 2000 International Conference on Parallel Processing (ICPP 2000).

The Adverse Impact of the TCP Congestion-Control Mechanism in Heterogeneous Computing Systems

Wu-chun Feng† and Peerapol Tinnakornsrisuphap‡
[email protected], [email protected]

† Research & Development in Advanced Network Technology (RADIANT), Computing, Information, and Communications Division, Los Alamos National Laboratory, Los Alamos, NM 87545
‡ Department of Electrical & Computer Engineering, University of Wisconsin-Madison, Madison, WI 53706

(This work was supported by the U.S. Dept. of Energy through Los Alamos National Laboratory contract W-7405-ENG-36. This paper is LA-UR 00-2554.)

Abstract

Via experimental study, we illustrate how TCP modulates application traffic in such a way as to adversely affect network performance in a heterogeneous computing system. Even when aggregate application traffic smooths out as more applications' traffic are multiplexed, TCP induces burstiness into the aggregate traffic load, and thus hurts network performance. This burstiness is particularly bad in TCP Reno, and even worse when RED gateways are employed. Based on the results of this experimental study, we then develop a stochastic model for TCP Reno to demonstrate how the burstiness in TCP Reno can be modeled.

Keywords: TCP, heterogeneous computing, high-performance networking, network traffic characterization.

1 Introduction

The ability to characterize the behavior of the resulting aggregate network traffic can provide insight into how traffic should be scheduled to make efficient use of the network, and yet still deliver expected quality-of-service (QoS) to end users. These issues are of fundamental importance in widely-distributed, heterogeneous computational grids such as the Earth System Grid [2].

Recent studies in network traffic characterization have concluded that network traffic is self-similar in nature [9, 14]. That is, when traffic is aggregated over varying time scales, the aggregate traffic pattern remains bursty, regardless of the time granularity. Additional studies have concluded that the heavy-tailed distributions of file size, packet interarrival, and transfer duration fundamentally contribute to the self-similar nature of aggregate network traffic [16].

The problems with the above research are three-fold. First, the knowledge that self-similar traffic is bursty at coarse-grained time scales provides little insight into the network's ability to achieve an expected QoS through the Internet's use of traditional statistical-multiplexing techniques because the effectiveness of such techniques manifests itself at the granularity of milliseconds, not tens or hundreds of seconds [5]. Second, although current models of network traffic may apply to existing file-size distributions and traffic-arrival patterns, these models will not generalize as new applications and services are introduced to the Next-Generation Internet [15]. Third, and most importantly, the proofs of the relationship between heavy-tailed distributions and self-similar traffic in [16, 7] ignore the involvement of the TCP congestion-control mechanism.

A recent study [3] by Feng et al. addresses the above problems by focusing on TCP in a homogeneous parallel-computing environment (or cluster computer), where every computing node runs the same implementation of TCP. Feng et al. show that all flavors of TCP induce burstiness, and hence may contribute to the self-similar nature of aggregate traffic. However, the study does not examine the effects that different TCP implementations have on each other in a heterogeneous parallel-computing system.
Mo et al. [10] argue that in a heterogeneous networking environment TCP Reno steals bandwidth from TCP Vegas while keeping its packet loss low as well, hence providing a disincentive for researchers in high-performance computing to switch from Reno to Vegas. In fact, while the Linux 2.1 kernel had TCP Vegas as its reliable communication protocol, the Linux 2.2 kernel has reverted back to TCP Reno based on studies such as [10]. Contrary to [10], we demonstrate that TCP Reno performs worse than TCP Vegas in a heterogeneous computing environment.

2 Background

TCP is a connection-oriented service that guarantees reliable, in-order delivery of data. Its flow-control mechanism ensures that a sender does not overrun the buffer at the receiver, and its congestion-control mechanism tries to prevent too much data from being injected into the network. While the size of the flow-control window is static, the size of the congestion window evolves over time, according to the status of the network.

2.1 TCP Congestion Control

Currently, the most widely-used TCP implementation is TCP Reno [6]. Its congestion-control mechanism has two phases: slow start and congestion avoidance. In slow start, the congestion window grows exponentially until a timeout occurs, which implies that a packet has been lost. At this point, a slow-start threshold (ssthresh) value is set to the halved window size; TCP Reno resets the congestion-window size to one and re-enters the slow-start phase, increasing the congestion window exponentially up to ssthresh. When ssthresh is reached, TCP Reno enters its congestion-avoidance phase, in which the congestion window is increased by "one packet" every time the sender successfully transmits a window's worth of packets across the network. When a packet is lost during congestion avoidance, TCP Reno takes the same actions as when a packet is lost during slow start.

To enhance performance, Reno also implements fast-retransmit and fast-recovery mechanisms for both the slow-start and congestion-avoidance phases. Rather than timing out while waiting for the acknowledgment (ACK) of a lost packet, if the sender receives three duplicate ACKs (indicating that some packet was lost but later packets were received), the sender immediately retransmits the lost packet (fast retransmit). Since later packets were received, the network congestion is assumed to be less severe than if all packets were lost, and the sender only halves its congestion window and re-enters the congestion-avoidance phase (fast recovery) without going through slow start again.

TCP Vegas [1] introduces a new congestion-control mechanism that tries to prevent congestion rather than react to it after it has occurred. The basic idea is as follows: when the congestion window increases in size, the expected sending rate (Expected) increases as well. However, if the actual sending rate (Actual) stays roughly the same, this implies that there is not enough bandwidth available to send at Expected, and therefore, any increase in the size of the congestion window will result in packets filling up the buffer space at the bottleneck gateway. TCP Vegas attempts to detect this phenomenon and avoid congestion at the bottleneck gateway by adjusting the congestion-window size, and hence the sending rate, as necessary to adapt to the available bandwidth.

To adjust the window size appropriately, Vegas defines two threshold values, α and β, for the congestion-avoidance phase, and a third threshold value, γ, for the transition between the slow-start phase and the congestion-avoidance phase. Conceptually, α = 1 implies that Vegas tries to keep at least one packet from each stream queued in the gateway, while β = 3 keeps at most three packets from each stream. If Diff = Expected − Actual, then when Diff < α, Vegas increases the congestion window linearly during the next RTT. When Diff > β, Vegas decreases the window linearly during the next RTT. And when α ≤ Diff ≤ β, the window remains unchanged. The parameter γ can be viewed as the "initial" β when Vegas enters its congestion-avoidance phase.

To enhance the performance of TCP, Floyd and Jacobson proposed the use of random early detection (RED) gateways [4] to detect incipient congestion. To accomplish this detection, RED gateways maintain an exponentially-weighted moving average of the queue length. As long as the average queue length stays below the minimum threshold (min_th), all packets are queued, and thus, no packets are dropped. When the average queue length exceeds min_th, packets are dropped with some calculated probability p. And when the average queue length exceeds a maximum threshold (max_th), all arriving packets are dropped.
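The contrast between Reno's loss-driven window adjustment and Vegas's delay-driven adjustment can be made concrete with a short sketch. The Python below is illustrative only and is not the paper's simulator: the function names, the per-ACK/per-RTT granularity, and the rate estimates are assumptions made for clarity, while the α = 1 and β = 3 defaults simply follow the description above.

```python
# Simplified sketch of the window-adjustment rules described in Section 2.1.
# Illustrative only: timers, packet sizes, and RTT sampling are omitted.

def reno_on_ack(cwnd, ssthresh):
    """Reno growth: exponential in slow start, about one packet per RTT afterwards."""
    if cwnd < ssthresh:           # slow start: +1 per ACK doubles the window each RTT
        return cwnd + 1
    return cwnd + 1.0 / cwnd      # congestion avoidance: +1 packet per window of ACKs

def reno_on_loss(cwnd, triple_dup_ack):
    """Reno loss response: halve on 3 duplicate ACKs (fast recovery), collapse to 1 on timeout."""
    ssthresh = max(cwnd // 2, 2)
    cwnd = ssthresh if triple_dup_ack else 1    # timeout re-enters slow start
    return cwnd, ssthresh

def vegas_adjust(cwnd, base_rtt, observed_rtt, alpha=1, beta=3):
    """Vegas congestion avoidance: keep between alpha and beta packets queued in the network."""
    expected = cwnd / base_rtt              # rate if no queueing occurred
    actual = cwnd / observed_rtt            # rate actually observed this RTT
    diff = (expected - actual) * base_rtt   # estimated packets sitting in the bottleneck queue
    if diff < alpha:                        # too little data in flight: grow linearly
        return cwnd + 1
    if diff > beta:                         # queue building up: shrink linearly
        return cwnd - 1
    return cwnd                             # alpha <= Diff <= beta: hold steady
```

Written this way, it is easy to see why Reno's behavior is inherently bursty (multiplicative cuts on loss followed by renewed probing), while Vegas tries to hold the bottleneck queue between α and β packets per stream.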
2.2 TCP Probability & Statistics

Rather than use the Hurst parameter from self-similar modeling, as is done in many studies of network traffic [9, 12, 13, 14, 16], we use the coefficient of variation (c.o.v.) because it better reflects the predictability of incoming traffic at finer time granularities, and consequently, the effectiveness of statistical multiplexing over the Internet. The c.o.v. is the ratio of the standard deviation to the mean of the observed number of packets arriving at a gateway in each round-trip propagation delay. The c.o.v. gives a normalized value for the "spread" of a distribution and allows for the comparison of "spreads" over a varying number of communication streams. If the c.o.v. is small, the amount of traffic coming into the gateway in each RTT will concentrate mostly around the mean, and therefore will yield better performance via statistical multiplexing.

By the Central Limit Theorem, the sum of independent variables results in a random variable with less burstiness, or equivalently, a smaller c.o.v. However, even if traffic sources are independent, TCP introduces dependency between the sources, and the traffic does not smooth out as more sources are aggregated, i.e., the c.o.v. is large [3].

3 Experimental Study

The goal of this study is to understand the dynamics of how TCP modulates application-generated traffic in a heterogeneous computing system. [...] traffic, and compare it to the measured c.o.v. of the aggregate TCP-modulated traffic as it arrives at the gateway. The parameters used in the simulation are shown in Table 1.

[Figure: simulation topology with clients connected through a gateway/router to a server; link parameters µc and τc.]
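To make the c.o.v. metric used throughout this study concrete, the sketch below computes it from packet-arrival timestamps binned by the round-trip propagation delay. It is a minimal illustration under assumed inputs (a list of arrival times and a fixed RTT), not the instrumentation used in the paper's simulations.

```python
import statistics

def cov_per_rtt(arrival_times, rtt, duration):
    """Coefficient of variation of the number of packet arrivals per round-trip time.

    arrival_times: packet arrival timestamps at the gateway, in seconds
    rtt:           round-trip propagation delay used as the bin width, in seconds
    duration:      total observation period, in seconds
    """
    bins = max(1, int(duration / rtt))
    counts = [0] * bins
    for t in arrival_times:
        idx = int(t / rtt)
        if idx < bins:
            counts[idx] += 1
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    return stdev / mean if mean > 0 else float("inf")
```

A small c.o.v. indicates that per-RTT arrivals cluster near the mean (smooth traffic that multiplexes well); a large c.o.v. indicates bursty, hard-to-schedule traffic.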
Recommended publications
  • Modeling and Performance Evaluation of LTE Networks with Different TCP Variants Ghassan A. Abed, Mahamod Ismail, and Kasmiran Jumari
    World Academy of Science, Engineering and Technology, International Journal of Electronics and Communication Engineering Vol:5, No:3, 2011. Modeling and Performance Evaluation of LTE Networks with Different TCP Variants. Ghassan A. Abed, Mahamod Ismail, and Kasmiran Jumari. Abstract—Long Term Evolution (LTE) is a 4G wireless broadband technology developed by the Third Generation Partnership Project (3GPP) release 8, and it represents the competitiveness of Universal Mobile Telecommunications System (UMTS) for the next 10 years and beyond. The concepts for LTE systems have been introduced in 3GPP release 8, with the objective of a high-data-rate, low-latency and packet-optimized radio access technology. In this paper, the performance of different TCP variants over an LTE network is investigated. The performance of TCP over LTE is affected mostly by the links of the wired network and the total bandwidth available at the serving base station. This paper describes an NS-2 based simulation analysis of TCP-Vegas, TCP-Tahoe, TCP-Reno, TCP-Newreno, TCP-SACK, and TCP-FACK, with full modeling of all traffic of the LTE system. Comparing the performance of 3G and its evolution to LTE, LTE does not offer anything unique to improve spectral efficiency, i.e. bps/Hz; LTE improves system performance by using wider bandwidths if the spectrum is available [2]. Universal Terrestrial Radio Access Network Long-Term Evolution (UTRAN LTE), also known as Evolved UTRAN (EUTRAN), is a new radio access technology proposed by the 3GPP to provide a very smooth migration towards fourth generation (4G) networks [3]. EUTRAN is a system currently under development within 3GPP. LTE systems are under development now, and the TCP protocol is the most widely…
  • Traffic Management in Multi-Service Access Networks
    MARKETING REPORT MR-404: Traffic Management in Multi-Service Access Networks. Issue: 01. Issue Date: September 2017. © The Broadband Forum. All rights reserved. Issue History: Issue 01, approved 4 September 2017, published 13 October 2017, Issue Editor Christele Bouchat (Nokia), Changes: Original. Comments or questions about this Broadband Forum Marketing Draft should be directed to [email protected]. Editors: Francois Fredricx (Nokia) [email protected], Florian Damas (Nokia) [email protected], Ing-Jyh Tsang (Nokia) [email protected]. Innovation Leadership: Christele Bouchat (Nokia) [email protected], Mauro Tilocca (Telecom Italia) [email protected]. Executive Summary: Traffic Management is a widespread industry practice for ensuring that networks operate efficiently, including mechanisms such as queueing, routing, restricting or rationing certain traffic on a network, and/or giving priority to some types of traffic under certain network conditions, or at all times. The goal is to minimize the impact of congestion in networks on the traffic's Quality of Service. It can be used to achieve certain performance goals, and its careful application can ultimately improve the quality of an end user's experience in a technically and economically sound way, without detracting from the experience of others. Several Traffic Management mechanisms are vital for a functioning Internet carrying all sorts of Over-the-Top (OTT) applications in "Best Effort" mode. Another set of Traffic Management mechanisms is also used in networks involved in a multi-service context, to provide differentiated treatment of various services (e.g.…
  • Analysing TCP Performance When Link Experiencing Packet Loss
    Analysing TCP performance when link experiencing packet loss Master of Science Thesis [in the Programme Networks and Distributed System] SHAHRIN CHOWDHURY KANIZ FATEMA Chalmers University of Technology University of Gothenburg Department of Computer Science and Engineering Göteborg, Sweden, October 2013 The Author grants to Chalmers University of Technology and University of Gothenburg the non-exclusive right to publish the Work electronically and in a non-commercial purpose make it accessible on the Internet. The Author warrants that he/she is the author to the Work, and warrants that the Work does not contain text, pictures or other material that violates copyright law. The Author shall, when transferring the rights of the Work to a third party (for example a publisher or a company), acknowledge the third party about this agreement. If the Author has signed a copyright agreement with a third party regarding the Work, the Author warrants hereby that he/she has obtained any necessary permission from this third party to let Chalmers University of Technology and University of Gothenburg store the Work electronically and make it accessible on the Internet. Analysing TCP performance when link experiencing packet loss SHAHRIN CHOWDHURY, KANIZ FATEMA © SHAHRIN CHOWDHURY, October 2013. © KANIZ FATEMA, October 2013. Examiner: TOMAS OLOVSSON Chalmers University of Technology University of Gothenburg Department of Computer Science and Engineering SE-412 96 Göteborg Sweden Telephone + 46 (0)31-772 1000 Department of Computer Science and Engineering Göteborg, Sweden October 2013 Acknowledgement We are grateful to our supervisor and examiner Tomas Olovsson for his valuable time and assistance in compilation for this thesis.
  • Application Note Pre-Deployment and Network Readiness Assessment Is Essential
    Application Note. Title: Six Steps To Getting Your Network Ready For Voice Over IP. Date: January 2005. Contents: Overview; Pre-Deployment and Network Readiness Assessment Is Essential; Types of VoIP Performance Problems; Six Steps To Getting Your Network Ready For VoIP; Deploy Network In Stages; Summary. Overview: This Application Note provides enterprise network managers with a six step methodology, including pre-deployment testing and network readiness assessment, to follow when preparing their network for Voice over IP service. Designed to help alleviate or significantly reduce call quality and network performance related problems, this document also includes useful problem descriptions and other information essential for successful VoIP deployment. Pre-Deployment and Network Readiness Assessment Is Essential: IP Telephony is very different from conventional data applications in that call quality is especially sensitive to IP network impairments. Existing network and "traditional" call quality problems become much more obvious with the deployment of VoIP service. For network… Types of VoIP Performance Problems: IP Telephony: IP Network Problems, Equipment Configuration and Signaling Problems, and Analog/TDM Interface Problems. IP Network Problems: Jitter -- Variation in packet transmission time that leads to packets being discarded in VoIP…
  • TCP Congestion Control: Overview and Survey of Ongoing Research
    Purdue University, Purdue e-Pubs, Department of Computer Science Technical Reports, Department of Computer Science, 2001. TCP Congestion Control: Overview and Survey Of Ongoing Research. Sonia Fahmy, Purdue University, [email protected]; Tapan Prem Karwa. Report Number: 01-016. Fahmy, Sonia and Karwa, Tapan Prem, "TCP Congestion Control: Overview and Survey Of Ongoing Research" (2001). Department of Computer Science Technical Reports. Paper 1513. https://docs.lib.purdue.edu/cstech/1513. This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact [email protected] for additional information. TCP CONGESTION CONTROL: OVERVIEW AND SURVEY OF ONGOING RESEARCH. Sonia Fahmy, Tapan Prem Karwa, Department of Computer Sciences, Purdue University, West Lafayette, IN 47907. CSD TR #01-016, September 2001. TCP Congestion Control: Overview and Survey of Ongoing Research. Sonia Fahmy and Tapan Prem Karwa, Department of Computer Sciences, 1398 Computer Science Building, Purdue University, West Lafayette, IN 47907-1398. E-mail: {fahmy,tpk}@cs.purdue.edu. Abstract: This paper studies the dynamics and performance of the various TCP variants including TCP Tahoe, Reno, NewReno, SACK, FACK, and Vegas. The paper also summarizes recent work at the IETF on TCP implementation, and TCP adaptations to different link characteristics, such as TCP over satellites and over wireless links. 1 Introduction. The Transmission Control Protocol (TCP) is a reliable connection-oriented stream protocol in the Internet Protocol suite. A TCP connection is like a virtual circuit between two computers, conceptually very much like a telephone connection. To maintain this virtual circuit, TCP at each end needs to store information on the current status of the connection, e.g., the last byte sent.
  • Latency and Throughput Optimization in Modern Networks: a Comprehensive Survey Amir Mirzaeinnia, Mehdi Mirzaeinia, and Abdelmounaam Rezgui
    Latency and Throughput Optimization in Modern Networks: A Comprehensive Survey. Amir Mirzaeinnia, Mehdi Mirzaeinia, and Abdelmounaam Rezgui. Abstract—Modern applications are highly sensitive to communication delays and throughput. This paper surveys major attempts on reducing latency and increasing the throughput. These methods are surveyed on different networks and surroundings such as wired networks, wireless networks, application layer transport control, Remote Direct Memory Access, and machine learning based transport control. Index Terms—Rate and Congestion Control, Internet, Data Center, 5G, Cellular Networks, Remote Direct Memory Access, Named Data Network, Machine Learning. I. INTRODUCTION. Recent applications such as Virtual Reality (VR), autonomous cars or aerial vehicles, and telehealth need high throughput and low latency communication. On one hand every user likes to send and receive their data as quickly as possible. On the other hand the network infrastructure that connects users has limited capacities and these are usually shared among users. There are some technologies that dedicate their resources to some users but they are not very commonly used. The reason is that although dedicated resources are more secure they are more expensive to implement. Sharing a physical channel among multiple transmitters needs a technique to control their rate in proper time. The very first congestion network collapse was observed and reported by Van Jacobson in 1986. It caused a rate reduction from 32 kbps to 40 bps [3], about a thousand-fold. Since then very different variations of the Transport Control Protocol (TCP)…
  • Lab 6: Understanding Traditional TCP Congestion Control (HTCP, Cubic, Reno)
    NETWORK TOOLS AND PROTOCOLS. Lab 6: Understanding Traditional TCP Congestion Control (HTCP, Cubic, Reno). Document Version: 06-14-2019. Award 1829698 "CyberTraining CIP: Cyberinfrastructure Expertise on High-throughput Networks for Big Science Data Transfers". Contents: Overview; Objectives; Lab settings; Lab roadmap; 1 Introduction to TCP; 1.1 TCP review; 1.2 TCP throughput; 1.3 TCP packet loss event; 1.4 Impact of packet loss in high-latency networks; 2 Lab topology…
  • A QUIC Implementation for ns-3
    This paper has been submitted to WNS3 2019. Copyright may be transferred without notice. A QUIC Implementation for ns-3. Alvise De Biasio, Federico Chiariotti, Michele Polese, Andrea Zanella, Michele Zorzi. Department of Information Engineering, University of Padova, Padova, Italy. e-mail: {debiasio, chiariot, polesemi, zanella, zorzi}@dei.unipd.it. ABSTRACT: Quick UDP Internet Connections (QUIC) is a recently proposed transport protocol, currently being standardized by the Internet Engineering Task Force (IETF). It aims at overcoming some of the shortcomings of TCP, while maintaining the logic related to flow and congestion control, retransmissions and acknowledgments. It supports multiplexing of multiple application layer streams in the same connection, a more refined selective acknowledgment scheme, and low-latency connection establishment. It also integrates cryptographic functionalities in the protocol design. Moreover, QUIC is deployed at the application layer, and encapsulates its packets in UDP datagrams. One of the most important novelties is QUIC, a transport protocol implemented at the application layer, originally proposed by Google [8] and currently considered for standardization by the Internet Engineering Task Force (IETF) [6]. QUIC addresses some of the issues that currently affect transport protocols, and TCP in particular. First of all, it is designed to be deployed on top of UDP to avoid any issue with middleboxes in the network that do not forward packets from protocols other than TCP and/or UDP [12]. Moreover, unlike TCP, QUIC is not integrated in the kernel of the Operating Systems (OSs), but resides in user space, so that changes in the protocol implementation will not require OS updates. Finally,…
  • Adaptive Method for Packet Loss Types in IoT: An Naive Bayes Distinguisher
    electronics Article Adaptive Method for Packet Loss Types in IoT: An Naive Bayes Distinguisher Yating Chen , Lingyun Lu *, Xiaohan Yu * and Xiang Li School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China; [email protected] (Y.C.); [email protected] (X.L.) * Correspondence: [email protected] (L.L.); [email protected] (X.Y.) Received: 31 December 2018; Accepted: 23 January 2019; Published: 28 January 2019 Abstract: With the rapid development of IoT (Internet of Things), massive data is delivered through trillions of interconnected smart devices. The heterogeneous networks trigger frequently the congestion and influence indirectly the application of IoT. The traditional TCP will highly possible to be reformed supporting the IoT. In this paper, we find the different characteristics of packet loss in hybrid wireless and wired channels, and develop a novel congestion control called NB-TCP (Naive Bayesian) in IoT. NB-TCP constructs a Naive Bayesian distinguisher model, which can capture the packet loss state and effectively classify the packet loss types from the wireless or the wired. More importantly, it cannot cause too much load on the network, but has fast classification speed, high accuracy and stability. Simulation results using NS2 show that NB-TCP achieves up to 0.95 classification accuracy and achieves good throughput, fairness and friendliness in the hybrid network. Keywords: Internet of Things; wired/wireless hybrid network; TCP; naive bayesian model 1. Introduction The Internet of Things (IoT) is a new network industry based on Internet, mobile communication networks and other technologies, which has wide applications in industrial production, intelligent transportation, environmental monitoring and smart homes.
  • A Comparison of VoIP Performance on IPv6 and IPv4 Networks
    A Comparison of VoIP Performance on IPv6 and IPv4 Networks. Roman Yasinovskyy, Alexander L. Wijesinha, Ramesh K. Karne, and Gholam Khaksari, Towson University. Abstract—We compare VoIP performance on IPv6 and IPv4 LANs in the presence of varying levels of background UDP traffic. A conventional softphone is used to make calls and a bare PC (operating system-less) softphone is used as a control to determine the impact of system overhead. The performance measures are maximum and mean delta (the time between the arrival of voice packets), maximum and mean jitter, packet loss, MOS (Mean Opinion Score), and throughput. We also determine the relative frequency distribution for delta. It is found that mean values of delta for IPv4 and IPv6 are similar although maximum values are much higher than the mean and show more variability at higher levels of background traffic. The maximum jitter for IPv6 is slightly higher than for IPv4 but mean jitter values are approximately the same. [...] increasingly used on the Internet today. The increase in IPv6 packet size due to the larger addresses is partly offset by a streamlined header with optional extension headers (the header fields in IPv4 for fragmentation support and checksum are eliminated in IPv6). As the growing popularity of VoIP will make it a significant component of traffic in the future Internet, it is of interest to compare VoIP performance over IPv6 and IPv4. The results would help to determine if there are any differences in VoIP performance over IPv6 compared to IPv4 due to overhead resulting from the larger IPv6 header (and packet size). We focus on comparing VoIP performance with IPv6 and…
  • Congestion Control Overview
    ELEC3030 (EL336) Computer Networks, S Chen. Congestion Control Overview. • Problem: When too many packets are transmitted through a network, congestion occurs. At very high traffic, performance collapses completely, and almost no packets are delivered. [Figure: packets delivered vs. packets sent, showing the perfect, desirable, and congested regions relative to the maximum carrying capacity of the subnet.] • Causes: the bursty nature of traffic is the root cause. When part of the network can no longer cope with a sudden increase of traffic, congestion builds up. Other factors, such as lack of bandwidth, ill-configuration and slow routers, can also bring up congestion. • Solution: congestion control, with two basic approaches. Open-loop: try to prevent congestion occurring by good design. Closed-loop: monitor the system to detect congestion, pass this information to where action can be taken, and adjust system operation to correct the problem (detect, feedback and correct). • Differences between congestion control and flow control: Congestion control tries to make sure the subnet can carry the offered traffic, a global issue involving all the hosts and routers; it can be open-loop based or involve feedback. Flow control is related to point-to-point traffic between a given sender and receiver; it always involves direct feedback from receiver to sender. ELEC3030 (EL336) Computer Networks, S Chen. Open-Loop Congestion Control. • Prevention: Different policies at various layers can affect congestion, and these are summarised in the table, e.g. the retransmission policy at the data link layer. [Table excerpt: Transport layer: Retransmission policy; Out-of-order caching…]
  • An Experimental Evaluation of Low Latency Congestion Control for mmWave Links
    An Experimental Evaluation of Low Latency Congestion Control for mmWave Links. Ashutosh Srivastava, Fraida Fund, Shivendra S. Panwar, Department of Electrical and Computer Engineering, NYU Tandon School of Engineering. Emails: fashusri, ffund, [email protected]. Abstract—Applications that require extremely low latency are expected to be a major driver of 5G and WLAN networks that include millimeter wave (mmWave) links. However, mmWave links can experience frequent, sudden changes in link capacity due to obstructions in the signal path. These dramatic variations in link capacity cause a temporary "bufferbloat" condition during which delay may increase by a factor of 2-10. Low latency congestion control protocols, which manage bufferbloat by minimizing queue occupancy, represent a potential solution to this problem, however their behavior over links with dramatic variations in capacity is not well understood. In this paper, we explore the behavior of two major low latency congestion control protocols, TCP BBR and TCP Prague (as part of L4S), using link traces collected over mmWave links under various conditions. Our evaluation reveals potential problems associated with use of these congestion control protocols for low latency applications over mmWave links. Index Terms—congestion control, low latency, millimeter wave. [Fig. 1: Effect of human blocker on received signal strength (RSSI, dBm) of a 60 GHz WLAN link over time.] [Fig. 2: Illustration of the effect of variations in link capacity on queueing delay; at a link rate of 5 packets/ms the queuing delay is 1 ms, at 1 packet/ms it is 5 ms.] I. INTRODUCTION: Low latency applications are envisioned to play a significant role in the long-term growth and economic impact of 5G and future WLAN networks [1]. [...] This effect is of great concern to low-delay applications,…