Improving Latency with Active Queue Management (AQM) During COVID-19 by Allen Flickinger, Carl Klatsky, Atahualpa Ledesma, Jason Livingood, Sebnem Ozer 30 July 2021


1. Abstract

During the COVID-19 pandemic the Comcast network performed well in response to unprecedented changes in Internet usage [1][2][3][4]. Video communications applications such as Zoom, WebEx, and FaceTime have exploded in popularity – applications that happen to be sensitive to network latency. However, in today's typical networks – such as a home network – those applications often degrade if they are sharing a network link with other network traffic. This is a problem caused by a network design flaw often described using the term "buffer bloat". Several years ago, Comcast helped to fund research & development in the technical community into new Active Queue Management (AQM) techniques to eliminate this issue, and this was later built into Data Over Cable Service Interface Specification (DOCSIS) standards [5][6][7]. Just prior to the pandemic, Comcast also deployed a large-scale network performance measurement system that included a latency under load test. In addition, Comcast happened to deploy two otherwise identical cable modems: one with upstream AQM enabled, and the other without. This fortuitous confluence of events has enabled Comcast to perform a comparative analysis of the differences between cable modem gateways using AQM and those that lack that enhancement, at a unique level of scale across many months and millions of devices. These deployments were completed during a period of unprecedented network bandwidth demand growth due to work and school from home during the pandemic.

The data reveals significantly better upstream latency under load performance when the Proportional Integral Controller Enhanced (DOCSIS-PIE) AQM is used. For the device with AQM, the large majority of the latency under load tests resulted in 15-30 milliseconds of latency. In comparison, the device without AQM averaged roughly 250 milliseconds of latency – between 8 and 16 times higher – a highly significant difference to the end user quality of experience (QoE). As a result, users of the XB6 device with AQM enabled will have a far superior experience when using all Internet applications.
Delay is sometimes thought to affect primarily latency-sensitive applications such as video conferencing and online gaming, but consistent low delay provides a better user experience for any Internet application where the user is waiting to see the result of some operation, such as web browsing, stock quotes, weather forecasts, maps, driving directions, etc. Even video streaming applications benefit from lower network delay (latency) because they need less initial buffering time before a video can begin playing. These large-scale measurement comparisons should provide additional data to justify accelerated deployment of AQM in DOCSIS and other Internet Service Provider (ISP) networks, particularly in the equipment installed in customer homes, including user-purchased home network equipment.

[1] See the 2020 network performance report at https://corporate.comcast.com/press/releases/comcast-2020-network-performance-data
[2] See the report on Pandemic Network Performance from the Broadband Internet Technical Advisory Group (BITAG) at https://www.bitag.org/2020-pandemic-network-performance.php
[3] See the report from an Internet Architecture Board workshop on COVID at https://www.rfc-editor.org/rfc/rfc9075.html
[4] See Comcast-specific reports at https://corporate.comcast.com/covid-19/network/may-20-2020 and https://corporate.comcast.com/stories/covid-19-network-report-smart-network-speed-and-stability
[5] https://community.cablelabs.com/wiki/plugins/servlet/cablelabs/alfresco/download?id=3a751ba9-593c-464a-bc91-3675e1824e30
[6] https://tools.ietf.org/id/draft-white-aqm-docsis-pie-00.html
[7] https://www-res.cablelabs.com/wp-content/uploads/2019/02/28094021/DOCSIS-AQM_May2014.pdf

2. Introduction

In 2020, the COVID-19 pandemic emerged as an extremely disruptive global event, leading to seismic changes in everyone's day-to-day lives. In particular, the pandemic prompted a sudden, massive shift to home-based work and school. This led to unprecedented increases in network usage for ISPs across the world, including significant new demands on home networks and local access network segments. The key application to emerge and achieve mainstream adoption was real-time video conferencing, such as Zoom, which on the Comcast network increased in volume by 316% between February and October 2020 [8]. Unlike other types of applications, such as overnight online backups or background downloads of video game updates, the quality of the end user experience with video conferencing is highly delay-sensitive – meaning it is significantly affected by any network latency. Even simple web browsing is improved by lower network delays, though in this case users have been trained over the last 25 years simply to be more patient when web pages are sometimes slow to load for no apparent reason.

While it has always been the case that real-time communications traffic is latency-sensitive, widespread adoption of video conferencing in parallel with significantly increased competing LAN traffic in a home during the COVID-19 pandemic has created a perfect storm of latency-sensitive applications sharing the home network with other Internet traffic. This can lead users to experience a degraded video conferencing experience – because of "latency under load" or higher "working latency" – during times when more than one kind of traffic is sharing the network.
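To illustrate what "latency under load" means in practice, here is a small, self-contained sketch; it is not taken from the paper, and the link rate, packet size, and buffer depth are assumed, illustrative numbers. It shows how much delay a deep, unmanaged tail-drop FIFO adds once a bulk upload keeps it full.

    # Illustrative sketch: added queuing delay at a deep tail-drop FIFO when a
    # bulk upload saturates the uplink. Numbers are assumptions, not measurements.
    LINK_RATE_BPS = 10_000_000                    # 10 Mbps upstream bottleneck
    PACKET_BITS = 1500 * 8                        # 1500-byte packets
    BUFFER_PACKETS = 256                          # deep, unmanaged buffer
    DRAIN_TIME = PACKET_BITS / LINK_RATE_BPS      # seconds to transmit one packet

    def queuing_delay_ms(queued_packets: int) -> float:
        """Delay a newly arriving packet sees: packets ahead of it times drain time."""
        return queued_packets * DRAIN_TIME * 1000

    # A saturating sender keeps the buffer near full, so a competing video
    # conferencing packet arriving at the tail waits behind the whole queue.
    for label, depth in [("idle link", 0),
                         ("half-full buffer", BUFFER_PACKETS // 2),
                         ("full buffer", BUFFER_PACKETS)]:
        print(f"{label:17s} adds {queuing_delay_ms(depth):6.1f} ms of delay")

With these assumed numbers, a full 256-packet buffer adds roughly 300 ms of queuing delay, the same order of magnitude as the roughly 250 ms of latency under load reported above for the device without AQM.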
High latency under load and buffer bloat [9] are issues that the networking community has been aware of for many years, and Comcast, other ISPs, and software developers have been working on solutions to this challenge. In the case of DOCSIS networks, the solution has been to shift from legacy tail-drop First-In First-Out (FIFO) queues and configurable buffer control in DOCSIS 3.0 to Active Queue Management (AQM) [10] – a feature that is implemented in DOCSIS and many other networks and software systems. Enabling AQM on a network interface that suffers from poor responsiveness under normal working conditions improves the performance of all user-facing network applications, and especially real-time, latency-sensitive applications such as video conferencing, without any negative effects on overall throughput or on delay-tolerant applications.

This paper explains how and where AQM was deployed in the Comcast network. It also compares latency under load performance measurements for devices that utilized AQM against those without AQM support.

[8] Based on analysis of aggregate network changes from NetFlow, interconnection traffic exchange, and similar data sources.
[9] See the article "Buffer Bloat: Dark Buffers in the Internet" by Jim Gettys in the November 29, 2011 issue of Communications of the ACM at https://queue.acm.org/detail.cfm?id=2071893
[10] RFC 7567 at https://www.rfc-editor.org/rfc/rfc7567.html is a great background on why AQM is necessary, and so is an excellent starting point for readers learning about AQM and the context into which it fits in the history of the Internet.

3. Developments in 2020

In early 2020 Comcast implemented a new cable modem-based Quality of Experience (QoE) measurement system. This QoE measurement system, notably for this report, included a latency under load measurement [11]. The measurement system was deployed in early 2020, prior to the pandemic [12][13]. It consisted of millions of devices able to run performance tests, thus creating a baseline of latency under load measurements prior to the pandemic. In addition, of the range of Comcast cable modem gateways and retail cable modems, the Comcast XB6 device has two distinct model variants – one with upstream AQM enabled during the measurement period in 2020 and one without. This provided a fortuitous opportunity for a uniquely high-scale, side-by-side performance comparison for a specific model of Comcast home gateway at a time of high demand for latency-sensitive applications. Comcast also in late 2020 deployed and optimized downstream AQM on all Arris E6000 Cable Modem Termination Systems (CMTSes) [14].

Looking at the events and changes of the year 2020, it is clear that latency-sensitive applications such as video conferencing are here to stay. As well, implementing AQM and other low latency improvements can very positively impact end user QoE. Such improvements can also be deployed independently by ISPs and equipment manufacturers, with no need to coordinate with application developers [15]. This should provide motivation for network operators to deploy AQM and other low latency improvements, and we believe that latency under load performance will emerge as a competitive differentiator – perhaps eventually on equal footing with connection speed in the minds of consumers, once they learn how critical latency under load is to their perception of performance.
[11] The test uses the open Realtime Response Under Load (RRUL) specification per https://www.bufferbloat.net/projects/bloat/wiki/RRUL_Spec/
[12] See NetForecast's independent audit of the design of this measurement system at https://www.netforecast.com/netforecast-design-audit-report-of-comcasts-network-performance-measurement-system/
[13] See NetForecast's independent assessment of performance results during the pandemic at https://www.netforecast.com/netforecasts-report-on-comcasts-network-performance-measurement-system-results-
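For readers unfamiliar with how an AQM such as the DOCSIS-PIE algorithm referenced above keeps that queuing delay low, the following is a minimal sketch of the basic PIE control law, loosely following RFC 8033. It is not the DOCSIS-PIE or Comcast implementation; the class name, gain values, target delay, and toy delay samples are illustrative assumptions, and DOCSIS-PIE refinements such as burst allowance, departure-rate estimation, and gain auto-scaling are omitted.

    # Simplified sketch of the PIE (Proportional Integral controller Enhanced)
    # drop-probability update, loosely following RFC 8033. Not the deployed
    # DOCSIS-PIE code; all parameter values below are illustrative assumptions.
    import random

    TARGET_DELAY = 0.015   # target queuing delay in seconds (assumed value)
    ALPHA = 0.125          # gain on deviation from the target
    BETA = 1.25            # gain on the delay trend (growing or draining queue?)

    class PieSketch:
        def __init__(self):
            self.drop_prob = 0.0
            self.prev_delay = 0.0

        def update(self, current_delay: float) -> None:
            """Run once per update interval with the current estimated queuing delay."""
            p = self.drop_prob
            p += ALPHA * (current_delay - TARGET_DELAY)    # how far from the target
            p += BETA * (current_delay - self.prev_delay)  # which way it is heading
            self.drop_prob = min(max(p, 0.0), 1.0)
            self.prev_delay = current_delay

        def should_drop(self) -> bool:
            """On enqueue, drop (or ECN-mark) the packet with the current probability."""
            return random.random() < self.drop_prob

    # Toy run: as queuing delay creeps up toward 100 ms, the drop probability
    # rises, nudging senders to back off before the buffer ever fills.
    pie = PieSketch()
    for delay in (0.005, 0.020, 0.050, 0.100, 0.100):
        pie.update(delay)
        print(f"queuing delay {delay * 1000:5.1f} ms -> drop probability {pie.drop_prob:.3f}")

The controller raises the drop (or ECN-mark) probability as soon as delay drifts above the target, so senders back off long before the buffer fills; that is what keeps latency under load in the tens of milliseconds rather than in the hundreds.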
Recommended publications
  • Buffer De-Bloating in Wireless Access Networks
    Buffer De-bloating in Wireless Access Networks, by Yuhang Dai. A thesis submitted to the University of London for the degree of Doctor of Philosophy. School of Electronic Engineering & Computer Science, Queen Mary University of London, United Kingdom, Sep 2018. TO MY FAMILY.
    Abstract: Excessive buffering introduces a new challenge into networks known as bufferbloat, which is harmful to delay-sensitive applications. Wireless access networks consist of Wi-Fi and cellular networks. In the thesis, the performance of CoDel and RED is investigated in Wi-Fi networks with different types of traffic. Results show that CoDel and RED work well in Wi-Fi networks, due to the similarity of the protocol structures of Wi-Fi and wired networks. It is difficult for RED to tune its parameters in cellular networks because of the time-varying channel. CoDel needs modifications because it drops the packet at the head of the queue, and in cellular networks the head packet will be segmented. The major contribution of this thesis is that three new AQM algorithms tailored to cellular networks are proposed to alleviate large queuing delays. A channel quality aware AQM is proposed using the CQI. The proposed algorithm is tested with a single cell topology, and simulation results show that it reduces the average queuing delay for each user by 40% on average with TCP traffic compared to CoDel. A QoE aware AQM is proposed for VoIP traffic. Drops and delay are monitored and turned into QoE by mathematical models. The proposed algorithm is tested in NS3 and compared with CoDel; it enhances the QoE of VoIP traffic, and the average end-to-end delay is reduced by more than 200 ms when multiple users with different CQI compete for the wireless channel.
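    As a rough illustration of the CoDel behaviour the thesis modifies (dropping at the head of the queue once the standing delay has persisted), here is a minimal sketch of the sojourn-time check. It is my simplification using the standard CoDel constants, not the thesis code, and it omits CoDel's accelerating drop schedule.

        # Simplified CoDel-style head-drop sketch (illustrative only).
        import time
        from collections import deque

        TARGET = 0.005     # 5 ms: acceptable standing queue delay (standard CoDel value)
        INTERVAL = 0.100   # 100 ms: how long delay must persist before head-dropping

        class CoDelSketch:
            def __init__(self):
                self.queue = deque()        # holds (enqueue_timestamp, packet)
                self.drop_deadline = None   # when we may drop if delay stays high

            def enqueue(self, packet) -> None:
                self.queue.append((time.monotonic(), packet))

            def dequeue(self):
                while self.queue:
                    enqueued_at, packet = self.queue.popleft()    # packet at the head
                    sojourn = time.monotonic() - enqueued_at
                    if sojourn < TARGET:
                        self.drop_deadline = None                 # queue is healthy
                        return packet
                    if self.drop_deadline is None:                # delay just got high:
                        self.drop_deadline = time.monotonic() + INTERVAL
                        return packet                             # give it one interval
                    if time.monotonic() < self.drop_deadline:
                        return packet                             # still in grace period
                    # Standing delay persisted a full interval: drop this head packet,
                    # then wait again before the next drop (real CoDel shortens this
                    # wait progressively; that schedule is omitted here).
                    self.drop_deadline = time.monotonic() + INTERVAL
                return None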
  • Performance Analysis of Receive-Side Real-Time Congestion Control for WebRTC
    Performance Analysis of Receive-Side Real-Time Congestion Control for WebRTC. Varun Singh, Albert Abello Lozano, Jörg Ott (Aalto University, Finland). varun.singh@aalto.fi, albert.abello.lozano@aalto.fi, jorg.ott@aalto.fi
    Abstract—In the forthcoming deployments of WebRTC systems, we speculate that high quality video conferencing will see wide adoption. It is currently being deployed on the Google Chrome and Firefox web-browsers, meanwhile desktop and mobile clients are under development. Without a standardized signaling mechanism, service providers can enable various types of topologies, ranging from full-mesh to centralized video conferencing and everything in between. In this paper, we evaluate the performance of various topologies using endpoints implementing WebRTC. We specifically evaluate the performance of the congestion control currently implemented and deployed in these web-browsers, Receive-side Real-Time Congestion Control (RRTCC). We use transport impairments like varying throughput, loss and delay, and varying amounts of cross-traffic to measure the performance.
    … congestion control, the IETF has chartered a new working group, RMCAT [3], to standardize congestion-control for real-time communication, which is expected to be a multi-year process [1]; but early implementations are already available. In this paper, we evaluate the performance of WebRTC video calls over different topologies and with varying amounts of cross-traffic. All experiments are conducted using the Chrome browser and our testbed. With the testbed we are able to control the bottleneck link capacity, the end-to-end latency, link loss rate and the queue size of intermediate routers. Consequently, in this testbed we can investigate the following:
  • BufferBloat: What's Wrong with the Internet?
    practice. DOI:10.1145/2076450.2076464. Article development led by queue.acm.org.
    BufferBloat: What's Wrong with the Internet? A discussion with Vint Cerf, Van Jacobson, Nick Weaver, and Jim Gettys.
    Internet delays now are as common as they are maddening. But that means they end up affecting system engineers just like all the rest of us. And when … the protocol the presence of congestion and thus the need for compensating adjustments. Because memory now is significantly cheaper than it used to be, buffering has been overdone in all manner of network devices, without consideration for the consequences. Manufacturers have reflexively acted to prevent any and all packet loss and, by doing so, have inadvertently defeated a critical TCP congestion-detection mechanism, with the result being worsened congestion and increased latency. Now that the problem has been diagnosed, people are working feverishly to fix it. This case study considers the extent of the bufferbloat problem and its potential implications. Working to steer the discussion is Vint Cerf, popularly known as one of the "fathers of the Internet." As the co-designer of the TCP/IP protocols, Cerf did indeed play a key role in developing the Internet and related packet data and security technologies while at Stanford University from 1972−1976 and with the U.S. Department of Defense's Advanced Research Projects Agency (DARPA) from 1976−1982. He currently serves as Google's chief Internet evangelist. Van Jacobson, presently a research fellow at PARC where he leads the networking research program, is also central to this discussion.
  • The Bufferbloat Problem over Intermittent Multi-Gbps mmWave Links
    The Bufferbloat Problem over Intermittent Multi-Gbps mmWave Links. Menglei Zhang, Marco Mezzavilla, Jing Zhu†, Sundeep Rangan, Shivendra Panwar. NYU WIRELESS, Brooklyn, NY, USA; † Intel, Santa Clara, USA. Emails: {menglei, mezzavilla, srangan, [email protected]}, [email protected]
    Abstract—Due to massive available spectrum in the millimeter wave (mmWave) bands, cellular systems in these frequencies may provide orders of magnitude greater capacity than networks in conventional lower frequency bands. However, due to high susceptibility to blocking, mmWave links can be extremely intermittent in quality. This combination of high peak throughputs and intermittency can cause significant challenges in end-to-end transport-layer mechanisms such as TCP. This paper studies the particularly challenging problem of bufferbloat. Specifically, with current buffering and congestion control mechanisms, high-throughput, highly variable links can lead to excessive buffers incurring long latency. In this paper, we capture the performance trends obtained while adopting two potential solutions that have been proposed in the literature: active queue management …
    … transport layer mechanisms and buffering must rapidly adapt to the link capacities that can dramatically change. This work addresses one particularly important problem – bufferbloat. Bufferbloat is triggered by persistently filled or full buffers, and usually results in long latency and packet drops. This phenomenon was first pointed out in late 2010 [16]. Optimal buffer sizes should equal the bandwidth-delay product (BDP); however, as the delay is usually hard to estimate, larger buffers are deployed to prevent losses. Even though these oversized buffers prevent packet loss, the overall performance degrades, especially when transmitting TCP flows, which is the main focus of this paper.
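    The bandwidth-delay product rule of thumb mentioned in this excerpt is easy to make concrete. The small sketch below is my illustration with assumed link numbers, not the paper's analysis; it shows why a buffer far larger than the BDP turns into standing delay on a mmWave-class link.

        # Bandwidth-delay product (BDP) sizing rule, with assumed illustrative numbers.
        def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
            """Bytes that must be in flight to keep the link fully utilized."""
            return bandwidth_bps * rtt_s / 8

        LINK_BPS = 2e9        # assume a 2 Gbps mmWave link
        RTT_S = 0.020         # assume a 20 ms round-trip time
        BUFFER_BYTES = 50e6   # assume a 50 MB buffer sized "to be safe"

        bdp = bdp_bytes(LINK_BPS, RTT_S)
        standing_queue = BUFFER_BYTES - bdp            # bytes that only sit in the queue
        extra_delay_s = standing_queue * 8 / LINK_BPS  # time needed to drain that excess

        print(f"BDP is about {bdp / 1e6:.1f} MB")      # ~5 MB is enough to fill the pipe
        print(f"a full {BUFFER_BYTES / 1e6:.0f} MB buffer adds ~{extra_delay_s * 1000:.0f} ms of delay")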
  • Lab 14: Router's Bufferbloat
    NETWORK TOOLS AND PROTOCOLS. Lab 14: Router's Bufferbloat. Document Version: 07-07-2019. Award 1829698 "CyberTraining CIP: Cyberinfrastructure Expertise on High-throughput Networks for Big Science Data Transfers"
    Contents: Overview; Objectives; Lab settings; Lab roadmap; 1 Introduction to bufferbloat (1.1 Packet delays; 1.2 Bufferbloat); 2 Lab topology (2.1 Starting host h1, host h2, and host h3; 2.2 Emulating high-latency WAN; 2.4 Testing connection)
  • Tackling the Challenge of Bufferbloat in Multi-Path Transport Over Heterogeneous Wireless Networks
    Tackling the Challenge of Bufferbloat in Multi-Path Transport over Heterogeneous Wireless Networks. Simone Ferlin-Oliveira, Thomas Dreibholz, Özgü Alay. Simula Research Laboratory, Martin Linges vei 17, 1364 Fornebu, Norway. {ferlin, dreibh, [email protected]}
    Abstract—Today, most of the smart phones are equipped with two network interfaces: Mobile Broadband (MBB) and Wireless Local Area Network (WLAN). Multi-path transport protocols provide increased throughput or reliability by utilizing these interfaces simultaneously. However, multi-path transmission over networks with very different QoS characteristics is a challenge. In this paper, we studied Multi-Path TCP (MPTCP) in heterogeneous networks, specifically MBB networks and WLAN. We first investigate the effect of bufferbloat in MBB on MPTCP performance. Then, we propose a bufferbloat mitigation algorithm: Multi-Path Transport Bufferbloat Mitigation (MPT-BM). Using our algorithm, we conduct experiments in real operational networks. The experimental results show that MPT-BM outperforms the current MPTCP implementation by increasing the application goodput quality and decreasing MPTCP's buffer delay, jitter and buffer space requirements [1]. … generic and can also be adopted by other multi-path transport protocols, like e.g. CMT-SCTP, as well.
    II. Background. A. Multi-Path TCP. Multi-path transport has shown to be beneficial for bandwidth aggregation and to increase end-host robustness [2], [3], [9]. Although many devices support multi-path, the most prominent transport protocol, i.e. TCP, is still single path. MPTCP [2] is a major extension of TCP that supports multi-path transmission. From the application perspective, MPTCP utilizes a single standard TCP socket, whereas lower in the stack, several subflows (conventional TCP connections) may be opened.
  • Passive Bufferbloat Measurement Exploiting Transport Layer Information
    Passive bufferbloat measurement exploiting transport layer information. C. Chirichella¹, D. Rossi¹, C. Testa¹, T. Friedman² and A. Pescapè³. ¹Telecom ParisTech – [email protected]; ²UPMC Sorbonne Universités – [email protected]; ³Univ. Federico II – [email protected]
    Abstract—"Bufferbloat" is the growth in buffer size that has led Internet delays to occasionally exceed the light propagation delay from the Earth to the Moon. Manufacturers have built in large buffers to prevent losses on Wi-Fi, cable and ADSL links. But the combination of some links' limited bandwidth with TCP's tendency to saturate that bandwidth results in excessive queuing delays. In response, new congestion control protocols such as BitTorrent's uTP/LEDBAT aim at explicitly limiting the delay that they add at the bottleneck link. This work proposes a methodology to monitor the upstream queuing delay experienced by remote hosts, both those using LEDBAT, through LEDBAT's native one-way delay measurements, and those using TCP, through the Timestamp Option. We report preliminary …
    … a delay-based congestion control algorithm. When LEDBAT has exclusive use of the bottleneck resources, however, it fully exploits the available capacity. Like TCP, LEDBAT maintains a congestion window. But whereas mainstream TCP variants use loss-based congestion control (growing with ACKs and shrinking with losses), LEDBAT estimates the queuing delay on the bottleneck link and tunes the window size in an effort to achieve a target level of delay. The dynamic of TCP's AIMD window management systematically forces the buffer to fill until a loss occurs and the window size is halved.
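    The LEDBAT window rule described here (grow while the measured queuing delay is below a fixed target, shrink when it is above) can be sketched in a few lines. This is my illustration in the spirit of RFC 6817, not the authors' code; the target, gain, segment size, and delay samples are assumptions.

        # LEDBAT-style delay-based window update (illustrative sketch, RFC 6817 spirit).
        TARGET = 0.100   # target queuing delay in seconds (assumed)
        GAIN = 1.0       # how aggressively to react (assumed)
        MSS = 1448       # payload bytes per segment (assumed)

        def ledbat_cwnd_update(cwnd: float, base_delay: float, current_owd: float,
                               bytes_acked: int) -> float:
            """One update: positive off_target grows cwnd, negative shrinks it."""
            queuing_delay = current_owd - base_delay        # delay added by the queue
            off_target = (TARGET - queuing_delay) / TARGET  # >0 below target, <0 above
            cwnd += GAIN * off_target * bytes_acked * MSS / cwnd
            return max(cwnd, MSS)                           # never below one segment

        # Toy usage: with 20 ms of queuing delay the window grows; with 170 ms it shrinks.
        cwnd = 20 * MSS
        for owd in (0.070, 0.220):                          # base one-way delay is 50 ms
            cwnd = ledbat_cwnd_update(cwnd, base_delay=0.050, current_owd=owd,
                                      bytes_acked=int(cwnd))  # ~a full window acked per RTT
            print(f"one-way delay {owd * 1000:.0f} ms -> cwnd {cwnd / MSS:.1f} segments")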
  • BBR Bufferbloat in DASH Video
    BBR Bufferbloat in DASH Video. Santiago Vargas*, Rebecca Drucker*, Aiswarya Renganathan, Aruna Balasubramanian, Anshul Gandhi (Stony Brook University). [email protected], [email protected], [email protected], [email protected], [email protected]
    Abstract: BBR is a new congestion control algorithm and is seeing increased adoption especially for video traffic. BBR solves the bufferbloat problem in legacy loss-based congestion control algorithms, where application performance drops considerably when router buffers are deep. BBR regulates traffic such that router queues don't build up, to avoid the bufferbloat problem while still maintaining high throughput. However, our analysis shows that video applications experience significantly poor performance when using BBR under deep buffers. In fact, we find that video traffic sees inflated latencies because of long queues at the router, ultimately degrading video performance. To understand this dichotomy, we study the interaction between BBR and DASH video.
    1 Introduction: BBR (Bottleneck Bandwidth and RTT) [8] is a relatively new congestion control algorithm which is seeing surging adoption in practice. Recent reports suggest that 40% of Internet traffic now uses BBR [33]. BBR is especially popular for video applications and is adopted by popular video streaming services like YouTube [7]. Given the prevalence of video traffic (accounting for 60.6% of total downstream traffic worldwide [44] in 2019) and the adoption of BBR as the primary congestion control algorithm for video, it is critical to understand the impact of BBR on video Quality of Experience (QoE). Unfortunately, given its recency, BBR is not as well-studied as other congestion control algorithms, especially in the context of …
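    The core BBR behaviour this excerpt refers to can be sketched very compactly: BBR models the path by its bottleneck bandwidth and minimum RTT and derives a pacing rate and a cap on in-flight data from that model. The sketch below is my simplified illustration of those steady-state formulas, not Google's implementation; the gains and link numbers are assumptions, and BBR's probing phases are omitted.

        # Steady-state BBR model sketch: pace at the bottleneck rate, cap inflight near 2x BDP.
        def bbr_rates(btl_bw_bps: float, min_rtt_s: float,
                      pacing_gain: float = 1.0, cwnd_gain: float = 2.0):
            """Return (pacing rate in bps, congestion window cap in bytes)."""
            bdp_bytes = btl_bw_bps * min_rtt_s / 8
            pacing_rate = pacing_gain * btl_bw_bps   # send no faster than the bottleneck
            cwnd = cwnd_gain * bdp_bytes             # keep in-flight data near 2x BDP
            return pacing_rate, cwnd

        # 50 Mbps bottleneck, 40 ms minimum RTT (illustrative numbers):
        rate, cwnd = bbr_rates(50e6, 0.040)
        print(f"pacing rate {rate / 1e6:.0f} Mbps, cwnd cap ~{cwnd / 1e3:.0f} kB")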
  • Tackling the Internet Edge Problems Solving the Home Router Disaster
    Tackling the Internet Edge Problems. Jim Gettys – [email protected]; Dave Taht – [email protected]. Version 0.0, July 21, 2011.
    Solving the Home Router Disaster. Home routers are a mostly unrecognized major problem for the Internet. Home routers have serious problems with bufferbloat, IPv6, and security. Commercial firmware for most of the market is based upon Linux kernels and other open source projects, but the firmware is generally unmaintained after release; worse yet, the code base they use is over five years old! This dysfunctional behavior is all too common in the embedded Linux market, which both of us helped spark over a decade ago in our respective careers, but failed to prevent despite our attempts. The incumbent vendors' current code bases are therefore useless as a platform for research and development, as any patches for problems are impossible to integrate easily into the upstream projects. Competitive pressures among vendors make it hard for major vendors to want to contribute to solving the problems if their competition will benefit. There is an alternative bleeding-edge and open code base available in the OpenWrt project. OpenWrt is already good enough that older versions of OpenWrt are shipped in some of the smaller commercial home routers. Home routers using firmware from OpenWrt have matured to where modest amounts of labor can succeed in such a technology demonstration, providing the "proof of principle" to encourage the embedded network industry to move quickly to address these issues. CeroWrt is an OpenWrt router platform for use by individuals, researchers, and students interested in advancing the state of the art of the Internet.
  • Throughput and Delay on the Packet Switched Internet (A Cross-Disciplinary Approach)
    University of California Santa Barbara. Throughput and Delay on the Packet Switched Internet (A Cross-Disciplinary Approach). A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Computer Science by Daniel Mark Havey. Committee in charge: Professor Kevin Almeroth, Chair; Fred Baker, Fellow, Cisco Systems, Inc.; Professor Elizabeth Belding; Professor Phill Conrad. June 2015. The Dissertation of Daniel Mark Havey is approved, March 2015. Copyright © 2015 by Daniel Mark Havey.
    This dissertation is dedicated to the one true God who created the universe full of scientific wonder for us to explore and enjoy.
    Acknowledgements. Funding for this PhD was provided by the University of California Santa Barbara, by Lockheed Martin, by Cisco Systems, Inc., and by Microsoft. This PhD would not have been possible without the support of many people: students, professors, and many others. I would like to offer thanks to all of my friends from San Bernardino who often said that someday you will be a doctor and I would love to see it. This is the day. We have made it. To all the professors in my undergraduate work who encouraged me along the way and guided me towards research, in particular Keith Schubert and David Turner. To the students from my lab at UCSB. To Lara, who started after, finished right before, and often suffered through tough times along with me.
  • Fixing the Internet: Congestion Control and Buffer Bloat
    Fixing the Internet: Congestion Control and Buffer Bloat. Influential OS Research. Julian Stecklina <[email protected]>
    Outline: TCP Protocol; TCP Implementation; Queuing Algorithms; Buffer Bloat.
    Unreliable Packet Networks. A packet can be: damaged, dropped, duplicated, or reordered with other packets. Need an abstraction to deal with this!
    Transmission Control Protocol. Provides a bidirectional channel. Each direction is a reliable byte stream over an unreliable best-effort packet network. TCPv4 specified in 1981. Later changes essentially backward compatible. Basis for advanced protocols (MPTCP, SCTP).
    TCP Header. [Slide shows the TCP Header Format diagram from RFC 793: source port, destination port, sequence number (annotated "position in byte stream of first byte in this packet"), acknowledgment number (annotated "byte stream received correctly up to this byte"), data offset, reserved bits, flags (URG, ACK, PSH, RST, SYN, FIN), receive window (annotated "can receive this many bytes additionally"), checksum, urgent pointer, options, padding, data.]
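    Since the slide excerpt stops at the TCP header layout, here is a small, self-contained illustration (mine, not from the slides) that unpacks the fixed 20-byte header with Python's struct module. Field interpretations follow RFC 793, and the example segment is fabricated for demonstration.

        # Decode the fixed part of a TCP header (RFC 793); options are not parsed.
        import struct

        def parse_tcp_header(segment: bytes) -> dict:
            (src_port, dst_port, seq, ack,
             offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
            return {
                "src_port": src_port,
                "dst_port": dst_port,
                "seq": seq,                                # first byte of this segment in the stream
                "ack": ack,                                # next byte expected from the peer
                "data_offset": (offset_flags >> 12) * 4,   # header length in bytes
                "flags": offset_flags & 0x01FF,            # URG/ACK/PSH/RST/SYN/FIN (plus reserved bits)
                "window": window,                          # receive window: bytes the peer may send
                "checksum": checksum,
                "urgent_pointer": urgent,
            }

        # Example: a fabricated SYN from port 54321 to port 80 with a 64 kB receive window.
        header = struct.pack("!HHIIHHHH", 54321, 80, 1000, 0, (5 << 12) | 0x002,
                             65535, 0, 0)
        print(parse_tcp_header(header))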