Performance Analysis of Receive-Side Real-Time Congestion Control for WebRTC


Performance Analysis of Receive-Side Real-Time Congestion Control for WebRTC

Varun Singh (Aalto University, Finland, varun.singh@aalto.fi), Albert Abello Lozano (Aalto University, Finland, albert.abello.lozano@aalto.fi), Jörg Ott (Aalto University, Finland, jorg.ott@aalto.fi)

Abstract—In the forthcoming deployments of WebRTC systems, we speculate that high-quality video conferencing will see wide adoption. It is currently being deployed in the Google Chrome and Firefox web browsers, while desktop and mobile clients are under development. Without a standardized signaling mechanism, service providers can enable various types of topologies, ranging from full-mesh to centralized video conferencing and everything in between. In this paper, we evaluate the performance of various topologies using endpoints implementing WebRTC. We specifically evaluate the performance of the congestion control currently implemented and deployed in these web browsers, Receive-side Real-Time Congestion Control (RRTCC). We use transport impairments like varying throughput, loss and delay, and varying amounts of cross-traffic to measure the performance. Our results show that RRTCC performs well by itself, but starves when competing with TCP. When competing with self-similar media streams, the late-arriving flow temporarily starves the existing media flows.

I. INTRODUCTION

The development of WebRTC systems is going to encourage wide adoption of video conferencing on the Internet. The main reason is the shift from desktop or stand-alone real-time communication (RTC) applications to the RTC-enabled web browser. The standardization activity for WebRTC is split between the W3C (http://www.w3.org/2011/04/webrtc/) and the IETF (http://tools.ietf.org/wg/rtcweb/). The W3C is defining the Javascript APIs and the IETF is profiling the existing media components (SRTP, SDP, ICE, etc.) for use with WebRTC. Until now, enabling voice and video calls from within the browser required each service to implement its real-time communication stack as a plugin, which the user downloads. Consequently, each voice and video service needed to build a plugin containing a complete multimedia stack. With WebRTC, the multimedia stack is built into the web-browser internals and developers need only use the appropriate HTML5 API. However, WebRTC does not mandate a specific signaling protocol, and implementers are free to choose from pre-existing protocols (SIP, Jingle, etc.) or write their own. In so doing, WebRTC provides the flexibility to deploy video calls in any application-specific topology (point-to-point, mesh, centralized mixer, overlays, hub-and-spoke).

While multimedia communication has existed for over a decade (via Skype, etc.), there is no standardized congestion control, although many schemes have been proposed in the past. To tackle congestion control, the IETF has chartered a new working group, RMCAT (http://tools.ietf.org/wg/rmcat/), to standardize congestion control for real-time communication, which is expected to be a multi-year process [1]; but early implementations are already available.

In this paper, we evaluate the performance of WebRTC video calls over different topologies and with varying amounts of cross-traffic. All experiments are conducted using the Chrome browser and our testbed. With the testbed we are able to control the bottleneck link capacity, the end-to-end latency, the link loss rate and the queue size of intermediate routers. Consequently, in this testbed we can investigate the following: 1) utilization of the bottleneck link capacity, 2) (un)friendliness to other media flows or TCP cross-traffic, and 3) performance in multiparty calls. The results in the paper complement the results in [2].

We structure the remainder of the paper as follows. Section II discusses the congestion control currently implemented by the browsers and Section III describes the architecture for monitoring the video calls. Section IV describes the testbed and the performance of the congestion control in various scenarios. Section V puts the results in perspective and compares them to existing work. Section VI concludes the paper.

II. RECEIVE-SIDE REAL-TIME CONGESTION CONTROL

The Real-time Transport Protocol (RTP) [3] carries media data over the network, typically using UDP. In WebRTC, the media packets (in RTP) and the feedback packets (in RTCP) are multiplexed on the same port [4] to reduce the number of NAT bindings and the Interactive Connectivity Establishment (ICE) overhead. Furthermore, multiple media sources (e.g., video and audio) are also multiplexed together and sent/received on the same port [5]. In topologies with multiple participants [6], media streams from all participants are also multiplexed together and received on the same port [7].

Currently, WebRTC-capable browsers implement a congestion control algorithm proposed by Google called Receiver-side Real-time Congestion Control (RRTCC) [8]. In the following sub-sections, we briefly discuss the important features and assumptions of RRTCC, which are essential for the evaluation. RRTCC has two components, a receiver side and a sender side; to function properly, both need to be implemented.

A. Receiver Side Control

The receiver estimates the overuse or underuse of the bottleneck link based on the timestamps of incoming frames relative to their generation timestamps. At high bit rates, video frames exceed the MTU size and are fragmented over multiple RTP packets, in which case the receive timestamp of the last packet is used. When a video frame is fragmented, all of its RTP packets carry the same generation timestamp [3]. Formally, the jitter is calculated as

J_i = (t_i - t_{i-1}) - (T_i - T_{i-1}),

where t is the receive timestamp, T is the RTP timestamp, and i and i-1 denote successive frames. Typically, a positive J_i corresponds to congestion. Further, [8] models the inter-arrival jitter as a function of serialization delay, queuing delay and network jitter:

J_i = \frac{Size(i) - Size(i-1)}{Capacity} + m(i) + v(i)

Here, m(i) and v(i) are samples of a stochastic process, which is modeled as a white Gaussian process. When the mean of the Gaussian process, m(i), is zero, there is no ongoing congestion. When the bottleneck is overused, m(i) is expected to increase, and when the congestion is abating it is expected to decrease. The noise v(i) is expected to be zero for frames that are roughly the same size, but sometimes the encoder produces key-frames (also called I-frames) which are an order of magnitude larger and are expected to take more time to percolate through the network. Consequently, these packets have an increasing inter-arrival jitter just due to their sheer size.

[Figure 1. Description of simple testing environment topology for WebRTC.]

The receiver tracks J_i and the frame size, while the capacity C(i) and m(i) are estimated. [8] uses a Kalman filter to compute [1/C(i), m(i)]^T and recursively updates the matrix. Overuse is triggered only when m(i) exceeds a threshold value for at least a certain duration and a set number of frames are received. Underuse is signaled when m(i) falls below a certain threshold value. When m(i) is zero, the situation is considered stable and the old rate is kept. The current receiver estimate, A_r(i), is calculated as follows:

A_r(i) = \begin{cases} 0.85 \times RR & \text{over-use, } m(i) > 0 \\ A_r(i-1) & \text{stable, } m(i) = 0 \\ 1.5 \times RR & \text{under-use, } m(i) < 0 \end{cases}

The receiving endpoint sends an RTCP feedback message containing the Receiver Estimated Media Bitrate (REMB) [9], A_r, at 1 s intervals.
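To make the receiver-side behavior concrete, the following is a minimal sketch in Python. It takes the filtered offset m(i) as an input (the Kalman filter itself is omitted), and the detector threshold, the required over-use duration and all identifiers are illustrative assumptions rather than the values used by Chrome; only the 0.85/1.5 factors applied to the incoming rate estimate RR and the roughly 1 s REMB interval come from the text above (the sketch reads RR as the measured incoming media rate).

```python
# Sketch of the RRTCC receiver-side over-use detector and rate update.
# Constants are placeholders for illustration, not Chrome's actual values.

OVERUSE_THRESHOLD = 12.5   # assumed threshold on the filtered offset m(i)
OVERUSE_TIME_MS = 100      # assumed time m(i) must stay above the threshold


def frame_jitter(t_recv, t_recv_prev, ts_rtp, ts_rtp_prev):
    """J_i = (t_i - t_{i-1}) - (T_i - T_{i-1}), all values in milliseconds."""
    return (t_recv - t_recv_prev) - (ts_rtp - ts_rtp_prev)


class ReceiverEstimator:
    def __init__(self, initial_rate_bps):
        self.rate = initial_rate_bps   # A_r
        self.over_since_ms = None

    def detect(self, m_i, now_ms):
        """Map m(i) to 'overuse', 'underuse' or 'stable'."""
        if m_i > OVERUSE_THRESHOLD:
            if self.over_since_ms is None:
                self.over_since_ms = now_ms
            if now_ms - self.over_since_ms >= OVERUSE_TIME_MS:
                return "overuse"            # only sustained over-use triggers
            return "stable"
        self.over_since_ms = None
        return "underuse" if m_i < -OVERUSE_THRESHOLD else "stable"

    def update_rate(self, signal, incoming_rate_bps):
        """A_r(i) per the piecewise rule: 0.85*RR, keep, or 1.5*RR."""
        if signal == "overuse":
            self.rate = 0.85 * incoming_rate_bps
        elif signal == "underuse":
            self.rate = 1.5 * incoming_rate_bps
        # 'stable': keep the old rate
        return self.rate   # advertised to the sender in a REMB message every ~1 s
```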
B. Sender Side Control

The sender side uses the TFRC equation [10] to estimate the sending rate based on the observed loss rate (p), the measured RTT and the bytes sent. If no feedback is received for two feedback intervals, the media rate is halved. When an RTCP receiver report is received, the sender estimate, A_s(i), is calculated based on the number of observed losses (p):

A_s(i) = \begin{cases} A_s(i-1) \times (1 - 0.5\,p) & p > 0.10 \\ A_s(i-1) & 0.02 \le p \le 0.10 \\ 1.05 \times (A_s(i-1) + 1000) & p < 0.02 \end{cases}

Lastly, A_s(i) should lie between the rate estimated by the TFRC equation and the REMB value signaled by the receiver. The interesting part of the sender-side algorithm is that it does not react to losses under 10%, and in the case of losses below 2% it even increases the rate by a bit over 5%. In these cases, in addition to the media stream, the endpoint generates FEC packets to protect the media stream; the generated FEC packets are in addition to the media sending rate. This is noticeable in the Chrome browser when the rate controller observes packet loss between 2-10%. In low-delay networks, we also observe retransmission requests sent by the receiver in the form of Negative Acknowledgements (NACKs) and Picture Loss Indication (PLI). Full details can be found in the Internet-Draft [8].
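A corresponding minimal sketch of the sender-side update follows. The loss thresholds and growth/back-off factors come from the piecewise rule above; the parameter names and the clamping step are assumptions (the text only states that A_s(i) should lie between the TFRC estimate and the signaled REMB value, so the sketch clamps to whichever of the two currently forms the lower and upper bound).

```python
# Sketch of the RRTCC sender-side rate update driven by the reported loss rate p.
# tfrc_rate_bps is assumed to come from an RFC 5348-style TFRC computation and
# remb_bps from the receiver's REMB feedback; both names are illustrative.

def sender_estimate(prev_rate_bps, p, tfrc_rate_bps, remb_bps):
    """A_s(i) from the piecewise rule, then kept between the TFRC and REMB rates."""
    if p > 0.10:                       # heavy loss: back off proportionally
        rate = prev_rate_bps * (1 - 0.5 * p)
    elif p >= 0.02:                    # 2-10% loss: hold the rate (FEC is generated)
        rate = prev_rate_bps
    else:                              # <2% loss: probe upward by a bit over 5%
        rate = 1.05 * (prev_rate_bps + 1000)
    lo, hi = sorted((tfrc_rate_bps, remb_bps))
    return min(max(rate, lo), hi)

# Example: 1 Mbps previous rate, 12% loss, TFRC allows 800 kbps, REMB signals 1.2 Mbps.
print(sender_estimate(1_000_000, 0.12, 800_000, 1_200_000))  # -> 940000.0
```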
III. MONITORING ARCHITECTURE

We run a basic web application to establish WebRTC calls. It runs on a dedicated server and uses nodejs (http://www.nodejs.org/, v0.10.6). The signaling between the browser and the server is carried over websockets (http://socket.io/). Figure 1 gives an overview of this architecture. We run multiple TURN servers (e.g., restund, http://www.creytiv.com/restund.html) on dedicated machines to traverse NATs and firewalls. The WebRTC-capable browsers (Chrome, https://www.google.com/intl/en/chrome/browser/, v27.01453.12, and Firefox) establish a connection over HTTPS with our web server and download the WebRTC javascript (call.js), which configures the browser internals during call establishment. The media flows either directly between the two participants or via the TURN servers if the endpoints are behind NATs or firewalls.
Recommended publications
  • Buffer De-Bloating in Wireless Access Networks
    Buffer De-bloating in Wireless Access Networks, by Yuhang Dai. A thesis submitted to the University of London for the degree of Doctor of Philosophy. School of Electronic Engineering & Computer Science, Queen Mary University of London, United Kingdom, Sep 2018. TO MY FAMILY.
    Abstract: Excessive buffering brings a new challenge into the networks which is known as Bufferbloat, which is harmful to delay-sensitive applications. Wireless access networks consist of Wi-Fi and cellular networks. In the thesis, the performance of CoDel and RED is investigated in Wi-Fi networks with different types of traffic. Results show that CoDel and RED work well in Wi-Fi networks, due to the similarity of the protocol structures of Wi-Fi and wired networks. It is difficult for RED to tune parameters in cellular networks because of the time-varying channel. CoDel needs modifications as it drops the first packet of the queue, and the head packet in cellular networks will be segmented. The major contribution of this thesis is that three new AQM algorithms tailored to cellular networks are proposed to alleviate large queuing delays. A channel quality aware AQM is proposed using the Channel Quality Indicator (CQI). The proposed algorithm is tested with a single-cell topology and simulation results show that it reduces the average queuing delay for each user by 40% on average with TCP traffic compared to CoDel. A QoE aware AQM is proposed for VoIP traffic. Drops and delay are monitored and turned into QoE by mathematical models. The proposed algorithm is tested in NS3 and compared with CoDel, and it enhances the QoE of VoIP traffic and the average end-to-end delay is reduced by more than 200 ms when multiple users with different CQI compete for the wireless channel.
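    For context on the CoDel behavior this abstract refers to (dropping from the head of the queue once the packet sojourn time has stayed above a small target), here is a heavily simplified sketch. The 5 ms target and 100 ms interval are the published CoDel defaults; the class structure and the reduced state machine are assumptions, not the algorithms from the thesis.

```python
import math

TARGET_MS = 5.0      # CoDel's default target sojourn delay
INTERVAL_MS = 100.0  # CoDel's default interval


class SimpleCoDel:
    """Head-drop AQM decision in the spirit of CoDel (reduced state machine)."""

    def __init__(self):
        self.first_above_ms = None   # when the sojourn time first exceeded the target
        self.next_drop_ms = None
        self.drop_count = 0
        self.dropping = False

    def should_drop(self, sojourn_ms, now_ms):
        if sojourn_ms < TARGET_MS:
            # Queue delay is acceptable again: leave the dropping state.
            self.first_above_ms = None
            self.dropping = False
            self.drop_count = 0
            return False
        if self.first_above_ms is None:
            self.first_above_ms = now_ms
            return False
        if not self.dropping and now_ms - self.first_above_ms >= INTERVAL_MS:
            # Sojourn time stayed above the target for a whole interval: start dropping.
            self.dropping = True
            self.drop_count = 1
            self.next_drop_ms = now_ms + INTERVAL_MS / math.sqrt(self.drop_count)
            return True
        if self.dropping and now_ms >= self.next_drop_ms:
            # Control law: drop more often the longer the delay persists.
            self.drop_count += 1
            self.next_drop_ms = now_ms + INTERVAL_MS / math.sqrt(self.drop_count)
            return True
        return False
```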
  • BufferBloat: What's Wrong with the Internet?
    practice. DOI:10.1145/2076450.2076464. Article development led by queue.acm.org.
    BufferBloat: What's Wrong with the Internet? A discussion with Vint Cerf, Van Jacobson, Nick Weaver, and Jim Gettys.
    Internet delays now are as common as they are maddening. But that means they end up affecting system engineers just like all the rest of us. And when […]
    […] protocol the presence of congestion and thus the need for compensating adjustments. Because memory now is significantly cheaper than it used to be, buffering has been overdone in all manner of network devices, without consideration for the consequences. Manufacturers have reflexively acted to prevent any and all packet loss and, by doing so, have inadvertently defeated a critical TCP congestion-detection mechanism, with the result being worsened congestion and increased latency. Now that the problem has been diagnosed, people are working feverishly to fix it. This case study considers the extent of the bufferbloat problem and its potential implications. Working to steer the discussion is Vint Cerf, popularly known as one of the "fathers of the Internet." As the co-designer of the TCP/IP protocols, Cerf did indeed play a key role in developing the Internet and related packet data and security technologies while at Stanford University from 1972−1976 and with the U.S. Department of Defense's Advanced Research Projects Agency (DARPA) from 1976−1982. He currently serves as Google's chief Internet evangelist. Van Jacobson, presently a research fellow at PARC where he leads the networking research program, is also central to this discussion.
  • The Bufferbloat Problem over Intermittent Multi-Gbps mmWave Links
    The Bufferbloat Problem over Intermittent Multi-Gbps mmWave Links. Menglei Zhang, Marco Mezzavilla, Jing Zhu†, Sundeep Rangan, Shivendra Panwar. NYU WIRELESS, Brooklyn, NY, USA; † Intel, Santa Clara, USA. Emails: {menglei, mezzavilla, srangan, [email protected]}, [email protected].
    Abstract—Due to massive available spectrum in the millimeter wave (mmWave) bands, cellular systems in these frequencies may provide orders of magnitude greater capacity than networks in conventional lower frequency bands. However, due to high susceptibility to blocking, mmWave links can be extremely intermittent in quality. This combination of high peak throughputs and intermittency can cause significant challenges in end-to-end transport-layer mechanisms such as TCP. This paper studies the particularly challenging problem of bufferbloat. Specifically, with current buffering and congestion control mechanisms, high-throughput, highly variable links can lead to excessive buffers incurring long latency. In this paper, we capture the performance trends obtained while adopting two potential solutions that have been proposed in the literature: active queue management […]
    […] transport layer mechanisms and buffering must rapidly adapt to the link capacities that can dramatically change. This work addresses one particularly important problem – bufferbloat. Bufferbloat: Bufferbloat is triggered by persistently filled or full buffers, and usually results in long latency and packet drops. This phenomenon was first pointed out in late 2010 [16]. Optimal buffer sizes should equal the bandwidth-delay product (BDP); however, as the delay is usually hard to estimate, larger buffers are deployed to prevent losses. Even though these oversized buffers prevent packet loss, the overall performance degrades, especially when transmitting TCP flows, which is the main focus of this paper.
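    The rule of thumb quoted above, that buffer sizes should roughly equal the bandwidth-delay product (BDP), is easy to make concrete. The link rate and RTT in this small Python example are illustrative numbers, not figures from the paper.

```python
def bdp_bytes(link_rate_bps, rtt_s):
    """Bandwidth-delay product: link rate (bits/s) times round-trip time, in bytes."""
    return link_rate_bps * rtt_s / 8

# A 2 Gbps mmWave link with a 20 ms RTT needs on the order of
# 5,000,000 bytes (about 5 MB) of buffering to keep the pipe full:
print(bdp_bytes(2e9, 0.020))  # -> 5000000.0
```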
  • Lab 14: Router's Bufferbloat
    NETWORK TOOLS AND PROTOCOLS. Lab 14: Router's Bufferbloat. Document Version: 07-07-2019. Award 1829698 "CyberTraining CIP: Cyberinfrastructure Expertise on High-throughput Networks for Big Science Data Transfers".
    Contents: Overview; Objectives; Lab settings; Lab roadmap; 1 Introduction to bufferbloat; 1.1 Packet delays; 1.2 Bufferbloat; 2 Lab topology; 2.1 Starting host h1, host h2, and host h3; 2.2 Emulating high-latency WAN; 2.4 Testing connection.
  • Tackling the Challenge of Bufferbloat in Multi-Path Transport Over Heterogeneous Wireless Networks
    Tackling the Challenge of Bufferbloat in Multi-Path Transport over Heterogeneous Wireless Networks. Simone Ferlin-Oliveira, Thomas Dreibholz, Özgü Alay. Simula Research Laboratory, Martin Linges vei 17, 1364 Fornebu, Norway. {ferlin, dreibh, [email protected]}.
    Abstract—Today, most of the smart phones are equipped with two network interfaces: Mobile Broadband (MBB) and Wireless Local Area Network (WLAN). Multi-path transport protocols provide increased throughput or reliability, by utilizing these interfaces simultaneously. However, multi-path transmission over networks with very different QoS characteristics is a challenge. In this paper, we studied Multi-Path TCP (MPTCP) in heterogeneous networks, specifically MBB networks and WLAN. We first investigate the effect of bufferbloat in MBB on MPTCP performance. Then, we propose a bufferbloat mitigation algorithm: Multi-Path Transport Bufferbloat Mitigation (MPT-BM). Using our algorithm, we conduct experiments in real operational networks. The experimental results show that MPT-BM outperforms the current MPTCP implementation by increasing the application goodput quality and decreasing MPTCP's buffer delay, jitter and buffer space requirements. […] generic and can also be adopted by other multi-path transport protocols, like e.g. CMT-SCTP, as well.
    II. BACKGROUND. A. Multi-Path TCP. Multi-path transport has shown to be beneficial for bandwidth aggregation and to increase end-host robustness [2], [3], [9]. Although many devices support multi-path, the most prominent transport protocol, i.e. TCP, is still single path. MPTCP [2] is a major extension of TCP that supports multi-path transmission. From the application perspective, MPTCP utilizes a single standard TCP socket, whereas lower in the stack, several subflows (conventional TCP connections) may be opened.
  • Passive Bufferbloat Measurement Exploiting Transport Layer Information
    Passive bufferbloat measurement exploiting transport layer information. C. Chirichella¹, D. Rossi¹, C. Testa¹, T. Friedman² and A. Pescape³. ¹Telecom ParisTech – [email protected]; ²UPMC Sorbonne Universites – [email protected]; ³Univ. Federico II – [email protected].
    Abstract—"Bufferbloat" is the growth in buffer size that has led Internet delays to occasionally exceed the light propagation delay from the Earth to the Moon. Manufacturers have built in large buffers to prevent losses on Wi-Fi, cable and ADSL links. But the combination of some links' limited bandwidth with TCP's tendency to saturate that bandwidth results in excessive queuing delays. In response, new congestion control protocols such as BitTorrent's uTP/LEDBAT aim at explicitly limiting the delay that they add at the bottleneck link. This work proposes a methodology to monitor the upstream queuing delay experienced by remote hosts, both those using LEDBAT, through LEDBAT's native one-way delay measurements, and those using TCP, through the Timestamp Option. We report preliminary […]
    […] a delay-based congestion control algorithm. When LEDBAT has exclusive use of the bottleneck resources, however, it fully exploits the available capacity. Like TCP, LEDBAT maintains a congestion window. But whereas mainstream TCP variants use loss-based congestion control (growing with ACKs and shrinking with losses), LEDBAT estimates the queuing delay on the bottleneck link and tunes the window size in an effort to achieve a target level of delay. The dynamic of TCP's AIMD window management systematically forces the buffer to fill until a loss occurs and the window size is halved.
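    The LEDBAT behavior described here (grow the congestion window while the estimated queuing delay is below a fixed target, shrink it above) can be sketched in a few lines. The structure mirrors the RFC 6817 controller, but the constants and names below are simplifications for illustration, not the BitTorrent uTP implementation measured in the paper.

```python
TARGET_S = 0.100   # target queuing delay (RFC 6817 uses 100 ms)
GAIN = 1.0         # window gain per target's worth of off-target delay


def ledbat_update(cwnd_bytes, bytes_acked, queuing_delay_s, mss=1448):
    """One window update: positive off_target grows the window, negative shrinks it."""
    off_target = (TARGET_S - queuing_delay_s) / TARGET_S
    cwnd_bytes += GAIN * off_target * bytes_acked * mss / cwnd_bytes
    return max(cwnd_bytes, 2 * mss)   # keep a floor of two segments
```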
  • BBR Bufferbloat in DASH Video
    BBR Bufferbloat in DASH Video. Santiago Vargas*, Rebecca Drucker*, Aiswarya Renganathan, Aruna Balasubramanian, Anshul Gandhi (Stony Brook University).
    ABSTRACT: BBR is a new congestion control algorithm and is seeing increased adoption especially for video traffic. BBR solves the bufferbloat problem in legacy loss-based congestion control algorithms where application performance drops considerably when router buffers are deep. BBR regulates traffic such that router queues don't build up to avoid the bufferbloat problem while still maintaining high throughput. However, our analysis shows that video applications experience significantly poor performance when using BBR under deep buffers. In fact, we find that video traffic sees inflated latencies because of long queues at the router, ultimately degrading video performance. To understand this dichotomy, we study the interaction between BBR and DASH video.
    1 INTRODUCTION: BBR (Bottleneck Bandwidth and RTT) [8] is a relatively new congestion control algorithm which is seeing surging adoption in practice. Recent reports suggest that 40% of Internet traffic now uses BBR [33]. BBR is especially popular for video applications and is adopted by popular video streaming services like YouTube [7]. Given the prevalence of video traffic (accounting for 60.6% of total downstream traffic worldwide [44] in 2019) and the adoption of BBR as the primary congestion control algorithm for video, it is critical to understand the impact of BBR on video Quality of Experience (QoE). Unfortunately, given its recency, BBR is not as well-studied as other congestion control algorithms, especially in the context of […]
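    As rough context for the mechanism this abstract summarizes: BBR paces packets at a gain times its estimate of the bottleneck bandwidth and caps the data in flight near the bandwidth-delay product, which is what keeps router queues short. The sketch below is a simplification with assumed names and gains, not BBR's actual state machine (it omits the startup, drain and probing phases).

```python
class SimpleBBRModel:
    """Toy model of BBR's two estimates and how they bound sending."""

    def __init__(self):
        self.btl_bw = 0.0              # max observed delivery rate (bytes/s)
        self.min_rtt = float("inf")    # min observed round-trip time (s)

    def on_ack(self, delivery_rate_bps, rtt_s):
        # Real BBR uses windowed max/min filters; a running max/min is enough here.
        self.btl_bw = max(self.btl_bw, delivery_rate_bps)
        self.min_rtt = min(self.min_rtt, rtt_s)

    def pacing_rate(self, gain=1.0):
        return gain * self.btl_bw                  # send no faster than the bottleneck

    def inflight_cap(self, gain=2.0):
        return gain * self.btl_bw * self.min_rtt   # roughly 2 x BDP of data in flight
```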
  • Tackling the Internet Edge Problems: Solving the Home Router Disaster
    Tackling the Internet Edge Problems. Jim Gettys – [email protected]. Dave Taht – [email protected]. Version 0.0, July 21, 2011.
    Solving the Home Router Disaster. Home routers are a mostly unrecognized major problem for the Internet. Home routers have serious problems with bufferbloat, IPv6, and security. Commercial firmware for most of the market is based upon Linux kernels and other open source projects, but the firmware is generally unmaintained after release; worse yet, the code base they use is over five years old! This dysfunctional behavior is all-too-common in the embedded Linux market, which both of us helped spark over a decade ago in our respective careers, but failed to prevent despite our attempts. The incumbent vendor's current code bases are therefore useless as a platform for research and development as any patches for problems are impossible to integrate easily into the upstream projects. Competitive pressures among vendors make it hard for major vendors to want to contribute to solving the problems if their competition will benefit. There is an alternative bleeding edge and open code base available in the OpenWrt project. OpenWrt is already good enough that older versions of OpenWrt are shipped in some of the smaller commercial home routers. Home routers using firmware from OpenWrt have matured to where modest amounts of labor can succeed in such a technology demonstration, providing the "proof of principle" to encourage the embedded network industry to move quickly to address these issues. CeroWrt is an OpenWrt router platform for use by individuals, researchers, and students interested in advancing the state of the art of the Internet.
  • Throughput and Delay on the Packet Switched Internet (A Cross-Disciplinary Approach)
    University of California Santa Barbara. Throughput and Delay on the Packet Switched Internet (A Cross-Disciplinary Approach). A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Computer Science by Daniel Mark Havey. Committee in charge: Professor Kevin Almeroth, Chair; Fred Baker, Fellow, Cisco Systems, Inc.; Professor Elizabeth Belding; Professor Phill Conrad. June 2015. The Dissertation of Daniel Mark Havey is approved. Fred Baker, Fellow, Cisco Systems, Inc. Professor Elizabeth Belding. Professor Phill Conrad. Professor Kevin Almeroth, Committee Chair. March 2015. Copyright © 2015 by Daniel Mark Havey. This dissertation is dedicated to the one true God who created the universe full of scientific wonder for us to explore and enjoy.
    Acknowledgements: Funding for this PhD was provided by the University of California Santa Barbara, by Lockheed Martin, by Cisco Systems, Inc, and by Microsoft. This PhD would not have been possible without the support of many people: students, professors and many others. I would like to offer thanks to all of my friends from San Bernardino who often said that someday you will be a doctor and I would love to see it. This is the day. We have made it. To all the professors in my undergraduate work who encouraged me along the way and guided me towards research, in particular Keith Schubert and David Turner. To the students from my lab at UCSB. To Lara who started after, finished right before and often suffered through tough times along with me.
  • Fixing the Internet: Congestion Control and Buffer Bloat
    Fixing the Internet: Congestion Control and Buffer Bloat. Influential OS Research. Julian Stecklina <[email protected]>.
    Outline: TCP Protocol; TCP Implementation; Queuing Algorithms; Buffer Bloat.
    Unreliable Packet Networks: A packet can be damaged, dropped, duplicated, or reordered with other packets. Need an abstraction to deal with this!
    Transmission Control Protocol: Provides a bidirectional channel. Each direction is a reliable byte stream over an unreliable best-effort packet network. TCPv4 specified in 1981. Later changes essentially backward compatible. Basis for advanced protocols (MPTCP, SCTP).
    TCP Header: [TCP header format diagram, RFC 793: source and destination ports; sequence number (position in the byte stream of the first byte in this packet); acknowledgment number (byte stream received correctly up to this byte); data offset; flags (URG, ACK, PSH, RST, SYN, FIN); window (how many additional bytes the receiver can accept); checksum; urgent pointer; options.]
    Receive Window […]
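    The header fields the slides annotate (sequence number as a byte-stream position, acknowledgment number as the next byte expected, the receive window as how much more the receiver can accept) can be pulled out of a raw segment in a few lines of Python. This parser of the fixed 20-byte header is an illustration added here, not part of the slide deck.

```python
import struct


def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header defined in RFC 793 (options ignored)."""
    src, dst, seq, ack, off_flags, window, checksum, urgent = struct.unpack(
        "!HHIIHHHH", segment[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,                            # byte-stream position of first byte
        "ack": ack,                            # next byte the receiver expects
        "data_offset": (off_flags >> 12) * 4,  # header length in bytes
        "flags": off_flags & 0x3F,             # URG/ACK/PSH/RST/SYN/FIN bits
        "window": window,                      # bytes the receiver can still accept
        "checksum": checksum,
        "urgent_pointer": urgent,
    }
```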
  • Improving Latency with Active Queue Management (AQM) During COVID-19 by Allen Flickinger, Carl Klatsky, Atahualpa Ledesma, Jason Livingood, Sebnem Ozer 30 July 2021
    Improving Latency with Active Queue Management (AQM) During COVID-19. By Allen Flickinger, Carl Klatsky, Atahualpa Ledesma, Jason Livingood, Sebnem Ozer. 30 July 2021.
    1. Abstract. During the COVID-19 pandemic the Comcast network performed well in response to unprecedented changes in Internet usage. Video communications applications such as Zoom, WebEx, and FaceTime have exploded in popularity – applications that happen to be sensitive to network latency. However, in today's typical networks – such as a home network – those applications often degrade if they are sharing a network link with other network traffic. This is a problem caused by a network design flaw often described using the term 'buffer bloat'. Several years ago, Comcast helped to fund research & development in the technical community into new Active Queue Management (AQM) techniques to eliminate this issue and this was later built into Data Over Cable Service Interface Specification (DOCSIS) standards. Just prior to the pandemic, Comcast also deployed a large-scale network performance measurement system that included a latency under load test. In addition, Comcast happened to deploy two otherwise identical cable modems; one with upstream AQM enabled, and the other without. This fortuitous confluence of events has enabled Comcast to perform a comparative analysis of the differences between cable modem gateways using AQM with those that lack that enhancement, at a unique level of scale across many months of time and millions of devices. These deployments were completed during a period of unprecedented network bandwidth demand growth due to work and school from home during the pandemic.