An Experimental Evaluation of Low Latency Congestion Control for mmWave Links

Ashutosh Srivastava, Fraida Fund, Shivendra S. Panwar
Department of Electrical and Computer Engineering, NYU Tandon School of Engineering
Emails: {ashusri, ffund, panwar}@nyu.edu

© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Abstract—Applications that require extremely low latency are expected to be a major driver of 5G and WLAN networks that include millimeter wave (mmWave) links. However, mmWave links can experience frequent, sudden changes in link capacity due to obstructions in the signal path. These dramatic variations in link capacity cause a temporary "bufferbloat" condition during which delay may increase by a factor of 2-10. Low latency congestion control protocols, which manage bufferbloat by minimizing queue occupancy, represent a potential solution to this problem; however, their behavior over links with dramatic variations in capacity is not well understood. In this paper, we explore the behavior of two major low latency congestion control protocols, TCP BBR and TCP Prague (as part of L4S), using link traces collected over mmWave links under various conditions. Our evaluation reveals potential problems associated with the use of these congestion control protocols for low latency applications over mmWave links.

Index Terms—congestion control, low latency, millimeter wave

Fig. 1: Effect of human blocker on received signal strength of a 60 GHz WLAN link. [Figure: RSSI (dBm), roughly −60 to −40, vs. time (s), 0-100.]

Fig. 2: Illustration of effect of variations in link capacity on queueing delay. [Figure: a queue of five packets drained at a link rate of 5 packets/ms gives 1ms queuing delay; the same queue drained at 1 packet/ms gives 5ms queuing delay.]

I. INTRODUCTION

Low latency applications are envisioned to play a significant role in the long-term growth and economic impact of 5G and future WLAN networks [1]. These include augmented and virtual reality (AR/VR), remote surgery, and autonomous connected vehicles. As a result, stringent latency requirements have been established for 5G communication. The ITU [2] has set the minimum requirement for user plane latency at 4ms for enhanced mobile broadband and 1ms for ultra reliable low latency communication; the minimum requirement for control plane latency is 20ms, and a lower (10ms) target is strongly encouraged.

To achieve high throughput as well as low latency, these wireless networks will rely heavily on millimeter wave frequency bands (30-300 GHz), due to the large amounts of spectrum available on those bands. However, mmWave links are highly susceptible to blockages such as buildings, vehicles, walls, doors, and even the human body [3], [4]. For example, Figure 1 shows the received signal strength over a 60GHz WLAN link with an occasional human blocker in the signal path. While the human blocks the signal path, the received signal strength decreases by about 10 dB. As a result of this susceptibility to blockages, mmWave links can experience frequent, sudden outages or changes in link capacity. This has been confirmed in live 5G deployments as well; a recent experimental study on Verizon's mmWave network deployed in Minneapolis and Chicago reported a high handover frequency due to frequent disruptions in mmWave connectivity [5].

This effect is of great concern to low-delay applications, because extreme variations in mmWave link capacity cause a temporary "bufferbloat" condition during which delay increases dramatically. The reason for this is illustrated in Figure 2, which shows the queuing delay experienced by a packet that arrives at a buffer with five packets already queued. When the egress rate of the queue is 5 packets/ms, the arriving packet sees 1ms of queuing delay. If the egress rate of the queue suddenly drops to 1 packet/ms, the arriving packet sees 5ms of queuing delay. In other words, with the same number of packets in the bottleneck queue, a sudden five-fold decrease in link capacity will give rise to a corresponding five-fold increase in the observed queueing delay. This effect is exacerbated by large buffers, since the delay at a given egress rate is proportional to buffer occupancy, and large buffers permit a greater buffer occupancy. However, because of frequent short-term link outages, buffers adjacent to a mmWave link may need to hold several seconds of data, which, because of the high link capacity, suggests the need for very large buffers. In fact, subscribers of the Verizon 5G Home mmWave fixed broadband service have reported extreme delay, especially under uplink saturation conditions, which may be due to bufferbloat [6], [7].
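To make the arithmetic of Figure 2 concrete, here is a minimal sketch (ours, purely illustrative; the packet count and rates are the ones from the figure):

```python
def queueing_delay_ms(queued_pkts, egress_rate_pkts_per_ms):
    """Delay seen by a packet arriving behind `queued_pkts` packets."""
    return queued_pkts / egress_rate_pkts_per_ms

print(queueing_delay_ms(5, 5))  # unobstructed link: 1.0 ms
print(queueing_delay_ms(5, 1))  # after a five-fold capacity drop: 5.0 ms
```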

Low-delay congestion controls have been proposed as a potential solution to the general problem of bufferbloat across the Internet. Traditional loss-based congestion control schemes fill up the bottleneck buffer and back off only when packet loss is observed. This results in high queueing delays in the presence of large buffers, i.e., bufferbloat [8]. Low latency congestion controls including BBR [9], PCC [10], COPA [11], and Sprout [12], as well as frameworks such as L4S [13] which include low latency congestion control, seek to minimize queueing delay by occupying as little of the bottleneck buffer as possible while utilizing the available capacity to the fullest.

However, the behavior of these low latency congestion controls over links with dramatic variations in capacity (as is characteristic of mmWave links) is not well understood. Early work in this area has been limited to ns-3 simulation studies [14]–[20], where the mmWave link dynamics and, in most cases, the implementation of the congestion control protocols are simulated. These results have not been validated experimentally using a real network stack.

In this paper, we use a testbed experiment to explore the behavior of several key low latency congestion control schemes when they operate in a network with a mmWave wireless link as the bottleneck. To promote reproducibility, we used a series of traces collected over 60 GHz mmWave links under different obstruction conditions, and then used emulation to play back the mmWave link characteristics on the CloudLab testbed [21]. This experiment can more closely represent the true behavior of congestion control protocols than simulation studies, while still allowing more control and reproducibility than experiments on an over-the-air mmWave testbed.

The contributions of this work are as follows:
• We describe a reproducible testbed experiment to explore the behavior of low latency congestion control over mmWave links. Instructions to reproduce this experiment on CloudLab are provided at [22].
• We show that TCP BBR, the low latency congestion control scheme championed by Google, works well to limit the delay caused by sudden obstructions in the signal path. However, for applications that require both low delay and high throughput, BBR may not be a good candidate because it frequently reduces its sending rate in order to drain the queue.
• We show that in TCP Prague (the L4S congestion control), the combination of slow convergence to a fair share of capacity and frequent disruptions in the mmWave link can lead to starvation of some flows.

The rest of this paper is organized as follows. We start with a brief introduction of the low latency congestion control schemes considered for evaluation in Section II. In Section III, we discuss previous work evaluating TCP congestion control over mmWave. We move on to describe the methodology of our experiments in Section IV before presenting our results in Section V. We present a more detailed analysis of these results in Section VI, and finally present some directions for future research based on our findings.

II. LOW LATENCY CONGESTION CONTROL SCHEMES

Several low latency congestion control schemes have been proposed in the literature to overcome the bufferbloat problem. In our experimental evaluation, we have focused on two proposals which are championed by major industry players: TCP BBR [9] and TCP Prague (which is not a standalone congestion control, but is part of the L4S architecture [13]). We will give a brief introduction to these schemes.

BBR (Bottleneck Bandwidth and Round-trip propagation time): BBR [9] tries to maintain low queueing delays by operating at the bandwidth-delay product (BDP) of the network path. BBR maintains and continuously updates estimates of the bottleneck bandwidth and minimum RTT of the path, and caps the number of packets in flight to a constant times the estimated BDP.

The first release of TCP BBR operates in four phases. The first phase, called the startup phase, is the bandwidth estimation phase, where it seeks to quickly estimate the bottleneck bandwidth. Then, in the second phase (called the drain phase), it reduces its sending rate in order to drain the queue that is expected to have built up in the first phase. At this point, BBR enters steady state, where it moves back and forth between bandwidth probing and RTT probing phases. Most of BBR's time is spent in the bandwidth probing phase, in which it usually sends at a rate equal to its bottleneck bandwidth estimate, but occasionally probes just above this estimate to try and discover additional capacity. During this phase, the CWND is set to double the estimated BDP. Meanwhile, BBR keeps a running minimum of RTT, and after 10 seconds have elapsed without updating this running minimum, it enters an RTT probing phase. During the RTT probing phase, BBR's CWND is set to 4 segments in order to drain the queue and find the base RTT of the link. After updating the minimum RTT estimate, BBR returns to the bandwidth probing phase. Version 2 of BBR introduces many changes, but operates according to the same principle of estimating the BDP, spending most of its time probing for more bandwidth, and occasionally reducing its sending rate to measure the minimum RTT.

L4S (Low Latency, Low Loss, Scalable throughput): L4S [13] is an architecture for network service which includes two major components:
• TCP Prague congestion control: TCP Prague is similar to DCTCP [23], in that it uses accurate ECN feedback to control its sending rate. However, it includes several extensions and adaptations relative to DCTCP that make it more suitable for use over the Internet, where it may share links with loss-based congestion control flows.
• Dual-Q coupled AQM: To address the problem of unfairness with loss-based congestion control flows, and to preserve low queuing delay for flows that respond to ECN, the L4S architecture uses a Dual-Q coupled AQM. This queue separates the conventional loss-based flows and ECN-responsive flows and handles them differently to maintain fairness and low latency for ECN-responsive flows.
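TCP Prague's response to ECN marks is inherited from DCTCP [23]: the window is reduced in proportion to the fraction of packets marked in the last round trip. A minimal sketch of that response follows; it tracks the DCTCP algorithm, and since Prague layers further refinements on top, it is illustrative rather than an exact Prague implementation:

```python
def prague_like_update(cwnd, alpha, marked_frac, g=1.0 / 16):
    """One per-RTT update of a DCTCP-style ECN response.

    cwnd        -- congestion window (segments)
    alpha       -- moving estimate of the fraction of marked packets
    marked_frac -- fraction of packets ECN-marked in the last RTT
    g           -- EWMA gain (1/16 in the DCTCP paper)
    """
    alpha = (1 - g) * alpha + g * marked_frac
    if marked_frac > 0:
        # Back off in proportion to the extent of congestion,
        # rather than halving cwnd as loss-based TCP does on loss.
        cwnd = max(cwnd * (1 - alpha / 2), 2.0)
    else:
        cwnd += 1  # standard additive increase when no marks arrive
    return cwnd, alpha
```

Because the cut is proportional to the marked fraction rather than a fixed one-half, a flow hovering near the marking threshold makes small, frequent adjustments, which is what keeps the queue shallow.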

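For comparison, BBR's steady state as described above can be viewed as a simple state machine. The toy model below is ours; it ignores pacing, gain cycling, and the exact ProbeRTT timing, and only captures the two behaviors our results turn on: a cwnd cap of twice the estimated BDP while probing for bandwidth, and a drastic cwnd cut when the 10-second min-RTT estimate goes stale:

```python
import time

class BBRSteadyStateToy:
    """Toy model of BBRv1 steady state: bandwidth probing, with a
    fallback to RTT probing when the min-RTT estimate goes stale."""

    MIN_RTT_WINDOW = 10.0  # seconds without a new min-RTT before probing
    PROBE_RTT_CWND = 4     # segments, small enough to drain the queue

    def __init__(self, btl_bw, min_rtt):
        self.btl_bw = btl_bw              # bottleneck bw estimate (segments/s)
        self.min_rtt = min_rtt            # running minimum RTT (s)
        self.min_rtt_stamp = time.time()  # when min_rtt was last updated

    def on_rtt_sample(self, rtt):
        # Keep a running minimum of the measured RTT.
        if rtt <= self.min_rtt:
            self.min_rtt = rtt
            self.min_rtt_stamp = time.time()

    def cwnd(self):
        if time.time() - self.min_rtt_stamp > self.MIN_RTT_WINDOW:
            # RTT probing phase: cut cwnd drastically to find the base RTT.
            return self.PROBE_RTT_CWND
        # Bandwidth probing phase: cap inflight at twice the estimated BDP.
        return 2 * self.btl_bw * self.min_rtt
```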
III. RELATED WORK

A number of simulation studies have explored the problem of latency for TCP congestion control over mmWave links.

Fig. 3: Traces of link capacity collected over the 60GHz WLAN testbed in four representative scenarios. We play back these traces in our CloudLab experiment to replicate the frequent variation in mmWave link capacity. [Figure: four panels (Static Link, Short Blockages, Long Blockages, Mobility & Blockages) of capacity (Mbps, 0-4000) vs. time (s, 0-120).]

In [14], an end-to-end 5G mmWave simulation in ns-3 is used to evaluate how TCP New Reno and TCP CUBIC react to a dynamic mmWave channel with short term blockages. They show that large drops in link capacity during a LOS to NLOS transition cause extreme delays due to the bufferbloat problem for loss-based TCP congestion control. This work motivates the need to move away from classic loss-based congestion control for mmWave links. Building upon this work, [15] uses the same ns-3 framework to evaluate the use of Controlled Delay (CoDel) AQM as a potential solution to mitigate latency, and suggests that AQM will under-utilize the link capacity. As an alternative, they propose a cross-layer dynamic receive window adaptation scheme to manage queuing delay.

Several studies have followed this early work by exploring alternative congestion control schemes over mmWave links, all using the same end-to-end 5G mmWave simulation framework in ns-3. Multipath TCP (MP-TCP) for mmWave links is evaluated in [16], and a TCP proxy architecture for mmWave links is proposed in [24]. More recently, [19] simulates a wide range of congestion control protocols over mmWave links, including TCP BBR, but with a primary focus on comparison between TCP CUBIC and TCP YeAH. Finally, in [17], TCP BBR is compared to loss-based congestion control schemes in a high speed train scenario and a dense urban scenario. However, in this simulation the authors use a small buffer, so that the benefits of the low latency congestion control relative to the loss-based congestion control are not fully realized.

While the above mentioned literature used internal ns-3 implementations of different TCP congestion control algorithms, [18] used real TCP stacks with the Direct Code Execution framework in ns-3, along with the ns-3 mmWave 5G module. The results showed a general tradeoff between capacity and latency when using a purely loss-based congestion control like CUBIC and Reno, versus using a CoDel AQM at the bottleneck. Other TCP variants like Scalable TCP and TCP Illinois were also considered, but did not show any major improvement in mmWave wireless conditions. In a follow up study [25], they further investigate fairness problems that arise because of the frequent blocking events in mmWave links.

Finally, [26] investigates a problem with TCP BBR when there is extreme jitter on the link, a problem that was first observed by the authors over a mmWave link. This study uses the CloudLab [21] testbed to evaluate TCP BBR using its actual kernel implementation, using a link with jitter produced by netem.

IV. EXPERIMENT SETUP

To replicate the frequent variation in mmWave link capacity in our experiment, we used link traces obtained from [27], which were collected using a 60GHz WLAN testbed. Link traces include received signal strength (RSSI), signal quality index (SQI), beamforming sector index, transmit modulation and coding scheme (MCS), and the theoretical transmit capacity (given SQI and TX MCS), all as reported by the wil6210 driver in Linux.

A link was established between two laptop devices equipped with 60GHz wireless cards by Qualcomm. One laptop was placed on a shelf and acted as a fixed access point, while the other acted as a client device. Then, link traces were captured under four different conditions, each lasting 120 seconds:
• Static link: in this scenario, the laptops are positioned in a static configuration across the room from one another, with no obstructions between them.
• Short blockages: in this scenario, the laptops were positioned in the same configuration as in the static link scenario. However, at approximately 15-second intervals, a human walked through the line of sight path between the laptops.
• Long blockages: this scenario is similar to the one with short blockages. In this one, however, the human paused for approximately four seconds while obstructing the signal path.
• Mobility and blockages: in the final scenario, the access point was static, but the client laptop moved around the room in a circle. At the same time, a human moved past the access point, obstructing its line of sight view of the client, at approximately 15-second intervals.

While collecting traces, iperf3 was used to send a single TCP flow in each direction between the client and the access point. The reason for these flows is that the behavior of the 60 GHz NIC is different when the link is not loaded versus when there is traffic on the link.

The theoretical capacity of the link in each scenario is shown in Figure 3. In general, we observe that the link capacity is reduced whenever there is a human obstruction in the signal path. In the scenario with mobility, the link capacity also increases and decreases as the client laptop moves closer to, then farther from, the fixed access point.

To conduct our congestion control experiments, we use the CloudLab [21] testbed. We configured a three-node, two-hop topology with a TCP sender and receiver connected by a bottleneck router, all with high-capacity links between them. At the bottleneck router, we configured a FIFO queue with 7.5MB capacity, representing approximately 20ms of delay when the queue is full and the link operates at the typical unobstructed rate of 3Gbps. To play back the link traces, we use the bandwidth shaping tools provided by tc in Linux to limit the egress rate of this queue. For experiments with TCP Prague, which requires ECN, we enabled accurate ECN at both the sender and the receiver, and configured the bottleneck queue to mark packets at 5ms.

In each experiment, we send ten bulk TCP flows of duration 120 seconds from the TCP sender to the receiver using iperf3. We record the throughput per flow reported by iperf3 and the smoothed RTT measurements from each TCP socket using ss. We consider four congestion control alternatives: TCP CUBIC (the current default in the Linux kernel, and a loss-based congestion control), TCP Prague (as part of L4S), TCP BBR v1, and TCP BBR v2. To support these congestion control algorithms, all hosts in our experiment run Ubuntu Linux 18.04, but with additional kernels including the low latency congestion control algorithms under consideration. For TCP BBR (both BBRv1 and BBRv2) experiments, we used a 5.2-rc3 kernel with the BBRv2 alpha implementation from the v2alpha branch of Google's BBR GitHub repository [28]. For TCP Prague (L4S) and TCP CUBIC experiments, we used a 5.3-rc3 kernel from the testing branch of the L4S team's GitHub repository [29].

V. RESULTS

The results of our experiments, showing the throughput and transport RTT for four congestion control protocols in different mmWave link scenarios, are shown in Figure 4 and Figure 5. Instructions to reproduce these figures are provided at [22].

A. Latency

TCP CUBIC and other loss-based congestion control schemes fill the bottleneck buffer, and therefore experience the most queuing delay. Our RTT results for TCP CUBIC with a static link (Fig. 4) are in line with this expectation, as we observe around 20ms of queueing delay (7.5MB buffer / 3Gbps typical capacity). We also see spikes in the RTT when the line of sight path of the mmWave link is blocked and the capacity is reduced. The delay gets as high as 150ms, which is more than seven times the base delay with a static link.

In contrast, an ECN-based congestion control is expected to maintain queueing delays at or below the marking threshold specified at the bottleneck routers. We observe this behavior with L4S's TCP Prague, where the delay stays near the 5ms ECN marking threshold. The delays increase in the presence of blockage events but remain much lower compared to loss-based TCP. Both versions of TCP BBR maintain small delays which mostly stay in the range of 5-10ms for all mmWave link conditions. BBR achieves this without any ECN support, using its model-based approach at the sender side.

B. Throughput and Fairness

We begin by noting that the sum of the throughput of the ten flows in all scenarios and for all congestion control protocols was close to the link capacity (Figure 5). However, in TCP BBR, the sending rate is reduced at regular intervals (10 seconds for BBR v1 and more frequently for BBR v2) to drain the queue in order to measure the minimum RTT of the link. For applications that require consistent high throughput as well as low delay, this tradeoff will be unacceptable.

With respect to fairness, we observe that TCP CUBIC maintains a fair share of throughput between the 10 bulk flows. TCP Prague (L4S) has poor fairness, with some flows essentially being starved while others capture most of the link throughput. This unfairness seems to be exacerbated by variations in capacity on the mmWave link: in some cases, even when the flows converge to a fair share at first, the balance is "reset" when the link is blocked, and in the following interval some flows are starved. TCP BBR is able to maintain better fairness between the 10 competing flows in this experiment; however, BBR is known to have problems with fairness for flows with different RTTs [30] and for flows that share a link with loss-based congestion control flows [31].

VI. DISCUSSION AND CONCLUSIONS

Our results confirm that classic loss-based congestion control like TCP CUBIC will not be able to provide low delays in mmWave networks. Queueing delays can get as high as 150ms when the line of sight path is temporarily blocked.

However, the low delay congestion control protocols we evaluated also had some problems. TCP Prague, which is used in the L4S architecture, manages queuing delay well. We observed problems with fairness and starvation of some flows, however, and this problem seems to become worse with blockages and mobility. Furthermore, TCP Prague requires accurate ECN negotiation at both the sender and the receiver, and for routers in the network path to have ECN marking capability. This makes it difficult to deploy the L4S architecture at a large scale, although in controlled environments it is much more practical. TCP BBR maintains low queueing delays in mmWave channel conditions and has better fairness compared to L4S. However, BBR may not be the ideal candidate for applications which need uninterrupted high-speed service, such as live streaming of high quality video. This is because BBR periodically enters its "Probe RTT" phase, wherein all senders cut their CWND size drastically to drain the queue. This helps the BBR algorithm to get a correct estimate of the minimum RTT of the path, but these periodic dips in throughput will be a major cause of concern in many cases.
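The cost of these dips can be estimated with back-of-the-envelope arithmetic. Assume, for illustration, that a BBRv1 sender enters its RTT probe every 10 seconds and stays there for roughly 200ms (the actual duration depends on the RTT and the implementation):

```python
def avg_throughput_fraction(cycle_s=10.0, probe_rtt_s=0.2):
    """Average fraction of capacity kept if the sender is (nearly)
    idle for probe_rtt_s out of every cycle_s seconds."""
    return 1 - probe_rtt_s / cycle_s

print(avg_throughput_fraction())  # 0.98: only ~2% lost on average
```

The average cost is small; the concern for applications like live video is the dip itself, which recurs every 10 seconds, and which is amplified when, as noted above, competing BBR senders cut their windows at the same time.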

Fig. 4: Transport RTT (ms) vs. time (s) for 10 flows with different congestion control schemes. Each flow is represented by a different color. [Figure: 4x4 grid of panels; rows are TCP CUBIC (RTT axis 0-150 ms), TCP Prague (L4S), TCP BBR v1, and TCP BBR v2 (RTT axis 0-40 ms); columns are Static Link, Short Blockages, Long Blockages, and Mobility & Blockages.]
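As a note on methodology: the transport RTT in Figure 4 is the kernel's smoothed RTT per TCP socket, which ss exposes. A minimal polling sketch (ours; it assumes the iproute2 `ss -ti` output format, where each socket's stats include an `rtt:<srtt>/<rttvar>` field):

```python
import re
import subprocess

def sample_srtt_ms():
    """Return the smoothed RTT (ms) of every TCP socket reported by ss."""
    out = subprocess.run(["ss", "-ti"], capture_output=True, text=True).stdout
    # ss prints e.g. "... rtt:12.4/3.1 ..." for each socket
    return [float(m.group(1)) for m in re.finditer(r"rtt:([\d.]+)/", out)]
```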


Fig. 5: Throughput (Mbps) vs. time (s) for 10 flows with different congestion control schemes. Each flow is represented by a different color. [Figure: 4x4 grid of panels; rows are TCP CUBIC, TCP Prague (L4S), TCP BBR v1, and TCP BBR v2 (throughput axis 0-2000 Mbps); columns are Static Link, Short Blockages, Long Blockages, and Mobility & Blockages.]

Our experimental setup has enabled us to investigate the dynamic behavior of these low latency congestion control schemes in a realistic mmWave wireless scenario. The mmWave setting with blockages and/or mobility is highly dynamic, and hence a challenging environment for congestion control. This work is a first step towards finding a suitable congestion control algorithm which can minimize delay for mmWave wireless networks.

One major limitation of this experimental approach is that it does not allow us to investigate the behavior of flows coming from multiple mmWave clients, which are blocked independently but share the link capacity. We hope to collect additional link capacity traces in a multi-client setting, with which to extend this experiment.

Given the results of this paper, we hope to pursue several future research directions. First, we would like to move away from bulk flows to a more realistic traffic model with a mixture of long and short flows. We may consider evaluating other recently proposed low latency congestion control schemes. We hope to extend our experiment to include scenarios where flows have different RTTs, and to include other topologies in which the bottleneck may not be at the mmWave link. Finally, we would like to more thoroughly investigate the performance with a dynamic mmWave channel, and either propose improvements to existing low latency congestion control schemes, or propose a new one, to achieve consistent high throughput and low delay in this challenging environment.

ACKNOWLEDGEMENTS

This work was supported by the New York State Center for Advanced Technology in Telecommunications (CATT), NYU WIRELESS, and gifts from Cisco Systems and Futurewei. The authors would also like to acknowledge the contributions of Shreeshail Hingane and Youssef Azzam, who collected the mmWave link traces used in this research.

REFERENCES

[1] M. A. Lema, A. Laya, T. Mahmoodi, M. Cuevas, J. Sachs, J. Markendahl, and M. Dohler, "Business case and technology analysis for 5G low latency applications," IEEE Access, vol. 5, pp. 5917–5935, 2017.
[2] ITU-R, "Minimum requirements related to technical performance for IMT-2020 radio interface(s)," Tech. Rep. ITU-R M.2410-0, 2017.
[3] T. Bai and R. W. Heath, "Coverage and rate analysis for millimeter-wave cellular networks," IEEE Transactions on Wireless Communications, vol. 14, no. 2, pp. 1100–1114, 2014.
[4] G. R. MacCartney, T. S. Rappaport, and S. Rangan, "Rapid fading due to human blockage in pedestrian crowds at 5G millimeter-wave frequencies," in IEEE Global Communications Conference (GLOBECOM 2017), 2017, pp. 1–7.
[5] A. Narayanan, J. Carpenter, E. Ramadan, Q. Liu, Y. Liu, F. Qian, and Z.-L. Zhang, "A first measurement study of commercial mmWave 5G performance on smartphones," in WWW 2020, 2020.
[6] "Verizon community: Buffer bloat on Verizon 5G," https://community.verizonwireless.com/thread/964495, accessed: 2019-02-20.
[7] "Reddit: I got 5G Home Internet installed this week," https://www.reddit.com/r/verizon/comments/9lqp6n/i_got_5g_home_internet_installed_this_week_and_it/e7ancsf/, accessed: 2019-02-20.
[8] J. Gettys and K. Nichols, "Bufferbloat: Dark buffers in the Internet," Queue, vol. 9, no. 11, pp. 40–54, 2011.
[9] N. Cardwell, Y. Cheng, C. S. Gunn, S. H. Yeganeh, and V. Jacobson, "BBR: Congestion-based congestion control," Queue, vol. 14, no. 5, pp. 20–53, 2016.
[10] M. Dong, T. Meng, D. Zarchy, E. Arslan, Y. Gilad, B. Godfrey, and M. Schapira, "PCC Vivace: Online-learning congestion control," in 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18), 2018, pp. 343–356.
[11] V. Arun and H. Balakrishnan, "Copa: Practical delay-based congestion control for the internet," in 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18), 2018, pp. 329–342.
[12] K. Winstein, A. Sivaraman, and H. Balakrishnan, "Stochastic forecasts achieve high throughput and low delay over cellular networks," in 10th USENIX Symposium on Networked Systems Design and Implementation (NSDI 13), 2013, pp. 459–471.
[13] B. Briscoe, K. D. Schepper, M. Bagnulo, and G. White, "Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: Architecture," Internet Engineering Task Force, Internet-Draft draft-ietf-tsvwg-l4s-arch-05, Feb. 2020, work in progress. [Online]. Available: https://datatracker.ietf.org/doc/html/draft-ietf-tsvwg-l4s-arch-05
[14] M. Zhang, M. Mezzavilla, R. Ford, S. Rangan, S. Panwar, E. Mellios, D. Kong, A. Nix, and M. Zorzi, "Transport layer performance in 5G mmWave cellular," in 2016 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2016, pp. 730–735.
[15] M. Zhang, M. Mezzavilla, J. Zhu, S. Rangan, and S. Panwar, "TCP dynamics over mmWave links," in IEEE 18th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2017, pp. 1–6.
[16] M. Polese, R. Jana, and M. Zorzi, "TCP and MP-TCP in 5G mmWave networks," IEEE Internet Computing, vol. 21, no. 5, pp. 12–19, 2017.
[17] M. Zhang, M. Polese, M. Mezzavilla, J. Zhu, S. Rangan, S. Panwar, and M. Zorzi, "Will TCP work in mmWave 5G cellular networks?" IEEE Communications Magazine, vol. 57, no. 1, pp. 65–71, 2019.
[18] M. Pieska and A. Kassler, "TCP performance over 5G mmWave links - tradeoff between capacity and latency," in IEEE 13th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), 2017, pp. 385–394.
[19] P. J. Mateo, C. Fiandrino, and J. Widmer, "Analysis of TCP performance in 5G mm-wave mobile networks," in IEEE International Conference on Communications (ICC 2019), 2019, pp. 1–7.
[20] R. Ford, M. Zhang, M. Mezzavilla, S. Dutta, S. Rangan, and M. Zorzi, "Achieving ultra-low latency in 5G millimeter wave cellular networks," IEEE Communications Magazine, vol. 55, no. 3, pp. 196–203, 2017.
[21] R. Ricci, E. Eide, and the CloudLab Team, "Introducing CloudLab: Scientific infrastructure for advancing cloud architectures and applications," ;login: the magazine of USENIX & SAGE, vol. 39, no. 6, pp. 36–38, 2014.
[22] A. Srivastava, "An experimental evaluation of low latency congestion control over mmWave links," Run my experiment on GENI blog: https://witestlab.poly.edu/blog/tcp-mmwave, 2020.
[23] M. Alizadeh, A. Greenberg, D. A. Maltz, J. Padhye, P. Patel, B. Prabhakar, S. Sengupta, and M. Sridharan, "Data center TCP (DCTCP)," in Proceedings of the ACM SIGCOMM 2010 Conference, 2010, pp. 63–74.
[24] M. Polese, M. Mezzavilla, M. Zhang, J. Zhu, S. Rangan, S. Panwar, and M. Zorzi, "milliProxy: A TCP proxy architecture for 5G mmWave cellular systems," in 51st Asilomar Conference on Signals, Systems, and Computers, 2017, pp. 951–957.
[25] M. Pieska, A. J. Kassler, H. Lundqvist, and T. Cai, "Improving TCP fairness over latency controlled 5G mmWave communication links," in 22nd International ITG Workshop on Smart Antennas (WSA '18), 2018, pp. 1–8.
[26] R. Kumar, A. Koutsaftis, F. Fund, G. Naik, P. Liu, Y. Liu, and S. Panwar, "TCP BBR for ultra-low latency networking: challenges, analysis, and solutions," in 2019 IFIP Networking Conference (IFIP Networking), 2019, pp. 1–9.
[27] S. Hingane, "Using AQM to manage 'temporary bufferbloat' on mmWave links," Run my experiment on GENI blog: https://witestlab.poly.edu/blog/aqm-mmwave/, 2020.
[28] Google, "BBR repository," https://github.com/google/bbr.
[29] L4S Team, "L4S Linux repository," https://github.com/L4STeam/linux.
[30] S. Ma, J. Jiang, W. Wang, and B. Li, "Towards RTT fairness of congestion-based congestion control," CoRR, vol. abs/1706.09115, 2017. [Online]. Available: http://arxiv.org/abs/1706.09115
[31] R. Ware, M. K. Mukerjee, S. Seshan, and J. Sherry, "Modeling BBR's interactions with loss-based congestion control," in Proceedings of the Internet Measurement Conference, 2019, pp. 137–143.