Detecting Fair Queuing for Better Congestion Control

Maximilian Bachl, Joachim Fabini, Tanja Zseby
Technische Universität Wien
fi[email protected]

arXiv:2010.08362v3 [cs.NI] 19 Feb 2021

Abstract—Low delay is an explicit requirement for applications such as cloud gaming and video conferencing. Delay-based congestion control can achieve the same throughput but significantly smaller delay than loss-based congestion control and is thus ideal for these applications. However, when a delay-based and a loss-based flow compete for a bottleneck, the loss-based one can monopolize all the bandwidth and starve the delay-based one. Fair queuing at the bottleneck link solves this problem by assigning an equal share of the available bandwidth to each flow. However, so far no end-host-based algorithm to detect fair queuing exists. Our contribution is the development of an algorithm that detects fair queuing at flow startup and chooses delay-based congestion control if there is fair queuing. Otherwise, loss-based congestion control can be used as a backup option. Results show that our algorithm reliably detects fair queuing and can achieve low delay and high throughput in case fair queuing is detected.

I. INTRODUCTION

Emerging applications such as cloud gaming [1] and remote virtual reality [2] require high throughput and low delay. Furthermore, ultra-low-latency communications have been made a priority for 5G [3]. We argue that to achieve high throughput as well as low delay, congestion control must be taken into account.

Delay-based Congestion Control Algorithms (CCAs) were proposed to provide high throughput and low delay for network flows. They use the queuing delay as an indication of congestion and lower their throughput as soon as the delay increases. Such approaches have been proposed in the last century [4] and have recently seen a surge in popularity [5, 6, 7, 8]. While delay-based CCAs fulfill their goal of low delay and high throughput, they cannot handle competing flows with different CCAs well [9, 10]. This is especially prevalent when they compete against loss-based CCAs, the latter being more "aggressive". While the delay-based CCAs back off as soon as queuing delay increases, the loss-based ones continue increasing their throughput until the buffer is full and packet loss occurs. This results in the loss-based flows monopolizing the available bandwidth and starving the delay-based ones [11, 12, 13].

This unfairness can be mitigated by using Fair Queuing (FQ) at the bottleneck link, isolating each flow from all other flows and assigning each flow an equal share of bandwidth [14]. Consequently, loss-based flows can no longer acquire bandwidth that delay-based flows have released due to their network-friendly behavior. In this case, delay-based CCAs achieve their goal of high bandwidth and low delay.

While FQ solves many problems regarding Congestion Control (CC), it is still not ubiquitously deployed. Flows can benefit from knowledge of FQ-enabled bottleneck links on the path to dynamically adapt their CC. In the case of FQ they can use a delay-based CCA and otherwise revert to a loss-based one. Our contribution is the design and evaluation of such a mechanism that determines the presence of FQ during the startup phase of a flow's CCA and sets the CCA accordingly at the end of the startup.

We show that this solution reliably determines the presence of fair queuing and that, by using this approach, queuing delay can be considerably lowered while maintaining high throughput if FQ is detected. In case no FQ is enabled, our algorithm detects this as well and a loss-based CC is used, which does not starve when facing competition from other loss-based flows. While our mechanism is beneficial for network flows in general, we argue that it is most useful for long-running delay-sensitive flows, such as cloud gaming, video conferencing etc. To make our work reproducible and to encourage further research, we make the code, the results and the figures publicly available¹.

¹ https://github.com/CN-TU/PCC-Uspace

II. RELATED WORK

Our work depends on active measurements for the startup phase of CC and proposes a new measurement technique to detect Active Queue Management (AQM). This section discusses prior work related to these fields.

A. CC using Active Measurements

Several approaches have recently been proposed which aim to incorporate active measurements into CCAs. [15, 16] introduced the CCA PCC, which uses active measurements to find the sending rate which optimizes a given reward function. [8] introduce the BBR CCA, which uses active measurements to create a model of the network connection. This model is then used to determine the optimal sending rate. An approach proposed by [17] aims to differentiate bulk transfer flows, which they deem "elastic" because they try to grab as much bandwidth as available, from other flows that send at a constant rate, which they call "unelastic". They use this elasticity detection to determine if delay-based CC can be used or not. If cross traffic is deemed elastic, they use classic loss-based CC and otherwise delay-based CC.

B. Flow startup techniques

Besides the classical TCP slow start algorithm [18], some other proposals were made, such as Quick Start [19], which aims to speed up flow start by routers informing the end point which rate they deem appropriate. [20] investigate the impact of AQM and CC during flow startup. They inspect a variety of different AQM algorithms and also investigate FQ. [21] propose "chirping" for flow startup to estimate the available bandwidth. This works by sending packets with different gaps between them to verify what the actual link speed is.
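As a toy illustration of the chirping idea just described (hypothetical code, not the algorithm of [21]): a chirp is a train of packets whose inter-packet gaps shrink from packet to packet; once the sending rate exceeds the link speed, the bottleneck queue stretches the gaps, and the link speed can be read off at that turning point. The function name, the tolerance parameter, and the example numbers below are our own assumptions.

```python
# Toy sketch of chirping-based bandwidth estimation (hypothetical, not
# the algorithm of [21]). While the chirp's sending rate stays below the
# link speed, packets arrive with the gaps they were sent with; beyond
# that point, the bottleneck stretches the gaps to its service time.

def estimate_link_speed(packet_bits, sent_gaps, recv_gaps, tol=0.1):
    """Return an estimated link speed in bit/s, or None if the chirp
    never exceeded the link speed."""
    for sent, recv in zip(sent_gaps, recv_gaps):
        if recv > sent * (1 + tol):
            # Gap was stretched: packets now leave the bottleneck at
            # its service rate, one packet per received gap.
            return packet_bits / recv
    return None

# 1500-byte packets; sent gaps halve each time (sending rate doubles).
sent = [0.0100, 0.0050, 0.0025, 0.00125]  # seconds
# A 4.8 Mbit/s link serves a 1500-byte packet every 2.5 ms, so gaps
# below 2.5 ms are stretched to 2.5 ms.
recv = [0.0100, 0.0050, 0.0025, 0.00250]
print(estimate_link_speed(1500 * 8, sent, recv))  # ≈ 4.8e6 bit/s
```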
[22] propose speeding up flow startup by using packets of different priority. The link is flooded with low-priority packets, and in case the link is not fully utilized during startup, these low-priority packets are going to be transmitted, allowing the starting flow to achieve high throughput. If the link is already saturated, the low-priority packets are going to be discarded and no other flow suffers.

C. Detecting AQM mechanisms

There are few publications that investigate methods to detect the presence of an AQM. [23] propose a method to detect and distinguish certain popular AQM mechanisms. However, unlike our work, it doesn't consider FQ. [24] propose a machine-learning-based approach to fingerprint AQM algorithms. FQ is not considered either.

D. Detecting the presence of a shared bottleneck

Several approaches have been proposed to detect whether, in a set of flows, some of them share a common bottleneck [25, 26, 27]. These approaches typically operate by comparing some statistics of flows to check whether they correlate between flows. For example, it can be checked if throughput correlates between two flows. If throughput correlates, it can mean that these flows share a common bottleneck link: if additional flows join at the bottleneck link, all other flows' throughputs go down, which is the reason why they are correlated. While approaches to detect a shared bottleneck link appear to be similar to what this paper is aiming to do, there are differences: two flows can share a common bottleneck link, but the bottleneck link can have fair queuing. Thus the aims of shared bottleneck detection and fair queuing detection are different.

III. CONCEPT

The overall concept is the following:
1) During flow startup, determine whether the bottleneck link deploys FQ or not.
2) If FQ is used, change to a delay-based CCA, or revert to a loss-based CCA otherwise.

The detection during startup proceeds as follows:
1) Start two flows, with flow 2 sending at double the rate of flow 1.
2) After every Round Trip Time (RTT), if no packet loss occurred, double the sending rate of both flows.
3) If packet loss occurred in both flows in the previous RTT, calculate the following metric:

   loss ratio = (receiving rate of flow 1 / sending rate of flow 1) / (receiving rate of flow 2 / sending rate of flow 2)   (1)

   This loss ratio metric indicates if flow 2, which has a higher sending rate, achieves a higher receiving rate (goodput) than flow 1. If this is the case, there's no FQ; otherwise there is. It is necessary to wait for packet loss in both connections because only then it is certain that the link is saturated. Our algorithm assumes that the link is saturated.
4) If FQ is used, the loss ratio is 2, otherwise it is 1. We choose 1.5 as the cutoff value.
5) Since the presence or absence of FQ is now known, the appropriate CCA can be launched.

[Fig. 1: An example illustrating our proposed flow startup mechanism. Figure 1a shows the sending rate, 1b the receiving rate in case there's no FQ and 1c the receiving rate if there is FQ.]

Figures 1a, 1b and 1c schematically illustrate this mechanism. We assume a sender connected to a receiver, with a bottleneck link between them. We also assume that all flows between the source and the destination follow the same route. On this bottleneck link, the queue is either managed by FQ or by a shared queue (no FQ).

Figure 1a shows how the sending rate (observed before the bottleneck link) is doubled after each RTT. Furthermore, it shows that flow 2 has double the sending rate of flow 1. Figure 1b shows the receiving rate at the receiver after the bottleneck link, if this bottleneck link has a shared queue.

A. Details
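The detection steps above can be sketched numerically. The following is a simplified illustration of Equation (1), not the paper's implementation: it assumes a saturated link, losses proportional to each flow's sending rate under a shared queue, and an equal bandwidth split under FQ; the function name and the example rates are our own.

```python
# Hypothetical sketch of the loss-ratio metric, Equation (1), for a
# saturated bottleneck under a shared (FIFO) queue and under fair
# queuing (FQ). Not the authors' implementation.

def loss_ratio(send1, send2, capacity, fq):
    """Receiving rates at saturation, then Equation (1)."""
    if fq:
        # FQ: each flow gets an equal share of the link (both flows are
        # assumed to send above their fair share at saturation).
        fair_share = capacity / 2
        recv1 = min(send1, fair_share)
        recv2 = min(send2, fair_share)
    else:
        # Shared queue: losses hit flows in proportion to sending rate.
        total = send1 + send2
        recv1 = send1 * capacity / total
        recv2 = send2 * capacity / total
    return (recv1 / send1) / (recv2 / send2)

# Flow 2 sends at twice the rate of flow 1; the link is saturated.
send1, send2, capacity = 20.0, 40.0, 30.0  # Mbit/s (arbitrary units)
print(loss_ratio(send1, send2, capacity, fq=False))  # 1.0 -> no FQ
print(loss_ratio(send1, send2, capacity, fq=True))   # 2.0 -> FQ
CUTOFF = 1.5  # cutoff from step 4
print(loss_ratio(send1, send2, capacity, fq=True) > CUTOFF)  # True: delay-based CCA
```

With a shared queue both flows lose the same fraction of their packets, so the ratio is 1; with FQ both receive the same rate while flow 2 sends twice as much, so the ratio is 2, and the 1.5 cutoff separates the two cases.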