Interactions between TCP and Ethernet flow control over Netgear XAVB2001 HomePlug AV links
Radika Veera Valli, Grenville Armitage, Jason But, Thuy Nguyen
Centre for Advanced Internet Architectures, Technical Report 130121A
Swinburne University of Technology, Melbourne, Australia
[email protected], [email protected], [email protected], [email protected]

Abstract—HomePlug AV links are usually slower than the surrounding wired LAN links, creating a bottleneck where traffic may be buffered and delayed. To investigate the interactions between TCP congestion control and Ethernet flow control we trialled four TCP variants over a HomePlug AV link using two Netgear XAVB2001 devices. We observed that the XAVB2001’s use of Ethernet flow control effectively concatenates the buffers in both the XAVB2001 and the directly attached sending host. This led to multi-second RTTs when using NewReno and CUBIC (loss-based) TCPs, which is significantly troublesome for multimedia traffic on home LANs. In contrast, Vegas and CDG (delay-based) TCPs kept buffer utilisation low, and induced five to ten milliseconds RTT. While keeping RTT low, CDG achieved similar goodput to NewReno and CUBIC. Vegas achieved roughly one third the goodput of the other three TCPs.

I. INTRODUCTION

HomePlug AV is a consumer-oriented technology for creating bridged Ethernet-like services across electrical mains-power lines already present in most homes [1]. HomePlug AV devices are often marketed using the term “Powerline AV 200” and have 100Mbps wired LAN ports. Achievable Ethernet-layer bitrates depend heavily on actual traffic patterns and number of HomePlug AV devices active at a given time. With previously observed ‘best case’ rates in the 40 to 70Mbps range [2], HomePlug AV links tend to be a bottleneck between 100Mbps wired LANs.

Bottlenecks usually implement internal buffering to minimise packet losses during transient overloads. Unfortunately, TCP traffic and interactive (e.g. multimedia) traffic generally do not interact well when sharing bottleneck buffers.

Common TCP congestion control (CC) algorithms (such as NewReno [3] and CUBIC [4]) tend to cause cyclical filling (until packet loss occurs) and draining (while the sender temporarily slows) of bottleneck buffers along network paths. A side-effect of such loss-based CC behaviour is that all traffic sharing the bottleneck will experience additional latency due to increased average queueing delays [5]. This is particularly problematic for real-time application flows (such as VoIP and online games) given the growth of arguably-gratuitous buffering (“buffer bloat”) in network devices, interface cards and network software stacks [6].

In this report we borrow from, and extend, previous work by one of the current authors [7]. Our new work explores in greater detail the latency and flow control issues that emerge when end-to-end TCP is allowed to make use of all available buffering along the path. Our bottleneck HomePlug AV link consists of two Netgear XAVB2001 (Powerline AV 200) devices with 100Mbps LAN ports. We create long-lived data flows between FreeBSD hosts using two loss-based CC algorithms (NewReno and CUBIC) and two delay-based CC algorithms (CDG [8] version 0.1 and Vegas [9]).

We identify two key characteristics of the Netgear XAVB2001 devices:
• They can buffer roughly 2720 Ethernet frames
• They use Ethernet flow control to push back on the sending NIC when their own buffer is full

Ethernet flow control effectively concatenates the sending host’s NIC buffers and the XAVB2001’s buffers when the HomePlug AV link becomes overloaded. Consequently we observed RTTs (round trip times) in excess of two seconds when NewReno and CUBIC were allowed to ‘fill the pipe’. Under similar conditions CDG and Vegas were observed to induce RTTs between five and ten milliseconds. While keeping RTT low, CDG achieved a substantial fraction of NewReno and CUBIC throughput and Vegas achieved roughly one third the throughput of the other three TCPs.
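To put those observations in perspective, the following rough calculation (ours, not the report's) converts the XAVB2001's roughly 2720-frame buffer into a worst-case queueing delay at the previously observed 40 to 70Mbps rates. It assumes MTU-sized 1500 byte frames and a steady egress rate, so it is only an order-of-magnitude sketch:

    # Time to drain a completely full XAVB2001 buffer (illustrative only).
    # Assumptions: MTU-sized 1500 byte Ethernet frames, steady egress rate.
    FRAMES = 2720          # approximate buffer capacity reported for the XAVB2001
    FRAME_BYTES = 1500     # assumed frame size

    def drain_time_seconds(egress_mbps):
        """Seconds needed to drain a full buffer at the given egress rate."""
        return FRAMES * FRAME_BYTES * 8 / (egress_mbps * 1e6)

    for rate in (40, 55, 70):
        print("%d Mbps -> %.2f s" % (rate, drain_time_seconds(rate)))
    # 40 Mbps -> 0.82 s, 55 Mbps -> 0.59 s, 70 Mbps -> 0.47 s

On its own the device's buffer therefore accounts for roughly half a second to just under a second of queueing delay when full; the host-side buffers that Ethernet flow control co-opts (Section II-C) add to this, which is consistent with the multi-second RTTs observed for NewReno and CUBIC.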
In environments where real-time/interactive multimedia applications share a HomePlug AV link, CDG would appear a good choice of TCP algorithm, avoiding the excessive RTTs caused by NewReno and CUBIC.

The rest of this report is organised as follows. Simplified background on HomePlug AV, Ethernet flow control and TCP is provided in Section II and our experimental methodology is outlined in Section III. Section IV explores the potential performance of an XAVB2001 link, and the XAVB2001's internal buffer size is estimated in Section V. The impact of Ethernet flow control on FreeBSD's TCP behaviour is discussed in Section VI. Our measurements of TCP performance and induced RTTs are presented in Section VII. Sections VIII and IX describe future work and conclusions respectively.

II. BACKGROUND INFORMATION

A. HomePlug AV Technology

The HomePlug Powerline Alliance developed the HomePlug AV (HPAV) technology to communicate over the powerline infrastructure. Between a pair of adapters HomePlug AV technology provides a peak unidirectional raw physical (PHY) layer rate of 200 Mbps and a peak information rate of 150 Mbps [1]. These are often marketed using the term “Powerline AV 200” to differentiate them from both the original 85Mbps HomePlug 1.0 specification and newer 500Mbps HomePlug AV devices (which use proprietary extensions for significantly faster physical layer rates [10], [11]).

HomePlug AV uses orthogonal frequency-division multiplexing (OFDM) to encode data on carrier frequencies. Levels of electrical noise on the line influence achievable PHY rates, and devices may synchronise at different physical rates in each direction [12]. Adapters use “variable bit loading”, adapting the encoding rate (number of bits per carrier signal) at regular intervals to achieve the highest possible throughput given channel conditions at the time [13].

HomePlug AV links are half-duplex and contention-based (somewhat like WiFi), with additional overheads introduced by error detection, error correction, carrier sensing, channel access mechanisms and so forth. The Ethernet bridging rate and the IP layer rate over the powerline link are lower than the PHY rate due to encapsulation overheads and the MAC protocols used to resolve contention between devices trying to transmit at the same time.

B. The role and impact of buffering

A node in a network path becomes a bottleneck (congestion point) when the packet arrival (ingress) rate at the node exceeds the departure (egress) rate. In such cases, packets need to be temporarily queued inside the node (in a buffer) to avoid losses during bursts of traffic. While the queue (buffer) is close to empty, packets take minimal time transiting through the node. However, during periods where the queue is significantly full, all packets transiting the node experience additional queuing delays as they wait for packets ahead of them to depart. In the extreme, the queue fills up and new packets are dropped (lost).

Buffer size is a key consideration. Small buffers may overflow easily and lose packets frequently, but impose a low bound on peak queuing delays. Large buffers will absorb more congestion before losing packets, but all traffic sharing the queue will experience high queueing delays when any one or more flows congest the bottleneck node. High queuing delays contribute to higher end-to-end RTTs, which negatively impacts interactive applications (such as Voice over IP and online games) and the dynamic responsiveness of protocols that rely on timely feedback (such as TCP itself).

C. Ethernet flow control

When Ethernet traffic arrives on the Netgear XAVB2001's 100Mbps LAN port faster than it can be forwarded across the HomePlug AV link, the XAVB2001 uses a combination of internal buffering and Ethernet flow control [14] to manage the overload. Ethernet flow control works as follows.

Consider a host PC's NIC (network interface card) Nhost directly connected to port Pswitch of a congested Ethernet switch. Ethernet flow control allows Pswitch to send PAUSE frames back to Nhost requesting that the NIC temporarily pause transmission of Ethernet frames towards Pswitch for a specific period of time. Nhost responds to these PAUSE frames by pausing transmission for the time period requested by Pswitch. The overwhelmed switch never has to lose any Ethernet frames.

In practical terms Nhost honours these PAUSE frames by briefly buffering outbound Ethernet frames which the host's own network stack is still passing down for transmission. The PAUSE frames effectively push the bottleneck back into the NIC, briefly co-opting the NIC's own buffers as an extension of whatever buffers exist inside Pswitch. From an end-to-end perspective, the bottleneck appears to have a buffer roughly the sum of the buffers in both Nhost and Pswitch.

The network stacks of different operating systems will have their own ways of reacting if Nhost's own buffer is exhausted while honouring a PAUSE frame from Pswitch.
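For concreteness, the sketch below lays out the IEEE 802.3x PAUSE frame that carries this signalling: a MAC control frame addressed to the reserved multicast address 01-80-C2-00-00-01, with EtherType 0x8808, opcode 0x0001 and a 16-bit pause time measured in quanta of 512 bit times. It is purely an illustration of the frame format; the source MAC is a made-up example and nothing here is taken from the XAVB2001's implementation:

    import struct

    PAUSE_DST = bytes.fromhex("0180c2000001")   # reserved MAC-control multicast address
    MAC_CONTROL_ETHERTYPE = 0x8808
    PAUSE_OPCODE = 0x0001

    def build_pause_frame(src_mac, pause_quanta):
        """Build an 802.3x PAUSE frame; one pause quantum is 512 bit times."""
        header = PAUSE_DST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE)
        payload = struct.pack("!HH", PAUSE_OPCODE, pause_quanta)
        return (header + payload).ljust(60, b"\x00")   # pad to minimum frame size (FCS not shown)

    def pause_seconds(pause_quanta, link_mbps=100.0):
        """Pause duration requested by the frame at a given link rate."""
        return pause_quanta * 512 / (link_mbps * 1e6)

    frame = build_pause_frame(bytes.fromhex("001122334455"), 0xFFFF)
    print(len(frame), round(pause_seconds(0xFFFF), 3))   # 60 bytes, ~0.336 s at 100Mbps

A pause time of zero cancels any pause currently in effect, so a switch can release the sender as soon as its own buffer starts to drain.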
D. TCP Congestion Control Algorithms

Following on from [7] we chose to utilise two loss-based and two delay-based TCP CC variants – NewReno, CUBIC, CDG (CAIA Delay Gradient) and Vegas. Here we summarise the key TCP mechanisms for maximising throughput while minimising congestion.

Every TCP sender maintains a congestion window (cwnd) representing the sender's current best estimate of the number of unacknowledged bytes that the network can handle without causing further congestion. Every TCP receiver advertises a receiver window (rwnd) representing the amount of data the receiver can handle at any given point in time. The sender should allow no more than the smaller of cwnd and rwnd to be transmitted (and unacknowledged) at any given point in time, to avoid overwhelming either the network or the receiver.

1 FreeBSD supports NewReno, CUBIC, Vegas and CDG version 0.1 algorithms.

Figure 1. [Testbed: two hosts (FreeBSD-8.2-RELEASE, 32 Bit, and FreeBSD 9.0-STABLE r229848M, 64 Bit) in Room-1 and Room-2, each connected via a 100baseTX LAN port to one of two XAVB2001 adapters (1 and 2) bridged over the MAINS, carrying an IPERF UDP transfer.]
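As a toy illustration of the cwnd/rwnd rule in Section II-D (our own sketch, not FreeBSD's code), the sender keeps its unacknowledged data at or below the smaller of the two windows:

    def sendable_bytes(cwnd, rwnd, bytes_in_flight):
        """Additional bytes a sender may inject right now."""
        window = min(cwnd, rwnd)
        return max(0, window - bytes_in_flight)

    # e.g. cwnd grown to 64KB, receiver advertising 48KB, 40KB already unacknowledged:
    print(sendable_bytes(64 * 1024, 48 * 1024, 40 * 1024))   # 8192 bytes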