The Impact of Bit Error Rate on LAN Throughput

Synopsis

Claims that a low level of re-transmissions could have a devastating effect on LAN throughput prompted us to examine the mechanisms that might cause this. The following analysis challenges the claim that just 1% packet re-transmission can reduce throughput by as much as 80%, but illustrates that such claims are not without foundation. We will attempt to answer a number of related questions using basic technical assumptions, but also by adopting a worst-case scientific approach.

Question 1: What is the correlation between Bit Error Rate (BER) and packet loss?

Let's consider a fully-loaded switched Ethernet, where frames (or packets) are being transferred back-to-back, separated by the usual 96-bit Inter-Frame Gap (IFG). With a switched LAN we can load each connection to almost 100% - it is only the IFG that prevents this. It's well known that smaller packets are less susceptible to interference, as they are statistically more likely to miss noise caused by internal or external sources (e.g. cabling crosstalk or electromagnetic interference). We will therefore examine both small and large packet scenarios.

It's easy to calculate the relationship between BER at the physical transmission level and packet loss. For a minimum sized Ethernet frame of 64 octets, the BER required to corrupt a single bit in every frame is:

Ethernet min_frame + IFG = 64 x 8 bits + 96 bits = 1 bit in every 608 bits

For a maximum sized Ethernet frame of 1518 octets, the BER required to corrupt a single bit in every frame is:

Ethernet max_frame + IFG = 1518 x 8 bits + 96 bits = 1 bit in every 12,240 bits

The above scenarios equate to 100% packet loss, or zero throughput, based on the assumption that bit errors are equally spaced. Of course, this is not the case, as bit errors will be statistically distributed; we will return to this important assumption later. 50% packet loss occurs when the BER is halved, 25% packet loss when it is halved again, and so on. The result is illustrated below.

[Figure: packet throughput (%) against Bit Error Rate (1 in x, from 10^2 to 10^8) for Ethernet min_frames (64 octets), Ethernet max_frames (1518 octets) and 16x Ethernet max_frames (big packets), with the 1% loss point marked on each curve.]

From this straightforward analysis, 1% packet loss corresponds to:

• a BER of 1 in 67,000 bits (approximately) for Ethernet min_frames
• a BER of 1 in 1,250,000 bits (approximately) for Ethernet max_frames
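As a cross-check on these figures, here is a minimal sketch that relates BER to frame loss assuming independent, randomly distributed bit errors rather than the evenly spaced errors assumed above; its results land close to, though not exactly on, the approximate figures quoted here. The Python helper names and the frame sizes passed in are purely illustrative.

```python
# Minimal sketch: relating BER to Ethernet frame loss, assuming independent,
# randomly distributed bit errors (the figures in the text assume evenly
# spaced errors, so the two models agree only approximately).

IFG_BITS = 96  # Inter-Frame Gap transmitted after every frame


def frame_bits(frame_octets: int) -> int:
    """Bits on the wire for one frame plus the following IFG."""
    return frame_octets * 8 + IFG_BITS


def frame_loss_probability(ber: float, frame_octets: int) -> float:
    """Probability that at least one bit of a frame is corrupted."""
    return 1.0 - (1.0 - ber) ** frame_bits(frame_octets)


def ber_for_loss(target_loss: float, frame_octets: int) -> float:
    """BER at which the expected frame loss equals target_loss."""
    return 1.0 - (1.0 - target_loss) ** (1.0 / frame_bits(frame_octets))


if __name__ == "__main__":
    for name, octets in [("min_frame (64 octets)", 64),
                         ("max_frame (1518 octets)", 1518)]:
        ber = ber_for_loss(0.01, octets)
        print(f"{name}: 1% loss at roughly 1 bit error in {1 / ber:,.0f} bits")
    # Frame loss for max_frames at a BER of 1e-8 (the 10BASE-T maximum
    # quoted later in the text): roughly 0.012%, i.e. about 1 frame in 8,000.
    print(f"max_frame loss at BER 1e-8: {frame_loss_probability(1e-8, 1518):.4%}")
```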
At this point, it is tempting to remind ourselves that the max_frame figure is a factor of 80 worse than the maximum BER specified for a LAN physical layer. At the specified BER, the packet loss would be more like 0.0125% for max_frames, or 1 errored frame in every 8,000 frames sent. But this is not the end of the story.

Some protocols, such as TCP/IP, may send big packets across the LAN. How big? Well, it's quite usual to see 8 kbytes, and sometimes even 24 kbytes. These big packets are sent as a series of Ethernet max_frames and are error-managed on an end-to-end basis at the transport layer. In other words, up to 16 Ethernet max_frames will be sent to transfer the bigger packet and, if any of the frames is damaged in transit, the whole sequence will be re-transmitted. Admittedly, this is unusual; however, let's continue with our pessimistic analysis.

If we treat 16 Ethernet max_frames as a single large packet, then 1% packet loss now corresponds to a BER of 1 in 20,000,000 bits, as shown in the graph above. This is now only a factor of 5 worse than the maximum BER specified for 10BASE-T (10^-8), at which the loss would be 0.2%, or 1 errored big packet in every 500 sent. The situation now starts getting a little less comfortable.

Question 2: Does LAN speed impact the relationship between BER and packet loss?

Good question, to which the short answer is yes. It is easy to calculate the average frame error frequency against BER for a particular data transfer rate. This is shown below, together with the maximum BER performance specified for the main twisted-pair LAN technologies. Note that the data transfer rate is the actual number of bits sent per second, not the operational bit rate of the LAN.

[Figure: Bit Error Rate (1 in y, from 10^6 to 10^12) against average time between errored frames (0.1 to 1,000 seconds) for data transfer rates of 1, 10, 100 and 1000 Mbit/s, with the maximum BERs specified for 10BASE-T, 100BASE-TX and 1000BASE-T marked.]

This is an interesting graph, as it illustrates quite clearly the relationship between BER, data transfer rate and frame error rate. Within the range of error rates shown, there is no distinction between small and large packets. The graph shows the average times between errored frames corresponding to the maximum BER specified for 10BASE-T, 100BASE-TX and 1000BASE-T. For example, a 100BASE-TX LAN carrying 10 Mbit/s (10% load) has an average period between errored frames of 100 seconds.

It is worth noting at this point that optical fibre LAN technologies generally specify maximum BERs 100 times better than their copper counterparts (for the same speed). This is a major technological advantage in high-capacity transmission systems.
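The relationship behind this graph is simple to reproduce: in this BER range an errored frame contains at most one bad bit, so the average period between errored frames is 1 / (BER x data transfer rate). The sketch below works through the two examples used here; note that the 10^-8 figure for 10BASE-T is quoted in the text, whereas the 10^-9 figure for 100BASE-TX is read off the graph rather than taken from the standard, so treat it as an assumption.

```python
# Minimal sketch: average period between errored frames for a given BER and
# data transfer rate. In the BER range discussed here an errored frame holds
# at most one bad bit, so errored frames occur at about BER * bits/s.

def seconds_between_errored_frames(ber: float, transfer_rate_bps: float) -> float:
    """Mean period between errored frames, in seconds."""
    return 1.0 / (ber * transfer_rate_bps)


if __name__ == "__main__":
    # 10BASE-T at full load, using the 1e-8 maximum BER quoted in the text.
    print(seconds_between_errored_frames(1e-8, 10e6))   # 10.0 seconds
    # The 100BASE-TX example: 10 Mbit/s of traffic (10% load), with a 1e-9
    # maximum BER read off the graph (an assumption, not a quoted figure).
    print(seconds_between_errored_frames(1e-9, 10e6))   # 100.0 seconds
```

Applied at 100% load, the same arithmetic reproduces the observation in Question 3 below that the specified maximum BERs for the twisted-pair technologies each correspond to an average frame error period of about 10 seconds.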
Question 3: Are there any higher-level system implications of the above?

Once again, the short answer is yes. Higher-level protocols generally have timers to cover the error-recovery process. Windows of 100 ms and upwards are often set to bound the amount of time devoted to a re-transmission at the transport layer. If this happens occasionally, the impact on throughput will be negligible. If, on the other hand, re-transmissions are occurring every few seconds or less, we could experience significant throughput degradation, especially if a high level of damaged frames is associated with a single device, such as a server. It is interesting that the main twisted-pair LAN technologies have maximum BER specifications that correspond to an average frame error period of 10 seconds when fully loaded (100% data transfer rate).

Question 4: How realistic are the above figures?

The short answer is that they are very pessimistic. They represent the limit of reality, or what might conceivably happen in the extreme worst case. I would offer the following qualifications by way of a reality check:

1. The correlation between BER and packet loss will not be as pessimistic as stated in the presence of real-life statistical noise, which will contain bursts of multiple bit errors occurring less frequently. A burst of errors will typically damage a single frame, allowing many of the frames counted as lost in this analysis to go undamaged.

2. Maximum BER performance corresponds to a minimally-compliant system, comprising LAN equipment and cabling, operating in the worst electromagnetic environment for which it was designed. This is very seldom seen in practice.

3. Scientific analysis is a more meaningful process than taking measurements on a single experimental configuration. Worst case is almost impossible to simulate in practice.

Question 5: So, is there any substance in the claim that 1% packet re-transmission can reduce throughput by as much as 80%?

Well, 1% packet loss corresponds to a BER in the range of 10^-5 to 10^-7, depending on packet size. This would lead to multiple re-transmissions per second, even for modest data transfer rates of 1 to 10 Mbit/s, and could indeed have a serious impact on throughput when using higher-level protocols such as TCP/IP. As much as 80% degradation? Possibly. Of course, a BER of 10^-7 or worse will only be seen in a non-compliant system, hence the claim is unrealistic. There are between one and five orders of magnitude between the BERs associated with 1% packet loss and the worst-case BERs specified for twisted-pair LAN technologies; add another two orders of magnitude when using optical fibre. This represents a significant margin of safety. I rest my case.

About Brand-Rex

Brand-Rex is a designer and manufacturer of copper and fibre based cabling systems, headquartered in Glenrothes, Scotland, with facilities across Europe. Brand-Rex has two primary businesses: Connectivity and Speciality. Its Connectivity division designs and manufactures cabling systems (both copper and fibre) for data communications and is the No. 2 player in Europe. The Speciality division exclusively produces cables that are used for control, communications, power and instrumentation within hostile environments.