Bufferbloat: What's Wrong with the Internet?


practice
DOI: 10.1145/2076450.2076464
Article development led by queue.acm.org
Communications of the ACM, February 2012, Vol. 55, No. 2

A discussion with Vint Cerf, Van Jacobson, Nick Weaver, and Jim Gettys.

Internet delays now are as common as they are maddening. But that means they end up affecting system engineers just like all the rest of us. And when system engineers get irritated, they often go looking for what's at the root of the problem. Take Jim Gettys, for example. His slow home network had repeatedly proved to be the source of considerable frustration, so he set out to determine what was wrong, and he even coined a term for what he found: bufferbloat.

Bufferbloat refers to excess buffering inside a network, resulting in high latency and reduced throughput. Some buffering is needed; it provides space to queue packets waiting for transmission, thus minimizing data loss. In the past, the high cost of memory kept buffers fairly small, so they filled quickly and packets began to drop shortly after the link became saturated, signaling to the communications protocol the presence of congestion and thus the need for compensating adjustments.

Because memory now is significantly cheaper than it used to be, buffering has been overdone in all manner of network devices, without consideration for the consequences. Manufacturers have reflexively acted to prevent any and all packet loss and, by doing so, have inadvertently defeated a critical TCP congestion-detection mechanism, with the result being worsened congestion and increased latency.
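To make the scale of the effect concrete, the worst-case delay a drop-tail buffer can add is roughly its size divided by the rate at which the link drains it. The short Python sketch below works through that arithmetic for a few illustrative buffer sizes and edge-link speeds; the specific numbers are hypothetical, not measurements from this discussion.

```python
def queueing_delay_s(buffer_bytes: int, link_rate_bps: float) -> float:
    """Worst-case delay added by a full drop-tail buffer: size / drain rate."""
    return buffer_bytes * 8 / link_rate_bps

# Hypothetical edge-device configurations, for illustration only.
examples = [
    ("64 KB buffer on a 100 Mbps link", 64 * 1024, 100e6),
    ("256 KB buffer on a 10 Mbps link", 256 * 1024, 10e6),
    ("256 KB buffer on a 1 Mbps uplink", 256 * 1024, 1e6),
]

for label, size_bytes, rate_bps in examples:
    delay_ms = queueing_delay_s(size_bytes, rate_bps) * 1000
    print(f"{label}: about {delay_ms:.0f} ms of added latency when full")
```

The same buffer that is invisible on a fast link adds seconds of delay on a slow uplink, and because a drop-tail queue discards nothing until it is completely full, TCP's loss-based congestion signal arrives only after all of that delay has already built up.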
Now that the problem has been diagnosed, people are working feverishly to fix it. This case study considers the extent of the bufferbloat problem and its potential implications. Working to steer the discussion is Vint Cerf, popularly known as one of the "fathers of the Internet." As the co-designer of the TCP/IP protocols, Cerf did indeed play a key role in developing the Internet and related packet data and security technologies while at Stanford University from 1972-1976 and with the U.S. Department of Defense's Advanced Research Projects Agency (DARPA) from 1976-1982. He currently serves as Google's chief Internet evangelist.

Van Jacobson, presently a research fellow at PARC, where he leads the networking research program, is also central to this discussion. Considered one of the world's leading authorities on TCP, he helped develop the random early detection (RED) queue-management algorithm that has been widely credited with allowing the Internet to grow and meet ever-increasing throughput demands over the years. Prior to joining PARC, Jacobson was a chief scientist at Cisco Systems and later at Packet Design Networks.

Also participating is Nick Weaver, a researcher at the International Computer Science Institute (ICSI) in Berkeley, where he was part of the team that developed Netalyzr, a tool that analyzes network connections and has been instrumental in detecting bufferbloat and measuring its impact across the Internet.

Rounding out the discussion is Jim Gettys, who edited the HTTP/1.1 specification and was a co-designer of the X Window System. He now is a member of the technical staff at Alcatel-Lucent Bell Labs, where he focuses on systems design and engineering, protocol design, and free software development.

[Pull quote, Vint Cerf] "The key issue we've been talking about is that all this excessive buffering ends up breaking many of the timeout mechanisms built into our network protocols. That gets us to the question of just how bad the problem really is and how much worse it's likely to get as the speeds out at the edge of the net continue to increase."

VINT CERF: What caused you to do the analysis that led you to conclude you had problems with your home network related to buffers in intermediate devices?

JIM GETTYS: I was running some bandwidth tests on an old IPsec (Internet Protocol Security)-like device that belongs to Bell Labs and observed latencies of as much as 1.2 seconds whenever the device was running as fast as it could. That didn't entirely surprise me, but then I happened to run the same test without the IPsec box in the way, and I ended up with the same result. With 1.2-second latency accompanied by horrible jitter, my home network obviously needed some help. The rule of thumb for good telephony is 150-millisecond latency at most, and my network had nearly 10 times that much.

My first thought was that the problem might relate to a feature called PowerBoost that comes as part of my home service from Comcast. That led me to drop a note to Rich Woundy at Comcast since his name appears on the Internet draft for that feature. He lives in the next town over from me, so we arranged to get together for lunch. During that lunch, Rich provided me with several pieces to the puzzle. To begin with, he suggested my problem might have to do with the excessive buffering in a device in my path rather than with the PowerBoost feature. He also pointed out that ICSI has a great tool called Netalyzr that helps you figure out what your buffering is. Also, much to my surprise, he said a number of ISPs had told him they were running without any queue management whatsoever—that is, they weren't running RED on any of their routers or edge devices.

The very next day I managed to get a wonderful trace. I had been having trouble reproducing the problem I had experienced earlier, but since I was using a more recent cable modem this time around, I had a trivial one-line command for reproducing the problem. The resulting SmokePing plot clearly showed the severity of the problem, and that motivated me to take a packet capture so I could see just what in the world was going on. About a week later, I saw basically the same signature on a Verizon FiOS [a bundled home communications service operating over a fiber network], and that surprised me. Anyway, it became clear that what I'd been experiencing on my home network wasn't unique to cable modems.
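The queue management Gettys mentions, RED (the algorithm Jacobson helped develop), avoids exactly that failure mode by dropping a small fraction of packets before the queue is full. The following is a minimal sketch of the classic RED drop decision, assuming a packet-count queue with illustrative thresholds; it omits parts of the published algorithm such as the idle-time correction and the count-based probability adjustment, so treat it as an illustration rather than a deployable implementation.

```python
import random

class RedQueue:
    """Minimal random early detection (RED) sketch.

    Simplified: no idle-time correction, no 'gentle' mode. The drop
    probability rises linearly once the averaged queue length passes
    min_th, reaching max_p at max_th.
    """

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002, limit=50):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight, self.limit = weight, limit
        self.avg = 0.0          # EWMA of the instantaneous queue length (packets)
        self.queue = []

    def enqueue(self, pkt) -> bool:
        # Update the moving average on every arrival.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)

        if self.avg >= self.max_th or len(self.queue) >= self.limit:
            return False                      # forced drop
        if self.avg > self.min_th:
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False                  # early, probabilistic drop
        self.queue.append(pkt)
        return True

    def dequeue(self):
        return self.queue.pop(0) if self.queue else None
```

The early, probabilistic drops are the whole point: they tell TCP senders to slow down while the standing queue is still short, instead of only after a bloated buffer has filled and latency has already ballooned.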
[Pull quote, Jim Gettys] "In pulling on a string to figure out why my home router was misbehaving so badly, I discovered that all operating systems—Linux, Windows, and Macintosh alike—also are guilty of resorting to excessive buffering. This phenomenon pervades the whole path. You definitely can't fix it at any single location."

CERF: I assume you weren't the only one making noises about these sorts of problems?

GETTYS: I had been hearing similar complaints all along. In fact, Dave Reed [Internet network architect, now with SAP Labs], about a year earlier had reported problems in 3G networks that also appeared to be caused by excessive buffering. He was ultimately ignored when he publicized his concerns, but I've since been able to confirm that Dave was right. In his case, he would see daily high latency without much packet loss during the day, and then the latency would fall back down again at night as flow on the overall network dropped.

Dave Clark [Internet network architect, currently senior research scientist at MIT] had noticed that the DSLAM (Digital Subscriber Line Access Multiplexer) his micro-ISP runs had way too much buffering—leading to as much as six seconds of latency. And this is something he had observed six years earlier, which is what had led him to warn Rich Woundy of the possible problem.

CERF: Perhaps there's an important life lesson here suggesting you may not want to simply throw away outliers on the grounds they're probably just flukes. When outliers show up, it might be a good idea to find out why.

NICK WEAVER: But when testing for this particular problem, the outliers actually prove to be the good networks.

GETTYS: At first I couldn't be sure that what I'd been observing was anything more than just a couple of flukes. After seeing the Netalyzr data, however, I could see how widespread the problem really was. I can still remember the day when I first saw the data for the Internet as a whole plotted out. That was rather horrifying.

WEAVER: It's actually a pretty straightforward test that allowed us to capture all that data. In putting together Netalyzr at ICSI, we started out with a design philosophy that one anonymous commenter later captured very nicely: "This brings new meaning to the phrase, 'Bang it with a wrench.'" Basically, we just set out to hammer on everything—except we weren't interested in doing a bandwidth test since there were plenty of good ones out there already.

I remembered, however, that Nick McKeown and others had ranted about how amazingly over-buffered home networks often proved to be, so buffering seemed like a natural thing to test for. It turns out that would also give us a bandwidth test as a side consequence. Thus we developed a pretty simple test. Over just a 10-second period, it sends a packet and then waits for a packet to return. Then each time it receives a packet back, it sends two more. It either sends large packets and receives small ones in return, or it sends small packets and receives large ones in return.
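A buffer estimate can be derived from a test like the one Weaver describes by comparing the round-trip time of an idle path with the round-trip time once the bottleneck is saturated, then multiplying the extra delay by the achieved sending rate. The sketch below is a simplified, hypothetical version of that calculation, not Netalyzr's actual code, and it uses made-up sample numbers.

```python
def estimate_buffer_bytes(base_rtt_s: float, loaded_rtt_s: float,
                          achieved_rate_bps: float) -> float:
    """Estimate bottleneck buffering from latency inflation under load.

    The extra queueing delay (loaded RTT minus idle RTT) multiplied by the
    rate at which the bottleneck drains approximates the bytes it can hold.
    """
    extra_delay_s = max(loaded_rtt_s - base_rtt_s, 0.0)
    return extra_delay_s * achieved_rate_bps / 8

# Hypothetical measurements: 20 ms idle RTT, 1.2 s RTT with the uplink
# saturated at 2 Mbps -> roughly 295 KB of buffering in the path.
print(estimate_buffer_bytes(0.020, 1.2, 2e6))
```

Sending two packets for every one that returns is simply a quick way to keep the bottleneck saturated during the 10-second window without knowing its bandwidth in advance, which is also why the test yields a bandwidth measurement as a side effect.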