Bufferbloat and Beyond


Bufferbloat and Beyond
Removing Performance Barriers in Real-World Networks
Toke Høiland-Jørgensen

Cover illustration: The elephant symbolises a big download, commonly referred to as an "elephant flow", which blocks the link with its bulk so everything has to wait behind it. The mice are smaller flows, such as web pages, that take up little space but should be able to move freely. Through better queue management, the mice are guided around the elephant so they don't have to wait, and interactivity is restored. The penguin is Tux, the Linux mascot, who stands at the bottleneck and directs traffic onto the right path.

The topic of this thesis is the performance of computer networks in general, and the internet in particular. While network performance has generally improved with time, over the last several years we have seen examples of performance barriers limiting network performance. In this work we explore such performance barriers and look for solutions.

Our exploration takes us through three areas where performance barriers are found: the bufferbloat phenomenon of excessive queueing latency, the performance anomaly in WiFi networks and related airtime resource sharing problems, and the problem of implementing high-speed programmable packet processing in an operating system. In each of these areas we present solutions that significantly advance the state of the art.

The work in this thesis spans all three aspects of the field of computing, namely mathematics, engineering and science. We perform mathematical analysis of algorithms, engineer solutions to the problems we explore, and perform scientific studies of the network itself. All our solutions are implemented as open source software, including both contributions to the upstream Linux kernel and the Flent test tool, developed to support the measurements performed in the rest of the thesis.

DOCTORAL THESIS | Karlstad University Studies | 2018:42
Print & layout: University Printing Office, Karlstad 2018
Bufferbloat and Beyond: Removing Performance Barriers in Real-World Networks
Toke Høiland-Jørgensen

DOCTORAL THESIS | Karlstad University Studies | 2018:42
urn:nbn:se:kau:diva-69416
ISSN 1403-8099
ISBN 978-91-7063-878-7 (Print)
ISBN 978-91-7063-973-9 (pdf)
© The author
Distribution: Karlstad University, Faculty of Health, Science and Technology, Department of Mathematics and Computer Science, SE-651 88 Karlstad, Sweden, +46 54 700 10 00
Print: Universitetstryckeriet, Karlstad 2018
WWW.KAU.SE

Abstract

The topic of this thesis is the performance of computer networks. While network performance has generally improved with time, over the last several years we have seen examples of performance barriers limiting network performance. In this work we explore such performance barriers and look for solutions.

The problem of excess persistent queueing latency, known as bufferbloat, serves as our starting point; we examine its prevalence in the public internet, evaluate solutions for better queue management, and explore how to improve on existing solutions to make them easier to deploy.

Since an increasing number of clients access the internet through WiFi networks, examining WiFi performance is a natural next step. Here we also look at bufferbloat, as well as the so-called performance anomaly, where stations with poor signal strength can severely impact the performance of the whole network. We present solutions for both of these issues, and additionally design a mechanism for assigning policies for distributing airtime between devices on a WiFi network. We also analyse the "TCP Small Queues" latency minimisation technique implemented in the Linux TCP stack and optimise its performance over WiFi networks.

Finally, we explore how high-speed network processing can be enabled in software, by looking at the eXpress Data Path framework that has been gradually implemented in the Linux kernel as a way to enable high-performance programmable packet processing directly in the operating system's networking stack.

A special focus of this work has been to ensure that the results are carried forward to the implementation stage, which is achieved by releasing implementations as open source software. This includes parts that have been accepted into the Linux kernel, as well as a separate open source measurement tool, called Flent, which is used to perform most of the experiments presented in this thesis and is also used widely in the bufferbloat community.

Keywords: Bufferbloat, AQM, WiFi, XDP, TSQ, Flent, network measurement, performance evaluation, fairness, queueing, programmable packet processing

Acknowledgements

The journey is almost over. It has been challenging, exciting, annoying, incredible and above all interesting. And it would not have been the same without the amazing people who have accompanied and supported me along the way, all of whom deserve a huge thank you.
First of all, thank you to the many collaborators I have had these past years. Foremost among them are of course my advisors, Anna Brunström and Per Hurtig, who have always been available with their competent feedback, ideas, suggestions and support. A special thank you also to Dave Täht, with whom I have collaborated extensively, on both the papers we have co-authored and a list of other projects too long to reproduce here. Dave was also, along with Jim Gettys, one of the people who initially introduced me to the concept of bufferbloat, and they both helped set me on the path towards the work represented in this thesis, for which I am exceedingly grateful. Thank you to Jesper Brouer for his collaboration, and for convincing me to continue the work with Linux and open source post-thesis. Many thanks to Fanny Reinholtz for designing the awesome dust jacket for the printed version of this thesis. And finally, thank you to my other co-authors, and all the members of the bufferbloat, make-wifi-fast, Linux and OpenWrt communities that I have collaborated with over the last six years.

A journey such as this would not have been possible without the friends who have shared my joys and sorrows along the way. Thank you to my colleagues at the university, my old friends in Denmark, and my new(er) friends in Karlstad. Thank you to Lea, Stefan, Stina and Leonardo for climbing, skiing and everything else. To Martin, Daniel, Alex and Miriam for hanging out and having fun. To Timo, Leo and Ricardo for bringing the band (back) together. To the innebandy team in its various incarnations. To the dwarves, and to Sunni, Morten, Sidsel and Torben for keeping in touch through my five-year Swedish exile.

Last, but certainly not least, thank you so much to my wonderful loving family. Thank you mum and dad for helping me get to this point, for always being there for me, and for your love and respect. And thank you Sascha, for coming with me to Sweden and brightening up my days.

Karlstad, October 2018
Toke Høiland-Jørgensen

List of appended papers

I. Toke Høiland-Jørgensen, Bengt Ahlgren, Per Hurtig and Anna Brunstrom. Measuring Latency Variation in the Internet. ACM CoNEXT '16, December 12–15, 2016, Irvine, CA, USA.

II. Toke Høiland-Jørgensen, Per Hurtig and Anna Brunstrom. The Good, the Bad and the WiFi: Modern AQMs in a Residential Setting. Computer Networks, vol. 89, pp. 90–106, October 2015.

III. Toke Høiland-Jørgensen. Analysing the Latency of Sparse Flows in the FQ-CoDel Queue Management Algorithm. IEEE Communications Letters, October 2018.

IV. Toke Høiland-Jørgensen, Dave Täht and Jonathan Morton. Piece of CAKE: A Comprehensive Queue Management Solution for Home Gateways. IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN 2018), 25–27 June 2018, Washington, DC.

V. Toke Høiland-Jørgensen, Michal Kazior, Dave Täht, Per Hurtig and Anna Brunstrom. Ending the Anomaly: Achieving Low Latency and Airtime Fairness in WiFi. 2017 USENIX Annual Technical Conference (USENIX ATC 17), July 12–14, 2017, Santa Clara, CA.

VI. Toke Høiland-Jørgensen, Per Hurtig and Anna Brunstrom. PoliFi: Airtime Policy Enforcement for WiFi. Under submission.

VII. Carlo Augusto Grazia, Natale Patriciello, Toke Høiland-Jørgensen, Martin Klapez and Maurizio Casoni.
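The eXpress Data Path (XDP) framework mentioned in the abstract lets a small, verified program run on every incoming packet inside the kernel, before the regular networking stack sees it. As a purely illustrative sketch (not code from this thesis or its papers), a minimal XDP program that drops all UDP traffic and passes everything else could look roughly like the following, assuming a typical libbpf-based build:

    /* Minimal XDP sketch: drop incoming UDP, pass everything else.
     * Illustrative only; compile with something like:
     *   clang -O2 -g -target bpf -c xdp_drop_udp.c -o xdp_drop_udp.o */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/in.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    SEC("xdp")
    int xdp_drop_udp(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        /* Bounds checks are mandatory: the eBPF verifier rejects the
         * program if any packet access could run past data_end. */
        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;
        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *iph = (void *)(eth + 1);
        if ((void *)(iph + 1) > data_end)
            return XDP_PASS;

        /* The verdict is returned before the packet ever reaches the
         * kernel's own queues, which is what makes XDP fast. */
        if (iph->protocol == IPPROTO_UDP)
            return XDP_DROP;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";

A program like this would typically be attached with iproute2 (for example, ip link set dev eth0 xdp obj xdp_drop_udp.o sec xdp), although the exact loading mechanism depends on the tooling and kernel version.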
Recommended publications
  • Control Methods for Data Flow in Communication Networks
    Control Methods for Data Flow in Communication Networks. Dissertation presented in partial fulfillment of the requirements for the degree Doctor of Philosophy in the Graduate School of The Ohio State University, by Peng Yan, M.S., The Ohio State University, 2003. Dissertation committee: Professor Hitay Özbay (adviser), Professor Kevin M. Passino, Professor Ümit Özgüner. Department of Electrical Engineering. © Copyright by Peng Yan 2003.

    ABSTRACT: In this dissertation, we investigate various control methods for data flow in communication networks. First, we develop a rate-based flow controller for self-similar network traffic, which consists of a robust H∞ control block and an adaptive LMMSE capacity predictor. The controller guarantees robust stability against time-varying time-delay uncertainties and improves the transient response by predicting the self-similar cross traffic. Window-based congestion control methods are also explored for TCP traffic on IP networks. We propose a variable structure approach to Active Queue Management (AQM) supporting Explicit Congestion Notification (ECN). By analyzing the robustness and performance of the control scheme for the nonlinear TCP/AQM model, we show that the proposed design has good performance and robustness with respect to the uncertainties of the round-trip time (RTT) and the number of active TCP sessions, which are central to the notion of AQM. Alternatively, we design robust H∞ AQM controllers for the linearized TCP/AQM model in the presence of uncertain time delays. The H∞ performance is analyzed, and a switching control scheme is introduced to improve the system performance. Motivated by the Linear Parameter Varying (LPV) nature of the linearized TCP/AQM model, a switching H∞ control method is further investigated for LPV systems, where we provide some stability conditions in terms of the dwell time and the average dwell time.
  • A Letter to the FCC [PDF]
    Before the FEDERAL COMMUNICATIONS COMMISSION, Washington, DC 20554. In the Matter of Amendment of Part 0, 1, 2, 15 and 18 of the Commission's Rules regarding Authorization of Radio frequency Equipment (ET Docket No. 15-170); Request for the Allowance of Optional Electronic Labeling for Wireless Devices (RM-11673).

    Summary: The rules laid out in ET Docket No. 15-170 should not go into effect as written. They would cause more harm than good and risk a significant overreach of the Commission's authority. Specifically, the rules would limit the ability to upgrade or replace firmware in commercial, off-the-shelf home or small-business routers. This would damage the compliance, security, reliability and functionality of home and business networks. It would also restrict innovation and research into new networking technologies. We present an alternate proposal that better meets the goals of the FCC, not only ensuring the desired operation of the RF portion of a Wi-Fi router within the mandated parameters, but also assisting in the FCC's broader goals of increasing consumer choice, fostering competition, protecting infrastructure, and increasing resiliency to communication disruptions. If the Commission does not intend to prohibit the upgrade or replacement of firmware in Wi-Fi devices, the undersigned would welcome a clear statement of that intent.

    Introduction: We recommend the FCC pursue an alternative path to ensuring Radio Frequency (RF) compliance from Wi-Fi equipment. We understand there are significant concerns regarding existing users of the Wi-Fi spectrum, and a desire to avoid uncontrolled change. However, we most strenuously advise against prohibiting changes to firmware of devices containing radio components, and furthermore advise against allowing non-updatable devices into the field.
  • Latency and Throughput Optimization in Modern Networks: A Comprehensive Survey. Amir Mirzaeinnia, Mehdi Mirzaeinia, and Abdelmounaam Rezgui
    Abstract—Modern applications are highly sensitive to communication delays and throughput. This paper surveys major attempts at reducing latency and increasing throughput. These methods are surveyed across different networks and surroundings such as wired networks, wireless networks, application-layer transport control, Remote Direct Memory Access, and machine-learning-based transport control.

    Index Terms—Rate and Congestion Control, Internet, Data Center, 5G, Cellular Networks, Remote Direct Memory Access, Named Data Network, Machine Learning.

    I. INTRODUCTION: Recent applications such as Virtual Reality (VR), autonomous cars or aerial vehicles, and telehealth need high throughput and low latency communication. On the one hand, every user likes to send and receive their data as quickly as possible. On the other hand, the network infrastructure that connects users has limited capacity, which is usually shared among users. There are some technologies that dedicate their resources to particular users, but they are not very commonly used, because although dedicated resources are more secure, they are more expensive to implement. Sharing a physical channel among multiple transmitters needs a technique to control their rates at the proper time. The very first network congestion collapse was observed and reported by Van Jacobson in 1986; it caused roughly a thousand-fold rate reduction, from 32 kbps to 40 bps [3]. Since then, very different variations of the Transmission Control Protocol (TCP)…
  • Buffer De-Bloating in Wireless Access Networks
    Buffer De-bloating in Wireless Access Networks, by Yuhang Dai. A thesis submitted to the University of London for the degree of Doctor of Philosophy, School of Electronic Engineering & Computer Science, Queen Mary University of London, United Kingdom, Sep 2018.

    Abstract: Excessive buffering brings a new challenge into networks, known as bufferbloat, which is harmful to delay-sensitive applications. Wireless access networks consist of Wi-Fi and cellular networks. In this thesis, the performance of CoDel and RED is investigated in Wi-Fi networks with different types of traffic. Results show that CoDel and RED work well in Wi-Fi networks, due to the similarity of the protocol structures of Wi-Fi and wired networks. It is difficult for RED to tune its parameters in cellular networks because of the time-varying channel. CoDel needs modifications, as it drops the packet at the head of the queue, and in cellular networks the head packet will be segmented. The major contribution of this thesis is that three new AQM algorithms tailored to cellular networks are proposed to alleviate large queuing delays. A channel quality aware AQM is proposed using the CQI. The proposed algorithm is tested with a single cell topology, and simulation results show that it reduces the average queuing delay for each user by 40% on average with TCP traffic compared to CoDel. A QoE aware AQM is proposed for VoIP traffic. Drops and delay are monitored and turned into QoE by mathematical models. The proposed algorithm is tested in NS3 and compared with CoDel; it enhances the QoE of VoIP traffic, and the average end-to-end delay is reduced by more than 200 ms when multiple users with different CQI compete for the wireless channel.
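    Since CoDel is central to this entry (and to the thesis above), a compressed sketch of its dequeue-time decision logic may help make the discussion concrete. This is a simplification for illustration, not the RFC 8289 or Linux implementation; the constants are the commonly cited defaults of 5 ms and 100 ms, and several corner cases are omitted:

    /* Simplified CoDel drop decision, evaluated when a packet is dequeued.
     * sojourn_us is how long that packet waited in the queue. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <math.h>

    #define TARGET_US   5000     /* acceptable standing queue delay */
    #define INTERVAL_US 100000   /* window for judging the delay "persistent" */

    struct codel_state {
        bool     dropping;          /* currently in the dropping state? */
        uint32_t count;             /* drops since entering that state */
        uint64_t first_above_time;  /* when delay first stayed above target */
        uint64_t drop_next;         /* time of the next scheduled drop */
    };

    bool codel_should_drop(struct codel_state *s, uint64_t now, uint64_t sojourn_us)
    {
        if (sojourn_us < TARGET_US) {
            /* Standing queue is gone: reset and stop dropping. */
            s->dropping = false;
            s->first_above_time = 0;
            return false;
        }
        if (!s->dropping) {
            if (s->first_above_time == 0) {
                s->first_above_time = now + INTERVAL_US;
            } else if (now >= s->first_above_time) {
                /* Delay stayed above target for a full interval: start dropping. */
                s->dropping = true;
                s->count = 1;
                s->drop_next = now + (uint64_t)(INTERVAL_US / sqrt(s->count));
                return true;
            }
            return false;
        }
        if (now >= s->drop_next) {
            /* Control law: each successive drop comes sooner than the last. */
            s->count++;
            s->drop_next += (uint64_t)(INTERVAL_US / sqrt(s->count));
            return true;
        }
        return false;
    }

    The point to notice is that CoDel reacts to how long packets have actually waited (their sojourn time) rather than to queue length in bytes or packets, which is what lets the same defaults work across very different link rates.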
  • Evaluation of Priority Scheduling and Flow Starvation for Thin Streams with FQ-Codel
    Evaluation of Priority Scheduling and Flow Starvation for Thin Streams with FQ-CoDel. Eduard Grigorescu, Chamil Kulatunga and Gorry Fairhurst (School of Engineering, University of Aberdeen, UK); Nicolas Kuhn (Télécom Bretagne, IRISA). 2015 European Conference on Networks and Communications (EuCNC).

    Abstract—Bufferbloat is the result of oversized buffers and induced high end-to-end latency experienced by applications across the Internet. This additional delay can adversely impact thin streams that frequently exchange small amounts of data, but have stringent latency requirements. Active Queue Management (AQM) techniques, such as Controlled Delay (CoDel), can control the queuing delay in a network device to ensure low latency by dropping packets to indicate incipient congestion. FlowQueue-CoDel (FQ-CoDel) is a scheduling scheme that creates one sub-queue per flow and applies CoDel on each of them. FQ-CoDel features: (1) priority scheduling for low-rate traffic; (2) flow isolation; (3) queue management with CoDel. First, this paper fills a gap in the understanding of FQ-CoDel by analyzing what…

    …algorithms, managing packet scheduling and isolation/capacity allocation among flows, can be introduced. As one example of a scheme that mixes both classes, FlowQueue-CoDel (FQ-CoDel) [7] is a scheduling scheme that features prioritization and flow isolation. The awareness of the latency resulting from over-provisioned buffers has been accompanied by an increase in real-time applications such as Voice over Internet Protocol, gaming or financial trading applications. As one example, the latency experienced by gamers can directly impact the perceived value of the network service [10].
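    To make the sub-queue mechanism described in this entry more concrete, here is a loose sketch of FQ-CoDel-style flow classification at enqueue time. It is not taken from the paper or from the Linux code, and all names in it are made up for the illustration; the essential idea is that packets are hashed on their flow 5-tuple into one of a fixed set of sub-queues, and a sub-queue that goes from empty to busy is treated as a "new" (sparse) flow and scheduled ahead of the old ones:

    #include <stdint.h>

    #define NUM_QUEUES 1024
    #define QUANTUM    1514   /* bytes a sub-queue may send per scheduling round */

    struct subqueue {
        int deficit;          /* deficit round-robin counter, in bytes */
        int backlog;          /* bytes currently queued */
        /* per-queue packet list and CoDel state omitted */
    };

    struct fq_codel {
        struct subqueue queues[NUM_QUEUES];
        /* lists of "new" and "old" sub-queues omitted; new queues are
         * always served first, which is what prioritises sparse flows */
    };

    /* Hash the flow 5-tuple to pick a sub-queue. */
    static uint32_t flow_hash(uint32_t saddr, uint32_t daddr,
                              uint16_t sport, uint16_t dport, uint8_t proto)
    {
        uint32_t h = saddr * 2654435761u ^ daddr * 2246822519u;
        h ^= (((uint32_t)sport << 16) | dport) * 3266489917u;
        h ^= proto;
        return h % NUM_QUEUES;
    }

    void fq_enqueue(struct fq_codel *fq, uint32_t saddr, uint32_t daddr,
                    uint16_t sport, uint16_t dport, uint8_t proto, int len)
    {
        struct subqueue *q =
            &fq->queues[flow_hash(saddr, daddr, sport, dport, proto)];

        if (q->backlog == 0) {
            /* Queue was idle: treat this flow as new/sparse, give it a fresh
             * quantum and (in the real scheme) put it on the priority list. */
            q->deficit = QUANTUM;
        }
        q->backlog += len;
        /* storing the packet and running per-queue CoDel at dequeue omitted */
    }

    At dequeue time, the real scheme runs CoDel independently on each sub-queue and cycles through the queues with a deficit round-robin scheduler, so a bulk flow can only ever delay packets in its own sub-queue.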
  • Measuring Latency Variation in the Internet
    Measuring Latency Variation in the Internet. Toke Høiland-Jørgensen (Dept of Computer Science, Karlstad University, Sweden), Bengt Ahlgren (SICS, Box 1263, 164 29 Kista, Sweden), Per Hurtig (Dept of Computer Science, Karlstad University, Sweden) and Anna Brunstrom (Dept of Computer Science, Karlstad University, Sweden).

    ABSTRACT: We analyse two complementary datasets to quantify the latency variation experienced by internet end-users: (i) a large-scale active measurement dataset (from the Measurement Lab Network Diagnostic Tool) which sheds light on long-term trends and regional differences; and (ii) passive measurement data from an access aggregation link which is used to analyse the edge links closest to the user. The analysis shows that variation in latency is both common and of significant magnitude, with two thirds of samples exceeding 100 ms of variation. The variation is seen within single connections as well as between connections to…

    1. INTRODUCTION: As applications turn ever more interactive, network latency plays an increasingly important role for their performance. The end-goal is to get as close as possible to the physical limitations of the speed of light [25]. However, today the latency of internet connections is often larger than it needs to be. In this work we set out to quantify how much. Having this information available is important to guide work that sets out to improve the latency behaviour of the internet; and for authors of latency-sensitive applications (such as Voice over IP, or even many web applications) that seek to predict the performance they can expect from the network.
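    As a toy illustration of what quantifying "latency variation" can mean (this is only an example of the general idea, not the methodology used in the paper), the following computes how far a chosen percentile of a connection's RTT samples sits above its minimum, baseline RTT:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    /* Difference between the p-th percentile RTT and the minimum RTT,
     * in the same unit as the samples (here: milliseconds). */
    static double latency_variation(double *rtt, size_t n, double p)
    {
        qsort(rtt, n, sizeof(*rtt), cmp_double);
        size_t idx = (size_t)(p / 100.0 * (double)(n - 1));
        return rtt[idx] - rtt[0];
    }

    int main(void)
    {
        double samples[] = { 20.1, 22.3, 19.8, 140.0, 25.5, 380.2, 21.0, 24.7 };
        size_t n = sizeof(samples) / sizeof(samples[0]);
        printf("95th percentile above baseline: %.1f ms\n",
               latency_variation(samples, n, 95.0));
        return 0;
    }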
  • The Effects of Latency on Player Performance and Experience in A
    The Effects of Latency on Player Performance and Experience in a Cloud Gaming System. Robert Dabrowski, Christian Manuel, Robert Smieja. May 7, 2014. An Interactive Qualifying Project Report submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE in partial fulfillment of the requirements for the Degree of Bachelor of Science. Approved by: Professor Mark Claypool, Advisor; Professor David Finkel, Advisor. This report represents the work of three WPI undergraduate students submitted to the faculty as evidence of completion of a degree requirement. WPI routinely publishes these reports on its web site without editorial or peer review.

    Abstract: Due to the increasing popularity of thin client systems for gaming, it is important to understand the effects of different network conditions on users. This paper describes our experiments to determine the effects of latency on player performance and quality of experience (QoE). For our experiments, we collected player scores and subjective ratings from users as they played short game sessions with different amounts of additional latency. We found that QoE ratings and player scores decrease linearly as latency is added. For every 100 ms of added latency, players reduced their QoE ratings by 14% on average. This information may provide insight to game designers and network engineers on how latency affects the users, allowing them to optimize their systems while understanding the effects on their clients. This experiment design should also prove useful to thin client researchers looking to conduct user studies while controlling not only latency, but also other network conditions like packet loss.
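    Read as a simple linear model (an illustrative restatement of the numbers quoted above, not a formula from the report), the finding corresponds to roughly

    \[ \mathrm{QoE}(\ell) \;\approx\; \mathrm{QoE}_0 \left(1 - 0.14 \cdot \frac{\ell}{100\ \mathrm{ms}}\right) \]

    so that, for example, 200 ms of added latency would be expected to cost on the order of 28% of the baseline QoE rating.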
  • AQM Algorithms and Their Interaction with TCP Congestion Control Mechanisms
    Control de Congestión TCP y mecanismos AQM (TCP Congestion Control and AQM Mechanisms). Sergio Maeso Jiménez. Bachelor's thesis (Trabajo Fin de Grado), Bachelor's Degree in Telematics Engineering, 2016/2017, Universidad Carlos III de Madrid. Supervised by Celeste Campo Vázquez and Carlos García Rubio. Leganés, 2 October 2017. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives licence. Approved by the supervising committee: Marta Portela García (chair), Carlos Alario Hoyos, Iñaki Úcar Marqués (secretary), Javier Manuel Muñoz García (deputy).

    Acknowledgements: I would like to thank my tutors Celeste Campo and Carlos García for all the support they gave me while I was doing this thesis with them. To my parents, who believe in me against all odds.

    Abstract: In recent years, the relevance of delay over throughput has been particularly emphasized. Nowadays our networks are getting more and more sensitive to latency due to the proliferation of applications and services like VoIP, IPTV or online gaming, where a low delay is essential for proper performance and a good user experience. Most of this unnecessary delay is created by the misbehaviour of the many buffers that populate the Internet.
  • Evaluating the Latency Impact of Ipv6 on a High Frequency Trading System
    Evaluating the Latency Impact of IPv6 on a High Frequency Trading System. Nereus Lobo, Vaibhav Malik, Chris Donnally, Seth Jahne, Harshil Jhaveri. A capstone paper submitted as partial fulfillment of the requirements for the degree of Masters in Interdisciplinary Telecommunications at the University of Colorado, Boulder, 4 May 2012. Project directed by Dr. Pieter Poll and Professor Andrew Crain.

    1 Introduction: Employing latency-dependent strategies, financial trading firms rely on trade execution speed to obtain a price advantage on an asset in order to earn a profit of a fraction of a cent per asset share [1]. Through successful execution of these strategies, trading firms are able to realize profits on the order of billions of dollars per year [2]. The key to success for these trading strategies is ultra-low latency processing and networking systems, henceforth called High Frequency Trading (HFT) systems, which enable trading firms to execute orders with the necessary speed to realize a profit [1]. However, competition from other trading firms magnifies the need to achieve the lowest latency possible. A 1 µs latency disadvantage can result in unrealized profits on the order of $1 million per day [3]. For this reason, trading firms spend billions of dollars on their data center infrastructure to ensure the lowest propagation delay possible [4]. Further, trading firms have expanded their focus on latency optimization to other aspects of their trading infrastructure, including application performance, inter-application messaging performance, and network infrastructure modernization [5].
  • An Adaptive Active Queue Management Algorithm in Internet
    Université du Québec à Chicoutimi. An Adaptive Active Queue Management Algorithm in Internet. Mémoire presented by Wang Jiang in partial fulfilment of the requirements of the extended Master's programme in computer science (Maîtrise en informatique) of the Université du Québec à Montréal, June 2006. The thesis is distributed under the university's standard non-exclusive licence permitting non-commercial reproduction and distribution, with the author retaining moral and intellectual property rights.

    ABSTRACT: The Random Early Detection (RED) algorithm is a recommended active queue management scheme that is expected to provide several Internet performance advantages, such as minimizing packet loss and router queueing delay, avoiding global synchronization of sources, guaranteeing high link utilization, and ensuring fairness.
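    For reference, the core of the classic RED scheme that this work builds on can be sketched in a few lines of C. This is an illustrative simplification of Floyd and Jacobson's algorithm, not the adaptive variant the thesis proposes, and the thresholds and weight below are just common example values:

    #include <stdbool.h>
    #include <stdlib.h>

    #define MIN_TH   5.0     /* packets: below this average, never drop early */
    #define MAX_TH  15.0     /* packets: above this average, always drop */
    #define MAX_P    0.1     /* early-drop probability reached at MAX_TH */
    #define W_Q      0.002   /* EWMA weight for the average queue size */

    struct red_state {
        double avg;          /* exponentially weighted average queue length */
    };

    /* Called on each packet arrival with the instantaneous queue length.
     * Returns true if the packet should be dropped (or ECN-marked) early. */
    bool red_should_drop(struct red_state *s, int queue_len)
    {
        s->avg = (1.0 - W_Q) * s->avg + W_Q * queue_len;

        if (s->avg < MIN_TH)
            return false;
        if (s->avg >= MAX_TH)
            return true;

        /* Between the thresholds, drop with a probability that grows
         * linearly with the average queue size. */
        double p = MAX_P * (s->avg - MIN_TH) / (MAX_TH - MIN_TH);
        return ((double)rand() / RAND_MAX) < p;
    }

    Real RED implementations additionally scale the drop probability by the number of packets accepted since the last drop and handle idle periods specially; adaptive variants typically adjust parameters such as the maximum drop probability on the fly.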
  • Low-Latency Networking: Where Latency Lurks and How to Tame It. Xiaolin Jiang, Hossein S. Ghadikolaei, Gabor Fodor, Eytan Modiano, Zhibo Pang, Michele Zorzi and Carlo Fischione
    Abstract—While the current generation of mobile and fixed communication networks has been standardized for mobile broadband services, the next generation is driven by the vision of the Internet of Things and mission-critical communication services requiring latency in the order of milliseconds or sub-milliseconds. However, these new stringent requirements have a large technical impact on the design of all layers of the communication protocol stack. The cross-layer interactions are complex due to the multiple design principles and technologies that contribute to the layers' design and fundamental performance limitations. We will be able to develop low-latency networks only if we address the problem of these complex interactions from the new point of view of sub-millisecond latency. In this article, we propose a holistic analysis and classification of the main design…

    …The second step in the communication network revolutions has made the PSTN indistinguishable from our everyday life. That step is the Global System for Mobile (GSM) communication standards suite. In the beginning of the 2000s, GSM became the most widely spread mobile communications system, thanks to its support for user mobility, subscriber identity confidentiality, subscriber authentication, as well as confidentiality of user traffic and signaling [4]. The PSTN and its extension via the GSM wireless access networks have been a tremendous success in terms of Weiser's vision, and also paved the way for new business models built around mobility, high reliability, and latency as required from the perspective of voice services.
  • TCP and Queue Management
    White Paper: TCP and Queue Management (Agilent N2X).

    Background and Motivation: Transmission Control Protocol (TCP) and various queue management algorithms such as tail drop and random early detection (RED) are intimately related topics. Historically, the two evolved together and have had a symbiotic relationship. However, a poor understanding of the relationship between TCP and queue-management algorithms is at the root of why measured performance in routers is dramatically different from the actual performance of live networks, leading to a need for massive over-provisioning. Explanations of how the various components of TCP work, and the effect various queuing schemes have on TCP, tend to be isolated on specific topics and scattered. This paper attempts to bring the key concepts together in…

    Slow start: The slow start mechanism was a direct attempt to avoid congestion collapse by increasing the packet rate of a connection in a controlled fashion – slowly at first, faster later on – until congestion is detected (i.e. packets are dropped), ultimately arriving at a steady packet rate, or equilibrium. To achieve this goal, the designers of the slow start mechanism chose an exponential ramp-up function to successively increase the window size. Slow start introduces a new parameter called the congestion window, or cwnd, which specifies the number of packets that can be sent without needing a response from the server. TCP starts off slowly (hence the name "slow start") by sending just one packet, then waits for a response (an ACK) from the receiver. These ACKs confirm that it is safe to send more packets into the network (i.e.
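    The exponential ramp-up described above is easy to see in a toy model. This is purely illustrative (a real TCP implementation also handles ssthresh adjustment, loss recovery, delayed ACKs and much more); it just shows why growing the window by one segment per ACK doubles it every round trip:

    #include <stdio.h>

    int main(void)
    {
        int cwnd = 1;        /* congestion window, in segments */
        int ssthresh = 64;   /* assumed slow-start threshold, in segments */

        for (int rtt = 0; cwnd < ssthresh; rtt++) {
            printf("RTT %d: cwnd = %d segments\n", rtt, cwnd);
            /* Every segment sent is ACKed within the round trip, and each
             * ACK adds one segment to cwnd, so the window doubles per RTT. */
            cwnd *= 2;
        }
        printf("Reached ssthresh (%d segments); slow start would end here\n",
               ssthresh);
        return 0;
    }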