STCP: A New Transport Protocol for High-Speed Networks

Total Pages: 16

File Type: PDF, Size: 1020 KB

STCP: A New Transport Protocol for High-Speed Networks
Georgia State University, ScholarWorks @ Georgia State University
Computer Science Theses, Department of Computer Science
11-17-2009

STCP: A New Transport Protocol for High-Speed Networks
Ranjitha Shivarudraiah, Georgia State University

Follow this and additional works at: https://scholarworks.gsu.edu/cs_theses

Recommended Citation
Shivarudraiah, Ranjitha, "STCP: A New Transport Protocol for High-Speed Networks." Thesis, Georgia State University, 2009. https://scholarworks.gsu.edu/cs_theses/67

This Thesis is brought to you for free and open access by the Department of Computer Science at ScholarWorks @ Georgia State University. It has been accepted for inclusion in Computer Science Theses by an authorized administrator of ScholarWorks @ Georgia State University. For more information, please contact [email protected].

STCP: A NEW TRANSPORT PROTOCOL FOR HIGH-SPEED NETWORKS
by
RANJITHA SHIVARUDRAIAH
Under the Direction of Dr. Xiaojun Cao

ABSTRACT
Transmission Control Protocol (TCP) is the dominant transport protocol today and is likely to be adopted in future high-speed and optical networks. A considerable body of work has modified or tuned the Additive Increase Multiplicative Decrease (AIMD) principle in TCP to enhance network performance. In this work, to efficiently take advantage of the high bandwidth available from high-speed and optical infrastructures, we propose Stratified TCP (STCP), which employs parallel virtual transmission layers in high-speed networks. In this technique, the AIMD principle of TCP is modified to probe the available link bandwidth more aggressively and efficiently, which in turn increases performance. Simulation results show that STCP offers a considerable improvement in performance when compared with other TCP variants such as the conventional TCP protocol and Layered TCP (LTCP).

INDEX WORDS: TCP, Congestion window

STCP: A NEW TRANSPORT PROTOCOL FOR HIGH-SPEED NETWORKS
by
RANJITHA SHIVARUDRAIAH
A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in the College of Arts and Sciences, Georgia State University, 2009

Copyright by Ranjitha Shivarudraiah 2009

Committee Chair: Dr. Xiaojun Cao
Committee: Dr. Raj Sunderraman, Dr. Anu Bourgeois
Electronic Version Approved: Office of Graduate Studies, College of Arts and Sciences, Georgia State University, December 2009

ACKNOWLEDGEMENTS
First of all, I would like to express my special and sincere appreciation to my advisor, Dr. Xiaojun Cao, who has been constant in his valuable guidance and encouragement of my research. He has spent considerable time answering my questions and providing valuable suggestions for my research. The Computer Science Department has provided an ideal working environment during my years of study under the Chair, Prof. Yi Pan. I especially thank Dr. Raj Sunderraman and Dr. Anu Bourgeois for being part of the committee and guiding me to improve and enhance my work. I would also like to thank the administrative and technical specialists for their great service and help: Tammie, Shaochieh, Celena and Venette.

I would also like to thank my husband, Satish Shivarudrappa, for his moral support and encouragement, without which my achievement would not have been possible. Last but not least, I am deeply thankful to my family, especially my parents, for their relentless support and care.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS ..... v
LIST OF TABLES ..... xi
1 INTRODUCTION: INTERNET AND TRANSMISSION CONTROL PROTOCOL (TCP) ..... 1
  1.1 The Internet in Today's World ..... 1
  1.2 TCP in the Internet ..... 2
  1.3 Overview of the Proposed Stratified TCP ..... 7
2 CURRENT PERFORMANCE ISSUES OF TCP AND RELATED INVESTIGATIONS ..... 9
  2.1 Performance Issue 1: TCP for Wireless Environments ..... 10
    2.1.1 Explicit Link Failure Notification (ELFN) and TCP-Feedback (TCP-F) Network Congestion [1] ..... 10
    2.1.2 Ad-Hoc TCP (ATCP) ..... 12
    2.1.3 TCP Vegas [48] [53] ..... 12
    2.1.4 Cross-Layer Interaction of TCP [19] ..... 13
    2.1.5 TCP-Veno [5] ..... 14
    2.1.6 TCP "Adaptive Selection" Concept [20] ..... 16
    2.1.7 Schemes to Enhance Performance during Handoffs ..... 17
    2.1.8 Contention-based Path Selection (COPAS) [39] ..... 18
    2.1.9 Paced TCP [38] ..... 19
  2.2 Performance Issue 2: Switching from Wired to Wireless Networks ..... 19
    2.2.1 Split TCP Connections [66] ..... 20
    2.2.2 Wireless TCP Model for Short-lived Flows [26] ..... 21
  2.3 Performance Issue 3: TCP over Satellite [3] ..... 21
    2.3.1 Performance Enhancement for TCP over Satellite [32] ..... 22
    2.3.2 IPSEC over Satellite Links: A New Flow Identification Method [54] ..... 23
    2.3.3 TCP-Peach [61] ..... 23
    2.3.4 Split TCP Connections in Satellites [3] ..... 24
    2.3.5 Network Striping for Satellites: Split TCP ..... 24
  2.4 Performance Issue 4: TCP Fairness ..... 25
    2.4.1 High-Speed TCP Protocols with Pacing for Fairness and TCP Friendliness [79] ..... 25
    2.4.2 Window Adjustment Method to Enhance TCP Efficiency and Fairness [12] ..... 26
    2.4.3 Utilizing TTL to Enhance TCP Fairness [68] ..... 26
    2.4.4 Gentle High Speed TCP (gHSTCP) [84] ..... 27
  2.5 Performance Issue 5: Delay in Congestion Recovery ..... 27
    2.5.1 TCP Net Reno [50] ..... 27
    2.5.2 Smooth Start and Dynamic Recovery [51] ..... 28
    2.5.3 "Robust Recovery" TCP Scheme [57] ..... 28
    2.5.4 "TCP Smart Framing": Algorithm to Reduce Latency [36] ..... 28
  2.6 Performance Issue 6: TCP Variants for High-Speed Networks ..... 29
    2.6.1 End-to-End Protocol Solutions for Infrastructured Wireless High-Speed Networks: TCP Westwood [5] [37] [43] ..... 35
    2.6.2 TCP Symbiosis [64] ..... 37
    2.6.3 TCP Tuning Daemons for Efficient Link Utilization [62] ..... 38
    2.6.4 Performance Issues and TCP Improvement Techniques for Optical Networks ..... 38
  2.7 Summary ..... 40
3 PARALLEL TCP TRANSMISSION SCHEMES ..... 41
  3.1 Parallel Connections ..... 41
  3.2 GridFTP [16] ..... 42
  3.3 MulTCP [72] ..... 44
  3.4 LTCP [88] .....
Recommended publications
  • RealView Real-Time Library Brochure
Real-Time Library: Template Applications

The RealView® Real-Time Library (RL-ARM) solves the real-time and communication challenges of embedded systems based on ARM® powered MCU devices. It expands the Microcontroller Development Kit with essential components for sophisticated microcontroller applications.

The RealView Real-Time Library is based on a real-time kernel that simplifies the design and implementation of complex, time-critical applications. It includes an efficient Flash File System, a flexible TCP/IP networking suite, and other essential communication drivers. Template applications help you to get started quickly and are royalty-free when used for product development. The RL-ARM components let you focus on the specific requirements of your application.

Components of the Real-Time Library:
• RTX Kernel, a royalty-free, fully deterministic RTOS that meets hard real-time requirements.
• TCP/IP and UDP protocols, along with standard Internet applications such as an HTTP server or SMTP client.
• CAN drivers that utilize RTX mailboxes.
• USB device interfaces for standard USB device classes; no system driver development is required.
• Flash File System with a configurable interface for data storage on RAM or Flash.

Included template applications:
• LED Switch Client/Server: uses a UDP or TCP/IP connection with Ethernet, SLIP, or PPP.
• HTTP Server with CGI scripting: supports dynamic Web pages.
• Telnet Server with user authentication.
• TFTP Server: supports simple file upload.
• SMTP Client: for automated email messages.
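The RTX Kernel listed above is driven through a small task API. As a rough, illustrative sketch only (assuming the classic RL-ARM/RTX task calls os_sys_init, os_tsk_create, os_dly_wait and os_tsk_delete_self, and a hypothetical board-specific blink_led() helper), a minimal two-task program might look like:

```c
#include <RTL.h>                     /* RL-ARM RTX kernel API (assumed toolchain header) */

extern void blink_led(void);         /* hypothetical board-specific helper */

__task void led_task(void) {
  for (;;) {
    blink_led();                     /* periodic application work */
    os_dly_wait(100);                /* sleep 100 ticks; the scheduler runs other tasks */
  }
}

__task void init(void) {
  os_tsk_create(led_task, 1);        /* create the LED task at priority 1 */
  os_tsk_delete_self();              /* the init task is no longer needed */
}

int main(void) {
  os_sys_init(init);                 /* start the RTX kernel with 'init' as the first task */
  for (;;) { }                       /* never reached */
}
```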
  • A Comparison of TCP Automatic Tuning Techniques for Distributed Computing
A Comparison of TCP Automatic Tuning Techniques for Distributed Computing
Eric Weigle, [email protected], Computer and Computational Sciences Division (CCS-1), Los Alamos National Laboratory, Los Alamos, NM 87545

Abstract: Rather than painful, manual, static, per-connection optimization of TCP buffer sizes simply to achieve acceptable performance for distributed applications [1], [2], many researchers have proposed techniques to perform this tuning automatically [3], [4], [5], [6], [7], [8]. This paper first discusses the relative merits of the various approaches in theory, and then provides substantial experimental data concerning two competing implementations: the buffer autotuning already present in Linux 2.4.x and "Dynamic Right-Sizing." This paper reveals heretofore unknown aspects of the problem and current solutions, provides insight into the proper approach for various circumstances, and points toward ways to improve performance further.

Keywords: dynamic right-sizing, autotuning, high-performance networking, TCP, flow control, wide-area network.

Manual tuning is the baseline by which we measure autotuning methods. To perform manual tuning, a human uses tools such as ping and pathchar or pipechar to determine network latency and bandwidth. The results are multiplied to get the bandwidth x delay product, and buffers are generally set to twice that value.

PSC's tuning is a mostly sender-based approach. Here the sender uses TCP packet header information and timestamps to estimate the bandwidth x delay product of the network, which it uses to resize its send window. The receiver simply advertises the maximal possible window. Paper [3] presents results for a NetBSD 1.2 implementation, showing improvement over stock …
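As a quick illustration of the manual-tuning arithmetic described above (buffer roughly twice the bandwidth-delay product), a small sketch with made-up link numbers could look like this:

```c
#include <stdio.h>

/* Rough manual-tuning rule from the excerpt above: measure latency and
 * bandwidth (e.g., with ping and pathchar), multiply to get the
 * bandwidth-delay product (BDP), and size buffers at about twice the BDP.
 * The link numbers below are illustrative only. */
int main(void) {
    double bandwidth_bps = 622e6;   /* assumed OC-12 path: 622 Mbit/s */
    double rtt_s         = 0.070;   /* assumed round-trip time: 70 ms */

    double bdp_bytes    = bandwidth_bps * rtt_s / 8.0;  /* bits -> bytes */
    double buffer_bytes = 2.0 * bdp_bytes;              /* common rule of thumb */

    printf("BDP              : %.0f bytes (~%.1f MB)\n", bdp_bytes, bdp_bytes / 1e6);
    printf("Suggested buffer : %.0f bytes (~%.1f MB)\n", buffer_bytes, buffer_bytes / 1e6);
    return 0;
}
```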
  • Software Vs Hardware Implementations for Real-Time Operating Systems
(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 9, No. 12, 2018
Software vs Hardware Implementations for Real-Time Operating Systems
Nicoleta Cristina Gaitan, Ioan Ungurean. Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava; Integrated Center for Research, Development and Innovation in Advanced Materials, Nanotechnologies, and Distributed Systems for Fabrication and Control (MANSiD), Suceava, Romania

Abstract: In the development of embedded systems, a very important role is played by the real-time operating system (RTOS). RTOSs provide basic services for multitasking on small microcontrollers and the support needed to implement the deadlines imposed by critical systems. The RTOS used can have important consequences for the performance of the embedded system. In order to eliminate the overhead generated by the RTOS, RTOS primitives have begun to be implemented in hardware. Such a solution is the nMPRA architecture (Multi Pipeline Register Architecture, n degree of multiplication), which implements all the primitives of an RTOS in hardware. This article makes a comparison between software RTOSs and nMPRA systems in …

… overhead and the expertise/experience in using an RTOS in previous projects are considered [4]. In an embedded market survey published in 2017, 67% of the embedded projects in progress in 2017 used a form of operating system (RTOS, kernel, software executive). Those who do not use an operating system specified that they do not need one because their applications are very simple and are not real-time applications. The study shows a growing trend in the use of open-source operating systems and a downward trend in the use of commercial operating systems from 2012 to 2017.
  • Network Tuning and Monitoring for Disaster Recovery Data Backup and Retrieval
Network Tuning and Monitoring for Disaster Recovery Data Backup and Retrieval
1 Prasad Calyam, 2 Phani Kumar Arava, 1 Chris Butler, 3 Jeff Jones. 1 Ohio Supercomputer Center, Columbus, Ohio 43212. Email: {pcalyam, cbutler}@osc.edu. 2 The Ohio State University, Columbus, Ohio 43210. Email: [email protected]. 3 Wright State University, Dayton, Ohio 45435. Email: [email protected].

Abstract: Every institution is faced with the challenge of setting up a system that can enable backup of large amounts of critical administrative data. This data should be rapidly retrieved in the event of a disaster. For serving the above Disaster Recovery (DR) purpose, institutions have to invest significantly and address a number of software, storage and networking issues. Although a great deal of attention is paid towards software and storage issues, networking issues are generally overlooked in DR planning. In this paper, we present a DR pilot study that considered different networking issues that need to be addressed and managed during DR planning …

… frequently moving a copy of their local data to tape-drives at off-site sites. With the increased access to high-speed networks, most copy mechanisms rely on IP networks (versus postal service) as a medium to transport the backup data between the institutions and off-site backup facilities. Today's IP networks, which are designed to provide only "best-effort" service to network-based applications, frequently experience performance bottlenecks. The performance bottlenecks can be due to issues such as lack of adequate end-to-end bandwidth provisioning, intermediate network link congestion, or device mis-configurations such as duplex mismatches [1] and outdated NIC drivers at DR-client sites.
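To see why end-to-end bandwidth provisioning dominates DR backup windows, a back-of-envelope estimate helps; the data volume, link rate, and efficiency factor below are assumptions for illustration, not values from the paper:

\[
T_{\text{backup}} \;=\; \frac{\text{data volume}}{\text{effective throughput}}
\;=\; \frac{1\ \text{TB} \times 8\ \text{bits/byte}}{0.5 \times 1\ \text{Gbit/s}}
\;=\; \frac{8\times 10^{12}\ \text{bits}}{5\times 10^{8}\ \text{bits/s}}
\;=\; 1.6\times 10^{4}\ \text{s} \;\approx\; 4.4\ \text{hours}.
\]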
  • RTX Supported Operating Systems Matrix
Operating System and Microsoft Visual Studio Compatibility Matrix for RTX Products

This matrix shows the operating system versions and recommended, tested service-pack combinations, along with supported versions of Microsoft Visual Studio. Previous and subsequent operating-system service packs may work but have not been tested by IntervalZero and therefore cannot be guaranteed to work. If in doubt about hardware or software requirements, please contact IntervalZero Support. Note that the RTX Runtime only supports 32-bit operating systems. (IZ-DOC-X86-0007-R5, 1/24/2012)

RTX versions covered: 7.1, 8.0, 8.1, 8.1.1, 8.1.2, 9.0, 2009, 2009 SP1, 2009 SP2, 2011, and 2011 SP1. Operating systems covered: Windows 2000 Professional and Server (SP4), Windows XP Professional (SP2/SP3), Windows XP Embedded, Windows Embedded Standard 2009, Windows Server 2003 (SP1/SP2, R2), Windows Vista (SP1/SP2), Windows 7 (SP1), and Windows Embedded.
  • Lab 11: Introduction to RTX Real-Time Operating System
Lab 11: Introduction to Keil RTX Real-Time Operating System (RTOS)
COEN-4720 Embedded Systems. Cristinel Ababei, Dept. of Electrical and Computer Engineering, Marquette University

1. Objective
The objective of this lab is to learn how to write simple applications using Keil RTX (ARM Keil's real-time operating system, RTOS).

2. Real-Time Operating Systems (RTOS)
Simple embedded systems typically use a Super-Loop concept (think of the "forever" while loop of main()) in which the application executes each function in a fixed order, and Interrupt Service Routines (ISRs) are used for time-critical program portions. This approach is well suited to small systems but has limitations for more complex applications, including the following disadvantages:
• Time-critical operations must be processed within interrupts (ISRs)
  o ISR functions become complex and require long execution times
  o ISR nesting may create unpredictable execution times and stack requirements
• Data exchange between the Super-Loop and ISRs is via global shared variables
  o The application programmer must ensure data consistency
• A Super-Loop can easily be synchronized with the system timer, but:
  o If a system requires several different cycle times, it is hard to implement
  o Time-consuming functions that exceed the Super-Loop cycle must be split up
  o This creates software overhead and makes the application program hard to understand
• Super-Loop applications become complex and therefore hard to extend
  o A simple change may have unpredictable side effects, and such side effects are time-consuming to analyze
These disadvantages of the Super-Loop concept can be mitigated or solved by using a Real-Time Operating System (RTOS).
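To make the Super-Loop pattern concrete, here is a minimal, generic sketch; the timer flag and the three work functions are hypothetical placeholders, not code from the lab:

```c
#include <stdbool.h>

/* Hypothetical flag set by a timer ISR and shared with the Super-Loop
 * through a global variable: exactly the coupling the lab warns about. */
static volatile bool tick_1ms = false;

/* Placeholder ISR and work functions, for illustration only. */
void SysTick_Handler(void)        { tick_1ms = true; }
static void read_sensors(void)    { /* ... */ }
static void update_control(void)  { /* ... */ }
static void refresh_display(void) { /* ... */ }

int main(void) {
    for (;;) {                      /* the "forever" loop */
        if (tick_1ms) {             /* crude synchronization with the system timer */
            tick_1ms = false;
            read_sensors();         /* every function runs in a fixed order ...   */
            update_control();
            refresh_display();      /* ... so one slow function delays all others */
        }
    }
}
```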
  • Improving the Performance of Web Services in Disconnected, Intermittent and Limited Environments, Joakim Johanson Lindquister, Master's Thesis, Spring 2016
Improving the performance of Web Services in Disconnected, Intermittent and Limited Environments. Joakim Johanson Lindquister, Master's Thesis, Spring 2016.

Abstract
Using Commercial off-the-shelf (COTS) software over networks that are Disconnected, Intermittent and Limited (DIL) may not perform satisfactorily, or can even break down entirely due to network disruptions. DIL networks are characterized by frequent network interruptions for both shorter and longer periods, as well as long delays, low data rates and high packet error rates. In this thesis, we designed and implemented a prototype proxy to improve the performance of Web services in DIL environments. The main idea of our design was to deploy a proxy pair to facilitate HTTP communication between Web service applications. As an optimization technique, we evaluated the usage of alternative transport protocols to carry information across these types of networks. The proxy pair was designed to support different protocols for inter-proxy communication, and we implemented the proxies to support the Hypertext Transfer Protocol (HTTP), the Advanced Message Queuing Protocol (AMQP) and the Constrained Application Protocol (CoAP). By introducing proxies, we were able to break the end-to-end network dependency between two applications communicating in a DIL environment, and thus achieve higher reliability. Our evaluations showed that in most DIL networks, using HTTP yields the lowest Round-Trip Time (RTT). However, with small message payloads and in networks with very low data rates, CoAP had a lower RTT and network footprint than HTTP.

Acknowledgement
This master thesis was written at the Department of Informatics at the Faculty of Mathematics and Natural Sciences, University of Oslo, in 2015/2016.
  • Quantifying the Performance Degradation of IPv6 for TCP in Windows and Linux Networking
Quantifying the Performance Degradation of IPv6 for TCP in Windows and Linux Networking
Burjiz Soorty and Nurul I Sarkar, School of Computing and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand. [email protected], [email protected]

Abstract: Implementing IPv6 in modern client/server operating systems (OSs) will have the drawback of lower throughput as a result of its larger address space. In this paper we quantify the performance degradation of IPv6 for TCP when implemented in modern MS Windows and Linux operating systems. We consider Windows Server 2008 and Red Hat Enterprise Server 5.5 in the study. We measure TCP throughput and round-trip time (RTT) using a customized testbed setting and record the results by observing OS kernel reactions. Our findings reported in this paper provide some insights into IPv6 performance with respect to the impact of modern Windows and Linux OSs on system performance. This study may help network researchers and engineers in selecting a better OS for the deployment of IPv6 on corporate networks.

… Ethernet motivates us to contribute in this area and formulate this paper. The results of this study will be crucial primarily to those organizations that aim to achieve high IPv6 performance via a system architecture that is based on newer Windows or Linux OSs. The analysis of our study further aims to help researchers working in the field of network traffic engineering, and designers, to overcome the challenging issues pertaining to IPv6 deployment.

II. RELATED WORK
In this paper, we briefly review a set of literature on …
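One way to see why the larger IPv6 header costs some TCP throughput is to compare per-packet payload efficiency over a standard 1500-byte Ethernet MTU. This is a generic textbook calculation (20-byte IPv4 header, 40-byte IPv6 header, 20-byte TCP header, no options), not a figure taken from the paper:

\[
\eta_{\mathrm{IPv4}} = \frac{1500 - 20 - 20}{1500} = \frac{1460}{1500} \approx 97.3\,\%,
\qquad
\eta_{\mathrm{IPv6}} = \frac{1500 - 40 - 20}{1500} = \frac{1440}{1500} = 96.0\,\%.
\]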
  • Exploration of TCP Parameters for Enhanced Performance in a Datacenter Environment
Exploration of TCP Parameters for Enhanced Performance in a Datacenter Environment
Mohsin Khalil (National University of Sciences and Technology, Islamabad, Pakistan) and Farid Ullah Khan (Air University, Islamabad, Pakistan)

Abstract: TCP parameters in most operating systems are optimized for generic home and office environments, which have low latencies in their internal networks. However, this arrangement invariably results in compromised network performance when the same parameters are slotted straight into a datacenter environment. We identify key TCP parameters that have a major impact on network performance, based on a study and comparative analysis of home and datacenter environments. We discuss certain criteria for the modification of TCP parameters and support our analysis with simulation results that validate the performance improvement in the case of a datacenter environment.

Index Terms: TCP Tuning, Performance Enhancement, Datacenter Environment.

Linux is the typical go-to operating system for network operators because it is open source. The parameters of a Linux distribution are optimized for networks with link speeds of up to about 1 Gbps. Across this range, TCP buffer sizes remain the same for all networks irrespective of their speed. These buffer lengths may satisfy networks with relatively low link capacities, but they fall short of being optimal for 10 Gbps networks. On the basis of Pareto's principle, we segregate the key TCP parameters that may be tuned for performance enhancement in datacenters. We carried out this study for home and datacenter environments separately, and show that although the pre-tuned TCP parameters provide satisfactory performance in home environments, …
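A per-socket counterpart to the kernel-wide buffer parameters discussed above can be sketched with standard POSIX socket options. The 16 MB value is an arbitrary illustration, not a recommendation from the paper, and the kernel still caps the result at limits such as net.core.rmem_max / wmem_max:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Ask for larger per-socket buffers, as one might when moving a
     * default-tuned application onto a high-bandwidth datacenter path.
     * The 16 MB figure is an assumed example value. */
    int bufsize = 16 * 1024 * 1024;
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) < 0)
        perror("setsockopt SO_RCVBUF");
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) < 0)
        perror("setsockopt SO_SNDBUF");

    /* Report what the kernel actually granted (it may clamp or double it). */
    int actual = 0;
    socklen_t len = sizeof(actual);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &actual, &len) == 0)
        printf("effective receive buffer: %d bytes\n", actual);
    return 0;
}
```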
  • EGI TCP Tuning.Pptx
TCP Tuning
Domenico Vicinanza, DANTE, Cambridge, UK. [email protected]. EGI Technical Forum 2013, Madrid, Spain.

TCP
• The Transmission Control Protocol (TCP) is one of the original core protocols of the Internet protocol suite (IP) and carries more than 90% of Internet traffic.
• It is a transport-layer protocol that delivers a stream of bytes between programs running on computers connected to a local area network, an intranet, or the public Internet.
• TCP communication is connection-oriented, reliable, ordered, and error-checked.
• Web browsers, mail servers, and file transfer programs use TCP.

Connection-Oriented
• A connection is established before any user data is transferred.
• If the connection cannot be established, the user program is notified.
• If the connection is ever interrupted, the user program(s) is notified.

Reliable
• TCP uses a sequence number to identify each byte of data.
• The sequence number identifies the order of the bytes sent, so the data can be reconstructed in order regardless of fragmentation, disordering, or packet loss that may occur during transmission.
• For every payload byte transmitted, the sequence number is incremented.

TCP Segments
• The block of data that TCP asks IP to deliver is called a TCP segment.
• Each segment contains data and control information.

TCP Segment Format
• Header fields: Source Port, Destination Port, Sequence Number, Acknowledgment Number, Offset, Reserved, Control, Window, Checksum, Urgent Pointer, Options (if any), followed by Data.

Client Starts
• A client starts by sending a SYN segment with the following information:
  o The client's ISN (generated pseudo-randomly)
  o The maximum receive window for the client.
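The segment format listed above maps naturally onto a header structure. The following simplified sketch follows the RFC 793 field layout (no options handling, no byte-order conversion) and is for illustration only:

```c
#include <stdint.h>

/* Simplified TCP header corresponding to the fields listed in the slides
 * (RFC 793). Real stacks must also handle network byte order, options,
 * and bit-field portability; this struct is purely illustrative. */
struct tcp_header {
    uint16_t source_port;             /* sending application's port          */
    uint16_t destination_port;        /* receiving application's port        */
    uint32_t sequence_number;         /* position of the first data byte     */
    uint32_t acknowledgment_number;   /* next byte expected from the peer    */
    uint8_t  data_offset_reserved;    /* 4-bit header length + reserved bits */
    uint8_t  control_flags;           /* SYN, ACK, FIN, RST, PSH, URG bits   */
    uint16_t window;                  /* receive window advertisement        */
    uint16_t checksum;                /* covers header, data, pseudo-header  */
    uint16_t urgent_pointer;          /* offset of urgent data, if URG set   */
    /* options (if any) and data follow */
};
```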
  • How to Optimize the Scalability & Performance of a Multi-Core Operating System
How to Optimize the Scalability & Performance of a Multi-Core Operating System: Architecting a Scalable Real-Time Application on an SMP Platform

Overview
When upgrading your hardware platform to a newer and more powerful CPU with more, faster cores, you expect the application to run faster. More cores should reduce the average CPU load and therefore reduce delays. In many cases, however, the application does not run faster and the CPU load is almost the same as for the older CPU. With high-end CPUs, you may even see interferences that break determinism. Why does this happen, and what can you do about it?

The answer: build for scalability. Unless an application is architected to take advantage of a multicore environment, most RTOS applications on 1-core and 4-core IPCs will perform nearly identically (contrary to the expectation that an RTOS …); the extra cores will not be utilized. Even if the application seeks to use multiple cores, other architectural optimizations involving memory access, IO, caching strategies, data synchronization and more must be considered for the system to truly achieve optimal scalability.

While no system delivers linear scalability, you can work to achieve each application's theoretical limit. This paper identifies the key architectural strategies that ensure the best scalability of an RTOS-based application. We will explore CPU architectures, explain why performance does not get the expected boost with newer or more powerful cores, describe how to reduce the effects of the interferences, and provide recommendations for hardware …
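The observation that "no system delivers linear scalability" is usually formalized with Amdahl's law, a standard result rather than something derived in this excerpt: if a fraction p of the work can run in parallel across N cores, the achievable speedup is bounded by

\[
S(N) = \frac{1}{(1 - p) + \dfrac{p}{N}},
\qquad \text{e.g. } p = 0.9,\ N = 4 \;\Rightarrow\; S(4) = \frac{1}{0.1 + 0.225} \approx 3.1 .
\]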
  • Amazon Cloudfront Uses a Global Network of 216 Points Of
Tuning your cloud: Improving global network performance for applications
Richard Wade, Principal Cloud Architect, AWS Professional Services, Singapore
© 2020, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Topics: Understanding application performance (why TCP matters); choosing the right cloud architecture; tuning your cloud.

Mice: Short connections
The majority of connections on the Internet are mice
- Small number of bytes transferred
- Short lifetime
Users expect fast service
- Need fast, efficient startup
- Loss has a high impact, as there is no time to recover

Elephants: Long connections
Most of the traffic on the Internet is carried by elephants
- Large number of bytes transferred
- Long-lived single flows
Users expect stable, reliable service
- Need an efficient, fair steady state
- Time to recover from loss has a notable impact over the connection's lifetime

Transmission Control Protocol (TCP): Startup
Round-trip time (RTT) is the two-way delay (this is what you measure with a ping); in this example, the RTT is 100 ms. Roughly 2 * RTT (200 ms) elapse until the first application request is received: about 1.5 * RTT for connection setup, after which the connection is established and data transfer begins. The lower the RTT, the faster your application responds and the higher the possible throughput.

Transmission Control Protocol (TCP): Growth
A high RTT negatively affects the potential throughput of your application. For new connections, TCP tries to double its transmission rate with every RTT. This algorithm works for large-object (elephant) transfers (MB or GB) but not so well …
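The doubling behaviour in the Growth slide is TCP slow start, and its effect can be seen with a quick worked example; the initial window, MSS, path rate, and RTT below are assumed values for illustration, not numbers from the talk:

\[
\mathrm{cwnd}(n) \approx IW \cdot 2^{\,n}
\quad\Longrightarrow\quad
n \;\ge\; \log_2\!\frac{\mathrm{BDP}}{IW \cdot \mathrm{MSS}} .
\]

With an assumed initial window of $IW = 10$ segments of $\mathrm{MSS} = 1460$ bytes on a 100 Mbit/s path with a 100 ms RTT, the bandwidth-delay product is about $1.25$ MB, so $n = \lceil \log_2(1.25\times 10^{6} / 14\,600) \rceil = 7$ round trips, roughly $0.7$ s, before the congestion window first covers the path.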