End-To-End Lightpaths for Large File Transfers Over High Speed Long Distance Networks

Corrie Kost, Steve McDonald (TRIUMF), Bryan Caron (Alberta), Wade Hong (Carleton)
CHEP03

Outline
• TB data transfer from TRIUMF to CERN (iGrid 2002)
• e2e lightpaths
• 10 GbE technology
• Performance tuning (disk I/O, TCP)
• Throughput results
• Future plans and activities

The Birth of a Demo
• Suggestion from Canarie to the Canadian HEP community to participate at iGrid2002
• ATLAS Canada discussed the demo at a Vancouver meeting in late May
• Initial meeting at TRIUMF by participants in mid-July to plan the demo
• Sudden realization that there was very little time to get all the elements in place!

So what are we going to do?
• Demonstrate a manually provisioned "e2e" lightpath
• Transfer 1 TB of ATLAS MC data generated in Canada from TRIUMF to CERN
• Test out 10GbE technology and channel bonding
• Establish a new benchmark for high-performance disk-to-disk throughput over a large distance

TRIUMF
• TRI University Meson Facility
• Operated as a joint venture by Alberta, UBC, Carleton, SFU and Victoria
• Located on the UBC campus in Vancouver
• Proposed location for the Canadian ATLAS Tier 1.5 site

The iGrid2002 Network
(network diagram)

e2e Lightpaths
• Core design principle of CA*net 4
• Ultimately, to give control of lightpath creation, teardown and routing to the end user
• Users "own" their resources and can negotiate sharing with other parties
  – Hence, "Customer Empowered Networks"
• Ideas evolved from initial work on OBGP
• Provides a flexible infrastructure for emerging grid applications via Web Services

e2e Lightpaths
• Grid services architecture for user control and management
• NEs are distributed objects or agents whose methods can be invoked remotely
• Use OGSA and Jini/JavaSpaces for e2e customer control
• Alas, can only do things manually today

CA*net 4 Topology
(map of the CA*net 4 OC-192 topology, with nodes from Victoria and Vancouver across to Halifax and St. John's, plus possible future breakouts and links toward Seattle, Minneapolis, Chicago, Buffalo, Albany, Boston and New York)
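A quick sanity check of a newly provisioned lightpath from the send host is simply to confirm the hop count and round-trip time toward the far end. A minimal shell sketch, assuming the receive host answers ICMP and is reachable as cern-10g (the name used in the live traceroute below); the script is illustrative and not part of the original demo:

    #!/bin/sh
    # Illustrative lightpath sanity check (not from the original demo)
    HOST=cern-10g                       # far-end host on the lightpath (assumed name)
    # A manually provisioned e2e lightpath should show up as a single hop
    traceroute -n $HOST | tail -1
    # Round-trip time: roughly 160-200 ms is expected for a TRIUMF-CERN class path
    ping -c 10 $HOST | tail -2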
We are live continent to continent!
• e2e lightpath up and running 21:45 CET

    traceroute to cern-10g (192.168.2.2), 30 hops max, 38 byte packets
     1  cern-10g (192.168.2.2)  161.780 ms  161.760 ms  161.754 ms

Intel 10GbE Cards
• Intel kindly loaned us two of their Pro/10GbE LR server adapter cards despite the end of their alpha program
  – Based on the Intel® 82597EX 10 Gigabit Ethernet Controller

Extreme Networks
• Extreme Networks generously loaned and shipped to us two Black Diamond 6808s with their new 10GbE LRi blades

Hardware Configurations
(diagram of the TRIUMF and CERN server configurations)

CERN Server (Receive Host)
• SuperMicro P4DL6 (dual Xeon 2 GHz), 400 MHz front-side bus
• 1 GB DDR2100 RAM
• Dual-channel Ultra 160 onboard SCSI
• SysKonnect 9843 SX GbE
• 2 independent PCI buses, 6 PCI-X 64-bit/133 MHz capable slots
• 2 3ware 7850 RAID controllers, 6 IDE drives on each 3ware controller
• RH7.3 on a 13th drive connected to the on-board IDE
• RMC4D from HARDDATA
• WD Caviar 120 GB drives with 8 MB cache

TRIUMF Server (Send Host)
• SuperMicro P4DL6 (dual Xeon 2 GHz), 400 MHz front-side bus
• 1 GB DDR2100 RAM
• Dual-channel Ultra 160 onboard SCSI
• SysKonnect 9843 SX GbE
• 2 independent PCI buses, 6 PCI-X 64-bit/133 MHz capable slots
• 3ware 7850 RAID controller
• 2 Promise Ultra 100 Tx2 controllers

Operating System
• Red Hat 7.3 based Linux, kernel 2.4.18-3
  – Needed to support filesystems > 1 TB
• Upgrades and patches
  – Patched to 2.4.18-10
  – Intel Pro 10GbE Linux driver (early stable)
  – SysKonnect 9843 SX Linux driver (latest)
  – Ported Sylvain Ravot's tcp tune patches

Black Magic
• Optimizing system performance doesn't come by default
• Performance tuning is very much an art requiring black magic
• Disk I/O optimization
• TCP tuning

Disk I/O Black Magic
• min/max readahead on both systems

    sysctl -w vm.min-readahead=127
    sysctl -w vm.max-readahead=256

• bdflush on the receive host

    sysctl -w vm.bdflush="2 500 0 0 500 1000 60 20 0"
    or
    echo 2 500 0 0 500 1000 60 20 0 >/proc/sys/vm/bdflush

• bdflush on the send host

    sysctl -w vm.bdflush="30 500 0 0 500 3000 60 20 0"
    or
    echo 30 500 0 0 500 3000 60 20 0 >/proc/sys/vm/bdflush

Disk I/O Black Magic
• Disk I/O elevators (minimal impact noticed)
  – /sbin/elvtune allows some control of latency vs. throughput
  – Max I/O scheduler read latency set to 512 (default 8192)
  – Max I/O scheduler write latency set to 1024 (default 16384)
• atime
  – Disables updating the last time a file has been accessed (typical for file servers)

    mount -t ext2 -o noatime /dev/md0 /raid

  – Typically, ext3 writes at 90 Mbytes/s while ext2 writes at 190 Mbytes/s; reads are minimally affected
  – We always used ext2
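The ext2 vs. ext3 write rates quoted above are sequential-transfer figures for the software RAID array. One simple way to measure that kind of number is a timed sequential write and read with dd; a sketch, assuming the array is mounted at /raid as in the mount command above (file name and sizes illustrative, not from the original demo):

    # Sequential write of ~4 GB (well above the 1 GB of RAM, to limit caching effects)
    time dd if=/dev/zero of=/raid/ddtest bs=1M count=4000
    # Sequential read back (remount the filesystem first to empty the page cache)
    time dd if=/raid/ddtest of=/dev/null bs=1M
    rm /raid/ddtest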
Disk I/O Black Magic
• IRQ affinity

    [root@thunder root]# more /proc/interrupts
               CPU0       CPU1
      0:   15723114          0   IO-APIC-edge   timer
      1:         12          0   IO-APIC-edge   keyboard
      2:          0          0   XT-PIC         cascade
      8:          1          0   IO-APIC-edge   rtc
     10:          0          0   IO-APIC-level  usb-ohci
     14:         22          0   IO-APIC-edge   ide0
     15:     227234          2   IO-APIC-edge   ide1
     16:        126          0   IO-APIC-level  aic7xxx
     17:         16          0   IO-APIC-level  aic7xxx
     18:         91          0   IO-APIC-level  ide4, ide5, 3ware Storage Controller
     20:         14          0   IO-APIC-level  ide2, ide3
     22:    2296662          0   IO-APIC-level  SysKonnect SK-98xx
     24:          2          0   IO-APIC-level  eth3
     26:    2296673          0   IO-APIC-level  SysKonnect SK-98xx
     30:   26640812          0   IO-APIC-level  eth0
    NMI:          0          0
    LOC:   15724196   15724154
    ERR:          0
    MIS:          0

• Need to have process affinity as well – but this requires a 2.5 kernel

    echo 1 >/proc/irq/18/smp_affinity                         use CPU0
    echo 2 >/proc/irq/18/smp_affinity                         use CPU1
    echo 3 >/proc/irq/18/smp_affinity                         use either
    cat /proc/irq/prof_cpu_mask >/proc/irq/18/smp_affinity    reset to default

TCP Black Magic
• Typically suggested TCP and net buffer tuning

    sysctl -w net.ipv4.tcp_rmem="4096 4194304 4194304"
    sysctl -w net.ipv4.tcp_wmem="4096 4194304 4194304"
    sysctl -w net.ipv4.tcp_mem="4194304 4194304 4194304"
    sysctl -w net.core.rmem_default=65535
    sysctl -w net.core.rmem_max=8388608
    sysctl -w net.core.wmem_default=65535
    sysctl -w net.core.wmem_max=8388608

TCP Black Magic
• Sylvain Ravot's tcp tune patch parameters (ssthresh, congestion avoidance increment, slow start increment)

    sysctl -w net.ipv4.tcp_tune="115 115 0"

• Linux 2.4 retentive TCP
  – Caches TCP control information for a destination for 10 minutes
  – To avoid caching:

    sysctl -w net.ipv4.route.flush=1

Testing Methodologies
• Began testing with a variety of bandwidth characterization tools
  – pipechar, pchar, ttcp, iperf, netpipe, pathchar, etc.
• Evaluated high-performance file transfer applications
  – bbftp, bbcp, tsunami, pftp
• Developed scripts to automate a number of the tools and to scan their parameter space

A Real WAN Emulator
• First tested with a hardware WAN emulator, but it had its limitations (RTT < 100 ms, MTU = 1500)
• Canarie Loopback (TRIUMF – Starlight): RTT = 96 ms
  – Allowed testing at 1 GbE due to capacity constraints
• Canarie Loopback2 (TRIUMF – Starlight): RTT = 193 ms
  – Close to the expected TRIUMF – CERN e2e RTT of 200 ms

So what did we do?
• Demonstrate a manually provisioned "e2e" lightpath
• Transfer 1 TB of ATLAS MC data generated in Canada from TRIUMF to CERN
• Test out 10GbE technology and channel bonding
• Establish a new benchmark for high-performance disk-to-disk throughput over a large distance

Comparative Results (TRIUMF to CERN)

    Tool                        Transferred   Average     Max Avg
    wuftp (100 MbE)             600 MB        3.4 Mbps
    wuftp (10 GbE)              6442 MB       71 Mbps
    iperf                       275 MB        940 Mbps    1136 Mbps
    pftp                        600 MB        532 Mbps
    bbftp (13 streams)          1.4 TB        666 Mbps    731 Mbps
    Tsunami (disk to disk)      0.5 TB        700 Mbps    825 Mbps
    Tsunami (disk to memory)    12 GB         > 1 Gbps

Sunday Nite Summaries
(figure)

Exceeding 1 Gbit/sec … (using tsunami)
(figure)
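For reference, memory-to-memory numbers such as the 940 Mbps iperf average above come from long-running tests with explicitly enlarged socket buffers. A sketch of that kind of invocation; the window size and duration are illustrative, not the exact options used in the demo:

    # On the receive host at CERN
    iperf -s -w 4M
    # On the send host at TRIUMF: 60 s test, 4 MB TCP window, progress report every 10 s
    iperf -c cern-10g -w 4M -t 60 -i 10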
So what did we do?
• Demonstrate a manually provisioned "e2e" lightpath
• Transfer 1 TB of ATLAS MC data generated in Canada from TRIUMF to CERN
• Test out 10GbE technology and channel bonding
• Establish a new benchmark for high-performance disk-to-disk throughput over a large distance

Lessons Learned
• Linux software RAID is faster than most conventional SCSI- and IDE-based RAID systems
• One controller for each drive; the more disk spindles the better
• Channel bonding of two GbEs seems to work very well (on an unshared link); a configuration sketch follows the acknowledgements
• The larger the files, the better the throughput

Further Investigations
• Linux TCP/IP network stack performance
  – Efficient copy routines (zero copy, copy routines, read-copy update)
• Stream Control Transmission Protocol
• Scheduled Transfer Protocol
  – OS bypass and zero copy
• Web100, Net100, DRS

Acknowledgements
• Canarie
  – Bill St. Arnaud, Rene Hatem, Damir Pobric, Thomas Tam, Jun Jian
• ATLAS Canada
  – Mike Vetterli, Randall Sobie, Jim Pinfold, Pekka Sinervo, Gerald Oakham, Bob Orr, Michel Lefebvre, Richard Keeler
• HEPnet Canada
  – Dean Karlen
• TRIUMF
  – Peter Gumplinger, Fred Jones, Mike Losty, Jack Chakhalian, Renee Poutissou
• BCNET
  – Mike Hrybyk, Marilyn Hay, Dennis O'Reilly
• Indiana University
  – Mark Meiss, Stephen Wallace
• Caltech
  – Sylvain Ravot, Harvey Newman
• CERN
  – Olivier Martin, Paolo Moroni, Martin Fluckiger, Stanley Cannon, J.P. Martin-Flatin
• SURFnet/Universiteit van Amsterdam
  – Pieter de Boer, Dennis Paus, Erik Radius, Erik-Jan Bos, Leon Gommans, Bert Andree, Cees de Laat
• Yotta Yotta
  – Geoff Hayward, Reg Joseph, Ying Xie, E.
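The channel bonding noted under Lessons Learned can be configured with the stock Linux 2.4 bonding driver and ifenslave; a minimal sketch, with the interface names and address purely illustrative since the slides do not describe the actual bonding setup:

    # Load the bonding driver; mode=0 (round-robin) spreads frames across both GbE links
    modprobe bonding mode=0 miimon=100
    # Bring up the bond interface and enslave the two GbE NICs (names/address illustrative)
    ifconfig bond0 10.0.0.1 netmask 255.255.255.0 up
    ifenslave bond0 eth1 eth2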