10Gbps TCP/IP Streams from the FPGA for the CMS DAQ Eventbuilder Network

10Gbps TCP/IP streams from the FPGA for the CMS DAQ Eventbuilder Network
TWEPP 2013
Petr Žejdl, Dominique Gigi, on behalf of the CMS DAQ Group
26 September 2013

Outline
● CMS DAQ Readout System
  – Upgrade
  – DAQ2 proposed layout
● TCP/IP
  – Overview, introduction
  – Simplification
  – Implementation
● FEROL
  – Introduction, block diagram
  – Modes of operation
  – TCP engine
● Measurements
  – Point-to-point measurements
  – Stream/link aggregation
● Summary

Current CMS DAQ Readout System
● The current system is based on SLINK64 and a Myrinet network:
  – A sender (FED) card implementing an electrical LVDS link running at 400 MByte/s (3.2 Gbit/s); the SLINK64 cable from the detector Front-End Driver (FED) mezzanine runs up to 10 m
  – A receiver (FRL) card:
    ● Receives the SLINK data and performs CRC checking
    ● Interfaces to commercial Myrinet hardware via 1 or 2 optical links to the Myrinet NIC
  – The Myrinet NIC runs custom firmware designed by the DAQ group
[Photo: SLINK cables going into FRLs]

Motivation for the Upgrade
● End-of-life of almost all PC and networking equipment:
  – The hardware is more than 5 years old: the system was purchased in 2006 and installed in 2007
  – Myrinet PCI-X cards, and PCs with a PCI-X slot, are difficult to buy today
● Benefit from technology evolution:
  – New PCs with multicore CPUs and NUMA architecture
  – 10/40 Gbit/s Ethernet and 56 Gbit/s Infiniband FDR network equipment
● New uTCA-based FEDs will be in operation after LS1; the DAQ group developed a point-to-point optical link, SlinkXpress:
  – Simple interface to custom readout electronics
  – Reliable link: data are retransmitted in case of error
  – The current implementation runs at up to 6.3 Gbit/s or at 10 Gbit/s
  – An IP core is available for Altera and Xilinx FPGAs

Requirements for Subsystem Readout
● A new link to replace the Myrinet network is required. Requirements:
  – L1 trigger rate up to 100 kHz
  – Sufficient bandwidth:
    ● Legacy S-link (electrical LVDS) FEDs with 3.2 Gbit/s (400 MByte/s)
    ● New (uTCA, optical-link based) FEDs with 6 Gbit/s (in the future 10 Gbit/s)
  – A reliable (loss-less) connection between underground and surface
● The new readout link discussed in this presentation is the replacement for the Myrinet network

DAQ2 Proposed Layout
[Diagram: custom hardware (S-link and custom optical links, 10 Gbit/s Ethernet) in the underground connects to commercial hardware (40 Gbit/s Ethernet, 56 Gbit/s Infiniband, 40/10/1 Gbit/s Ethernet) on the surface]

FEROL Introduction
● Front-End Readout Optical Link (FEROL):
  – The interface between custom and commercial hardware/networks
  – Replaces the Myrinet NIC with a custom FPGA-based NIC card
● Input:
  – Legacy S-link input via the FRL
  – SlinkXpress interface: 2x optical 6 Gbit/s, 1x optical 10 Gbit/s
● Output:
  – Optical 10 Gbit/s Ethernet link, with an optional second 10 Gbit/s Ethernet link
  – Runs a standard protocol: TCP/IP over 10 Gbit/s Ethernet

TCP/IP
● Benefits of using TCP/IP:
  – TCP/IP guarantees reliable, in-order data delivery:
    ● Retransmissions deal with packet loss
    ● Flow control respects the occupancy of the buffers in the receiving PC
    ● Congestion control allows transmitting multiple streams on the same link (link aggregation)
  – A standard and well-known protocol suite (almost)
  – Implemented in all mainstream operating systems
  – Debugging and monitoring tools are widely available (tcpdump, wireshark, iperf, ...)
  – The network is composed of off-the-shelf hardware from multiple vendors
● Don't re-invent a reliable network; make use of available software and commercial hardware

TCP Implementation
● In principle a very difficult task for an FPGA:
  – TCP/IP is a general-purpose protocol suite
  – Even for a PC, TCP/IP is a very resource-hungry protocol
  – ~15 000 lines of C code in the Linux kernel for TCP alone
● Considerations:
  – The CMS DAQ network has a fixed topology
  – The data traffic goes in only one direction, from the FEROL to the Readout Unit (PC)
  – The aggregated readout network throughput is sufficient (by design) to avoid packet congestion and packet loss
● Can we simplify TCP?

TCP Implementation (2)
● Robustness principle [RFC 793]: "TCP implementations will follow a general principle of robustness: be conservative in what you do, be liberal in what you accept from others."
● Following the robustness principle, we simplified the TCP sender; the receiving PC (with a full TCP/IP stack) handles the rest:
  – The FEROL is a client, the PC is a server
  – The FEROL opens a TCP connection
  – The FEROL sends the data to the PC; data flow in one direction, from client to server
    ● Acknowledgement packets are sent back; they are part of the protocol
  – The TCP connection is aborted instead of closed; a connection abort is unreliable and should be initiated by the server (PC)
  – Simple congestion control is used
(A host-side sketch of this reduced sender behaviour is given below.)
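The reduced sender maps naturally onto the ordinary sockets API, which is one way to see how little of TCP the FEROL actually needs. The following minimal C sketch reproduces the same behaviour on a host: client-only, one-directional data flow, connection abort (RST) instead of an orderly close. It is illustrative only; the real engine is FPGA firmware, and the address, port, and fragment size here are placeholders.

/*
 * Host-side C sketch of the simplified sender behaviour listed above:
 * client-only, one-directional data flow, abort via RST instead of an
 * orderly close. Address, port and fragment size are placeholders.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* The FEROL is the client: it opens the connection to the
     * Readout Unit (server).  Placeholder address and port. */
    struct sockaddr_in ru = { .sin_family = AF_INET,
                              .sin_port   = htons(10000) };
    inet_pton(AF_INET, "10.0.0.1", &ru.sin_addr);
    if (connect(fd, (struct sockaddr *)&ru, sizeof ru) < 0) {
        perror("connect"); return 1;
    }

    /* Data flow in one direction only, client -> server; the
     * acknowledgements coming back are handled by the stack. */
    char fragment[2048];
    memset(fragment, 0xAB, sizeof fragment);
    for (int i = 0; i < 1000; i++) {
        if (send(fd, fragment, sizeof fragment, 0) < 0) {
            perror("send"); break;
        }
    }

    /* Abort instead of close: SO_LINGER with a zero timeout makes
     * close() send an RST rather than running the FIN handshake. */
    struct linger lg = { .l_onoff = 1, .l_linger = 0 };
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
    close(fd);
    return 0;
}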
TCP Implementation (3)-(6)
[State diagrams: the full TCP state machine is reduced step by step.
 We don't listen (we are only a client) and we don't receive any data.
 ABORT/RST: we do a connection abort instead of a connection close.
 Final state diagram.]

But not so simple...

Implementation and Simplifications
● Implemented:
  – Nagle's algorithm (data merging to utilize the MTU)
  – Jumbo-frame MTU support up to 9000 bytes
  – Window scaling (understands window sizes greater than 64 KB)
  – Silly-window avoidance (do not send when the receiver's window is small)
  – The six TCP/IP timers reduced to three, implemented by one counter:
    ● connection-establishment timer, retransmission timer, persist timer
● Complex congestion control reduced to (see the sketch after this list):
  – Exponential back-off: double the retransmit timeout if a packet is not acknowledged
  – Fast retransmit: if only a single segment was lost, retransmit immediately without waiting for the timeout
● Not implemented (not necessary):
  – Timestamps, selective acknowledgements, out-of-band (urgent) data
  – The server part and data reception (the FEROL is the client and opens the TCP/IP connection)
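As a sketch of the reduced loss recovery just listed, the C fragment below shows exponential back-off of the retransmission timeout together with fast retransmit after duplicate ACKs. The constants, state layout, and function names are illustrative assumptions, not the FEROL firmware, which implements the equivalent logic in FPGA hardware.

/*
 * Sketch of the reduced loss recovery: exponential back-off of the
 * retransmission timeout plus fast-retransmit on three duplicate ACKs.
 * All names and constants are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>

#define RTO_INITIAL_MS 200u
#define RTO_MAX_MS     60000u
#define DUPACK_THRESH  3u

struct tcp_tx {
    uint32_t snd_una;   /* oldest unacknowledged sequence number */
    uint32_t rto_ms;    /* current retransmission timeout        */
    uint32_t dup_acks;  /* consecutive duplicate ACKs observed   */
};

/* Stand-in for re-reading the unacknowledged bytes from the fragment
 * buffer (QDR/DDR2 memory on the FEROL) and re-sending them. */
static void retransmit_from(uint32_t seq)
{
    printf("retransmit from seq=%u\n", (unsigned)seq);
}

/* Retransmission timer expired with data still outstanding:
 * resend and double the timeout (exponential back-off). */
static void on_rto_expired(struct tcp_tx *s)
{
    retransmit_from(s->snd_una);
    s->rto_ms = (s->rto_ms * 2 > RTO_MAX_MS) ? RTO_MAX_MS : s->rto_ms * 2;
}

/* An ACK arrived from the receiving PC. */
static void on_ack(struct tcp_tx *s, uint32_t ack_seq)
{
    if (ack_seq == s->snd_una) {
        /* Duplicate ACK: a single lost segment produces a quick burst
         * of duplicates, so retransmit at once instead of waiting for
         * the timer (fast retransmit). */
        if (++s->dup_acks == DUPACK_THRESH)
            retransmit_from(s->snd_una);
    } else {
        /* New data acknowledged: advance and reset the back-off. */
        s->snd_una  = ack_seq;
        s->rto_ms   = RTO_INITIAL_MS;
        s->dup_acks = 0;
    }
}

int main(void)
{
    struct tcp_tx s = { 0, RTO_INITIAL_MS, 0 };
    on_ack(&s, 8960);                                       /* progress */
    on_ack(&s, 8960); on_ack(&s, 8960); on_ack(&s, 8960);   /* 3 dups  */
    on_rto_expired(&s);                          /* timeout: RTO doubles */
    printf("rto now %u ms\n", (unsigned)s.rto_ms);
    return 0;
}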
FEROL TCP/IP Software Emulator
● A software implementation of the simplified TCP/IP:
  – For protocol verification and testing before implementing in hardware (e.g. verification of the TCP congestion control)
  – Runs as a user-space program
● The TCP/IP packets must bypass the Linux kernel, otherwise they interfere with the Linux TCP/IP stack
● Based on PF_RING (http://www.ntop.org/products/pf_ring/):
  – Received packets are stored in a circular buffer and read from user space

Is Congestion Control Important?
● Test setup: 5 senders, each producing 2048-byte fragments at 125 kHz (~2.048 Gb/s), aggregated onto a single 10 Gb/s line: 5 x 2.048 = 10.24 Gb/s, slightly above the link capacity
● A little bit of congestion, and all bandwidth is eaten up by buffers being re-sent due to the temporary congestion: without congestion control the link is not able to recover from this state even though the link itself works flawlessly (measured per-stream rates collapsed to 5.29, 2.0 and 0.89 Gb/s)

Link Aggregation (2 links into 1 link, 8 streams into 1 PC)
● Test setup: 8 Dell R310 senders connected by optical links to a Brocade switch, feeding one Dell R620 receiver with a 10GE NIC; two 10 Gbit/s Ethernet links (50% load each) aggregated into one
● 2 streams aggregated into one 10GE link
● 8 threads receiving data (1 thread per stream)
● The Linux TCP stack is compared to the FEROL simplified TCP

Stream Aggregation (8 streams to 1 PC)
[Plots: FEROL TCP emulator vs. Linux sockets]

FEROL Hardware Architecture
● Hardware:
  – Altera Arria II GX FPGA
  – Vitesse 10GE/XAUI transceiver
  – QDR memory (16 MBytes)
  – DDR2 memory (512 MBytes)
● Interfaces:
  – FED/SlinkXpress interface: 2x optical 6 Gbit/s, 1x optical 10 Gbit/s
  – DAQ interface: 1x optical 10 Gbit/s Ethernet

FEROL Operation Modes
● 2x 6 Gbit/s input mode:
  – Input: 2x SlinkXpress 6 Gbit/s FED inputs, or legacy S-LINK data through PCI-X
  – Output: 1x 10 Gbit/s Ethernet (optional second 10 Gbit/s Ethernet link), 2x TCP streams
    ● The memory buffer is divided in two, one half per stream
  – Data fragments: internal generator at 10 Gbit/s, PCI-X bus at a maximum of 6.4 Gbit/s, SlinkXpress at a maximum of 2x 5.04 Gbit/s
● 1x 10 Gbit/s input mode:
  – Input: 1x SlinkXpress 10 Gbit/s FED input
  – Output: 1x 10 Gbit/s Ethernet, 1x TCP stream
    ● The memory buffer is used by one stream
  – Data fragments: internal generator at 10 Gbit/s, SlinkXpress at a maximum of 10 Gbit/s

FEROL TCP Core
● Several blocks handle the different protocols (ARP/ICMP/TCP)
● The TCP payload is stored in 64-bit words
● The TCP sequence is processed in multiples of 8 bytes (64 bits); a sketch of this segmentation constraint follows below
● ICMP (PING) is limited to 128 bytes of payload
● The IP address is static and assigned by control software
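The 64-bit datapath has a direct consequence for segmentation: segment lengths, and therefore sequence-number advances, fall on 8-byte boundaries, capped by the 9000-byte jumbo MTU. The short C sketch below illustrates the arithmetic; the header size and all names (MAX_PAYLOAD, next_segment_len) are assumptions for illustration, while the real engine does this in logic, one memory word per clock.

/*
 * Sketch of the segmentation constraint described above: payload lives
 * in 64-bit words, so segment lengths are multiples of 8 bytes, capped
 * by the jumbo MTU. Header sizes and names are illustrative.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MTU_BYTES   9000u
#define IP_TCP_HDR  40u                                /* IPv4 + TCP, no options */
#define MAX_PAYLOAD ((MTU_BYTES - IP_TCP_HDR) & ~7u)   /* rounded down to 8      */

/* Payload length of the next segment: limited by the queued data, the
 * receiver's (scaled) window and the MTU, rounded down to whole
 * 64-bit words. */
static uint32_t next_segment_len(uint32_t queued, uint32_t peer_window)
{
    uint32_t len = queued;
    if (len > peer_window) len = peer_window;
    if (len > MAX_PAYLOAD) len = MAX_PAYLOAD;
    return len & ~7u;
}

int main(void)
{
    uint32_t seq = 0, queued = 20000;
    uint32_t window = 65536u << 7;   /* window scaling: sizes > 64 KB understood */

    while (queued) {
        uint32_t len = next_segment_len(queued, window);
        if (len == 0)   /* silly-window style guard: nothing sendable yet */
            break;
        printf("segment seq=%" PRIu32 " len=%" PRIu32 "\n", seq, len);
        seq    += len;
        queued -= len;
    }
    return 0;
}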