Converged Networking in the Data Center


Peter P. Waskiewicz Jr.
LAN Access Division, Intel Corp.
[email protected]

Abstract

The networking world in Linux has undergone some significant changes in the past two years. With the expansion of multiqueue networking, coupled with the growing abundance of multi-core computers with 10 Gigabit Ethernet, the concept of efficiently converging different network flows becomes a real possibility.

This paper presents the concepts behind network convergence. Using the IEEE 802.1Qaz Priority Grouping and Data Center Bridging concepts to group multiple traffic flows, this paper will demonstrate how different types of traffic, such as storage and LAN traffic, can efficiently coexist on the same physical connection. With the support of multi-core systems and MSI-X, these different traffic flows can achieve latency and throughput comparable to those of the same traffic types on specialized adapters.

1 Introduction

Ethernet continues to march forward in today's computing environment. It has now reached a point where PCI Express devices running at 10GbE are becoming more common and more affordable. The question is, what do we do with all the bandwidth? Is it too much for today's workloads? Fortunately, the adage of "if you build it, they will come" provides answers to these questions.

Data centers have a host of operational costs and upkeep associated with them. Cooling and power costs are the two main areas that data center managers continue to analyze to reduce cost. The reality is that as machines become faster and more energy efficient, the cost to power and cool these machines is also reduced. The next question to ask is, how can we push the envelope of efficiency even more?

Converged Networking, also known as Unified Networking, is designed to increase the efficiency of the data center as a whole. In addition to the general power and cooling costs, other areas of focus are the physical number of servers and their associated cabling that reside in a typical data center. Servers very often have multiple network connections to various network segments, plus they're usually connected to a SAN: either a Fibre Channel fabric or an iSCSI infrastructure. These multiple network and SAN connections mean large amounts of cabling being laid down to attach a server. Converged Networking takes a 10GbE device that is capable of Data Center Bridging in hardware, and consolidates all of those network connections and SAN connections into a single physical device and cable. The rest of this paper will illustrate the different aspects of Data Center Bridging, which is the networking feature allowing the coexistence of multiple flows on a single physical port. It will first define and describe the different components of DCB. It will then show how DCB consolidates network connections while keeping traffic segregated, and how this can be done in an efficient manner.

2 Priority Grouping and Bandwidth Control

2.1 Quality of Service

Quality of Service is not a stranger to networking setups today. The QoS layer is composed of three main components: queuing disciplines, or qdiscs (packet schedulers); classifiers (filter engines); and filters [1]. In the Linux kernel, there are many QoS options that can be deployed: one qdisc provides packet-level filtering into different priority-based queues (sch_prio), while others can make bandwidth allocation decisions based on other criteria (sch_htb and sch_cbq). All of these built-in schedulers run in the kernel, as part of the dev_queue_xmit() routine in the core networking layer (qdisc_run()).
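To make the scheduling behavior concrete, below is a minimal, self-contained user-space sketch of sch_prio-style strict-priority queuing. The names (pkt, band) and the band count are illustrative stand-ins, not the kernel's actual data structures: enqueue classifies each packet into a band by its priority, and dequeue always serves the lowest-numbered non-empty band first.

/* Minimal user-space sketch of sch_prio-style strict-priority
 * scheduling. Illustrative only; not the kernel implementation. */
#include <stdio.h>

#define NBANDS 3            /* sch_prio defaults to three bands */
#define QLEN   16

struct pkt  { int id; int prio; };                  /* stand-in for an skb */
struct band { struct pkt q[QLEN]; int head, tail; };

static struct band bands[NBANDS];

/* Enqueue: classify the packet into a band (queue) by its priority. */
static void prio_enqueue(struct pkt p)
{
    int b = p.prio < NBANDS ? p.prio : NBANDS - 1;
    bands[b].q[bands[b].tail++ % QLEN] = p;
}

/* Dequeue: always drain the highest-priority (lowest-numbered) band. */
static int prio_dequeue(struct pkt *out)
{
    for (int b = 0; b < NBANDS; b++)
        if (bands[b].head != bands[b].tail) {
            *out = bands[b].q[bands[b].head++ % QLEN];
            return 1;
        }
    return 0;                                       /* all bands empty */
}

int main(void)
{
    struct pkt p;
    prio_enqueue((struct pkt){ .id = 1, .prio = 2 });   /* bulk data     */
    prio_enqueue((struct pkt){ .id = 2, .prio = 0 });   /* high priority */
    while (prio_dequeue(&p))
        printf("xmit pkt %d (band %d)\n", p.id, p.prio);
    return 0;
}

Although packet 1 arrived first, packet 2 is transmitted first. This strict dequeue ordering is exactly what the priority-based qdiscs enforce, and it holds only inside the kernel, as the next paragraphs explain.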
While these pieces of the QoS layer can separate traffic flows into different priority queues in the kernel, the priority is isolated to the packet scheduler within the kernel itself. The priorities, along with any bandwidth throttling, are completely isolated to the kernel, and are not propagated to the network. This highlights an issue where these kernel-based priority queues in the qdisc can cause head-of-line blocking in the network device. For example, if a high-priority packet is dequeued from the sch_prio qdisc and sent to the driver, it can still be blocked in the network device by a low-priority, bulk data packet that was previously dequeued.

[Figure 1: QoS Layer in the Linux kernel. dev_queue_xmit() invokes the QoS layer, where qdiscs such as sch_prio, sch_multiq, and sch_cbq, with classifiers such as cls_u32, schedule packets before handing them to the network device driver via hard_start_xmit().]

Converged Networking makes use of the QoS layer of the kernel to help identify its network flows. This identification is used by the network driver to decide on which Tx queue to place the outbound packets. Since this model is going to be enforcing a network-wide prioritization of network flows (discussed later), the QoS layer should not enforce any priority when dequeuing packets to the network driver. In other words, Converged Networking will not make use of the sch_prio qdisc. Rather, Converged Networking uses the sch_multiq qdisc, which is a round-robin based queuing discipline. The importance of this is discussed in Section 2.2.

2.2 Priority Tagging

Data Center Bridging (DCB) takes the QoS mechanism into hardware. It also defines the network-wide infrastructure for a QoS policy across all switches and end-stations. This allows bandwidth allocations plus prioritization for specific network flows to be honored across all nodes of a network.

The mechanism used to tag packets for prioritization is the 3-bit priority field of the 802.1P/Q tag. This field offers 8 possible priorities into which traffic can be grouped. When a base network driver implements DCB (assuming the device supports DCB in hardware), the driver is expected to insert the VLAN tag, including the priority, before it posts the packet to the transmit DMA engine. One example of a driver that implements this is ixgbe, the Intel® 10GbE PCI Express driver. Both devices supported by this driver, the 82598 and the 82599, have DCB capabilities in hardware.

The priority tag in the VLAN header is utilized by both the Linux kernel and the network infrastructure. In the kernel, vconfig, the tool used to configure VLANs, can modify the priority tag field. Network switches can be configured to modify their switching policies based on the priority tag. In DCB, though, packets are not required to belong to a traditional VLAN. All that needs to be configured is the priority tag field, and whatever VLAN was already in the header is preserved. When no VLAN is present, VLAN group 0 is used, meaning the lack of a VLAN. This mechanism allows non-VLAN networks to work with DCB alongside VLAN networks, while maintaining the priority tags for each network flow. The expectation is that the switches being used are DCB-capable, which will guarantee that network scheduling in the switch fabric will be based on the 802.1P tag found in the VLAN header of the packet.
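Since the priority travels in the 802.1Q tag, it helps to see the tag layout itself. The following sketch (illustrative names, not ixgbe or kernel VLAN-layer code) packs the 3-bit priority code point (PCP), the CFI/DEI bit, and the 12-bit VLAN ID into the 16-bit Tag Control Information field; the priority-only tagging described above is simply a tag carrying VID 0.

/* Sketch of 802.1Q Tag Control Information (TCI) packing:
 * PCP (3 bits) | CFI/DEI (1 bit) | VID (12 bits). Illustrative only. */
#include <stdio.h>
#include <stdint.h>

#define TPID_8021Q 0x8100        /* 802.1Q Tag Protocol Identifier */

static uint16_t build_tci(unsigned pcp, unsigned cfi, unsigned vid)
{
    return (uint16_t)(((pcp & 0x7) << 13) | ((cfi & 0x1) << 12) | (vid & 0xFFF));
}

int main(void)
{
    /* Priority 3, VID 0: a priority tag with no VLAN membership implied. */
    uint16_t tci = build_tci(3, 0, 0);

    printf("TPID 0x%04x, TCI 0x%04x (PCP=%u, VID=%u)\n",
           TPID_8021Q, tci, tci >> 13, tci & 0xFFF);
    return 0;
}

A DCB-capable driver performs the equivalent packing in its transmit path before posting the frame to the transmit DMA engine, as described above for ixgbe.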
Certain packet types are not tagged, though. All of the inter-switch and inter-router frames being passed through the network are not tagged. DCB uses a protocol, LLDP (Link Layer Discovery Protocol), for its DCBx protocol, and these frames are not tagged in a DCB network. LLDP and DCBx are discussed in more detail later in this paper.

2.3 Bandwidth Groups

Once flows are identified by a priority tag, they are allocated bandwidth on the physical link. DCB uses bandwidth groups to multiplex the prioritized flows. Each bandwidth group is given a percentage of the overall bandwidth on the network device. The bandwidth group can further enforce bandwidth sharing within itself among the priority flows already added to it. For example, in the layout of Figure 2, Priority 5 is guaranteed 80% of Bandwidth Group 1's 10% share, or 8% of the link overall.

[Figure 2: Example Bandwidth Group Layout.
Bandwidth Group 1 (10% of link bandwidth): Priority 5 = 80% of BWG, Priority 8 = 20% of BWG.
Bandwidth Group 2 (60% of link bandwidth): Priority 2 = 30%, Priority 4 = 40%, Priority 1 = 10%, Priority 3 = 20% of BWG.
Bandwidth Group 3 (30% of link bandwidth): Priority 6 = 70% of BWG, Priority 7 = 30% of BWG.]

Each bandwidth group can be configured to use certain methods of dequeuing packets during a Tx arbitration cycle. The first is group strict priority: it will allow a single priority flow within the bandwidth group to grow […]

Two mechanisms determine which Tx queue an outbound skb is placed on (a sketch of both follows this list):

• select_queue: The network stack in recent kernels (2.6.27 and beyond) has a function pointer called select_queue(). It is part of the net_device struct, and can be overridden by a network driver if desired. A driver would do this if there is a special need to control the Tx queueing specific to an underlying technology; DCB is one of those cases. However, if a network driver hasn't overridden it (which is normal), then a hash is computed by the core network stack. This hash generates a value which is assigned to skb->queue_mapping. The skb is then passed to the driver for transmit, and the driver uses this value to select one of its Tx queues to transmit the packet onto the wire.

• tc filters: The userspace tool tc, part of the iproute2 package, can be used to program filters into the qdisc layer. The filters can match essentially anything in the skb headers from layer 2 and up. The filters use classifiers, such as u32 matching, to match different pieces of the skbs. Once a filter matches, it has an action part of the filter. Most common for qdiscs such as sch_multiq is the skbedit action, which allows the tc filter to modify skb->queue_mapping in the skb.
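To tie the two paths together, here is a self-contained sketch with invented names (fake_skb, dcb_select_queue) and an assumed eight Tx queues: a DCB-style driver override maps the 802.1p priority directly to a queue, while the fallback spreads unprioritized flows across queues using a hash that stands in for the one the core stack computes.

/* Sketch of Tx queue selection: a DCB-style driver override vs. the
 * hash fallback. Names and the hash are illustrative, not kernel code. */
#include <stdio.h>
#include <stdint.h>

#define NUM_TX_QUEUES 8

struct fake_skb {
    unsigned prio;       /* 802.1p priority (0-7) */
    int      has_prio;   /* set if a priority was assigned, e.g. by tc */
    uint32_t flow_hash;  /* stand-in for a hash over the packet headers */
};

/* DCB-style select_queue() override: one Tx queue per priority. */
static uint16_t dcb_select_queue(const struct fake_skb *skb)
{
    return skb->prio % NUM_TX_QUEUES;
}

/* Fallback when no driver override exists: spread flows by hash. */
static uint16_t hash_select_queue(const struct fake_skb *skb)
{
    return skb->flow_hash % NUM_TX_QUEUES;
}

static uint16_t queue_mapping(const struct fake_skb *skb)
{
    return skb->has_prio ? dcb_select_queue(skb) : hash_select_queue(skb);
}

int main(void)
{
    struct fake_skb storage = { .prio = 3, .has_prio = 1, .flow_hash = 0xdeadbeef };
    struct fake_skb lan     = { .prio = 0, .has_prio = 0, .flow_hash = 0x12345678 };

    printf("storage flow -> Tx queue %u\n", queue_mapping(&storage));
    printf("LAN flow     -> Tx queue %u\n", queue_mapping(&lan));
    return 0;
}

In a running DCB setup, the same effect comes from either the driver's select_queue() override or a tc filter with the skbedit action, both of which determine skb->queue_mapping before the packet reaches the driver.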