End-To-End Performance of 10-Gigabit Ethernet on Commodity Systems


Justin (Gus) Hurwitz and Wu-chun Feng, Los Alamos National Laboratory
Published by the IEEE Computer Society, January–February 2004 (0272-1732/04/$20.00 © 2004 IEEE)

Intel's network interface card for 10-Gigabit Ethernet (10GbE) allows individual computer systems to connect directly to 10GbE Ethernet infrastructures. Results from various evaluations suggest that 10GbE could serve in networks from LANs to WANs.

From its humble beginnings as shared Ethernet to its current success as switched Ethernet in local-area networks (LANs) and system-area networks, and its anticipated success in metropolitan- and wide-area networks (MANs and WANs), Ethernet continues to evolve to meet the increasing demands of packet-switched networks. It does so at low implementation cost while maintaining high reliability and relatively simple (plug-and-play) installation, administration, and maintenance.

Although the recently ratified 10-Gigabit Ethernet standard differs from earlier Ethernet standards, primarily in that 10GbE operates only over fiber and only in full-duplex mode, the differences are largely superficial. More importantly, 10GbE does not make obsolete current investments in network infrastructure. The 10GbE standard ensures interoperability not only with existing Ethernet but also with other networking technologies such as Sonet, thus paving the way for Ethernet's expanded use in MANs and WANs.

Although the developers of 10GbE mainly intended it to allow easy migration to higher performance levels in backbone infrastructures, 10GbE-based devices can also deliver such performance to bandwidth-hungry host applications via Intel's new 10GbE network interface card (or adapter). We implemented optimizations to Linux, the Transmission Control Protocol (TCP), and the 10GbE adapter configurations and performed several evaluations. Results showed extraordinarily high throughput with low latency, indicating that 10GbE is a viable interconnect for all network environments.

Architecture of a 10GbE adapter

The world's first host-based 10GbE adapter, officially known as the Intel PRO/10GbE LR server adapter, introduces the benefits of 10GbE connectivity into LAN and system-area network environments, thereby accommodating the growing number of large-scale cluster systems and bandwidth-intensive applications, such as imaging and data mirroring. This first-generation 10GbE adapter contains a 10GbE controller implemented in a single chip that contains functions for both the media access control and physical layers. The controller is optimized for servers that use the I/O bus backplanes of the Peripheral Component Interface (PCI) and its higher-speed extension, PCI-X.

[Figure 1. Architecture of the 10GbE adapter. The three main components are a 10GbE controller (the Intel 82597EX, integrating the MAC, DMA engine, PCI-X interface, PCS, and XGXS/XAUI blocks with four 3.125-Gbps lanes), 512 Kbytes of flash memory, and Intel 1,310-nm serial optics; the optics run at 10.3 Gbps while the PCI-X bus limits throughput to 8.5 Gbps.]

Figure 1 gives an architectural overview of the 10GbE adapter, which consists of three main components: an 82597EX 10GbE controller, 512 Kbytes of flash memory, and a 1,310-nm wavelength single-mode serial optics transmitter and receiver. The 10GbE controller includes an Ethernet interface that delivers high performance by providing direct access to all memory without using mapping registers, minimizing the interrupts and programmed I/O (PIO) read access required to manage the device, and offloading simple tasks from the host CPU.

As is common practice with high-performance adapters such as Myricom's Myrinet [1] and Quadrics' QsNet [2], the 10GbE adapter frees up host CPU cycles by performing certain tasks (in silicon) on behalf of the host CPU. However, the 10GbE adapter focuses on host off-loading of certain tasks, specifically TCP and Internet Protocol (IP) checksums and TCP segmentation, rather than relying on remote direct memory access (RDMA) and source routing. Consequently, unlike Myrinet and QsNet, the 10GbE adapter provides a general-purpose solution to applications that does not require modifying application code to achieve high performance. (The "Putting the 10GbE Numbers in Perspective" sidebar compares 10GbE's performance to these other adapters.)
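These checksum and segmentation offloads are the kind of feature Linux exposes through its ethtool interface. The sketch below is ours rather than anything from the article: it shows one way a test harness might enable transmit/receive checksum offload and TCP segmentation offload through the SIOCETHTOOL ioctl before a benchmark run. The interface name eth1 is an assumption for illustration.

```c
/* Hypothetical sketch (not from the article): enabling checksum and TCP
 * segmentation offload on a 10GbE interface via the SIOCETHTOOL ioctl.
 * The interface name "eth1" is an assumed placeholder. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>
#include <linux/ethtool.h>

static int set_offload(int fd, struct ifreq *ifr, __u32 cmd, __u32 enable)
{
    struct ethtool_value ev;
    ev.cmd = cmd;       /* which offload to set */
    ev.data = enable;   /* 1 = on, 0 = off */
    ifr->ifr_data = (char *)&ev;
    return ioctl(fd, SIOCETHTOOL, ifr);
}

int main(void)
{
    /* Any socket provides a file descriptor for interface ioctls. */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth1", IFNAMSIZ - 1);        /* assumed 10GbE interface */

    if (set_offload(fd, &ifr, ETHTOOL_STXCSUM, 1) < 0)  /* TX checksum offload */
        perror("ETHTOOL_STXCSUM");
    if (set_offload(fd, &ifr, ETHTOOL_SRXCSUM, 1) < 0)  /* RX checksum offload */
        perror("ETHTOOL_SRXCSUM");
    if (set_offload(fd, &ifr, ETHTOOL_STSO, 1) < 0)     /* TCP segmentation offload */
        perror("ETHTOOL_STSO");

    close(fd);
    return 0;
}
```

The command-line ethtool utility drives the same controls, for example: ethtool -K eth1 tx on rx on tso on.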
Sidebar: Putting the 10GbE Numbers in Perspective

To appreciate the 10GbE adapter's performance, we consider it in the context of similar network interconnects. In particular, we compare 10GbE's performance with that of its predecessor, 1GbE, and the most common interconnects for high-performance computing (HPC): Myrinet, QsNet, and InfiniBand.

1GbE
With our extensive experience with 1GbE chipsets, we can achieve near line-speed performance with a 1,500-byte maximum transmission unit (MTU) in a LAN or system-area network environment with most payload sizes. Additional optimizations in a WAN environment should let us achieve similar performance while still using a 1,500-byte MTU.

Myrinet
We compare 10GbE's performance to Myricom's published performance numbers for its Myrinet adapters (available at http://www.myri.com/myrinet/performance/ip.html and http://www.myri.com/myrinet/performance/index.html). Using Myrinet's GM API, sustained unidirectional bandwidth is 1.984 Gbps and bidirectional bandwidth is 3.912 Gbps. Both numbers are within 3 percent of the 2-Gbps unidirectional hardware limit. The GM API provides latencies of 6 to 7 µs. Using the GM API, however, might require rewriting portions of legacy application code. Myrinet provides a TCP/IP emulation layer to avoid this extra work. This layer's performance, however, is notably less than that of the GM API: Bandwidth drops to 1.853 Gbps, and latencies skyrocket to more than 30 µs. When running Message-Passing Interface (MPI) atop the GM API, bandwidth reaches 1.8 Gbps with latencies between 8 to 9 µs.

InfiniBand and QsNet II
InfiniBand and Quadrics' QsNet II offer MPI throughput performance that is comparable to 10GbE's socket-level throughput. Both InfiniBand and Quadrics have demonstrated sustained throughput of 7.2 Gbps. Using the Elan4 libraries, QsNet II achieves sub-2-µs latencies [1] while InfiniBand's latency is around 6 µs [2]. Our experience with Quadrics' previous-generation QsNet (using the Elan3 API) produced unidirectional MPI bandwidth of 2.456 Gbps and a 4.9-µs latency. As with Myrinet's GM API, the Elan3 API may require application programmers to rewrite the application's network code, typically to move from a socket API to the Elan3 API. To address this issue, Quadrics also has a highly efficient TCP/IP implementation that produces 2.240 Gbps of bandwidth and less than 30-µs latency. Additional performance results are available elsewhere [3].

We have yet to see performance analyses of TCP/IP over either InfiniBand or QsNet II. Similarly, we have yet to evaluate MPI over 10GbE. This makes direct comparison between these interconnects and 10GbE relatively meaningless at present. This lack of comparison might be for the best. 10GbE is a fundamentally different architecture, embodying a design philosophy that is fundamentally different from other interconnects for high-performance computing. Ethernet standards, including 10GbE, are meant to transfer data in any network environment; they value versatility over performance. Interconnects designed for high-performance computing environments (for example, supercomputing clusters) are specific to system-area networks and emphasize performance over versatility.

Sidebar references
1. J. Beecroft et al., "Quadrics QsNet II: A Network for Supercomputing Applications," presented at Hot Chips 14 (HotC 2003); available from http://www.quadrics.com.
2. J. Liu et al., "Micro-Benchmark Level Performance Comparison of High-Speed Cluster Interconnects," Proc. Hot Interconnects 11 (HotI 2003), IEEE CS Press, 2003, pp. 60-65.
3. F. Petrini et al., "The Quadrics Network: High-Performance Clustering Technology," IEEE Micro, vol. 22, no. 1, Jan.-Feb. 2002, pp. 46-57.

Testing environment and tools

We evaluate the 10GbE adapter's performance in three different LAN or system-area network environments, as Figure 2 shows:

• direct single flow between two computers connected back-to-back via a crossover cable,
• indirect single flow between two computers through a Foundry FastIron 1500 switch, and
• multiple flows through the Foundry FastIron 1500 switch.

Host computers were either Dell PowerEdge 2650 (PE2650) or Dell PowerEdge 4600 (PE4600) servers.

Each PE2650 contains dual 2.2-GHz Intel Xeon CPUs running on a 400-MHz front-side bus (FSB), using a ServerWorks GC-LE chipset with 1 Gbyte of memory and a dedicated 133-MHz PCI-X bus for the 10GbE adapter. Theoretically, this architectural configuration provides 25.6-Gbps CPU bandwidth, up to 25.6-Gbps memory bandwidth, and 8.5-Gbps network bandwidth via the PCI-X bus.

Each PE4600 contains dual 2.4-GHz Intel Xeon CPUs running on a 400-MHz FSB, using a ServerWorks GC-HE chipset with 1 Gbyte of memory and a dedicated 100-MHz PCI-X bus for the 10GbE adapter. This configuration provides theoretical bandwidths of 25.6, 51.2, and 6.4 Gbps for the CPU, memory, and PCI-X bus.
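For reference, the quoted peak figures for the PCI-X buses and the FSB follow directly from clock rate and bus width. The arithmetic below is our own check, a sketch assuming the 64-bit data paths of PCI-X and the Xeon front-side bus; it is not a derivation reproduced from the article, and the memory-bandwidth figures (25.6 and 51.2 Gbps) depend on each chipset's memory configuration and are quoted as given.

```latex
\begin{align*}
\text{PE2650 PCI-X:} &\quad 133\,\text{MHz} \times 64\,\text{bits} \approx 8.5\,\text{Gbps} \\
\text{PE4600 PCI-X:} &\quad 100\,\text{MHz} \times 64\,\text{bits} = 6.4\,\text{Gbps} \\
\text{400-MHz Xeon FSB:} &\quad 400\,\text{MHz} \times 64\,\text{bits} = 25.6\,\text{Gbps}
\end{align*}
```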
Since publishing our initial 10GbE results, we have run additional tests on Xeon processors with faster clocks and FSBs, as well as on […]

[…] 10GbE streams from (or to) many hosts into a 10GbE stream to (or from) a single host. The total backplane bandwidth (480 Gbps) in the switch far exceeds our test needs because both 10GbE ports are limited to 8.5 Gbps.

From a software perspective, all the hosts run current installations of Debian Linux with customized kernel builds and tuned TCP/IP stacks. We tested numerous kernels from the Linux 2.4 and 2.5 distributions.
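To make the tuning concrete, the sketch below is ours, not code from the article: it shows the per-socket side of such a configuration, enlarging a TCP connection's send and receive buffers before connect() so the window can cover a 10GbE bandwidth-delay product. The 16-Mbyte buffer size, peer address, and port number are illustrative assumptions.

```c
/* Hypothetical sketch (not from the article): per-socket TCP tuning of the
 * sort a throughput benchmark applies on a kernel whose TCP/IP stack has
 * been tuned for 10GbE. Buffer size, peer address, and port are assumptions. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    int bufsize = 16 * 1024 * 1024;   /* assumed 16-Mbyte socket buffers */
    int nodelay = 1;

    /* Large buffers let TCP keep the 10GbE pipe full; the kernel must also
     * permit them (net.core.rmem_max / net.core.wmem_max), and they should be
     * set before connect() so the window scale is negotiated accordingly. */
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize));
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize));

    /* Disable Nagle coalescing so small-message latency tests are not skewed. */
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof(nodelay));

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5001);                        /* assumed benchmark port */
    inet_pton(AF_INET, "192.168.1.2", &peer.sin_addr);  /* assumed peer address */

    if (connect(sock, (struct sockaddr *)&peer, sizeof(peer)) < 0)
        perror("connect");

    /* ...benchmark send/recv loop would go here... */

    close(sock);
    return 0;
}
```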