Understanding RapidIO, PCIe and Ethernet


INTERCONNECTS

By Barry Wood, Tundra

While there are many ways to connect components in embedded systems, the most prominent are the high speed serial standards of Ethernet, PCI Express, and RapidIO. All of these standards leverage similar Serialiser/Deserialiser (SerDes) technology to deliver throughput and latency performance greater than what is possible with wide parallel bus technology. For example, RapidIO and PCI Express leveraged the XAUI SerDes technology developed for Ethernet. The trend towards leveraging a common SerDes technology will continue with future versions of these specifications. The implication is that raw bandwidth is not a significant differentiator for these protocols. Instead, the usefulness of each protocol is determined by how the bandwidth is used.

Protocol summaries

Most designers are familiar with basic Ethernet protocol characteristics. Ethernet is a 'best effort' means of delivering packets. The software protocols built on top of the Ethernet physical layer, such as TCP/IP, are necessary to provide reliable delivery of information, as Ethernet-based systems generally perform flow control at the network layer, not the physical layer. Typically, the bandwidth of Ethernet-based systems is over-provisioned by between 20 and 70 per cent. Ethernet is best suited for high latency inter-chassis applications, or on-board/inter-board applications where bandwidth requirements are low.

PCI Express (PCIe), in contrast, is optimised for reliable delivery of packets within an on-board interconnect where latencies are typically in the microsecond range. The PCIe protocol exchanges Transaction Layer Packets (TLPs), such as reads and writes, and smaller quantities of link-specific information called Data Link Layer Packets (DLLPs). DLLPs are used for link management functions, including physical layer flow control. PCIe was designed to be backwards compatible with legacy PCI and PCI-X devices, which assumed that the processor(s) sat at the top of a hierarchy of buses. This had the advantage of leveraging PCI-related software and hardware intellectual property. As discussed later in this article, the PCI bus legacy places significant constraints on the switched PCIe protocol.

RapidIO technology has been optimised for embedded systems, particularly those which require multiple processing elements to cooperate. Like PCIe, the RapidIO protocol exchanges packets and smaller quantities of link-specific information called control symbols. RapidIO has characteristics of both PCIe and Ethernet. For example, RapidIO provides both reliable and unreliable packet delivery mechanisms. RapidIO also has many unique capabilities which make it the optimal interconnect for on-board, inter-board, and short distance (<100 m) inter-chassis applications.

Physical layer

At the physical/link layer, the protocols have different capabilities when it comes to flow control and error recovery. Ethernet flow control is primarily implemented in software at the network layer, as this is the most effective approach for large networks. Ethernet's only physical layer flow control mechanism is PAUSE, which halts transmission for a specified period of time. The limited physical layer flow control means that Ethernet networks discard packets to deal with congestion.

In contrast, PCIe and RapidIO physical-layer flow control mechanisms ensure reliable delivery of packets. Each packet is retained by the transmitter until it is acknowledged. If a transmission error is detected, a link maintenance protocol ensures that corrupted packets are retransmitted.

PCIe ensures delivery using DLLPs, while RapidIO uses control symbols. Unlike DLLPs, RapidIO control symbols can be embedded within packets. This allows RapidIO flow control information, such as buffer occupancy levels, to be exchanged with low latency, allowing more packets to be sent sooner. Figure 1 illustrates this concept. In the leftmost panel, Device A cannot send any packets to Device B because the buffers in Device B are full. Device B is sending packets to Device A continually. In the middle panel, one buffer in Device B becomes free. At this point, Device B must inform Device A that it can send a packet. In the PCIe panel on the bottom right, the DLLP cannot be transmitted until transmission of the current packet is complete. In the RapidIO panel on the top right, a control symbol is embedded in a packet that is being transmitted. This allows the RapidIO protocol to deliver packets with lower latency and higher throughput than the other protocols. The ability to embed control symbols within packets also allows the rest of the RapidIO flow control story to be much richer than that of PCIe or Ethernet, as discussed later in this article.

Figure 1: RapidIO embedded control symbols and PCIe DLLPs.

Beyond more efficient flow control, the ability to embed control symbols within packets gives RapidIO an ability that currently neither PCIe nor Ethernet can offer. Control symbols can be used to distribute events throughout a RapidIO system with low latency and low jitter (Figure 2). This enables applications such as distribution of a common real time clock signal to multiple end-points, or a frame signal for antenna systems. It can also be used for signalling other system events, and for debugging within a multi-processor system. As shown in Figure 2, PCIe DLLPs introduce a significant amount of latency and jitter every time the DLLP is transferred through a switch. In contrast, the RapidIO protocol allows a signal to be distributed throughout a RapidIO fabric with less than 10 Unit Intervals of jitter and 50 nanoseconds of latency per switch, regardless of packet traffic.

Figure 2: RapidIO Multicast Event Control Symbol and PCIe DLLP.

PCIe and Ethernet may choose to extend their respective specifications in future to allow events to be distributed with low latency. Introduction of a control-symbol-like concept would be a large step for Ethernet. Several initiatives are underway within the Ethernet ecosystem to improve Ethernet's abilities within storage applications that may need a control-symbol-like concept. Ethernet is also being enhanced to incorporate simple XON/XOFF flow control.

PCIe currently does not allow DLLPs to be embedded within TLPs, as this concept is incompatible with the legacy of PCI/PCI-X bus operation. DLLPs embedded within TLPs create periods where no data is available to be placed on the legacy bus. PCIe end-points could operate in store-and-forward mode to ensure that packets are completely received before being forwarded to the bus, at the cost of a drastic increase in latency and lowered throughput. Given the PCIe focus on on-board interconnect for a uniprocessor system, and the continued need to maintain compatibility with legacy bus standards, it is unlikely that the PCIe community will be motivated to allow DLLPs to be embedded within TLPs.

Bandwidth options

Beyond flow control and link maintenance, the most obvious difference between Ethernet, PCIe and RapidIO at the physical/link layer is the bandwidth options supported. Ethernet has a long history of evolving bandwidth by ten times with each step. Ethernet currently operates at 10Mbps, 100Mbps, 1Gbps, and 10Gbps. Some proprietary parts also support a 2Gbps (2.5 Gbaud) option. Next generation Ethernet will be capable of operating at 40 and/or 100Gbps.

PCIe and RapidIO take a different approach, as on-board, inter-board and inter-chassis interconnects require power to be matched with the data flows. As a result, PCIe and RapidIO support more lane rate and lane width combinations than Ethernet. PCIe 2.0 allows lanes to operate at either 2 or 4Gbps (2.5 and 5 Gbaud), while RapidIO supports lane rates of 1, 2, 2.5, 4 and 5Gbps (1.25, 2.5, 3.125, 5, and 6.25 Gbaud). Both PCIe and RapidIO support lane width combinations from a single lane up to 16 lanes. PCIe also supports a 32 lane port in its specification. For a given lane width, a RapidIO port can supply both more and less bandwidth than PCIe, allowing system designers to tune the amount of power used to the data flows in a system.

Transport layer

The RapidIO and Ethernet specifications are topology agnostic. Any set of end-points can be connected using any fabric topology, including rings, trees, and meshes. [...] it is not certain if it will be adopted within the PCIe ecosystem. PCIe components also support NTB functionality, whereby one PCIe hierarchy is granted access to a defined memory range in another PCIe hierarchy. System reliability and availability of this approach are discussed further on in this article. Many Ethernet-based systems are also functionally limited to tree structures.

Figure 3: Read, write, and message semantics. The read semantic retrieves data 01 02 03 04 from address 0x1234_0000. The write semantic writes data FF FE FD FC to address 0x1234_0000. The message semantic determines the location of Fault_Handler without referencing the memory map.

[...] semantics. To send a message, the originator only needs to know the address of the recipient. Addressing schemes are generally hierarchical, so that no node must know all addresses. Addresses may change as a system evolves, enabling software components to be loosely coupled to each other. These attributes are necessary for data plane applications.

[...] messaging semantics is the encapsulation of other protocols. RapidIO has standards for encapsulating a variety of protocols. The ability to encapsulate gives system designers many advantages, including future-proofing RapidIO backplanes. Any future, legacy or proprietary protocol can be encapsulated and transported using standard RapidIO components. For example, the existing [...]

Figure 4: RapidIO's virtual output queue backpressure mechanism.

eetindia.com | EE Times-India
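The Figure 1 behaviour — a PCIe DLLP must wait for the TLP in flight to finish, while a RapidIO control symbol can be embedded mid-packet — can be captured in a toy latency model. The byte time and packet size used below are illustrative assumptions, not values from either specification:

```python
def notify_delay_ns(bytes_left_in_packet: int, ns_per_byte: float,
                    can_embed: bool) -> float:
    """Delay before a buffer-free notification reaches the wire."""
    if can_embed:
        # RapidIO: the control symbol is inserted into the packet
        # currently being transmitted, so it goes out immediately.
        return 0.0
    # PCIe: the DLLP must wait until the current TLP finishes.
    return bytes_left_in_packet * ns_per_byte
```

With 256 bytes still to transmit at an assumed 0.8 ns per byte, the DLLP waits roughly 205 ns before Device A learns it may send, while the embedded control symbol incurs no such wait.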
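The link-level reliability rule shared by PCIe and RapidIO — each packet is retained by the transmitter until acknowledged, and retransmitted after an error — can be sketched as a retry buffer. The class and method names here are invented for illustration; they model the rule, not any vendor's implementation:

```python
class ReliableLink:
    """Toy model of transmitter-side link-layer reliability."""

    def __init__(self):
        self.retry_buffer = []   # packets sent but not yet acknowledged
        self.delivered = []      # packets accepted by the receiver

    def send(self, packet, corrupted=False):
        # Every packet is retained until the receiver acknowledges it.
        self.retry_buffer.append(packet)
        if not corrupted:
            self._acknowledge(packet)

    def retransmit_pending(self):
        # Link maintenance: resend everything not yet acknowledged.
        for packet in list(self.retry_buffer):
            self._acknowledge(packet)

    def _acknowledge(self, packet):
        self.delivered.append(packet)
        self.retry_buffer.remove(packet)
```

A corrupted transfer leaves the packet in the retry buffer until the link maintenance pass delivers it, so no packet is ever silently lost — the key contrast with Ethernet's discard-on-congestion behaviour.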
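The lane-rate and lane-width combinations above multiply into port bandwidth. The listed rates all reflect 8b/10b encoding (8 data bits carried in every 10 line bits), which is why 2.5 Gbaud on the wire corresponds to 2 Gbps of data. A quick sketch:

```python
def port_gbps(lane_gbaud: float, lanes: int) -> float:
    """Usable data bandwidth of a port, assuming 8b/10b encoding."""
    return lane_gbaud * 0.8 * lanes

# A x4 PCIe 2.0 port at 5 Gbaud carries 16 Gbps of data;
# a x4 RapidIO port at 3.125 Gbaud carries 10 Gbps.
```

Walking RapidIO's five lane rates across x1, x2, x4, x8 and x16 widths shows the fine-grained power/bandwidth trade-off the article describes: the designer can land both above and below any given PCIe configuration.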
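The read/write versus message distinction in Figure 3 can be mimicked in a few lines: reads and writes go through a shared memory map, while a message is delivered to a mailbox looked up by the recipient's identity, without the sender ever referencing that map. The mailbox table and the `Fault_Handler` registration below are illustrative assumptions:

```python
# A shared memory map, addressed as in the Figure 3 example.
memory = {0x1234_0000: bytes([0x01, 0x02, 0x03, 0x04])}

# Mailboxes are keyed by recipient identity, not memory address.
mailboxes = {"Fault_Handler": []}

def read(address: int) -> bytes:
    # Read semantic: the initiator must know the memory map.
    return memory[address]

def write(address: int, data: bytes) -> None:
    # Write semantic: likewise routed through the memory map.
    memory[address] = data

def send_message(recipient: str, payload: bytes) -> None:
    # Message semantic: only the recipient's identity is needed.
    mailboxes[recipient].append(payload)
```

Because the sender of a message never encodes the recipient's memory layout, the two components stay loosely coupled — the property the article identifies as necessary for data plane applications.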