Analysis and Optimisation of Communication Links for Signal Processing Applications


Analysis and Optimisation of Communication Links for Signal Processing Applications

ANDREAS ÖDLING

Examensarbete inom elektronik- och datorsystem, avancerad nivå, 30 hp (Degree Project in Electronic and Computer Systems, Second Level, 30 credits)
School of Information and Communication Technology (ICT), Royal Institute of Technology (KTH)
Supervisor: Johnny Öberg. Examiner: Ingo Sander.
Stockholm, November 12, 2012. TRITA-ICT-EX-2012:287

Abstract

Many communication links and standards are employed to build systems today. Some of these methods are standardised, but far from all of them; the trick is to select the communication method that best suits the needs of the system. There is also an ongoing trend towards lower cost and shorter time to market, which leads to more Commercial Off-The-Shelf (COTS) systems being built from commodity components. As one part of this work, Gigabit Ethernet is evaluated as a COTS solution for building large, high-end systems. The computers used run Windows, and both TCP and UDP are used over Ethernet. This work also attempts to evaluate one of the non-standard protocols: the link port protocol of the TigerSHARC 20X series, a narrow-bus, double-data-rate protocol able to provide multi-gigabit-per-second performance.

The studies have yielded several interesting results. For example, using a standard desktop computer and network card, the theoretical throughput of TCP over Gigabit Ethernet can almost be met, reaching well over 900 Mbps. UDP performance, on the other hand, raises a series of new questions about how to achieve good performance in a Windows environment, since UDP is consistently outperformed by the TCP connections.

For the link port assessment, a custom IP block that supports the protocol at full speed is built for a Xilinx Virtex-6 FPGA. The IP block is verified through simulation against a model of the link port protocol, and the transmitter of the IP block is shown to send successfully to the receiver IP block. The IP block is also evaluated against some competing multi-gigabit protocols; it is a rather small block, capable of handling all transactions on the bus as long as data is provided by its host.

Referat (translated from Swedish)

Many different kinds of communication links exist today, both standardised and not. In addition, demands for shorter time to market have in many cases led to more and more systems being built from ready-made components that are connected into complete systems. As part of this trend, well-proven techniques that are known to work are often used. In this work, the performance of Gigabit Ethernet is evaluated on ordinary personal computers running Windows, using the TCP and UDP protocols. The computers are equipped with standard low-cost network cards, and the study investigates whether these cards and computers can be used to build high-performance systems. In addition, a non-standardised protocol, the link port protocol of the TigerSHARC 20X series, which supports several Gbps, is evaluated for performance.

The study of TCP and UDP led to very interesting results. Among other things, it showed that TCP communication between two personal computers can come within a few Mbps of the theoretical maximum, and rates well above 900 Mbps were measured for TCP. UDP, in turn, raised more questions than it answered, and it consistently performed worse than the TCP tests. This suggests that, when writing programs for ordinary personal computers, nothing is gained by using UDP; rather the opposite. For the link port study, an IP block was created that can send and receive data at the highest rate given in the protocol description, four gigabits per second. The block was verified through simulation and by letting the transmitter send data that the receiver successfully received. Finally, the link port was compared with other protocols of similar characteristics, and the comparison presents the created IP block as a good alternative to the others, largely because of its simplicity.

Contents

Abstract. Referat. Contents. List of Figures. List of Tables. Listings. Definitions.

Part I: Prelude
1 Introduction: 1.1 Purpose. 1.2 Goals. 1.3 Motivations for This Work. 1.4 Limitations for This Work. 1.5 Layout of the Report.
2 Background and Related Work: 2.1 History of Radar Systems (2.1.1 Radar Construction Basics; 2.1.2 A Probable Future). 2.2 An Example System (2.2.1 Conceptual Radar System; 2.2.2 Data Transfers in the Conceptual Radar System). 2.3 A Background to Physical Signalling. 2.4 Multi-Gigabit Transceivers. 2.5 The Link Port Protocol (2.5.1 Some Link Port Characteristics; 2.5.2 Previous Work on Link Ports). 2.6 Communication Protocols. 2.7 Previous Work on Protocol Comparison (2.7.1 TCP and UDP Performance Over Ethernet; 2.7.2 RapidIO Analysis; 2.7.3 PCI Express Evaluation; 2.7.4 USB Experiments; 2.7.5 Infiniband Studies; 2.7.6 Intel Thunderbolt). 2.8 Data Acquisition Networks (2.8.1 Common Features for DAQ Networks).

Part II: Contributions
3 Methods: 3.1 Link Port. 3.2 Gigabit Ethernet (3.2.1 Setup for the Experiment). 3.3 Other High-Speed Protocols.
4 Ethernet on Windows Computers: 4.1 Hardware and Software Setup (4.1.1 Offloading Checksum Calculations; 4.1.2 Increasing the Transfer Buffers; 4.1.3 Increasing the Receiver Buffers; 4.1.4 Increasing the Ethernet Frame Size; 4.1.5 Controlling the Interrupt Rate). 4.2 Evaluating the Performance (4.2.1 The Measurement Environment). 4.3 TCP Specifics (4.3.1 TCP and IP Checksum Offloading; 4.3.2 Effects of Interrupt Moderation; 4.3.3 Changing the Ethernet Frame Size; 4.3.4 Variable Buffer Size). 4.4 TCP Evaluation and Summary. 4.5 UDP Specifics (4.5.1 Interrupt Moderation Effects; 4.5.2 Buffer Size Exploration; 4.5.3 Does Frame Size Affect UDP Performance?). 4.6 Analysis of UDP Performance. 4.7 Summary of Ethernet Performance. 4.8 Which Settings to Choose.
5 Creating a Link Port IP Block: 5.1 Link Port Implementation Idea (5.1.1 Key Coding Considerations). 5.2 Link Port Transmitter (5.2.1 Transmitter Clocking; 5.2.2 Transmitter State Machine; 5.2.3 Transmitter LVDS Outputs; 5.2.4 The Data Path and Memory Design; 5.2.5 Controlling the Transmitter; 5.2.6 Checksum Calculator; 5.2.7 The Implementation of Block Complete). 5.3 Link Port Receiver (5.3.1 Receiver Finite State Machine; 5.3.2 Controlling the Receiver; 5.3.3 The Deserialisation of Incoming Data; 5.3.4 Receiver LVDS Inputs; 5.3.5 Getting the Receiver Through Timing). 5.4 Testing and Verification. 5.5 IP Block Restrictions. 5.6 IP Block Metrics. 5.7 Link Port Implementation Time. 5.8 This Link Port Implementation's Contributions. 5.9 Comments and Analysis of the Link Port IP Block.
6 Comparison of Communication Techniques: 6.1 Hard Facts. 6.2 Making a Choice.
7 Goal Follow-Up and Conclusions. 8 Future Work. Bibliography.

Part III: Appendices
A Abbreviations. B A Selection of Used Xilinx Primitives. C Selection of Needed Constraints. D The OSI Model (D.1 Physical Layer; D.2 Data Link Layer; D.3 Network Layer; D.4 Transport Layer; D.5 Session Layer; D.6 Presentation Layer; D.7 Application Layer). E PCI Express (E.1 Associated Overhead). F Gigabit Ethernet (F.1 Real-Time Ethernet; F.2 Efficiency of Gigabit Ethernet). G TCP/IP Protocol Suite (G.1 The Internet Protocol Version 4; G.1.1 Efficiency of the Internet Protocol Datagrams; G.2 The User Datagram Protocol; G.3 The Transmission Control Protocol; G.3.1 Socket Buffer Size; G.3.2 Different TCP Implementations; G.3.3 TCP Offload Engine; G.3.4 RDMA-Enhanced TCP Decoding; G.3.5 TCP Efficiency Over Ethernet). H Link Port for the TS20X Series (H.1 Performance of Link Ports; H.2 Uses of Link Ports). I RapidIO (I.1 The Logical Layer; I.2 Transaction Layer; I.3 Physical Layers; I.3.1 Serial RapidIO; I.3.2 Parallel RapidIO). J USB. K Infiniband. L 8B/10B Encoding. M Case Study: The ATLAS TDAQ System (M.1 The Communication Protocols in ATLAS; M.2 The Physical Interconnects and Software of ATLAS).
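As context for the abstract's claim that commodity hardware can reach well over 900 Mbps with TCP, here is a minimal sketch of a memory-to-memory throughput test in the spirit of the one the thesis describes. It is not the author's benchmark code; the address, port, and buffer sizes are hypothetical.

```python
import socket
import sys
import time

HOST, PORT = "192.168.1.2", 5001   # hypothetical receiver address/port
TOTAL_BYTES = 1 << 30              # send 1 GiB in total
CHUNK = 64 * 1024                  # 64 KiB per send call

def run_receiver():
    # Drain everything the sender transmits, then exit.
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def run_sender():
    payload = b"\x00" * CHUNK
    with socket.create_connection((HOST, PORT)) as sock:
        # Larger socket buffers help sustain line rate; the thesis
        # studies exactly these kinds of buffer settings (Section 4.1).
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)
        sent = 0
        start = time.perf_counter()
        while sent < TOTAL_BYTES:
            sock.sendall(payload)
            sent += len(payload)
        elapsed = time.perf_counter() - start
        print(f"{sent * 8 / elapsed / 1e6:.1f} Mbps application-level")

if __name__ == "__main__":
    run_receiver() if "recv" in sys.argv else run_sender()
```

Run the receiver on one machine and the sender on the other; the printed figure is application-level goodput, comparable in kind to the >900 Mbps the abstract reports.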
Recommended publications
  • End-To-End Performance of 10-Gigabit Ethernet on Commodity Systems
End-to-End Performance of 10-Gigabit Ethernet on Commodity Systems. Intel's network interface card for 10-Gigabit Ethernet (10GbE) allows individual computer systems to connect directly to 10GbE Ethernet infrastructures. Results from various evaluations suggest that 10GbE could serve in networks from LANs to WANs. Justin (Gus) Hurwitz and Wu-chun Feng, Los Alamos National Laboratory.

From its humble beginnings as shared Ethernet to its current success as switched Ethernet in local-area networks (LANs) and system-area networks, and its anticipated success in metropolitan and wide-area networks (MANs and WANs), Ethernet continues to evolve to meet the increasing demands of packet-switched networks. It does so at low implementation cost while maintaining high reliability and relatively simple (plug-and-play) installation, administration, and maintenance. Intel's new 10GbE network interface card (or adapter) delivers such performance to bandwidth-hungry host applications. We implemented optimizations to Linux, the Transmission Control Protocol (TCP), and the 10GbE adapter configuration, and performed several evaluations. Results showed extraordinarily high throughput with low latency, indicating that 10GbE is a viable interconnect for all network environments.

Architecture of a 10GbE adapter: the world's first host-based 10GbE adapter, officially known as the Intel PRO/10GbE LR server adapter, introduces the benefits of 10GbE connectivity into LAN and system-area network environments, thereby accommodating the growing number of large-scale cluster systems and bandwidth-intensive applications. Although the recently ratified 10-Gigabit Ethernet standard differs from earlier Ethernet standards, primarily in that 10GbE operates only over fiber and only in full-duplex mode, the differences are largely superficial. More importantly, 10GbE does not make obsolete current investments in network infrastructure.
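The thesis's companion question, why UDP underperforms TCP on commodity Windows machines, is typically probed with a loss-counting test like the hedged sketch below (hypothetical port, address, and packet counts; plain Python sockets, not the thesis's tool). Sequence numbers let the receiver estimate how many datagrams the kernel or NIC dropped.

```python
import socket
import struct
import sys

PORT = 5002            # hypothetical
PAYLOAD = 1400         # fits in a standard 1500-byte Ethernet MTU
COUNT = 100_000

def send(host="192.168.1.2"):                        # hypothetical address
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    pad = b"\x00" * (PAYLOAD - 4)
    for seq in range(COUNT):
        # 4-byte big-endian sequence number lets the receiver count drops
        sock.sendto(struct.pack("!I", seq) + pad, (host, PORT))

def recv():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # A large receive buffer is the usual first defence against drops
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 << 20)
    sock.bind(("", PORT))
    sock.settimeout(5.0)
    seen, last = 0, -1
    try:
        while True:
            data, _ = sock.recvfrom(PAYLOAD)
            seen += 1
            last = struct.unpack("!I", data[:4])[0]
    except socket.timeout:
        pass
    # Approximate: assumes the highest sequence number seen arrived last
    print(f"received {seen} datagrams; highest sequence seen: {last}")

if __name__ == "__main__":
    recv() if "recv" in sys.argv else send()
```

The gap between datagrams sent and datagrams received, under different buffer and interrupt-moderation settings, is the shape of experiment the thesis's UDP chapters describe.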
  • Design of a High Speed XAUI Based on Dynamic Reconfigurable Transceiver IP Core
International Journal of Soft Computing And Software Engineering (JSCSE), e-ISSN: 2251-7545, Vol. 2, No. 9, 2012. DOI: 10.7321/jscse.v2.n9.4. Published online: Sep 25, 2012. Haipeng Zhang and Lingjun Kong (School of Electronics & Information, Hangzhou Dianzi University, Hangzhou, China, 310018), Xiuju Huang (UTSTARCOM Co. Ltd., Hangzhou, China, 310052), Mengmeng Cao (North China Electric Power University, Department of Electronics and Communication Engineering, Baoding, China, 071003). Email: [email protected], [email protected], [email protected]

Abstract. By using a dynamically reconfigurable transceiver in a high-speed interface design, designers can conveniently solve critical technology problems such as ensuring signal integrity, with a lower bit error rate. In this paper, we design a high-speed XAUI (10-Gbps Ethernet Attachment Unit Interface) to transparently extend the physical reach of the XGMII. The following points are the focus: (1) IP (Intellectual Property) core usage. Altera offers two transceiver IP cores in the Quartus II MegaWizard Plug-In Manager for XAUI designs that feature dynamic reconfiguration, an ALTGX_RECONFIG instance and an ALTGX instance; various configurations can be obtained by changing device settings without powering off. These two blocks implement the functions of the PCS (Physical Coding Sublayer) and PMA (Physical Medium Attachment) with high efficiency and reliability. (2) 1+1 protection. In our design, two ALTGX IP cores, named XAUI0 and XAUI1, work in parallel; the former serves as the main channel and the latter as the redundant channel.
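As a generic illustration of the 1+1 protection idea above (both channels carry identical traffic, and the receiver selects one), here is a toy selector sketch. The Channel fields, threshold, and error rates are assumptions for illustration; the paper's actual ALTGX-based switchover logic lives in hardware.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    error_rate: float   # measured error rate on this channel

def select_channel(main: Channel, standby: Channel,
                   threshold: float = 1e-3) -> Channel:
    """1+1 protection: both channels carry identical traffic, so the
    receiver can switch to the standby channel the moment the main
    channel degrades past the threshold."""
    return main if main.error_rate <= threshold else standby

xaui0 = Channel("XAUI0", error_rate=5e-2)   # degraded main channel
xaui1 = Channel("XAUI1", error_rate=1e-9)   # healthy redundant channel
print(select_channel(xaui0, xaui1).name)    # -> XAUI1
```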
  • Parallel Computing at DESY (Peter Wegner)
Parallel Computing at DESY, Peter Wegner (slides from CAPP2005). Outline: types of parallel computing; the APE massively parallel computer; PC clusters at DESY; symbolic computing on the Tablet PC.

Types of parallel computing:
• Massively parallel computing: a tightly coupled, large number of special-purpose CPUs and special-purpose interconnects in n dimensions (n = 2, 3, 4, 5, 6). Software model: special-purpose tools and compilers.
• Event parallelism: trivially parallel processing, characterized by communication-independent programs running on large PC farms. Software model: scheduling via a batch system only.
• "Commodity" parallel computing on clusters: one parallel program running on a distributed PC cluster whose nodes are connected via special high-speed, low-latency interconnects (GBit Ethernet, Myrinet, Infiniband). Software model: MPI (Message Passing Interface); see the MPI sketch after this entry.
• SMP (Symmetric MultiProcessing) parallelism: many CPUs sharing a global memory, with one program running on different CPUs in parallel. Software model: OpenMP and MPI.

Parallel computing at DESY's Zeuthen Computer Center spans the massively parallel APE computer, PC farms, and PC clusters. APE (Array Processor Experiment) has been at DESY since 1994 and is used exclusively for lattice simulations of Quantum Chromodynamics in the framework of the John von Neumann Institute for Computing (NIC, FZ Jülich, DESY); see http://www-zeuthen.desy.de/ape. PC clusters with fast interconnects (Myrinet, Infiniband) have been in use since 2001; applications: LQCD, Parform. Further slides cover APEmille, apeNEXT, and the motivation for PC clusters.
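The "commodity cluster" model above pairs MPI with a fast interconnect. A minimal mpi4py example (assuming mpi4py and an MPI runtime are installed; the file name is hypothetical) shows the programming model: one program runs on every node and communicates only by explicit messages.

```python
# Hypothetical file name ring.py; run with: mpirun -n 4 python ring.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Pass a token around a ring: each rank receives from its left
# neighbour and forwards an incremented token to its right neighbour.
if rank == 0:
    comm.send(0, dest=(rank + 1) % size)
    token = comm.recv(source=size - 1)
    print(f"token made a full loop over {size} ranks: {token}")
else:
    token = comm.recv(source=rank - 1)
    comm.send(token + 1, dest=(rank + 1) % size)
```

The same program scales from a single box to a Myrinet- or Infiniband-connected cluster; only the interconnect underneath MPI changes.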
  • XAUI IP Core User's Guide (Lattice Semiconductor, ipug68_01.3)
ispLever™ CORE: XAUI IP Core User's Guide, November 2009, ipug68_01.3, Lattice Semiconductor.

Introduction. The 10Gb Ethernet Attachment Unit Interface (XAUI) IP Core User's Guide for the LatticeECP2M™ and LatticeECP3™ FPGAs provides a solution for bridging between XAUI and 10-Gigabit Media Independent Interface (XGMII) devices. The core implements 10Gb Ethernet Extended Sublayer (XGXS) capabilities in soft logic that, together with the PCS and SERDES functions implemented in the FPGA, provides a complete XAUI-to-XGMII solution. The XAUI IP core package comes with the following documentation and files: a protected netlist/database, a behavioral RTL simulation model, and source files for instantiating and evaluating the core. The core supports Lattice's IP hardware evaluation capability, which makes it possible to create versions of the IP core that operate in hardware for a limited period of time (approximately four hours) without requiring the purchase of an IP license; it may also be used to evaluate the core in hardware in user-defined designs. Details on using the hardware evaluation capability are described in the Hardware Evaluation section of the document.

Features:
• XAUI-compliant functionality supported by the embedded SERDES PCS functionality in the LatticeECP2M and LatticeECP3, including four channels of 3.125-Gbps serializer/deserializer with 8b10b encoding/decoding.
• Complete 10Gb Ethernet Extended Sublayer (XGXS) solution based on the LatticeECP2M and LatticeECP3 FPGAs.
• Soft IP targeted to the FPGA implements XGXS functionality conforming to IEEE 802.3ae-2002, including the 10 GbE Media Independent Interface (XGMII) and optional slip buffers for clock-domain transfer to/from the XGMII interface.
  • IEEE Std 802.3™-2012, IEEE Standard for Ethernet (Revision of IEEE Std 802.3-2008)
IEEE Standard for Ethernet, IEEE Std 802.3™-2012 (Revision of IEEE Std 802.3-2008). Sponsored by the LAN/MAN Standards Committee of the IEEE Computer Society. Approved 30 August 2012 by the IEEE-SA Standards Board; published 28 December 2012. The Institute of Electrical and Electronics Engineers, Inc., 3 Park Avenue, New York, NY 10016-5997, USA.

Abstract: Ethernet local area network operation is specified for selected speeds of operation from 1 Mb/s to 100 Gb/s using a common media access control (MAC) specification and management information base (MIB). The Carrier Sense Multiple Access with Collision Detection (CSMA/CD) MAC protocol specifies shared medium (half duplex) operation, as well as full duplex operation. Speed-specific Media Independent Interfaces (MIIs) allow use of selected Physical Layer devices (PHY) for operation over coaxial, twisted-pair, or fiber optic cables. System considerations for multisegment shared access networks describe the use of Repeaters, which are defined for operational speeds up to 1000 Mb/s. Local Area Network (LAN) operation is supported at all speeds. Other specified capabilities include various PHY types for access networks, PHYs suitable for metropolitan area network applications, and the provision of power over selected twisted-pair PHY types.

Keywords: 10BASE; 100BASE; 1000BASE; 10GBASE; 40GBASE; 100GBASE; 10 Gigabit Ethernet; 40 Gigabit Ethernet; 100 Gigabit Ethernet; attachment unit interface; AUI; Auto-Negotiation; Backplane Ethernet; data processing; DTE Power via the MDI; EPON; Ethernet; Ethernet in the First Mile; Ethernet passive optical network; Fast Ethernet; Gigabit Ethernet; GMII; information exchange; IEEE 802.3; local area network; management; medium dependent interface; media independent interface; MDI; MIB; MII; PHY; physical coding sublayer; Physical Layer; physical medium attachment; PMA; Power over Ethernet; repeater; type field; VLAN TAG; XGMII.
  • Reference 10GbE Implementation
Reference 10GbE implementation: Device A includes XGMII + XAUI; Device B includes XGMII only. [Figure: Device A stack is MAC, RS, XGMII (36-bit TXD/TXC and RXD/RXC), XGXS, XAUI, XGXS, PCS/PMA/PMD, MDI; Device B stack is MAC, RS, XGMII, PCS/PMA/PMD, MDI; transceiver modules and the initial 10GbE form factor, a daughter card, sit at the medium.]

• XAUI/XGXS and XGMII are both optional physical instantiations of the PCS Service Interface.
• An Ethernet device implementation may contain either, neither, both, or multiple instances of XAUI/XGXS and XGMII.
• For purposes of data and code transport, Device A represents the case of either XAUI/XGXS + XGMII or XAUI alone, since the XGMII does not perform code translation.
• For purposes of data and code transport, Device B represents the case of XGMII alone, neither XAUI/XGXS nor XGMII, or XAUI/XGXS with XGMII on both sides, since the XGMII does not perform code translation.
• It is assumed that the Reconciliation Sublayer is required to transport the following data and control information: Start of Packet /S/, Data /d/, End of Packet /T/, Idle /I/, Error /E/, Remote Fault /RF/ (used in Fast/Gigabit Ethernet), Break Link /BL/ (used in Fast/Gigabit Ethernet), and Other /O/ (reserved or for other standards, OAM&P, etc.).
• With a serial PHY and 64B/66B PCS, the XGXS never forwards /A/K/R/. [Figure: Device A to Device B data and control transport; /S/d/T/I/E/ and /RF/BL/O/ sequences appear at each XGMII, /A/K/R/ only on the XAUI lanes.]
• The XGXS adjacent to Device A's XGMII translates Idle /I/ to XAUI Idle /A/K/R/; a toy version of this translation follows this entry.
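The last bullet above describes the XGMII-to-XAUI idle translation. The following toy encoder is an assumption-laden sketch, not the standard's exact coding (the real /A/ spacing is randomized per IEEE 802.3ae); it only makes the symbol substitution concrete.

```python
import itertools

# On XAUI, Idle is replaced by a rotating /A/K/R/ sequence used for
# lane alignment and synchronisation; other symbols pass through.
akr = itertools.cycle(["/A/", "/K/", "/R/"])

def xgxs_encode(stream):
    """Toy XGXS: map each XGMII Idle /I/ to the next /A/K/R/ code."""
    return [next(akr) if sym == "/I/" else sym for sym in stream]

frame = ["/I/", "/I/", "/S/", "/d/", "/d/", "/T/", "/I/"]
print(xgxs_encode(frame))
# ['/A/', '/K/', '/S/', '/d/', '/d/', '/T/', '/R/']
```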
  • Data Center Architecture and Topology
Central Training Institute, Jabalpur: Data Center Architecture and Topology.

Data Center Architecture Overview. The data center is home to the computational power, storage, and applications necessary to support an enterprise business. The data center infrastructure is central to the IT architecture, from which all content is sourced or passes through. Proper planning of the data center infrastructure design is critical; performance, resiliency, and scalability need to be carefully considered. Another important aspect of data center design is flexibility in quickly deploying and supporting new services. Designing a flexible architecture that can support new applications in a short time frame can result in a significant competitive advantage. Such a design requires solid initial planning and thoughtful consideration in the areas of port density, access layer uplink bandwidth, true server capacity, and oversubscription, to name just a few.

The data center network design is based on a proven layered approach, which has been tested and improved over the past several years in some of the largest data center implementations in the world. The layered approach is the basic foundation of the data center design and seeks to improve scalability, performance, flexibility, resiliency, and maintenance. [Figure 1-1: Basic layered design, showing campus core, core, aggregation, and access layers, with 10 Gigabit Ethernet, Gigabit Ethernet or EtherChannel, and backup links.] The layers of the data center design are the core, aggregation, and access layers. These layers are referred to extensively throughout the guide and are briefly described as follows: Core layer: provides the high-speed packet-switching backplane for all flows going in and out of the data center.
  • United States Patent US 7,676,600 B2 (Davies et al.)
United States Patent US 7,676,600 B2, Davies et al. Date of Patent: Mar. 9, 2010. Title: Network, Storage Appliance, and Method for Externalizing an Internal I/O Link Between a Server and a Storage Controller Integrated Within the Storage Appliance Chassis. Inventors: Ian Robert Davies, Longmont, CO (US); George Alexander Kalwitz, Mead, CO (US); Victor Key Pecone, Lyons, CO (US). Assignee: Dot Hill Systems Corporation, Longmont, CO (US). Filed Apr. 23, 2004, Appl. No. 10/830,876; prior publication US 2005/0027751 A1, Feb. 3, 2005. Related applications: provisional No. 60/473,355, filed Apr. 23, 2003, and provisional No. 60/554,052, filed Mar. 17, 2004. References cited include US 4,245,344 (Richter, Jan. 1981); WO 02/101573 (Dec. 2002); and Williams, Al, "Programmable Logic & Hardware," Dr. Dobb's Journal, May 1, 2003, downloaded from http://www.dd.com/architect 184405342, pp. 1-7.

Abstract: A network storage appliance is disclosed. The storage appliance includes a port combiner that provides data communication between at least first, second, and third I/O ports; a storage controller that controls storage devices and includes the first I/O port; a server having the second I/O port; and an …
  • LatticeSC/M Broadcom XAUI/HiGig 10 Gbps Physical Layer Interoperability Over CX-4 (Lattice Semiconductor)
LatticeSC/M Broadcom® XAUI/HiGig™ 10 Gbps Physical Layer Interoperability Over CX-4. August 2007, Technical Note TN1155.

Introduction. This technical note describes a physical-layer 10-Gigabit Ethernet and HiGig (10 Gbps) interoperability test between a LatticeSC/M device and the Broadcom BCM56800 network switch. The test was limited to the physical layer (up to XGMII) of the 10-Gigabit Ethernet protocol stack. Specifically, the document covers an overview of the LatticeSC™ and LatticeSCM™ devices and the Broadcom BCM56800 network switch, and the physical-layer interoperability setup and results.

Two significant aspects of the interoperability test need to be highlighted:
• The BCM56800 uses a CX-4 HiGig port, whereas the LatticeSC Communications Platform Evaluation Board provides an SMA connector. A CX-4-to-SMA conversion board was used as a physical medium interface to create a physical link between the boards. The SMA side of the conversion board has four differential TX/RX channels (10 Gbps total bandwidth); all four SMA channels (Quad 360) were connected to the LatticeSC side.
• The physical-layer interoperability ran at a 10-Gbps data rate (12.5-Gbps aggregated rate).

XAUI Interoperability. XAUI is a high-speed interconnect that offers reduced pin count and the ability to drive up to 20” of PCB trace on standard FR-4 material. To connect a 10-Gigabit Ethernet MAC to an off-chip PHY device, an XGMII interface is used. The XGMII is a low-speed parallel interface for short-range (approximately 2”) interconnects. XAUI interoperability is based on the 10-Gigabit Ethernet standard (IEEE Std 802.3ae-2002).
  • Making the Switch to RapidIO
QNX Software Systems Ltd., 175 Terence Matthews Crescent, Ottawa, Ontario, Canada, K2M 1W8. Voice: +1 613 591-0931 / 1 800 676-0566. Fax: +1 613 591-3579. Email: [email protected]. Web: www.qnx.com

Making the Switch to RapidIO: Using a Message-Passing Microkernel OS to Realize the Full Potential of the RapidIO Interconnect. Paul N. Leroux, Technology Analyst, QNX Software Systems Ltd. ([email protected])

Introduction. Manufacturers of networking equipment have hit a bottleneck. On the one hand, they can now move traffic from one network element to another at phenomenal speeds, using line cards that transmit data at 10 Gigabits per second or higher. But once inside the box, data moves between boards, processors, and peripherals at a much slower clip: typically a few hundred megabits per second. To break this bottleneck, equipment manufacturers are seeking a new, high-speed, and broadly supported interconnect. In fact, many have already set their sights on RapidIO, an open-standard interconnect developed by the RapidIO Trade Association and designed for both chip-to-chip and board-to-board communications.

Why RapidIO? Because it offers low latency and extremely high bandwidth, as well as a low pin count and a small silicon footprint: a RapidIO interface can easily fit into the corner of a processor, FPGA, or ASIC. RapidIO is also transparent to software, allowing any type of data protocol to run over the interconnect. And, last but not least, RapidIO addresses the demand for reliability by offering built-in error recovery mechanisms and a point-to-point architecture that helps eliminate single points of failure. [Sidebar, "Locked Out by Default: The Problem with Conventional Software Architectures": while RapidIO provides a hardware bus that is both fast and reliable, system designers must find or develop software that can fully realize its benefits.]
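The whitepaper's core argument is that a message-passing API lets applications stay unaware of the interconnect underneath. A small generic sketch of that idea (plain Python, not the QNX API):

```python
from multiprocessing import Pipe, Process

def worker(conn):
    # The worker neither knows nor cares what transport carries its
    # messages; it only sees a send()/recv() interface. Swapping the
    # transport (shared memory, Ethernet, RapidIO) would not change
    # this code.
    request = conn.recv()
    conn.send({"echo": request, "status": "ok"})

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send({"cmd": "ping"})
    print(parent.recv())   # {'echo': {'cmd': 'ping'}, 'status': 'ok'}
    p.join()
```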
  • EDSA-401 ISA Security Compliance Institute – Embedded Device Security Assurance – Testing the Robustness of Implementations of Two Common “Ethernet” Protocols
EDSA-401, ISA Security Compliance Institute, Embedded Device Security Assurance: Testing the robustness of implementations of two common "Ethernet" protocols. Version 2.01, September 2010. Copyright © 2009-2010 ASCI (Automation Standards Compliance Institute), all rights reserved. EDSA-401-2.01

A. DISCLAIMER. ASCI and all related entities, including the International Society of Automation (collectively, "ASCI"), provide all materials, work products, and information ("SPECIFICATION") AS IS, WITHOUT WARRANTY AND WITH ALL FAULTS, and hereby disclaim all warranties and conditions, whether express, implied, or statutory, including, but not limited to, any (if any) implied warranties, duties, or conditions of merchantability, of fitness for a particular purpose, of reliability or availability, of accuracy or completeness of responses, of results, of workmanlike effort, of lack of viruses, and of lack of negligence, all with regard to the SPECIFICATION, and the provision of or failure to provide support or other services, information, software, and related content through the SPECIFICATION or otherwise arising out of the use of the SPECIFICATION. ALSO, THERE IS NO WARRANTY OR CONDITION OF TITLE, QUIET ENJOYMENT, QUIET POSSESSION, CORRESPONDENCE TO DESCRIPTION, OR NON-INFRINGEMENT WITH REGARD TO THE SPECIFICATION. WITHOUT LIMITING THE FOREGOING, ASCI DISCLAIMS ALL LIABILITY FOR HARM TO PERSONS OR PROPERTY, AND USERS OF THIS SPECIFICATION ASSUME ALL RISKS OF SUCH HARM. IN ISSUING AND MAKING THE SPECIFICATION AVAILABLE, ASCI IS NOT UNDERTAKING TO RENDER PROFESSIONAL OR OTHER SERVICES FOR OR ON BEHALF OF ANY PERSON OR ENTITY, NOR IS ASCI UNDERTAKING TO PERFORM ANY DUTY OWED BY ANY PERSON OR ENTITY TO SOMEONE ELSE. ANYONE USING THIS SPECIFICATION SHOULD RELY ON HIS OR HER OWN INDEPENDENT JUDGMENT OR, AS APPROPRIATE, SEEK THE ADVICE OF A COMPETENT PROFESSIONAL IN DETERMINING THE EXERCISE OF REASONABLE CARE IN ANY GIVEN CIRCUMSTANCES.
  • Error Behaviour in Optical Networks (Laura Bryony James)
Error Behaviour in Optical Networks. Laura Bryony James, Corpus Christi College. This dissertation is submitted for the degree of Doctor of Philosophy, 30th September 2005, Department of Engineering, University of Cambridge. This dissertation is my own work and contains nothing which is the outcome of work done in collaboration with others, except as specified in the text and Acknowledgements.

Abstract. Optical fibre communications are now widely used in many applications, including local area computer networks. I postulate that many future optical LANs will be required to operate with limited optical power budgets for a variety of reasons, including increased system complexity and link speed, low-cost components, and minimal increases in transmit power. Some developers will wish to run links with reduced power budget margins, and the received data in these systems will be more susceptible to errors than has been the case previously. The errors observed in optical systems are investigated using the particular case of Gigabit Ethernet on fibre as an example. Gigabit Ethernet is one of three popular optical local area interconnects that use 8B/10B line coding, along with Fibre Channel and Infiniband, and is widely deployed. This line encoding is also used by packet-switched optical LANs currently under development. A probabilistic analysis follows the effects of a single channel error in a frame through the line coding scheme and the MAC layer frame error detection mechanisms. Empirical data is used to enhance this original analysis, making it directly relevant to deployed systems. Experiments using Gigabit Ethernet on fibre with reduced power levels at the receiver, to simulate the effect of limited power margins, are described.
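The dissertation traces a single channel error through the 8B/10B line code and the MAC's frame check sequence. As a hedged illustration of the last step only, the sketch below flips one bit in a toy frame and shows an Ethernet-style CRC-32 catching it (CRC-32 detects all single-bit errors by construction):

```python
import binascii

frame = bytes(range(64))            # toy 64-byte frame payload
fcs = binascii.crc32(frame)         # Ethernet-style CRC-32 check value

corrupted = bytearray(frame)
corrupted[10] ^= 0x04               # flip a single payload bit
print(binascii.crc32(bytes(corrupted)) == fcs)   # False: error detected
```

The subtle part the dissertation analyses lies one layer down: a single channel error on the fibre corrupts an entire 10-bit 8B/10B code group, so after decoding the MAC may see a multi-bit error burst or an invalid code group rather than a clean single-bit flip, and the analysis follows how reliably the frame error detection mechanisms still catch such bursts.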