Advanced Computer Architecture

Slide 1 - Advanced Computer Architecture: Interconnection Networks
Slides courtesy © Michel Dubois, Murali Annavaram and Per Stenström

Slide 2 - Parallel Computer Systems
[Figure: processors (P) with L1 caches inside CMPs joined by an on-chip interconnection network; CMPs, L2 cache, and memory modules (M) joined by a system-level interconnection network.]
- Interconnect between processor chips: system area network (SAN)
- Interconnect between cores on each chip: on-chip network (OCN), also called network-on-a-chip (NoC)
- Others (not covered): WAN (wide-area network), LAN (local-area network)

Slide 3 - Example: Mesh
[Figure: a mesh of switches; each node attaches to its switch through a network interface (NI); a route from node A to node B.]
- Connects nodes: cache modules, memory modules, CMPs, ...
- Nodes are connected to switches through a network interface (NI)
- Switch: connects input ports to output ports
- Links: wires transferring signals between switches; links have a width and a clock; transfers can be synchronous or asynchronous
- From A to B: hop from switch to switch
- Decentralized (direct) network

Slide 4 - Simple Communication Model
[Figure: machines A and B, each with in- and out-buffers; point-to-point, request/reply, multicast, and broadcast patterns among nodes.]
- Point-to-point message transfer
- Request/reply: the request carries the ID of the sender
- Multicast: one to many
- Broadcast: one to all

Slide 5 - Messages and Packets
- Messages contain the information transferred
- Messages are broken down into packets; packets are sent one by one
- Payload: the message itself, not relevant to the interconnection
- Header/trailer: contain the information needed to route the packet
- Error code: ECC to detect and correct transmission errors
- Header + ECC + trailer = packet envelope

Slide 6 - Example: Bus
[Figure: bus with address, data, control, and arbitration lines.]
- Bus = wires; broadcast/broadcall communication; needs arbitration
- Centralized vs. distributed arbitration
- Line multiplexing (address/data, for example)
- Pipelining, for example: arbitration => address => data
- Split-transaction bus vs. circuit-switched bus
- Centralized (indirect); low cost; shared; low bandwidth

Slide 7 - Switching Strategy
- Defines how connections are established in the network
- Circuit switching: establish a connection for the duration of the network service
  - Example 1: remote memory read on a bus: connect with the remote node, hold the bus while the remote memory is accessed, release the bus when the data has been returned
  - Example 2: remote memory read in a mesh: establish a path in the network, hold the path until the data is returned
  - Also good for transmitting packets continuously between two nodes
- Packet switching: multiplex several services by sending packets with addresses
  - Example 1: remote memory read on a bus: send a request packet to the remote node, release the bus while the remote memory access takes place, the remote node sends a reply packet to the requester
  - Example 2: remote memory read on a mesh

Slide 8 - Packet Switching Strategy
- Two strategies: store-and-forward and cut-through packet switching
- In store-and-forward, packets move from node to node and are stored in buffers in each node
- In cut-through, packets move through nodes in pipelined fashion, so the entire packet moves through several nodes at one time, without intermediate buffering
[Figure: time diagrams of a packet traversing switches 1 through L under store-and-forward and cut-through switching.]

Slide 9 - Two Implementations of Cut-Through
- In practice we must deal with conflicts and stalled packets
- Virtual cut-through switching: each node has enough buffering for the entire packet; the entire packet is buffered in the node when there is a transmission conflict; when traffic is congested and conflicts are frequent, virtual cut-through behaves like store-and-forward
- Wormhole switching: each node has enough buffering for a flit (flow control unit); a flit is made of consecutive phits (physical transfer units), i.e., the bits transferred per clock (size = the width of a link); because the flit is only a fraction of a packet, the packet is stored in several consecutive nodes (one flit per node) on its path as it moves through the network; the flit must be large enough to contain the header of a packet
- In virtual cut-through, the flit is the whole packet
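The message/packet/flit/phit hierarchy of slides 5 and 9 can be made concrete with a short sketch. This is a minimal illustration, not code from the slides; the sizes chosen (4-byte header, 64-byte payload, 16-byte flits, 2-byte phits) are assumptions picked only to show the nesting.

```python
# Minimal sketch of the message -> packet -> flit -> phit hierarchy.
# All sizes below are illustrative assumptions, not values from the slides.

HEADER_BYTES = 4      # routing information (header)
ECC_BYTES = 2         # error code
TRAILER_BYTES = 2
PAYLOAD_BYTES = 64    # payload carried per packet
FLIT_BYTES = 16       # wormhole flow-control unit (must hold the header)
PHIT_BYTES = 2        # bits moved across a link per clock = link width

def message_to_packets(message: bytes) -> list[bytes]:
    """Split a message into packets and wrap each one in its envelope."""
    packets = []
    for i in range(0, len(message), PAYLOAD_BYTES):
        payload = message[i:i + PAYLOAD_BYTES]
        envelope = (b"H" * HEADER_BYTES) + payload + (b"E" * ECC_BYTES) + (b"T" * TRAILER_BYTES)
        packets.append(envelope)
    return packets

def packet_to_flits(packet: bytes) -> list[bytes]:
    """Split a packet into flits; in wormhole switching each node buffers one flit."""
    return [packet[i:i + FLIT_BYTES] for i in range(0, len(packet), FLIT_BYTES)]

def flit_to_phits(flit: bytes) -> list[bytes]:
    """Split a flit into phits; one phit crosses a link per clock."""
    return [flit[i:i + PHIT_BYTES] for i in range(0, len(flit), PHIT_BYTES)]

if __name__ == "__main__":
    msg = bytes(200)                      # a 200-byte message
    pkts = message_to_packets(msg)
    flits = packet_to_flits(pkts[0])
    print(len(pkts), "packets;", len(flits), "flits in the first packet;",
          len(flit_to_phits(flits[0])), "phits per flit")
```

In virtual cut-through the same code degenerates to one flit per packet, which is exactly the distinction slide 9 draws between the two schemes.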
Slide 10 - Switch Microarchitecture
[Figure: switch microarchitecture, from Duato and Pinkston in Hennessy and Patterson, 4th edition.]
- Physical channel = link
- Virtual channel = buffers + link
- The link is time-multiplexed among flits

Slide 11 - Latency Models
- End-to-end packet latency = sender overhead + time of flight + transmission time + routing time + receiver overhead
- Sender overhead: creating the envelope and moving the packet to the NI
- Time of flight: time to send one bit from source to destination once the route is established and there are no conflicts (includes switching time)
- Transmission time: time to transfer the packet from source to destination, once the first bit has arrived at the destination
- Phit: number of bits transferred on a link per cycle; transmission time = packet size / phit size
- Flit: flow control unit

Slide 12 - Latency Models (continued)
- Routing time: time to set up the switches
- Switching time: depends on the switching strategy (store-and-forward vs. cut-through vs. circuit-switched); affects the time of flight and is included in it
- Receiver overhead: time to strip off the envelope and move the packet in
- Measures of latency:
  - Routing distance: number of links traversed by a packet
  - Average routing distance: average over all pairs of nodes
  - Network diameter: longest routing distance over all pairs of nodes
- Packets of a message can be pipelined; the transfer pipeline has 3 stages: sender overhead --> transmission --> receiver overhead
- For a message of N packets: total message time = time for the first packet + (N-1) / pipeline throughput
- End-to-end message latency = sender OH + time of flight + transmission time + routing time + (N-1) x max(sender OH, transmission time, receiver OH)

Slide 13 - Latency Models (continued)
- Packet latency = sender OH + ToF (including switching time) + transmission time + routing time + receiver OH
- Notation: R = routing time per switch, N = number of phits, L = number of switches, ToF = time of flight
- Circuit switching: routing = LxR (to set the switches) + L (to acknowledge to the sender) = Lx(R+1)
  - Packet latency = sender OH + ToF + transmission + routing + receiver OH = sender OH + L + N + Lx(R+1) + receiver OH = sender OH + N + Lx(R+2) + receiver OH
- Store-and-forward: packet latency = sender OH + LxN + N + LxR + receiver OH
- Cut-through switching: packet latency = sender OH + L + N + LxR + receiver OH = sender OH + Lx(R+1) + N + receiver OH
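The three latency formulas on slide 13 translate directly into code. The sketch below is only a restatement of those formulas; it assumes every term is expressed in cycles, and the variable names follow the slide's notation (R, N, L, sender/receiver overhead).

```python
# Packet latency models from slide 13 (all quantities in cycles).
# R: routing time per switch, N: number of phits in the packet,
# L: number of switches on the path.

def circuit_switched_latency(sender_oh, receiver_oh, R, N, L):
    # routing = L*R to set the switches + L to acknowledge to the sender = L*(R+1)
    # time of flight = L, transmission = N, so the total simplifies to L*(R+2) + N
    return sender_oh + N + L * (R + 2) + receiver_oh

def store_and_forward_latency(sender_oh, receiver_oh, R, N, L):
    # each of the L switches buffers the whole N-phit packet before forwarding it
    return sender_oh + L * N + N + L * R + receiver_oh

def cut_through_latency(sender_oh, receiver_oh, R, N, L):
    # the packet is pipelined through the switches: only the header pays the per-hop cost
    return sender_oh + L * (R + 1) + N + receiver_oh

# Example (assumed numbers): 8 switches, 32-phit packet, 1-cycle routing per switch,
# 10-cycle sender and receiver overheads.
for f in (circuit_switched_latency, store_and_forward_latency, cut_through_latency):
    print(f.__name__, f(10, 10, R=1, N=32, L=8))
```

Running it shows the point of slide 8: store-and-forward pays roughly L times the packet transmission time, while cut-through and circuit switching pay it only once plus a per-hop header cost.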
Slide 14 - Bandwidth Models
- Bottlenecks increase latency; transfers are pipelined
- Unloaded bandwidth = packet size / max(sender overhead, receiver overhead, transmission time)
- Network contention affects latency and effective bandwidth (and is not counted in the formula above)
- Bisection width:
  - The network is seen as a graph: vertices are switches and edges are links
  - A bisection is a cut through a minimum set of edges such that the cut divides the network graph into two isomorphic (i.e., identical) subgraphs
  - Example: mesh
  - Measures the bandwidth available when all nodes in one subgraph communicate only with nodes in the other subgraph
- Aggregate bandwidth: bandwidth across all links divided by the number of nodes

Slide 15 - Interconnecting Two Devices: Basic Network Structure and Functions
- Media and form factor
[Figure: media type (metal layers, printed circuit boards, InfiniBand/Ethernet/Myrinet connectors, Cat5E twisted pair, coaxial cables, fiber optics) plotted against distance in meters (0.01, 1, 10, 100, >1,000) for OCNs, SANs, LANs, and WANs.]

Slide 16 - Topologies
- Indirect networks: the interconnection network is centralized
- Bus, crossbar switch, multistage interconnection network (MIN)
[Figure: an 8-by-8 crossbar, a 2-by-2 crossbar, and an 8-node Omega network built from 2-by-2 crossbars with perfect-shuffle inter-stage connections.]

Slide 17 - Topologies (continued)
- Indirect networks: tree and butterfly
[Figure: an 8-node binary tree and an 8-node butterfly network.]
- Butterfly: embedded trees
- Indirect networks are best for connecting different types of nodes

Slide 18 - Topologies (continued)
- Direct networks: nodes are directly connected to one another; decentralized
- Linear array and ring
- Mesh and tori

Slide 19 - Topologies (continued)
- Direct networks: nodes are directly connected to one another
- Hypercube and k-ary n-cube

Slide 20 - Network Topology: Centralized Switched (Indirect) Networks
- Multistage interconnection networks (MINs): a crossbar split into several stages consisting of smaller crossbars
- Complexity grows as O(N x log N), where N is the number of end nodes
- Inter-stage connections are represented by a set of permutation functions
[Figure: 8-port Omega topology built with the perfect-shuffle exchange.]

Slide 21 - [Figure: 16-port, 4-stage Omega network.]

Slide 22 - [Figure: 16-port, 4-stage Baseline network.]

Slide 23 - [Figure: 16-port, 4-stage Butterfly network.]

Slide 24 - [Figure: 16-port, 4-stage Cube network.]

Slide 25 - Network Topology: Centralized Switched (Indirect) Networks (continued)
- MINs interconnect N input/output ports using k x k switches
- log_k N switch stages, each with N/k switches
- (N/k) x log_k N switches in total
- Example: compute the switch and link costs of interconnecting 4096 nodes using a crossbar relative to a ...
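The MIN switch-count formula on the last slide lends itself to a quick calculation. The sketch below counts (N/k) x log_k N switches for a MIN and, as a point of comparison, treats a single N x N crossbar as having N^2 crosspoints; since the slide's worked example is cut off, that crossbar cost metric is an assumption made here for illustration, not the slide's answer.

```python
import math

def min_switch_count(N: int, k: int) -> int:
    """Number of k x k switches in an N-port MIN: (N/k) * log_k(N)."""
    stages = round(math.log(N, k))
    assert k ** stages == N, "N must be a power of k"
    return (N // k) * stages

def crossbar_crosspoints(N: int) -> int:
    """Crosspoint count of a single N x N crossbar (illustrative cost metric)."""
    return N * N

if __name__ == "__main__":
    N, k = 4096, 2   # the 4096-node example from the last slide, with 2x2 switches
    print(f"{N}-port MIN with {k}x{k} switches: "
          f"{min_switch_count(N, k)} switches over {round(math.log(N, k))} stages")
    print(f"{N}x{N} crossbar: {crossbar_crosspoints(N):,} crosspoints")
```

For 4096 nodes and 2x2 switches this gives 24,576 small switches across 12 stages, against roughly 16.8 million crosspoints for a single crossbar, which is the O(N log N) versus O(N^2) scaling argument the slide is setting up.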