InfiniBand: The "De Facto" Future Standard for System and Local Area Networks or Just a Scalable Replacement for PCI Buses?


Cluster Computing 6, 95–104, 2003. © 2003 Kluwer Academic Publishers. Manufactured in The Netherlands.

InfiniBand: The "De Facto" Future Standard for System and Local Area Networks or Just a Scalable Replacement for PCI Buses?

TIMOTHY MARK PINKSTON* (University of Southern California), ALAN F. BENNER (IBM Corporation), MICHAEL KRAUSE (Hewlett Packard), IRV M. ROBINSON (Intel Corporation), THOMAS STERLING (California Institute of Technology)

* Corresponding author. E-mail: [email protected]

Abstract. InfiniBand is a new industry-wide general-purpose interconnect standard designed to provide significantly higher levels of reliability, availability, performance, and scalability than alternative server I/O technologies. After more than two years since its official release, many are still trying to understand what are the profitable uses for this new and promising interconnect technology, and how this technology might evolve. In this article, we provide a summary of several industry and academic perspectives on this issue expressed during a panel discussion at the Workshop for Communication Architecture for Clusters (CAC), held in conjunction with the International Parallel and Distributed Processing Symposium (IPDPS) in April 2001, in hopes of narrowing down the design space for InfiniBand-based systems.

Keywords: InfiniBand, I/O, system area network, data center fabric, interconnection network standard

1. Introduction

In an attempt to solve a wide spectrum of problems associated with server I/O, many commercial entities worked together to develop an industry-wide general-purpose interconnect standard called InfiniBand [1]. InfiniBand was designed to provide significantly higher levels of reliability, availability, performance, and scalability than could be achieved with alternative server I/O technology. In October of 2000, the first version of the InfiniBand specs were released with much fanfare. At its release, this non-proprietary, low-overhead point-to-point communication standard was poised to become the interconnection network fabric technology on which commodity and high-end servers could be based [2].

The first generation of InfiniBand products have appeared, and prototype InfiniBand-based clustered applications have been demonstrated, but it is not yet clear in which areas InfiniBand technology will be most successfully employed as it matures. Since its release, many realize that InfiniBand is not a panacea and was never meant to be one. There has been much effort put towards understanding just what this technology is best used for, how it should be integrated into future systems, and how it might be improved. In addition, not everyone is enamored with this technology. Some claim it is too expensive, others that it is too complex, still others that it attempts to address too many disparate problems. Moreover, some believe that because of the way InfiniBand has been positioned, it directly competes with PCI, Ethernet, Fibre Channel, and other well-established industry standards and, thus, may never be widely accepted. While there is some validity in some of these claims and beliefs, the reality is that InfiniBand is the first technology to really solve the entire server I/O problem and much of the high-speed, low-latency inter-processor communication (IPC) problem within a single, open industry standard specification.

Nevertheless, since nature abhors a vacuum, it is likely that many vendors will continue to invest in evolutionary approaches to solve some of the same problems addressed by InfiniBand. It will take some time for any new technology targeted for the server market to gain a foot-hold – many believing that 2003/2004 will be the time frame at which InfiniBand could really start to take flight. More important than when, however, is the question of where does it make sense to deploy this new technology? What will be the possible application areas for InfiniBand: for I/O interconnect, system area network (SAN), storage area network (STAN), or local area network (LAN) application? Is it useful only for IPC or might it also be useful as a unified network fabric (backbone) in servers, server clusters, and data centers? Is there interesting research to be done on InfiniBand architecture? Will InfiniBand have a significant impact on the way in which future systems are designed or might it have only limited impact like some of its predecessors, i.e., VIA, SCI, etc.?

These and other such questions were raised and debated during a panel discussion at the CAC Workshop, held in conjunction with IPDPS '01. As with many such panel discussions, a wide variety of views were expressed, with a variety of similarities as well as disagreements between them. This paper represents an attempt to summarize and clarify the various converging and conflicting perspectives shared during that workshop to help narrow the possible design space for InfiniBand-based systems.

2. InfiniBand overview

InfiniBand is a layered architecture that provides physical, data link, network, and transport layer services (see figure 1). At the physical and data link layers, its switch-based architecture allows for richly connected, arbitrary topologies to be configured with some degree of flexibility in routing across logical and physical channels. It provides scalable increased I/O bandwidth for driving I/O at link rates from 2.5 Gbps to 12 times that rate, increased distance (as compared to PCI) of up to 300 meters, and standardized form factors for supporting a variety of simple to complex I/O solutions including serial or parallel lines, copper or fiber links, and wide or tall modules. It also provides support for traffic prioritization, deadlock avoidance, and segregation of traffic classes. At the network and transport layers, it provides various types of connection-oriented and datagram communication services between consumers at network endpoints, including remote direct memory access (RDMA) and atomic operations. It also provides standardized fabric management services, fault isolation/containment, and reliability functions.

Figure 1. Conceptual diagram of InfiniBand's layered architecture.
Figure 2. Block diagram of an InfiniBand fabric.

Discrete message passing via send and receive queue pairs (QPs) and completion queue elements (CQEs) is supported, as shown in figure 2. Its programming model is derived from the Virtual Interface Architecture (VIA) [3]; however, InfiniBand is intended to enable the most efficient interface possible between a message passing interconnection network and a server's memory controller to facilitate highly efficient data transfers. For example, data movement is via DMA, scheduled by fabric-connected devices, which enables data movement without CPU interaction.
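The queue-pair model described above survives essentially unchanged in today's RDMA software stacks. The following is a minimal sketch of that flow – register memory, create a completion queue and queue pair, post a work request, and poll for a completion – written against the OpenFabrics libibverbs API, which post-dates this paper but implements the same verbs-level model; connection setup (QP state transitions and the out-of-band exchange of queue-pair numbers) is omitted, and the remote address and rkey are placeholders, not values from the original text.

```c
/* Sketch of the QP/CQ verbs model using libibverbs (OpenFabrics).
 * Connection establishment (modify_qp to INIT/RTR/RTS and exchange of the
 * peer's QP number, rkey, and buffer address) is omitted for brevity. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Registration pins the buffer and returns lkey/rkey protection keys. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);

    /* Completion queue and queue pair: the send/receive work queues of figure 2. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr qpa = {
        .send_cq = cq, .recv_cq = cq, .qp_type = IBV_QPT_RC,
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &qpa);

    /* Post an RDMA write work request: the HCA moves the data by DMA,
     * without CPU interaction on the remote node. */
    struct ibv_sge sge = { .addr = (uintptr_t)buf, .length = (uint32_t)len,
                           .lkey = mr->lkey };
    struct ibv_send_wr wr = {
        .wr_id = 1, .sg_list = &sge, .num_sge = 1,
        .opcode = IBV_WR_RDMA_WRITE, .send_flags = IBV_SEND_SIGNALED,
    };
    wr.wr.rdma.remote_addr = 0; /* placeholder: peer's registered address */
    wr.wr.rdma.rkey = 0;        /* placeholder: peer's rkey */
    struct ibv_send_wr *bad = NULL;
    ibv_post_send(qp, &wr, &bad);

    /* Completions arrive on the CQ as work completions (the CQEs above). */
    struct ibv_wc wc;
    int got = ibv_poll_cq(cq, 1, &wc);
    printf("polled %d completion(s)\n", got); /* real code loops and checks wc.status */

    ibv_destroy_qp(qp); ibv_destroy_cq(cq); ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd); ibv_close_device(ctx); ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```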
In support of this, InfiniBand is defined to make it practical to implement protocol stack processing in ASICs, with a strategy for integration of InfiniBand Host Channel Adapters (HCAs) and target channel adapters (TCAs) into server chipsets.¹ While chipset integration of other interconnection networks is certainly possible, InfiniBand was conceived to make the process easier and provide the highest performance and efficiency of any non-proprietary alternative. One such efficiency² has been achieved by the use of a message passing network which can be used for nearly every kind of server I/O, perhaps making I/O buses like PCI superfluous. It is this application – as a single fabric for server I/O use – that causes the greatest speculation regarding its role juxtaposed to other standard alternatives, such as PCI (for I/O) and Ethernet (for IPC). This issue is addressed in the following sections.

Although many important elements have been specified in the standard, some details have been left for vendor innovation or have not been specified in the current version, possibly left for future improvement of the standard. For example, at the higher layers, operations over InfiniBand are strategically specified at a functional level using verbs in a vendor-neutral, operating-system-independent way. As application programmer interfaces (APIs) are included in the operating system, it is up to operating system vendors to decide how the verbs should be mapped to particular operating systems to support various APIs. While this level of specifying things purposefully allows for vendor differentiation, some details are not specified at any level.

… I/O, HyperTransport, and PCI Express (formerly 3GIO) have caused a fair amount of confusion about the role of InfiniBand in the I/O arena. Some of these I/O fabrics offer bandwidths of 400 Mbps to 16 Gbps (aggregate, full-duplex application bandwidth), inter-rack distances of up to 5 meters, standardized hot plug and swap capability, high fan-out attachment of multiple cards, and load/store/interrupt semantics which are software compatible with traditional PCI-based I/O. This would suggest prolonged usage of these I/O fabrics in future server systems. Nevertheless, there are a few very important capabilities that InfiniBand offers that these multi-drop and switched I/O fabrics do not. Among these are protection, partitioning, operating system (OS) by-pass, and transport level features.

PCI-based I/O technologies rely on a trust model of the highest degree since they provide open access to memory. Although misbehaving PCI devices could be relatively rare, the potential for intentional or unintentional user corruption in large database servers, for example, is unacceptable. The problem increases in scope when one considers the growing functionality and complexity of the PCI-connected devices, which makes the possibility of errant operations even greater. This is only one of many such deficiencies inherent to PCI-based I/O architectures. InfiniBand separates itself from the PCI comparison by having additional I/O functionality features. Among these are a sophisticated virtual memory protection scheme (using registration), atomic and remote memory access, protection key-based fabric partitioning …
Recommended publications
  • InfiniBand Overview: What It Is and How We Use It
    InfiniBand Overview – What it is and how we use it.
    What is InfiniBand:
    • InfiniBand is a contraction of "Infinite Bandwidth": links can keep being bundled, so there is no theoretical limit; the target design goal is to always be faster than the PCI bus, so InfiniBand should not be the bottleneck.
    • Credit-based flow control: data is never sent if the receiver cannot guarantee sufficient buffering.
    • InfiniBand is a switched fabric network: low latency, high throughput, failover.
    • A superset of VIA (Virtual Interface Architecture), as are RoCE (RDMA over Converged Ethernet) and iWARP (Internet Wide Area RDMA Protocol).
    • Serial traffic is split into incoming and outgoing relative to any port.
    • Currently five data rates per lane: Single Data Rate (SDR) 2.5 Gbps, Double Data Rate (DDR) 5 Gbps, Quadruple Data Rate (QDR) 10 Gbps, Fourteen Data Rate (FDR) 14.0625 Gbps, and Enhanced Data Rate (EDR) 25.78125 Gbps; the InfiniBand Trade Association roadmap adds HDR (High Data Rate) and NDR (Next Data Rate).
    • Links can be bonded together: 1x, 4x, 8x and 12x.
    • SDR, DDR, and QDR use 8b/10b encoding (10 bits carry 8 bits of data, so the data rate is 80% of the signal rate); FDR and EDR use 64b/66b encoding (66 bits carry 64 bits of data). A worked bandwidth calculation follows this entry.
    • Latency by signal rate: SDR 200 ns, DDR 140 ns, QDR 100 ns.
    • Hardware – two hardware vendors: Mellanox (bought Voltaire) and Intel (bought the QLogic InfiniBand business unit). Hardware needs to be standardized: Mellanox and QLogic cards work in different ways.
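The encoding overheads above translate directly into effective data bandwidth: multiply the per-lane signal rate by the encoding efficiency (8/10 or 64/66) and by the number of bonded lanes. A small illustrative calculation using the figures from the list above (real throughput is further reduced by packet and protocol overhead):

```c
/* Effective InfiniBand data bandwidth = lane signal rate x encoding
 * efficiency x number of bonded lanes. Rates and encodings are the ones
 * listed in the overview above. */
#include <stdio.h>

struct ib_rate { const char *name; double lane_gbps; double efficiency; };

int main(void)
{
    const struct ib_rate rates[] = {
        { "SDR", 2.5,       8.0 / 10.0 },   /* 8b/10b */
        { "DDR", 5.0,       8.0 / 10.0 },
        { "QDR", 10.0,      8.0 / 10.0 },
        { "FDR", 14.0625,  64.0 / 66.0 },   /* 64b/66b */
        { "EDR", 25.78125, 64.0 / 66.0 },
    };
    const int widths[] = { 1, 4, 8, 12 };

    for (size_t i = 0; i < sizeof rates / sizeof rates[0]; i++)
        for (size_t j = 0; j < sizeof widths / sizeof widths[0]; j++)
            printf("%s %2dx: %7.2f Gbps data\n", rates[i].name, widths[j],
                   rates[i].lane_gbps * rates[i].efficiency * widths[j]);

    /* e.g. QDR 4x: 10 * 0.8 * 4 = 32 Gbps; EDR 4x: ~100 Gbps of data */
    return 0;
}
```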
  • SAS Enters the Mainstream
    SAS enters the mainstream. By the InfoStor staff, http://www.infostor.com/articles/article_display.cfm?Section=ARTCL&C=Newst&ARTICLE_ID=295373&KEYWORDS=Adaptec&p=23
    Although adoption of Serial Attached SCSI (SAS) is still in the infancy stages, the next 12 months bode well for proponents of the relatively new disk drive/array interface. For example, in a recent InfoStor QuickVote reader poll, 27% of the respondents said SAS will account for the majority of their disk drive purchases over the next year, although Serial ATA (SATA) topped the list with 37% of the respondents, followed by Fibre Channel with 32%. Only 4% of the poll respondents cited the aging parallel SCSI interface (see figure). However, surveys of InfoStor's readers are skewed by the fact that almost half of our readers are in the channel (primarily VARs and systems/storage integrators), and the channel moves faster than end users in terms of adopting (or at least kicking the tires on) new technologies such as serial interfaces. To get a more accurate view of the pace of adoption of serial interfaces such as SAS, consider market research predictions from firms such as Gartner and International Data Corp. (IDC). Yet even in those firms' predictions, SAS is coming on surprisingly strong, mostly at the expense of its parallel SCSI predecessor. For example, Gartner predicts SAS disk drives will account for 16.4% of all multi-user drive shipments this year and will garner almost 45% of the overall market in 2009 (see figure on p. 18).
  • Gen 6 Fibre Channel Technology
    WHITE PAPER: Better Performance, Better Insight for Your Mainframe Storage Network with Brocade Gen 6.
    TABLE OF CONTENTS: Overview (1); Technology Highlights (2); Gen 6 Fibre Channel Technology Benefits for z Systems and Flash Storage (7); Summary (9).
    Brocade and the IBM z Systems IO product team share a unique history of technical development, which has produced the world's most advanced mainframe computing and storage systems. Brocade's technical heritage can be traced to the late 1980s, with the creation of channel extension technologies to extend data centers beyond the "glass-house." IBM revolutionized the classic "computer room" with the invention of the original ESCON Storage Area Network (SAN) of the 1990s, and, in the 2000s, it facilitated geographically dispersed FICON® storage systems. Today, the most compelling innovations in mainframe storage networking technology are the product of this nearly 30-year partnership between Brocade and IBM z Systems. As the flash era of the 2010s disrupts the traditional mainframe storage networking mindset, Brocade and IBM have released a series of features that address the demands of the data center. These technologies leverage the fundamental capabilities of Gen 5 and Gen 6 Fibre Channel, and extend them to the applications driving the world's most critical systems. … between the business teams allows both organizations to guide the introduction of key technologies to the market place, while the integration between the system test and qualification teams ensures the integrity of those products. Driving these efforts are the deep technical relationships between the Brocade and IBM z Systems IO architecture and development teams …
  • Serial Attached SCSI (SAS)
    Storage white paper – A New Technology Hits the Enterprise Market: Serial Attached SCSI (SAS). A New Interface Technology Addresses New Requirements.
    Enterprise data access and transfer demands are no longer driven by advances in CPU processing power alone. Beyond the sheer volume of data, information routinely consists of rich content that increases the need for capacity. High-speed networks have increased the velocity of data center activity. Networked applications have increased the rate of transactions. Serial Attached SCSI (SAS) addresses the technical requirements for performance and availability in these more I/O-intensive, mission-critical environments. Still, IT managers are pressed on the other side by cost constraints and the need for flexibility and scalability in their systems. While application requirements are the first measure of a storage technology, systems based on SAS deliver on this second front as well. Because SAS is software compatible with Parallel SCSI and is interoperable with Serial ATA (SATA), SAS technology offers the ability to manage costs by staging deployment and fine-tuning a data center's storage configuration on an ongoing basis. When presented in a small form factor (SFF) 2.5" hard disk drive, SAS even addresses the increasingly important facility considerations of space, heat, and power consumption in the data center. The backplanes, enclosures and cabling are less cumbersome than before. Connectors are smaller, cables are thinner and easier to route, impeding less airflow, and both SAS and SATA HDDs can share a common backplane. Getting this new technology to the market in a workable, compatible fashion takes various companies coming together.
  • EMC’S Perspective: a Look Forward
    The Performance Impact of NVM Express and NVM Express over Fabrics. Live webcast: November 13, 2014. Presented by experts from Cisco, EMC and Intel.
    Webcast presenters: J Metz, R&D Engineer for the Office of the CTO, Cisco; Amber Huffman, Senior Principal Engineer, Intel; Steve Sardella, Distinguished Engineer, EMC; Dave Minturn, Storage Architect, Intel.
    SNIA Legal Notice: The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations and literature under the following conditions: any slide or slides used must be reproduced in their entirety without modification; the SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee. Neither the author nor the presenter is an attorney and nothing in this presentation is intended to be, or should be construed as, legal advice or an opinion of counsel. If you need legal advice or a legal opinion please contact your attorney. The information presented herein represents the author's personal opinion and current understanding of the relevant issues involved. The author, the presenter, and the SNIA do not assume any responsibility or liability for damages arising out of any reliance on or use of this information. NO WARRANTIES, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK.
    What This Presentation Is: A discussion of a new way of talking to Non-Volatile …
  • Comparing Fibre Channel, Serial Attached SCSI (SAS) and Serial ATA (SATA)
    Comparing Fibre Channel, Serial Attached SCSI (SAS) and Serial ATA (SATA), by Allen Hin Wing Lam, Bachelor of Electrical Engineering, Carleton University 1996. PROJECT SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ENGINEERING in the School of Engineering Science. © Allen Hin Wing Lam 2009, SIMON FRASER UNIVERSITY, Fall 2009. All rights reserved. However, in accordance with the Copyright Act of Canada, this work may be reproduced, without authorization, under the conditions for Fair Dealing. Therefore, limited reproduction of this work for the purposes of private study, research, criticism, review and news reporting is likely to be in accordance with the law, particularly if cited appropriately.
    Approval – Name: Allen Hin Wing Lam. Degree: Master of Engineering. Title of Project: Comparing Fibre Channel, Serial Attached SCSI (SAS) and Serial ATA (SATA). Examining Committee: Chair: Dr. Daniel Lee, Chair of Committee, Associate Professor, School of Engineering Science, Simon Fraser University; Dr. Stephen Hardy, Senior Supervisor, Professor, School of Engineering Science, Simon Fraser University; Jim Younger, Manager, Product Engineering, PMC-Sierra, Inc. Date of Defence/Approval.
    SIMON FRASER UNIVERSITY LIBRARY – Declaration of Partial Copyright Licence: The author, whose copyright is declared on the title page of this work, has granted to Simon Fraser University the right to lend this thesis, project or extended essay to users of the Simon Fraser University Library, and to make partial or single copies only for such users or in response …
  • Brocade G620 Switch Datasheet
    DATA SHEET: Brocade G620 Switch – Ultra-dense, Highly Scalable, Easy-to-Use Enterprise-Class Storage Networking Switch.
    HIGHLIGHTS: Provides high scalability in an ultra-dense, 1U, 64-port switch to support high-density server virtualization, cloud architectures, and flash-based storage environments. Increases performance for demanding workloads across 32 Gbps links and shatters application performance barriers with up to 100 million IOPS. Enables "pay-as-you-grow" scalability, with 24 to 64 ports, for on-demand flexibility. Provides proactive, non-intrusive, real-time monitoring and alerting of storage IO health and performance with IO Insight, the industry's first integrated network sensors.
    Today's mission-critical storage environments require greater consistency, predictability, and performance to keep pace with growing business demands. Faced with explosive data growth, data centers need more IO capacity to accommodate the massive amounts of data, applications, and workloads. In addition to this surge in data, collective expectations for availability continue to rise. Users expect applications to be available and accessible from anywhere, at any time, on any device. To meet these dynamic and growing business demands, organizations need to deploy and scale up applications quickly. As a result, many are moving to higher Virtual Machine (VM) densities to enable rapid deployment of new applications and deploying flash storage to help those applications scale to support thousands of users. The Brocade® G620 Switch meets the demands of hyper-scale virtualization, larger cloud infrastructures, and growing flash-based storage environments by delivering market-leading Gen 6 Fibre Channel technology and capabilities. It provides a high-density building block for increased scalability, designed to support …
  • What Is a Storage Area Network?
    CHAPTER 1: What Storage Networking Is and What It Can Mean to You. In this chapter we'll learn more about: what a storage area network is; what properties a storage area network must have, should have, and may have; the importance of software to storage area networks; why information services professionals should be interested in storage networking; information processing capabilities enabled by storage area networks; some quirks in the vocabulary of storage networking.
    What Is a Storage Area Network? According to the Storage Networking Industry Association (and who should know better?): A storage area network (SAN) is any high-performance network whose primary purpose is to enable storage devices to communicate with computer systems and with each other. We think that the most interesting things about this definition are what it doesn't say:
    • It doesn't say that a SAN's only purpose is communication between computers and storage. Many organizations operate perfectly viable SANs that carry occasional administrative and other application traffic.
    • It doesn't say that a SAN uses Fibre Channel or Ethernet or any other specific interconnect technology. A growing number of network technologies have architectural and physical properties that make them suitable for use in SANs.
    • It doesn't say what kind of storage devices are interconnected. Disk and tape drives, RAID subsystems, robotic libraries, and file servers are all being used productively in SAN environments today.
  • Fibre Channel Solution Guide
    Fibre Channel Solutions Guide 2019 – fibrechannel.org. FIBRE CHANNEL: Powering the next generation private, public, and hybrid cloud storage networks.
    ABOUT THE FCIA: The Fibre Channel Industry Association (FCIA) is a non-profit international organization whose sole purpose is to be the independent technology and marketing voice of the Fibre Channel industry. We are committed to helping member organizations promote and position Fibre Channel, and to providing a focal point for Fibre Channel information, standards advocacy, and education. CONTACT THE FCIA: For more information: www.fibrechannel.org • [email protected]
    TABLE OF CONTENTS: Foreword (3); FCIA President Introduction (4); The State of Fibre Channel by Storage Switzerland (6); Fibre Channel New Technologies: FC-NVMe-2 (7); The 2019 Fibre Channel Roadmap (8); Fibre Channel's Future is Bright in Media and Entertainment (10); Securing Fibre Channel SANs with End-to-End Encryption (12).
    FOREWORD, by Rupin Mohan, FCIA Marketing Chairman; Director R&D and CTO, Hewlett-Packard Enterprise: It's 2019, and Fibre Channel continues to remain the premier storage fabric connectivity protocol in today's data centers. Fibre Channel is deployed by thousands of customers in their data centers around the world and 80–90% of all All-Flash storage arrays are connected to servers via Fibre Channel. Customers have recently made a considerable investment in Gen 6 (32GFC), and given the 4-5-year depreciation cycle, this equipment will continue to run critical business applications requiring reliable, fast and scalable storage infrastructure. The NVMe over Fibre Channel (FC-NVMe) standard is published, and we see products being announced and released in the market across the board.
  • IBM Flex System IB6131 Infiniband Switch User's Guide
    IBM Flex System IB6131 InfiniBand Switch User's Guide. Note: Before using this information and the product it supports, read the general information in "Appendix B: Notices" on page 33, the Safety Information and Environmental Notices and User's Guide documents on the IBM Notices for Network Devices CD, and the Warranty Information document that comes with the product. First Edition, February 2012. © Copyright IBM Corporation 2012. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
    Contents: Safety. Chapter 1. Introduction (Related documentation; Notices and statements in this document; Features and specifications; Specifications; Major components of the switch). Chapter 2. Installing the switch and basic setup (Installing the IBM Flex System blade; CMM; Serial port access (Method 1); Configuration; Configuration Wizard (Method 2); Cabling the switch). Chapter 3. LEDs and interfaces (Port LEDs; Switch status lights; Power LED; Fault LED; Unit identification switch identifier LED; RS-232 interface through mini connector; RJ-45 Ethernet connector; Configuring the IBM Flex System IB6131 InfiniBand switch; Rerunning the Wizard; Updating the switch software). Chapter 4. Connecting to the switch platform (Starting an SSH connection to the switch (CLI); Starting a WebUI connection to the switch; Managing the IBM Flex System IB6131 InfiniBand switch). Chapter 5. Solving problems (Running POST; …)
  • Global MV Standards English
    www.visiononline.org www.emva.org www.jiia.org www.china-vision.org www.vdma.com/vision – Member-supported trade associations promote the growth of the global vision and imaging industry. Standards development is key to the success of the industry and its trade groups help fund, maintain, manage and promote standards. In 2009, three leading vision associations, AIA, EMVA and JIIA, began a cooperative initiative to coordinate the development of globally adopted vision standards. In 2015 they were joined by CMVU and VDMA-MV. This publication is one product of this cooperative effort. Version: April 2016. Copyright 2013, AIA, EMVA and JIIA. All rights reserved. Data within is intended as an information resource and no warranty for its use is given.
    Camera Link (including PoCL and PoCL-Lite), Camera Link HS, GigE Vision and USB3 Vision are the trademarks of AIA. GenICam is the trademark of EMVA. CoaXPress and IIDC2 are the trademarks of JIIA. FireWire is the trademark of Apple Inc. IEEE 1394 is the trademark of The 1394 Trade Association. USB is the trademark of USB Implementers Forum, Inc. All other names are trademarks or trade names of their respective companies.
    This is a comprehensive look at the various digital hardware and software interface standards used in machine vision and imaging. In the early days of machine vision, the industry adopted existing analog television standards such as CCIR or RS-170 for the interface between cameras and frame grabbers. In the 1990s, digital technology became prevalent and a multitude of proprietary hardware and software interface solutions were used. The defining characteristics of today's …
  • Infiniband Trade Association Integrators' List
    InfiniBand Trade Association Integrators' List, April 2019. © InfiniBand Trade Association.
    Devices tested (Manufacturer, Model, Type, Speed, FW, SW – Description):
    • Mellanox MCX354A-FCCT – HCA, FDR, FW 2.42.5000, MLNX_OFED_LINUX 4.4-2.0.7.0 – ConnectX®-3 Pro VPI, FDR IB (56Gb/s) and 40/56GbE, dual-port QSFP, PCIe3.0 x8
    • Mellanox MCX456A-ECAT – HCA, EDR, FW 12.24.1000, MLNX_OFED_LINUX 4.5-1.0.1.0 – ConnectX®-4 VPI, EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, PCIe3.0 x16
    • Mellanox MCX556A-EDAT – HCA, EDR, FW 16.24.1000, MLNX_OFED_LINUX 4.5-1.0.1.0 – ConnectX®-5 VPI, EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, PCIe4.0 x16
    • Mellanox MCX653A-HCAT – HCA, HDR, FW 20.25.0330, MLNX_OFED_LINUX 4.5-1.0.1.0 – ConnectX®-6 VPI adapter card, HDR IB (200Gb/s) and 200GbE, dual-port QSFP56, Socket Direct 2x PCIe3.0 x16
    • Mellanox MSX6036G-2SFS – Switch, FDR, FW 9.4.5070, SW 3.6.8010 – SwitchX®-2 InfiniBand to Ethernet gateway, 36 QSFP ports, managed switch
    • Mellanox MSB7800-ES2F – Switch, EDR, FW 15.1910.0618, SW 3.7.1134 – Switch-IB 2 based EDR InfiniBand 1U switch, 36 QSFP28 ports, x86 dual core
    • Mellanox MQM8700-HS2F – Switch, HDR, FW 27.2000.1016, SW 3.7.1942 – Mellanox® Quantum™ HDR InfiniBand switch, 40 QSFP56 ports, 2 power supplies (AC), x86 dual core
    Software versions – Operating System: CentOS 7.5.1804; Mellanox OFED: MLNX OFED 4.5-1.0.1.0; Open MPI: 3.1.2; Benchmark: Intel MPI Benchmarks. Diagnostic software – ibutils2: MLNX OFED 4.5-1.0.1.0; Compliance Test Suite: v. 1.0.52.
    Benchmarks performed: PingPong, PingPing, Sendrecv, Exchange, Gather, Gatherv, Scatter, Scatterv. Test plan: Software Forge IBTA MOI Suite. Duration: 2–5 minutes. Conditions for passing testing – Link Width: link width is @ expected width, i.e. …
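    An interoperability listing like this is typically verified by querying each adapter's firmware and active link parameters on the host. Below is a rough sketch of how such a check might look with libibverbs, functionally similar to what the ibv_devinfo utility reports; the width/speed decoding tables are illustrative assumptions rather than values taken from the integrators' list itself.

```c
/* Sketch: query an adapter's firmware version and active link parameters,
 * roughly what ibv_devinfo reports when validating an interop setup.
 * The width/speed decoding tables below are illustrative assumptions. */
#include <infiniband/verbs.h>
#include <stdio.h>

static const char *width_str(uint8_t w)
{
    switch (w) { case 1: return "1x"; case 2: return "4x";
                 case 4: return "8x"; case 8: return "12x"; default: return "?"; }
}

static double speed_gbps(uint8_t s)
{
    switch (s) { case 1: return 2.5;  case 2: return 5.0;   case 4: return 10.0;
                 case 8: return 10.0; case 16: return 14.0625;
                 case 32: return 25.78125; default: return 0.0; }
}

int main(void)
{
    int n;
    struct ibv_device **devs = ibv_get_device_list(&n);
    for (int i = 0; i < n; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        struct ibv_device_attr dev_attr;
        struct ibv_port_attr port_attr;
        /* Firmware version comes from the device attributes; link width and
         * per-lane speed come from the port attributes of port 1. */
        if (ibv_query_device(ctx, &dev_attr) == 0 &&
            ibv_query_port(ctx, 1, &port_attr) == 0) {
            printf("%s: fw %s, port 1 %s @ %.5g Gbps/lane, state %d\n",
                   ibv_get_device_name(devs[i]), dev_attr.fw_ver,
                   width_str(port_attr.active_width),
                   speed_gbps(port_attr.active_speed), (int)port_attr.state);
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```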