HyperTransport Over Ethernet - A Scalable, Commodity Standard for Resource Sharing in the Data Center


Proceedings of the Second International Workshop on HyperTransport Research and Applications (WHTRA2011), Feb. 09, 2011, Mannheim, Germany

HyperTransport Over Ethernet - A Scalable, Commodity Standard for Resource Sharing in the Data Center

Jeffrey Young, Sudhakar Yalamanchili∗ (Georgia Institute of Technology), [email protected], [email protected]
Brian Holden, Mario Cavalli (HyperTransport Consortium), {brian.holden, [email protected]}
Paul Miranda (AMD), [email protected]

Abstract

HyperTransport interconnect technology has been in use for several years as a low-latency interconnect for processors and peripherals [9] [7] and more recently as an off-chip interconnect using the HTX card [5]. However, HyperTransport adoption for scalable cluster solutions has typically been limited by the number of available coherent connections between AMD processors (8 sockets) and by the need for custom HyperTransport connectors between nodes.

The HyperTransport Consortium's new HyperShare market strategy has presented three new options for building scalable, low-cost cluster solutions using HyperTransport technology: 1) HyperTransport-native torus-based network fabric using PCI Express-enabled network interface cards implementing the HyperTransport High Node Count specification [14], 2) HyperTransport encapsulated into InfiniBand physical layer packets, and 3) HyperTransport encapsulated into Ethernet physical layer packets. These three approaches provide different levels of advantages and trade-offs across the spectrum of cost and performance. This paper describes the encapsulation of HyperTransport packets into Ethernet, thereby leveraging the cost and performance advantages of Ethernet to enable sharing of resources and (noncoherent) memory across future data centers. More specifically, this paper describes key aspects of the HyperTransport over Ethernet (HToE) specification that is part of the HyperShare strategy.

1. Introduction

Future data center configurations are driven by total cost of ownership (TCO) for specific performance capabilities. Low-latency interconnects are central to performance, while the use of commodity interconnects is central to cost. This paper reports on an effort to combine a very high-performance, commodity interconnect (HyperTransport) with a high-volume interconnect (Ethernet). Previous approaches to extending HyperTransport (HT) over a cluster used custom FPGA cards [5] and proprietary extensions to coherence schemes [22], but these solutions mainly have been adopted for use in research-oriented clusters. The new HyperShare strategy from the HyperTransport Consortium proposes several new ways to create low-cost, commodity clusters that can support scalable high performance computing in either clusters or in the data center.

HyperTransport over Ethernet (HToE) is the newest specification in the HyperShare strategy that aims to combine favorable market trends with a high-bandwidth and low-latency hardware solution for non-coherent sharing of resources in a cluster. This paper illustrates the motivation behind using 10, 40, or 100 Gigabit Ethernet as an encapsulation layer for HyperTransport, the requirements for the HToE specification, and engineering solutions for implementing key portions of the specification.
In the following sections, we describe 1) the motivation for using HToE in both the HPC and data center arenas, 2) challenges facing the encapsulation of HT packets over Ethernet, 3) an overview of the major components of this specification, and 4) use cases that demonstrate how this new specification can be utilized for resource sharing in high node count environments.

∗ This research was supported in part by NSF grant CCF-0874991, and Jeffrey Young was supported by an NSF Graduate Research Fellowship.

2. The Motivation for HToE: Trends in Interconnects

The past ten years in the high-performance computing world have seen dramatic decreases in off-chip latency along with increases in available off-chip bandwidth, due largely to the introduction of commodity networking technologies like InfiniBand and 10 Gigabit Ethernet (10GE) from companies such as Myrinet and Quadrics. Arguably, InfiniBand has made the most inroads in the high-performance computing space, with InfiniBand composing 42.6% of the fabrics for clusters on the current Top 500 Supercomputing list [18].

At the same time, Ethernet has evolved as a lower-cost and "software-friendly" alternative that enjoys higher volumes. The ability to integrate HT over Ethernet would enjoy significant infrastructure and operating cost advantages in data center applications and certain segments of the high-performance marketplace.

2.1. Performance

The ratification of the 10 Gigabit Ethernet standard in 2002 [1] has led to its adoption in data centers and the high-performance community. Woven Systems (now Fortinet) demonstrated in 2007 that 10 Gigabit Ethernet with TCP offloading can compete in terms of performance with SDR InfiniBand, with both fabrics demonstrating latencies in the low microseconds during a Sandia test [28]. In addition, switch manufacturers have built 10 Gigabit Ethernet devices with latencies in the low hundreds of nanoseconds [11] [31]. Recent tests with iWARP-enabled 10GE adapters have shown latencies on the order of 8-10 microseconds, as compared to similar InfiniBand adapters with latencies of 4-6 microseconds [12]. More recent tests have confirmed that 10 Gigabit Ethernet latency for MPI with iWARP is in the range of 8 microseconds [20].

These latencies are already low enough to support the needs of many high-throughput applications, such as retail forecasting and many forms of financial analysis, which typically require end-to-end packet latencies in the range of a few microseconds. The new IEEE 802.3ba standard for 40 and 100 Gbps Ethernet also aims to make Ethernet more competitive with InfiniBand. Although full-scale adoption is likely to take several years, there are already some early products that support 100 Gigabit Ethernet [25].

The challenge with using these lower-latency fabrics is in making the lower hardware latencies accessible to the application software layers without having to engage higher-overhead legacy software protocol stacks that can add microseconds of latency [4] [23]. The HToE specification described here is a step towards that goal, since it focuses on using Layer 2 (L2) packets and a global address space memory model to reduce dependencies on software and OS-level techniques in performing remote memory accesses.

2.2. Cost and Market Share

While 10 Gigabit Ethernet has had a relatively slow adoption rate in the past few years, it should be noted that 1 Gigabit and 10 Gigabit Ethernet still have a 45.6% share of the Top 500 Supercomputing list [18], with a majority of these installations still using 1 Gigabit Ethernet. This indicates that cost plays an important role in the construction of computational clusters on this list (for example, for market analysis and geological data analysis in the mineral and natural resource industries). Additionally, networks composed of 1 and 10 Gigabit Ethernet also have a dominant position in high-performance web server farms. Part of this widespread market share is due to the low cost of Gigabit Ethernet and the falling cost of 10 Gigabit Ethernet, as well as the management and operational simplicity of Ethernet networks.

However, it should also be noted that InfiniBand still enjoys a price and power advantage over 10 and 40 Gbps Ethernet due to being first to market. A 40 Gbps, 36-port InfiniBand switch now costs around $6,500 and has a typical power dissipation of 226 Watts [8], while a 10 Gbps, 48-port Ethernet switch costs around $20,900 and has a power dissipation of 360 Watts.
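To put those switch prices in per-port terms, a quick back-of-the-envelope comparison using only the figures quoted above (2011-era list prices; street prices and configurations vary):

```python
# Per-port cost and power, computed from the switch figures quoted above.
# These are approximate 2011-era numbers from the text, not current data.

switches = {
    "40 Gbps InfiniBand, 36 ports": {"price_usd": 6500, "watts": 226, "ports": 36},
    "10 Gbps Ethernet, 48 ports": {"price_usd": 20900, "watts": 360, "ports": 48},
}

for name, s in switches.items():
    print(f"{name}: ${s['price_usd'] / s['ports']:.0f}/port, "
          f"{s['watts'] / s['ports']:.2f} W/port")

# Output:
# 40 Gbps InfiniBand, 36 ports: $181/port, 6.28 W/port
# 10 Gbps Ethernet, 48 ports: $435/port, 7.50 W/port
```

On these figures InfiniBand holds the advantage on both per-port cost and power, which is exactly the trade-off the paragraph above describes.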
One of the strongest factors for using Ethernet is the trend toward converged networks, driven in large part by the need to lower the total cost of ownership (TCO). For example, Fibre Channel (FC) has been the de facto high-performance standard for SANs for the past 15 years. The technical committee behind FC has been a major proponent of convergence in the data center with their introduction of the Fibre Channel over Ethernet (FCoE) standard [15]. This standard relies on several new IEEE Ethernet standards that are collectively referred to as either Data Center Bridging (DCB) or Converged Enhanced Ethernet (CEE) and are described in more detail in Section 3.3. The approval of this standard and subsequent adoption by hardware vendors bodes well for the continued usage of Ethernet in data centers and smaller high-performance clusters.

Possibly one of the best indicators of the future market share for Ethernet as a high-performance data center and cluster fabric is the willingness of competitors to embrace and extend Ethernet technologies. Two examples are the creation of high-performance Ethernet switches [24] and the development of RDMA over Converged Ethernet (RoCE) [3], which has been referred to by some as "InfiniBand over Ethernet" since it utilizes the InfiniBand verbs and transport layer with a DCB Ethernet link layer and physical network.

2.3. Scalability

[Figure 1. HyperTransport Over Ethernet Layers]

Because Ethernet was the most prevalent commodity interconnect technology in previous-generation data centers, considerable effort has been devoted to constructing scalable Ethernet fabrics for data centers. For instance, consider the use of highly scalable fat tree networks for data centers using 10 Gigabit Ethernet [27], while network vendors have already embraced the in-progress standards for Data Center Bridging as a way to create [...]. The barrier to entry in using HyperTransport over Ethernet is low in most cases, and using converged Ethernet negates the need for a custom sharing fabric like NUMAlink [16] or additional cabling for an InfiniBand or other custom network.
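To make the layering in Figure 1 concrete, here is a minimal sketch of carrying an opaque HT packet inside a raw Ethernet II (L2) frame. This is illustrative only and is not the HToE wire format: the EtherType below is a placeholder from the IEEE "local experimental" range, and the real specification defines its own encapsulation header and reliability mechanisms, which are omitted here.

```python
import struct

# Placeholder EtherType: 0x88B5/0x88B6 are reserved by IEEE 802 for local
# experimental use. The actual HToE specification defines its own value.
ETHERTYPE_HTOE_DEMO = 0x88B5

def htoe_frame(dst_mac: bytes, src_mac: bytes, ht_packet: bytes) -> bytes:
    """Wrap an opaque HyperTransport packet in an Ethernet II frame."""
    assert len(dst_mac) == 6 and len(src_mac) == 6
    header = dst_mac + src_mac + struct.pack("!H", ETHERTYPE_HTOE_DEMO)
    payload = ht_packet
    # Ethernet payloads must be at least 46 bytes, so short HT control
    # packets (4 or 8 bytes on a native HT link) would need padding.
    if len(payload) < 46:
        payload += b"\x00" * (46 - len(payload))
    return header + payload  # the MAC appends the 4-byte FCS in hardware

# Example: a dummy 12-byte "HT packet".
frame = htoe_frame(b"\x01\x02\x03\x04\x05\x06", b"\x0a\x0b\x0c\x0d\x0e\x0f",
                   b"\xde\xad\xbe\xef" * 3)
print(len(frame))  # 60 bytes on the wire before the FCS
```

The padding line hints at one of the engineering issues any encapsulation layer must address: native HT packets are small relative to Ethernet's minimum frame size, so an efficient design aggregates them rather than padding each one.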
Recommended publications
  • What Is It and How We Use It
InfiniBand Overview: What is it and how we use it

What is InfiniBand
• InfiniBand is a contraction of "Infinite Bandwidth"
  o Links can keep being bundled, so there is no theoretical limit.
  o The target design goal is to always be faster than the PCI bus.
• InfiniBand should not be the bottleneck.
• Credit-based flow control
  o Data is never sent if the receiver cannot guarantee sufficient buffering.
• InfiniBand is a switched fabric network
  o low latency
  o high throughput
  o failover
• Superset of VIA (Virtual Interface Architecture)
  o InfiniBand
  o RoCE (RDMA over Converged Ethernet)
  o iWARP (Internet Wide Area RDMA Protocol)

Data rates and encoding
• Serial traffic is split into incoming and outgoing relative to any port.
• Currently five data rates:
  o Single Data Rate (SDR), 2.5 Gbps
  o Double Data Rate (DDR), 5 Gbps
  o Quadruple Data Rate (QDR), 10 Gbps
  o Fourteen Data Rate (FDR), 14.0625 Gbps
  o Enhanced Data Rate (EDR), 25.78125 Gbps
• The InfiniBand Trade Association road map adds HDR (High Data Rate) and NDR (Next Data Rate).
• Links can be bonded together: 1x, 4x, 8x and 12x.
• SDR, DDR and QDR use 8b/10b encoding: 10 bits carry 8 bits of data, so the data rate is 80% of the signal rate.
• FDR and EDR use 64b/66b encoding: 66 bits carry 64 bits of data.
• Latency by signal rate: SDR 200 ns, DDR 140 ns, QDR 100 ns.

Hardware
• Two hardware vendors: Mellanox (bought Voltaire) and Intel (bought the QLogic InfiniBand business unit).
• Need to standardize hardware: Mellanox and QLogic cards work in different ways.
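The encoding and lane-bonding figures above combine multiplicatively: effective bandwidth = per-lane signal rate x encoding efficiency x number of lanes. A small sketch using the numbers from this overview:

```python
# Effective InfiniBand link bandwidth from the figures in the overview above.

SIGNAL_RATE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0,
                    "FDR": 14.0625, "EDR": 25.78125}
ENCODING = {"SDR": 8 / 10, "DDR": 8 / 10, "QDR": 8 / 10,   # 8b/10b
            "FDR": 64 / 66, "EDR": 64 / 66}                # 64b/66b

def effective_gbps(rate: str, lanes: int = 4) -> float:
    """Usable data rate for a bonded link (lanes = 1, 4, 8, or 12)."""
    return SIGNAL_RATE_GBPS[rate] * ENCODING[rate] * lanes

print(effective_gbps("SDR", 4))            # 8.0  -> the classic 8 Gb/s 4x SDR link
print(effective_gbps("QDR", 4))            # 32.0
print(round(effective_gbps("EDR", 4), 1))  # 100.0 -> "100 Gb/s" EDR
```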
  • IBM Flex System IB6131 InfiniBand Switch User's Guide
IBM Flex System IB6131 InfiniBand Switch User's Guide

Note: Before using this information and the product it supports, read the general information in "Appendix B: Notices" on page 33, the Safety Information and Environmental Notices and User's Guide documents on the IBM Notices for Network Devices CD, and the Warranty Information document that comes with the product.

First Edition, February 2012. © Copyright IBM Corporation 2012. US Government Users Restricted Rights: use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Safety
Chapter 1. Introduction: related documentation; notices and statements in this document; features and specifications; major components of the switch
Chapter 2. Installing the switch and basic setup: installing the IBM Flex System blade; CMM; serial port access (Method 1); configuration; Configuration Wizard (Method 2); cabling the switch
Chapter 3. LEDs and interfaces: port LEDs; switch status lights; power, fault, and unit identification LEDs; RS-232 interface through mini connector; RJ-45 Ethernet connector; configuring the IBM Flex System IB6131 InfiniBand switch; rerunning the Wizard; updating the switch software
Chapter 4. Connecting to the switch platform: starting an SSH connection to the switch (CLI); starting a WebUI connection to the switch; managing the IBM Flex System IB6131 InfiniBand switch
Chapter 5. Solving problems: running POST ...
  • Global MV Standards (English)
www.visiononline.org  www.emva.org  www.jiia.org  www.china-vision.org  www.vdma.com/vision

Member-supported trade associations promote the growth of the global vision and imaging industry. Standards development is key to the success of the industry, and its trade groups help fund, maintain, manage and promote standards. In 2009, three leading vision associations, AIA, EMVA and JIIA, began a cooperative initiative to coordinate the development of globally adopted vision standards. In 2015 they were joined by CMVU and VDMA-MV. This publication is one product of this cooperative effort. Version: April 2016.

Copyright 2013, AIA, EMVA and JIIA. All rights reserved. Data within is intended as an information resource and no warranty for its use is given. Camera Link (including PoCL and PoCL-Lite), Camera Link HS, GigE Vision and USB3 Vision are the trademarks of AIA. GenICam is the trademark of EMVA. CoaXPress and IIDC2 are the trademarks of JIIA. FireWire is the trademark of Apple Inc. IEEE 1394 is the trademark of The 1394 Trade Association. USB is the trademark of USB Implementers Forum, Inc. All other names are trademarks or trade names of their respective companies.

This is a comprehensive look at the various digital hardware and software interface standards used in machine vision and imaging. In the early days of machine vision, the industry adopted existing analog television standards such as CCIR or RS-170 for the interface between cameras and frame grabbers. In the 1990s, digital technology became prevalent and a multitude of proprietary hardware and software interface solutions were used. The defining characteristics of today's [...]
  • InfiniBand Trade Association Integrators' List
InfiniBand Trade Association Integrators' List, April 2019. © InfiniBand Trade Association

Manufacturer | Model | Description | Type | Speed | FW | SW
Mellanox | MCX354A-FCCT | ConnectX®-3 Pro VPI, FDR IB (56Gb/s) and 40/56GbE, dual-port QSFP, PCIe3.0 x8 | HCA | FDR | 2.42.5000 | MLNX_OFED_LINUX 4.4-2.0.7.0
Mellanox | MCX456A-ECAT | ConnectX®-4 VPI, EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, PCIe3.0 x16 | HCA | EDR | 12.24.1000 | MLNX_OFED_LINUX 4.5-1.0.1.0
Mellanox | MCX556A-EDAT | ConnectX®-5 VPI, EDR IB (100Gb/s) and 100GbE, dual-port QSFP28, PCIe4.0 x16 | HCA | EDR | 16.24.1000 | MLNX_OFED_LINUX 4.5-1.0.1.0
Mellanox | MCX653A-HCAT | ConnectX®-6 VPI, HDR IB (200Gb/s) and 200GbE, dual-port QSFP56, Socket Direct 2x PCIe3.0 x16 | HCA | HDR | 20.25.0330 | MLNX_OFED_LINUX 4.5-1.0.1.0
Mellanox | MSX6036G-2SFS | SwitchX®2 InfiniBand to Ethernet gateway, 36 QSFP ports, managed switch | Switch | FDR | 9.4.5070 | 3.6.8010
Mellanox | MSB7800-ES2F | Switch-IB 2 based EDR InfiniBand 1U switch, 36 QSFP28 ports, x86 dual core | Switch | EDR | 15.1910.0618 | 3.7.1134
Mellanox | MQM8700-HS2F | Mellanox® Quantum(TM) HDR InfiniBand switch, 40 QSFP56 ports, 2 power supplies (AC), x86 dual core | Switch | HDR | 27.2000.1016 | 3.7.1942

Software versions: operating system CentOS 7.5.1804; diagnostic software ibutils2 (MLNX OFED 4.5-1.0.1.0); Mellanox OFED MLNX OFED 4.5-1.0.1.0; Compliance Test Suite v1.0.52; Open MPI 3.1.2; benchmarks: Intel MPI Benchmarks.

Benchmarks performed (test plan: Software Forge IBTA MOI Suite; duration: 2-5 minutes each): PingPong, PingPing, Sendrecv, Exchange, Gather, Gatherv, Scatter, Scatterv.

Conditions for passing testing: link width is at the expected width - i.e.
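The pass condition above (the link trains at its expected width, and by extension speed) is straightforward to express in code. The sketch below is illustrative only and is not part of the IBTA test suite; in practice the check would parse the output of a diagnostic tool such as ibutils2 rather than take a dict, and the field names here are invented:

```python
# Illustrative "link @ expected width/speed" check, mirroring the pass
# condition from the integrators' list above.

EXPECTED = {"width": "4x", "speed": "EDR"}

def link_passes(observed: dict, expected: dict = EXPECTED) -> bool:
    """Pass only if the link trained at the expected width AND speed."""
    return (observed.get("width") == expected["width"]
            and observed.get("speed") == expected["speed"])

print(link_passes({"width": "4x", "speed": "EDR"}))  # True
print(link_passes({"width": "1x", "speed": "EDR"}))  # False: degraded link width
```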
  • InfiniBand Technology Overview
InfiniBand Technology Overview
Dror Goldenberg, Mellanox Technologies

SNIA Legal Notice: The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this material in presentations and literature under the following conditions: any slide or slides used must be reproduced without modification, and the SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee.

Abstract: The InfiniBand architecture brings fabric consolidation to the data center. Storage networking can concurrently run with clustering, communication and management fabrics over the same infrastructure, preserving the behavior of multiple fabrics. The tutorial provides an overview of the InfiniBand architecture, including discussion of high speed and low latency, channel I/O, QoS scheduling, partitioning, high availability and protocol offload. InfiniBand-based storage protocols, iSER (iSCSI Extensions for RDMA), NFS over RDMA and SCSI RDMA Protocol (SRP), are introduced and compared with alternative storage protocols, such as iSCSI and FCP. The tutorial further enumerates value-add features that InfiniBand brings to clustered storage, such as atomic operations and end-to-end data integrity.

Learning objectives: understand the InfiniBand architecture and feature set; understand the benefits of InfiniBand for networked storage; understand the standard InfiniBand storage protocols.

Agenda: motivation and general overview; protocol stack layers; storage protocols over InfiniBand; benefits.

© 2008 Storage Networking Industry Association. All Rights Reserved.
  • OpenCAPI, Gen-Z, CCIX: Technology Overview, Trends, and Alignments
OpenCAPI, Gen-Z, CCIX: Technology Overview, Trends, and Alignments
Brad Benton | OpenSHMEM Workshop | August 8, 2017

Agenda: overview and motivation for new system interconnects; technical overview of the new, proposed technologies; emerging trends; paths to convergence?

Newly emerging bus/interconnect standards: three new bus/interconnect standards were announced in 2016.
• CCIX: Cache Coherent Interconnect for Accelerators
  ‒ Formed May 2016
  ‒ Founding members include: AMD, ARM, Huawei, IBM, Mellanox, Xilinx
  ‒ ccixconsortium.com
• Gen-Z
  ‒ Formed August 2016
  ‒ Founding members include: AMD, ARM, Cray, Dell EMC, HPE, IBM, Mellanox, Micron, Xilinx
  ‒ genzconsortium.org
• OpenCAPI: Open Coherent Accelerator Processor Interface
  ‒ Formed September 2016
  ‒ Founding members include: AMD, Google, IBM, Mellanox, Micron
  ‒ opencapi.org

Motivations for these new standards:
• Tighter coupling between processors and accelerators (GPUs, FPGAs, etc.)
  ‒ unified, virtual memory address space
  ‒ reduce data movement and avoid data copies to/from accelerators
  ‒ enables sharing of pointer-based data structures without the need for deep copies
• Open, non-proprietary, standards-based solutions
• Higher bandwidth solutions: 25 Gb/s and above vs. 16 Gb/s for PCIe Gen4 (see the sketch below)
• Better exploitation of new and emerging memory/storage technologies: NVMe; storage class memory (SCM)
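For a rough sense of scale behind the bandwidth bullet: PCIe Gen4 signals at 16 GT/s per lane with 128b/130b encoding, while the newer interconnects signal at 25 Gb/s or more per lane (their effective encodings vary by technology and are ignored in this sketch):

```python
# Per-lane comparison behind the bandwidth motivation above.
# PCIe Gen4: 16 GT/s with 128b/130b encoding; newer links: ~25 Gb/s per lane.

pcie_gen4_lane = 16.0 * (128 / 130)  # ~15.75 Gb/s usable per lane
new_link_lane = 25.0                 # advertised per-lane signaling rate

print(f"PCIe Gen4 usable: {pcie_gen4_lane:.2f} Gb/s per lane")
print(f"25G-class link:   {new_link_lane / pcie_gen4_lane:.2f}x a Gen4 lane")
# PCIe Gen4 usable: 15.75 Gb/s per lane
# 25G-class link:   1.59x a Gen4 lane
```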
  • DesignCon 2003 TecForum I2C Bus Overview, January 27, 2003
DesignCon 2003 TecForum: I2C Bus Overview, January 27, 2003
Philips Semiconductors. Jean-Marc Irazabal, Technical Marketing Manager for I2C Devices; Steve Blozis, International Product Manager for I2C Devices.

Agenda
• 1st hour: serial bus overview; I2C theory of operation
• 2nd hour: overcoming previous limitations; I2C development tools and evaluation board
• 3rd hour: SMBus and IPMI overview; I2C device overview; I2C patent and legal information; Q & A
Slide speaker notes are included in the AN10216 I2C Manual.

Serial Bus Overview
[Figure: the serial bus landscape spans the communications, automotive, consumer and industrial markets; example buses include IEEE 1394, UART, SPI and I2C.]

General concept for serial communications
[Figure: a "master" with a parallel-to-serial shift register drives the clock (SCL), data (SDA), R/W and per-slave select lines to three slaves, each with its own shift register and read/write enable logic.]
• A point-to-point communication does not require a Select control signal.
• An asynchronous communication does not have a Clock signal.
• Data, Select and R/W signals can share the same line, depending on the protocol.
• Notice that Slave 1 cannot communicate with Slave 2 or 3 (except via the 'master'). Only the 'master' can start communicating; slaves can 'only speak when spoken to'.

Typical signaling characteristics: LVTTL, RS-422/485 [...]
  • DGX-1 User Guide
NVIDIA DGX-1 User Guide (DU-08033-001_v15, May 2018)

Table of Contents
Chapter 1. Introduction to the NVIDIA DGX-1 Deep Learning System
  1.1. Using the DGX-1: Overview
  1.2. Hardware Specifications
    1.2.1. Components
    1.2.2. Mechanical
    1.2.3. Power Requirements
    1.2.4. Connections and Controls
    1.2.5. Rear Panel Power Controls
    1.2.6. LAN LEDs
    1.2.7. IPMI Port LEDs
    1.2.8. Hard Disk Indicators
    1.2.9. Power Supply Unit (PSU) LED
Chapter 2. Installation and Setup
  2.1. Registering Your DGX-1 ...
  • I2C Overview
I2C Overview, 2H 2003

Agenda
• What is I2C and why would you be interested in using the I2C bus?
• What new I2C devices are there, and what are the typical applications?
• How are we going to help you design in these devices?

Transmission standards
[Figure: data transfer rate vs. backplane/cable length for serial standards including I2C, RS-232, RS-422, RS-423, RS-485, CML, GTLP, BTL, ETL, ECL/PECL and IEEE 1394.a.]
I2C data can be transmitted at speeds of 100 kHz, 400 kHz or 3.4 MHz. I2C data can be transmitted longer distances using bus buffers like the P82B96.

I2C bus basics: address and data
[Figure: an I2C bus connecting a microcontroller master to slaves such as GPIO, A/D and D/A converters, an LCD, an RTC, an EEPROM and a second microcontroller over the shared SCL and SDA lines.]
• The master always sends the SCL (clock) signal.
• Each device is addressed individually by software, with a unique address that can be modified by hardware pins (for example, an EEPROM addressed as 1010 A2 A1 A0, with A2-A0 set by pins). New devices or functions can be easily 'clipped' onto an existing bus.
• The open-drain/open-collector outputs provide for a "wired-AND" connection that allows devices to be added or removed without impact, and the bus always requires a pull-up resistor.

Write data (master transmitter, slave receiver): S, slave address + W, A, data, A, ..., data, A, P (n data bytes).
Read data (slave transmitter, master receiver): S, slave address + R, A, data, A, ..., last data byte, /A, P (n data bytes).
S = Start condition; P = Stop condition; W/R = write/read bit; A = Acknowledge; /A = Not Acknowledge (ends a read).
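The addressing scheme above packs the 7-bit slave address and the R/W bit into the first byte after the Start condition. A minimal byte-level sketch (ACK/NACK bits and the Start/Stop conditions are electrical events on SDA/SCL, so they are not modeled as bytes here):

```python
# Byte-level model of the I2C write transaction sketched above.

I2C_WRITE = 0  # R/W bit: 0 = write, 1 = read
I2C_READ = 1

def address_byte(addr7: int, rw: int) -> int:
    """First byte after START: 7-bit slave address followed by the R/W bit."""
    assert 0 <= addr7 <= 0x7F
    return (addr7 << 1) | rw

def write_transaction(addr7: int, data: bytes) -> list[int]:
    """[address byte, data bytes...]; per-byte ACKs and the STOP are implied."""
    return [address_byte(addr7, I2C_WRITE), *data]

# EEPROM example from the slide: address pattern 1010 A2 A1 A0.
# With the A2-A0 pins tied low, the 7-bit address is 0b1010000 = 0x50.
print(hex(address_byte(0x50, I2C_WRITE)))    # 0xa0
print(write_transaction(0x50, b"\x00\x42"))  # [160, 0, 66]
```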
  • HPE EDR InfiniBand Adapters Overview
QuickSpecs: HPE EDR InfiniBand Adapters - Overview

• HPE EDR InfiniBand 100Gb 1-port 841QSFP28 Adapter: based on Mellanox ConnectX®-5 technology, it provides InfiniBand function for HPE ProLiant XL and DL servers. It is designed for customers who need a low-latency, high-bandwidth InfiniBand interconnect in their high performance computing (HPC) systems.
• HPE EDR InfiniBand/Ethernet 100Gb 2-port 841QSFP28 Adapter: based on Mellanox ConnectX®-5 VPI technology, it supports dual-function InfiniBand and Ethernet for HPE ProLiant XL and DL servers, functioning as a dual-ported EDR InfiniBand card, a dual-ported 100Gb Ethernet card, or a mixed-function card.
• HPE EDR InfiniBand/Ethernet 100Gb 2-port 840QSFP28 Adapter: based on Mellanox ConnectX®-4 technology, it likewise supports dual-function InfiniBand and Ethernet for HPE ProLiant XL and DL servers in dual-ported EDR InfiniBand, dual-ported 100Gb Ethernet, or mixed configurations.
• HPE EDR InfiniBand/Ethernet 100Gb 1-port 841OCP QSFP28 Adapter: based on Mellanox ConnectX®-5 VPI technology, it supports dual-function InfiniBand and Ethernet for HPC systems based on the HPE Apollo 70 server.
• HPE Ethernet 100Gb 1-port QSFP28 MCX515A-CCAT Adapter: based on Mellanox ConnectX®-5 EN technology.
  • Mellanox ConnectX®-6 VPI Adapter Cards User Manual for Dell EMC PowerEdge Servers
Mellanox ConnectX®-6 VPI Adapter Cards User Manual for Dell EMC PowerEdge Servers
OPNs: Y1T43, 7TKND, 1GK7G, CY7GD. Rev 1.1. www.mellanox.com. Mellanox Technologies Confidential.

NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ("PRODUCT(S)") AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES "AS-IS" WITH ALL FAULTS OF ANY KIND AND SOLELY FOR THE PURPOSE OF AIDING THE CUSTOMER IN TESTING APPLICATIONS THAT USE THE PRODUCTS IN DESIGNATED SOLUTIONS. THE CUSTOMER'S MANUFACTURING TEST ENVIRONMENT HAS NOT MET THE STANDARDS SET BY MELLANOX TECHNOLOGIES TO FULLY QUALIFY THE PRODUCT(S) AND/OR THE SYSTEM USING IT. THEREFORE, MELLANOX TECHNOLOGIES CANNOT AND DOES NOT GUARANTEE OR WARRANT THAT THE PRODUCTS WILL OPERATE WITH THE HIGHEST QUALITY. ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL MELLANOX BE LIABLE TO CUSTOMER OR ANY THIRD PARTIES FOR ANY DIRECT, INDIRECT, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, PAYMENT FOR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY FROM THE USE OF THE PRODUCT(S) AND RELATED DOCUMENTATION EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Mellanox Technologies, 350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085, U.S.A. Tel: (408) 970-3400. Fax: (408) 970-3403. www.mellanox.com. © Copyright 2019. Mellanox Technologies Ltd.
  • Cable Assy 5.07.Qxd
Cable assemblies: HDMI, DVI, Serial ATA, LXI, data storage, flat ribbon, custom, consumer electronics. • Standard and custom • Overmold capability • Accessory options

HDMI Series Cable Assemblies
The High-Definition Multimedia Interface (HDMI)™ was developed as the digital interface standard for the consumer electronics market. HDMI combines high-definition video and multi-channel audio into a single digital interface to provide clear digital quality through a single cable. A single cable for audio and video significantly simplifies home entertainment system installation. CA's high-quality cable assemblies are available as HDMI-to-HDMI assemblies, as well as HDMI-to-DVI assemblies.

DVI Series Cable Assemblies
Circuit Assembly's DVI (Digital Visual Interface) cable assemblies utilize CA's high-quality DVI plugs, designed to reduce EMI across copper cables. Circuit Assembly offers DVI cable assemblies in multiple configurations for use in a variety of applications.

USB and IEEE 1394 Series Cable Assemblies
Universal Serial Bus is a simple way to attach a wide variety of peripherals to a PC. CA's USB cable assemblies support two-way data transmission from devices like the keyboard, mouse, notepad and joystick. CA's high-quality IEEE 1394 cables integrate computers with audio/video entertainment units and digital consumer electronics to allow real-time transmission with high image resolution.

Infiniband Cable Assemblies
Infiniband is a serial interface used in high-performance computing to connect processors with high-speed peripherals and enable inter-switch connections. Circuit Assembly's Infiniband cable assemblies support 4X and 12X Infiniband link implementation and are available in a wide range of configurations using industry-approved connectors and cable.