ADAPTER CARDS

PRODUCT BRIEF

ConnectX®-4 Lx EN rNDC 10/25 Gigabit Ethernet Adapter Cards supporting RDMA, Overlay Networks Encapsulation/Decapsulation and more

The ConnectX-4 Lx EN rNDC Network Controller with 10/25Gb/s Ethernet connectivity addresses virtualized infrastructure challenges, delivering best-in-class performance to demanding markets and applications. It provides true hardware-based I/O isolation with unmatched scalability and efficiency, making it a cost-effective and flexible solution for Web 2.0, Cloud, data analytics, database, and storage platforms.

With the exponential increase in data usage and the creation of new applications, the demand for the highest throughput, lowest latency, virtualization, and sophisticated data acceleration engines continues to rise. ConnectX-4 Lx EN enables data centers to leverage the world's leading interconnect adapter to increase operational efficiency, improve server utilization, and maximize application productivity, while reducing total cost of ownership (TCO).

ConnectX-4 Lx EN provides an unmatched combination of 10 and 25GbE bandwidth, sub-microsecond latency, and a 75 million packets per second message rate. It includes native hardware support for RDMA over Converged Ethernet (RoCE), Ethernet stateless offload engines, Overlay Networks, and GPUDirect® Technology.

HIGHLIGHTS

BENEFITS
–– Highest performing boards for applications requiring high bandwidth, low latency and high message rate
–– Industry leading throughput and latency for Web 2.0, Cloud and Big Data applications
–– Cutting-edge performance in virtualized overlay networks
–– Efficient I/O consolidation, lowering data center costs and complexity
–– Virtualization acceleration
–– Power efficiency

KEY FEATURES
–– 10/25Gb/s speeds
–– Erasure Coding offload
–– Virtualization
–– Low latency RDMA over Converged Ethernet (RoCE)
–– CPU offloading of transport operations
–– Application offloading
–– Mellanox PeerDirect™ communication acceleration
–– Hardware offloads for NVGRE and VXLAN encapsulated traffic
–– End-to-end QoS and congestion control
–– Hardware-based I/O virtualization
–– RoHS-R6

High Speed Ethernet Adapter
ConnectX-4 Lx EN offers the most cost-effective Ethernet adapter solution for 10 and 25Gb/s Ethernet speeds, enabling seamless networking, clustering, or storage. The adapter reduces application runtime and offers the flexibility and scalability to make infrastructure run as efficiently and productively as possible.

I/O Virtualization
ConnectX-4 Lx EN SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 Lx EN gives administrators better server utilization while reducing cost, power, and cable complexity, allowing more virtual machines and more tenants on the same hardware.

Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-4 Lx EN effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol headers, enabling the traditional offloads to be performed on the encapsulated traffic for these and other tunneling protocols (MPLS, QinQ, and so on). With ConnectX-4 Lx EN, data center operators can achieve native performance in the new network architecture.
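The following minimal C sketch illustrates what the VXLAN portion of this offload handles: it builds the 8-byte VXLAN header defined in RFC 7348 (an "I" flag plus a 24-bit VNI) around an inner frame. It is shown in software purely to clarify the encapsulation format; on ConnectX-4 Lx EN this step, along with the outer UDP/IP/Ethernet headers and the stateless offloads on the inner packet, is performed by the adapter. The frame contents and VNI value are placeholders.

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* The 8-byte VXLAN header defined in RFC 7348: an "I" flag marking the
     * VNI as valid, reserved bits, and a 24-bit Virtual Network Identifier. */
    struct vxlan_hdr {
        uint32_t flags;   /* 0x08000000 when the VNI field is valid        */
        uint32_t vni;     /* VNI in the upper 24 bits, low 8 bits reserved */
    };

    /* Wrap an inner Ethernet frame with a VXLAN header for the given VNI.
     * The outer UDP/IP/Ethernet headers would be added after this step.   */
    static size_t vxlan_encap(uint8_t *out, const uint8_t *inner,
                              size_t inner_len, uint32_t vni)
    {
        struct vxlan_hdr h = {
            .flags = htonl(0x08000000u),
            .vni   = htonl((vni & 0xFFFFFFu) << 8),
        };
        memcpy(out, &h, sizeof h);
        memcpy(out + sizeof h, inner, inner_len);
        return sizeof h + inner_len;
    }

    int main(void)
    {
        uint8_t inner[64] = {0};              /* placeholder inner frame */
        uint8_t pkt[sizeof inner + 8];
        size_t n = vxlan_encap(pkt, inner, sizeof inner, 42);
        printf("VNI 42: %zu bytes after VXLAN encapsulation\n", n);
        return 0;
    }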


RDMA over Converged Ethernet (RoCE)
ConnectX-4 Lx EN supports RoCE specifications, delivering low latency and high performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-4 Lx EN advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.

Mellanox PeerDirect™
PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 Lx EN advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

Storage Acceleration
Storage applications will see improved performance with the higher bandwidth ConnectX-4 Lx EN delivers. Moreover, standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

Distributed RAID
ConnectX-4 Lx EN delivers advanced Erasure Coding offloading capability, enabling distributed Redundant Array of Inexpensive Disks (RAID), a data storage technology that combines multiple disk drive components into a logical unit for the purposes of data redundancy and performance improvement. ConnectX-4 Lx EN's Reed-Solomon capability introduces redundant block calculations which, together with RDMA, achieve high-performance and reliable storage access.

Software Support
All Mellanox adapter cards are supported by Windows, Linux distributions, VMware and Citrix XenServer. ConnectX-4 Lx EN supports various management interfaces and has a rich set of tools for configuration and management across operating systems.
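As a rough illustration of how applications hand memory to the adapter for RDMA (the same verbs interface is used for RoCE), the following sketch uses the standard libibverbs API to open the first RDMA-capable device and register a buffer for zero-copy remote access. It assumes libibverbs is installed and an RDMA-capable port is present, and it omits queue-pair creation and connection setup, which a real RoCE application would also need.

    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;

        /* Register a buffer so the adapter can DMA to/from it directly,
         * with no intermediate copies through the kernel. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = pd ? ibv_reg_mr(pd, buf, len,
                                            IBV_ACCESS_LOCAL_WRITE |
                                            IBV_ACCESS_REMOTE_READ |
                                            IBV_ACCESS_REMOTE_WRITE) : NULL;
        if (!mr) {
            fprintf(stderr, "verbs setup failed\n");
            return 1;
        }
        printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }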

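To make the erasure-coding idea behind the Distributed RAID offload concrete, the sketch below shows a greatly simplified, single-parity scheme: one redundant block is computed by XOR-ing the data blocks, and a lost block is rebuilt from the parity and the survivors. The adapter's offload implements full Reed-Solomon coding, which generalizes this to multiple redundant blocks computed over a Galois field; the block sizes and contents here are placeholders.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define DATA_BLOCKS 4
    #define BLOCK_SIZE  8

    /* Compute one parity block as the XOR of all data blocks. */
    static void make_parity(uint8_t data[DATA_BLOCKS][BLOCK_SIZE],
                            uint8_t parity[BLOCK_SIZE])
    {
        memset(parity, 0, BLOCK_SIZE);
        for (int b = 0; b < DATA_BLOCKS; b++)
            for (int i = 0; i < BLOCK_SIZE; i++)
                parity[i] ^= data[b][i];
    }

    /* Rebuild a single lost block by XOR-ing the parity with the survivors. */
    static void rebuild(uint8_t data[DATA_BLOCKS][BLOCK_SIZE],
                        const uint8_t parity[BLOCK_SIZE], int lost)
    {
        memcpy(data[lost], parity, BLOCK_SIZE);
        for (int b = 0; b < DATA_BLOCKS; b++)
            if (b != lost)
                for (int i = 0; i < BLOCK_SIZE; i++)
                    data[lost][i] ^= data[b][i];
    }

    int main(void)
    {
        uint8_t data[DATA_BLOCKS][BLOCK_SIZE] = {
            "block-0", "block-1", "block-2", "block-3"
        };
        uint8_t parity[BLOCK_SIZE];

        make_parity(data, parity);
        memset(data[2], 0, BLOCK_SIZE);      /* simulate losing block 2 */
        rebuild(data, parity, 2);
        printf("recovered: %s\n", (char *)data[2]);
        return 0;
    }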
COMPATIBILITY*

PCI EXPRESS INTERFACE
–– PCIe Gen 3.0 compliant, 1.1 and 2.0 compatible
–– 2.5, 5.0, or 8.0GT/s link rate x8
–– Auto-negotiates to x8, x4, x2, or x1
–– Support for MSI/MSI-X mechanisms

CONNECTIVITY
–– Interoperable with 10/25Gb Ethernet switches
–– Passive copper cable with ESD protection

OPERATING SYSTEMS/DISTRIBUTIONS*
–– RHEL
–– SLES
–– Windows
–– VMware
–– OpenFabrics Enterprise Distribution (OFED)
–– OpenFabrics Windows Distribution (WinOF-2)

* Refer to the driver release notes for the current list of supported OS/distributions


FEATURES SUMMARY*

ETHERNET
–– 25GbE / 10GbE
–– 25G Ethernet Consortium 25 Gigabit Ethernet
–– IEEE 802.3ae 10 Gigabit Ethernet
–– IEEE 802.3ad, 802.1AX Link Aggregation
–– IEEE 802.1Q, 802.1P VLAN tags and priority
–– IEEE 802.1Qau (QCN) – Congestion Notification
–– IEEE 802.1Qaz (ETS)
–– Jumbo frame support (9.6KB)

ENHANCED FEATURES
–– Hardware-based reliable transport
–– Collective operations offloads
–– PeerDirect™ RDMA (aka GPUDirect® communication acceleration)
–– 64/66 encoding
–– Extended Reliable Connected transport (XRC)
–– Dynamically Connected transport (DCT)
–– Enhanced Atomic operations
–– Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
–– On demand paging (ODP) – registration-free RDMA memory access

OVERLAY NETWORKS
–– Stateless offloads for overlay networks and tunneling protocols
–– Hardware offload of encapsulation and decapsulation of NVGRE and VXLAN overlay networks

HARDWARE-BASED I/O VIRTUALIZATION
–– Single Root IOV
–– Address translation and protection
–– Multiple queues per virtual machine
–– Enhanced QoS for vNICs
–– VMware NetQueue support

VIRTUALIZATION
–– SR-IOV: up to 62 Virtual Functions (see the sketch after this summary)
–– 1K ingress and egress QoS levels
–– Guaranteed QoS for VMs

CPU OFFLOADS
–– RDMA over Converged Ethernet (RoCE)
–– TCP/UDP/IP stateless offload
–– LSO, LRO, checksum offload
–– RSS (can be done on encapsulated packet), TSS, HDS, VLAN insertion/stripping, Receive flow steering
–– Intelligent interrupt coalescence

REMOTE BOOT
–– Remote boot over Ethernet
–– Remote boot over iSCSI
–– PXE and UEFI

PROTOCOL SUPPORT
–– OpenMPI, IBM PE, OSU MPI (MVAPICH/2), MPI
–– Platform MPI, UPC, Open SHMEM
–– TCP/UDP, MPLS, VxLAN, NVGRE, GENEVE
–– iSER, NFS RDMA, SMB Direct
–– uDAPL

MANAGEMENT AND CONTROL INTERFACES
–– NC-SI, MCTP over SMBus and MCTP over PCIe
–– Baseboard Management Controller interface
–– SDN management interface for managing the eSwitch

* This section describes hardware features and capabilities. Please refer to the driver release notes for feature availability
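As a practical note on the SR-IOV entry above, virtual functions are typically enabled from the host by writing the desired count to the PCI device's sriov_numvfs attribute in sysfs, as in the minimal sketch below. The interface name and VF count are placeholder values; the actual name, the supported maximum (up to 62 on this adapter), and any driver-specific steps depend on the system.

    #include <stdio.h>

    /* Placeholder values: the real interface name and VF count depend on
     * the system; this adapter supports up to 62 virtual functions. */
    #define IFNAME  "eth0"
    #define NUM_VFS 8

    int main(void)
    {
        char path[256];
        snprintf(path, sizeof path,
                 "/sys/class/net/%s/device/sriov_numvfs", IFNAME);

        FILE *f = fopen(path, "w");       /* requires root privileges */
        if (!f) {
            perror(path);
            return 1;
        }
        fprintf(f, "%d\n", NUM_VFS);      /* kernel creates the virtual functions */
        fclose(f);

        printf("requested %d virtual functions on %s\n", NUM_VFS, IFNAME);
        return 0;
    }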

SKU        Description
406-BBLG   Mellanox ConnectX-4 Lx Dual Port 25GbE DA/SFP rNDC
406-BBLH   Mellanox ConnectX-4 Lx Dual Port 25GbE DA/SFP rNDC, Customer Install

350 Oakmead Parkway, Suite 100 Sunnyvale, CA 94085 Tel: 408-970-3400 • Fax: 408-970-3403 www.mellanox.com

© Copyright 2015. Mellanox Technologies. All rights reserved. Mellanox, BridgeX, Connect-IB, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, IPtronics, Kotura, Mellanox ScalableHPC, MetroX, MLNX-OS, PhyX, SwitchX, UFM and Unified Fabric Manager, Virtual Protocol Interconnect, UltraVOA, and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Accelio, Connect. Accelerate. Outperform, CoolBox, ExtendX, FabricIT, Mellanox CloudX, Federal Systems, OpenCloud + OpenCloud logo, Mellanox Software Defined Storage, Mellanox Virtual Modular Switch, MetroDX, Open Ethernet, The Generation of Open Ethernet, Software Defined Storage, TestX, are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

The information contained in this document, including all instructions, cautions, and regulatory approvals and certifications, is provided by Mellanox and has not been independently verified or tested by Dell. Dell cannot be responsible for damage caused as a result of either following or failing to follow these instructions. All statements or claims regarding the properties, capabilities, speeds or qualifications of the part referenced in this document are made by Mellanox and not by Dell. Dell specifically disclaims knowledge of the accuracy, completeness or substantiation for any such statements. All questions or comments relating to such statements or claims should be directed to Mellanox. Visit www.dell.com for more information. Dell is a registered trademark of Dell Inc.

MLNX-67-11 Rev 1.4