SOFTWARE PRODUCT BRIEF

MLNX_OFED (OpenFabrics Enterprise Distribution)

High-Performance Server and Storage Connectivity Software for Field-Proven RDMA and Transport Offload Hardware Solutions

Clustering using commodity servers and storage systems is seeing widespread deployment in large and growing markets such as high-performance computing, data warehousing, online transaction processing, financial services and large-scale Web 2.0 deployments. To enable distributed computing with maximum efficiency, applications in these markets require the highest I/O bandwidth and the lowest possible latency. These requirements are compounded by the need to support a large interoperable ecosystem of networking, virtualization, storage, and other applications and interfaces. The OFED distribution from the OpenFabrics Alliance (www.openfabrics.org) has been hardened through collaborative development and testing by major high-performance I/O vendors. Mellanox OFED (MLNX_OFED) is a Mellanox-tested and packaged version of OFED that supports RDMA (remote DMA) and kernel-bypass APIs, called OFED verbs, over InfiniBand and Ethernet, allowing OEMs and system integrators to meet the needs of end users in their markets.

BENEFITS

• Supports InfiniBand and Ethernet connectivity on the same adapter card
• Single software stack operating across all Mellanox InfiniBand and Ethernet devices
• Support for HPC applications such as scientific research, oil and gas exploration, and car crash tests
• User-level verbs allow protocols such as MPI and uDAPL to interface to Mellanox InfiniBand and up to 200GbE RoCE hardware; kernel-level verbs allow protocols such as NVMe-oF and iSER to interface to Mellanox InfiniBand and up to 200GbE RoCE hardware
• Enhances performance and scalability through Mellanox Messaging Accelerations (MXM)
• RoCEv2 enables L3 routing to provide better isolation and to enable hyperscale Web 2.0 and cloud deployments with superior performance and efficiency
• Support for data center applications such as Oracle 11g RAC and IBM DB2 pureScale, and financial services low-latency messaging applications such as IBM WebSphere LLM, MRG and TIBCO
• Support for high-performance storage applications utilizing RDMA benefits
• Support for I/O virtualization technologies such as SR-IOV and paravirtualization over KVM and XenServer
• Support for OVS offload with ASAP2
• IPoIB component enables TCP/IP and sockets-based applications to benefit from the InfiniBand transport
• Supports the OpenFabrics-defined verbs API at the user and kernel levels
• SCSI mid-layer interface enabling SCSI-based block storage and management
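For readers who want to see what the OFED verbs API mentioned above looks like in practice, the following is a minimal sketch (not taken from the brief) that enumerates RDMA devices with libibverbs and opens the first one; the file name and build command are assumptions.

/* Minimal sketch: open the first RDMA device through the user-level
 * verbs API (libibverbs).  Build with, e.g.:  gcc verbs_open.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "failed to open %s\n", ibv_get_device_name(devs[0]));
        ibv_free_device_list(devs);
        return 1;
    }

    /* A protection domain is the starting point for registering memory
     * and creating the queue pairs used by RDMA operations.            */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    printf("opened %s, PD %s\n", ibv_get_device_name(devs[0]),
           pd ? "allocated" : "allocation failed");

    if (pd)
        ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

The same code path applies whether the underlying port runs InfiniBand or Ethernet (RoCE), which is the kernel-bypass model the brief describes.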

Virtual Protocol Interconnect Support

MLNX_OFED utilizes an efficient multi-layer device driver architecture to enable multiple I/O connectivity options over the same hardware I/O adapter (HCA or NIC). The same OFED stack installed on a server can deliver I/O services over both InfiniBand and Ethernet simultaneously, and ports can be repurposed to meet application and end-user needs. For example, one port on the adapter can function as a standard NIC or RoCE Ethernet NIC while the other port operates as InfiniBand, or both ports can be repurposed to run as InfiniBand or Ethernet. Doing so enables IT managers to realize maximum flexibility in how they deliver converged and higher I/O service levels.
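As a hedged illustration of VPI (a sketch, not part of the brief), the libibverbs ibv_query_port() call reports which link layer each port of an adapter is currently running, so an application can discover an InfiniBand/Ethernet port mix at runtime:

/* Sketch: list the link layer (InfiniBand or Ethernet/RoCE) of every
 * port on the first RDMA adapter found.                               */
#include <stdint.h>
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0])
        return 1;

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx)
        return 1;

    struct ibv_device_attr dev_attr;
    ibv_query_device(ctx, &dev_attr);

    for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; ++port) {
        struct ibv_port_attr pattr;
        if (ibv_query_port(ctx, port, &pattr))
            continue;
        printf("port %u: %s\n", port,
               pattr.link_layer == IBV_LINK_LAYER_ETHERNET
                   ? "Ethernet (RoCE)" : "InfiniBand");
    }

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}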

Delivering Converged and Higher I/O Service Levels

Networking in data center environments is comprised of server-to-server messaging (IPC), server-to-LAN and server-to-SAN traffic. To enable the highest performance, applications that run in the data center, especially those belonging to the target markets, expect different APIs or interfaces optimized for these different types of traffic.
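One consequence of this layered approach, sketched below under the assumption of an IPoIB interface with a placeholder peer address, is that an ordinary sockets application needs no code changes to run over InfiniBand; preloading libvma (LD_PRELOAD=libvma.so) can additionally bypass the kernel for latency-sensitive traffic.

/* Sketch: a plain TCP client.  Nothing here is InfiniBand-specific;
 * when the route goes through an IPoIB interface (e.g. ib0), the same
 * code runs over the InfiniBand fabric unchanged.                     */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in peer = { .sin_family = AF_INET,
                                .sin_port   = htons(7777) };
    /* 192.168.100.10 is a placeholder address assigned to an IPoIB interface. */
    inet_pton(AF_INET, "192.168.100.10", &peer.sin_addr);

    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) == 0) {
        const char msg[] = "hello over IPoIB";
        send(fd, msg, sizeof(msg), 0);
    }
    close(fd);
    return 0;
}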


Efficient & High-Performance HPC Clustering

In HPC applications, MPI (Message Passing Interface) is widely used as a parallel programming communications library. In emerging large-scale HPC applications, the I/O bottleneck is no longer only in the fabric, but has extended to the communication libraries. In order to provide the most scalable solution for MPI, SHMEM and PGAS applications, Mellanox provides an accelerations library named MXM (Mellanox Messaging) that enables MPI, SHMEM and PGAS programming languages to scale to very large clusters by improving memory- and latency-related efficiencies, and to assure that the communication libraries are fully optimized over Mellanox interconnect solutions (a short MPI sketch follows these sections).

Accelerating Cloud Infrastructure

MLNX_OFED integrates Mellanox' ASAP2 - Accelerated Switch and Packet Processing® technology. ASAP2 offloads the vSwitch/vRouter by handling the data plane in the NIC hardware while keeping the control plane unmodified. As a result, significantly higher vSwitch/vRouter performance is achieved without the associated CPU load. ASAP2 also works with the Open vSwitch Database (OVSDB) or other virtual switches to create a secure solution for bare metal provisioning. The software package also includes support for DPDK, and the ability to enable IPsec and a stateful L4-based firewall.

Lowest Latency for Financial Services Applications

MLNX_OFED with InfiniBand, RoCE and VMA delivers the lowest latency for financial applications and the highest packets-per-second (PPS) performance.

Web 2.0 and Other Traditional Sockets-Based Applications

MLNX_OFED includes an implementation of IP-over-IB, enabling IP-based applications to work seamlessly over InfiniBand. Standard UDP/TCP/IP sockets interfaces are supported over the L2 NIC (Ethernet) driver implementation for applications that are not sensitive to the lowest latency.

Storage Applications

To enable traditional SCSI- and iSCSI-based storage applications to enjoy similar RDMA performance benefits, MLNX_OFED includes the iSCSI RDMA Protocol (iSER), which interoperates with various target components available in the industry. iSER can be implemented over either InfiniBand or RoCE. MLNX_OFED also supports Mellanox storage acceleration, a consolidated compute and storage network that achieves significant cost-performance advantages over multi-fabric networks.

High Availability (HA)

MLNX_OFED includes high-availability support for message passing, sockets and storage applications. The standard channel bonding module is supported over IPoIB, enabling failover across ports on the same adapter or across adapters. Some vendor-specific fail-over/load-balancing driver models are supported as well.
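The MPI sketch referenced in the HPC clustering section above is shown here. It is a minimal, hedged example (not from the brief): a two-rank ping-pong whose transport is selected by the MPI library (for example Open MPI or MVAPICH over the verbs/RDMA stack, with MXM or a successor such as UCX providing the messaging acceleration), so the application code itself is interconnect-agnostic. The file name and run command are assumptions.

/* Sketch: two-rank MPI ping-pong.
 * Build/run, e.g.:  mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int token = 0;
    if (size >= 2) {
        if (rank == 0) {
            /* Rank 0 sends the token and waits for it to come back. */
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received token %d back\n", token);
        } else if (rank == 1) {
            /* Rank 1 increments the token and returns it. */
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            token++;
            MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}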

SUPPORTED OPERATING SYSTEMS

RedHat, CentOS, Ubuntu, SLES, OEL, Citrix XenServer Host, Fedora, Debian


DEVICE SUPPORT

ConnectX-3, ConnectX-3 Pro, ConnectX-4 and ConnectX-4 Lx, ConnectX-5, ConnectX-6

COMPONENTS

• Drivers for InfiniBand, RoCE, L2 NIC
• Access Layers and common verbs interface
• VPI (Virtual Protocol Interconnect)
• OSU MVAPICH and Open MPI
• IP-over-IB
• iSER Initiator
• uDAPL
• Subnet Manager (OpenSM)
• Installation, Administration and Diagnostics Tools
• Performance test suites

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085 Tel: 408-970-3400 • Fax: 408-970-3403 www.mellanox.com

© Copyright 2019. Mellanox Technologies. All rights reserved. Mellanox, Mellanox logo, ConnectX, and ASAP2 - Accelerated Switch and Packet Processing are registered trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. 2492PB Rev 1.9