INFINIBAND/ETHERNET (VPI) ADAPTER SILICON

PRODUCT BRIEF

ConnectX®-3 Pro Single/Dual-Port Adapter Silicon with Virtual Protocol Interconnect®

ConnectX-3 Pro adapter silicon with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity with hardware offload engines for Overlay Networks ("Tunneling"), provides the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, Enterprise Data Centers, and High Performance Computing (HPC).

HIGHLIGHTS

BENEFITS
–– One design for InfiniBand, Ethernet (10/40/56GbE), or Data Center Bridging fabrics
–– World-class cluster, network, and storage performance
–– Cutting-edge performance in virtualized overlay networks (VXLAN and NVGRE)
–– Guaranteed bandwidth and low-latency services
–– I/O consolidation
–– Virtualization acceleration
–– Power efficient
–– Scales to tens-of-thousands of nodes

KEY FEATURES
–– Virtual Protocol Interconnect
–– 1us MPI ping latency
–– Up to 56Gb/s InfiniBand or 56 Gigabit Ethernet per port
–– Single- and Dual-Port options available
–– PCI Express 3.0 (up to 8GT/s)
–– CPU offload of transport operations
–– Application offload
–– GPU communication acceleration
–– Precision Clock Synchronization
–– HW offloads for NVGRE and VXLAN encapsulated traffic
–– End-to-end QoS and congestion control
–– Hardware-based I/O virtualization
–– Ethernet encapsulation (EoIB)
–– 17mm x 17mm, RoHS-R6

Public and private cloud clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. ConnectX-3 Pro with VPI also simplifies system development by serving multiple fabrics with one hardware design.

Virtual Protocol Interconnect
VPI-enabled adapters allow any standard networking, clustering, storage, or management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. With auto-sense capability, each ConnectX-3 Pro VPI port can identify and operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics. FlexBoot™ provides additional flexibility by enabling servers to boot from remote InfiniBand or LAN storage targets. ConnectX-3 Pro with VPI and FlexBoot simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

World-Class Performance

Virtualized Overlay Networks — Infrastructure as a Service (IaaS) cloud demands that data centers host and serve multiple tenants, each with their own isolated network domain over a shared network infrastructure. To achieve maximum efficiency, data center operators are creating overlay networks that carry traffic from individual Virtual Machines (VMs) in encapsulated formats such as Network Virtualization using Generic Routing Encapsulation (NVGRE) and Virtual Extensible Local Area Network (VXLAN) over a logical "tunnel," thereby decoupling the workload's location from its network address. The overlay network architecture introduces an additional layer of packet processing at the hypervisor level, adding and removing protocol headers for the encapsulated traffic. This encapsulation prevents many of the traditional "offloading" capabilities (e.g. checksum, TCP Segmentation Offload (TSO)) from being performed at the NIC.

ConnectX-3 Pro effectively addresses the increasing demand for an overlay network, enabling superior performance by introducing advanced NVGRE and VXLAN hardware offload engines that allow the traditional offloads to be performed on the encapsulated traffic. With ConnectX-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.
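On a Linux host, one way to see whether such tunnel offloads are exposed for a given port is to inspect the kernel's per-interface feature flags. The sketch below is illustrative only and is not part of this brief: it assumes a Linux system with the ethtool utility installed, uses a placeholder interface name (eth0), and relies on kernel feature strings such as tx-udp_tnl-segmentation (UDP/VXLAN-style tunnels) and tx-gre-segmentation (GRE/NVGRE-style tunnels), whose exact names vary by kernel and driver version.

```python
#!/usr/bin/env python3
"""Check whether a NIC exposes tunnel (VXLAN/NVGRE-style) stateless offloads.

A minimal sketch, not a Mellanox tool: it parses `ethtool -k` output on
Linux. The feature names below are kernel-defined and may differ across
kernel and driver versions.
"""
import subprocess
import sys

TUNNEL_FEATURES = (
    "tx-udp_tnl-segmentation",  # segmentation offload for UDP-encapsulated (VXLAN-style) traffic
    "tx-gre-segmentation",      # segmentation offload for GRE-encapsulated (NVGRE-style) traffic
)


def tunnel_offloads(iface: str) -> dict:
    """Return {feature_name: bool} for the tunnel-related offloads ethtool reports."""
    out = subprocess.run(
        ["ethtool", "-k", iface],
        capture_output=True, text=True, check=True,
    ).stdout
    state = {}
    for line in out.splitlines():
        if ":" not in line:
            continue
        name, value = line.split(":", 1)
        name = name.strip()
        if name in TUNNEL_FEATURES:
            state[name] = value.strip().startswith("on")
    return state


if __name__ == "__main__":
    iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"  # placeholder interface name
    for feature, enabled in tunnel_offloads(iface).items():
        print(f"{iface}: {feature} = {'on' if enabled else 'off'}")
```

Invoked as, for example, `python3 check_tunnel_offloads.py eth0`, the script simply reports whether the kernel currently advertises each tunnel offload for that interface.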


I/O Virtualization — ConnectX-3 Pro SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-3 Pro gives data center managers better server utilization while reducing cost, power, and cable complexity.

InfiniBand — ConnectX-3 Pro delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. Efficient computing is achieved by offloading protocol processing and data movement overhead, such as Remote Direct Memory Access (RDMA) and Send/Receive semantics, from the CPU, allowing more processor power for the application. CORE-Direct™ brings the next level of performance improvement by offloading application overhead such as data broadcasting and gathering, as well as global synchronization communication routines. GPU communication acceleration provides additional efficiencies by eliminating unnecessary internal data copies to significantly reduce application run time. ConnectX-3 Pro advanced acceleration technology enables higher cluster efficiency and large scalability to tens of thousands of nodes.

RDMA over Converged Ethernet (RoCE) — ConnectX-3 Pro, utilizing IBTA RoCE technology, delivers similar low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low-latency RDMA services over Layer 2 Ethernet. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.

Sockets Acceleration — Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10/40/56GbE. The hardware-based stateless offload engines in ConnectX-3 Pro reduce the CPU overhead of IP packet transport. Socket acceleration software further increases performance for latency-sensitive applications.

Storage Acceleration — A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage Ethernet or InfiniBand RDMA for high-performance storage access.

Software Support
All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, Ubuntu, and Citrix XenServer. ConnectX-3 Pro VPI adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.
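As a companion to the software support described above, the short sketch below shows one way a Linux host can enumerate the RDMA devices exposed by the OpenFabrics stack and the link layer each port is currently running, which is how a VPI port configured for InfiniBand or Ethernet typically appears in sysfs. It is a minimal illustration under stated assumptions (a Linux system with the RDMA core drivers loaded); device names such as mlx4_0 depend on the driver and are not guaranteed.

```python
#!/usr/bin/env python3
"""List RDMA devices and the link layer reported for each port.

A minimal sketch, assuming a Linux host with the RDMA stack loaded:
it walks /sys/class/infiniband, where each port reports "InfiniBand"
or "Ethernet" in its link_layer attribute.
"""
from pathlib import Path

SYSFS_IB = Path("/sys/class/infiniband")


def port_link_layers() -> dict:
    """Return {(device, port): link_layer} for every RDMA port found in sysfs."""
    result = {}
    if not SYSFS_IB.is_dir():
        return result  # RDMA stack not loaded or no RDMA-capable devices
    for dev in sorted(SYSFS_IB.glob("*")):
        for port in sorted((dev / "ports").glob("*")):
            link_layer = (port / "link_layer").read_text().strip()
            result[(dev.name, port.name)] = link_layer
    return result


if __name__ == "__main__":
    for (device, port), layer in port_link_layers().items():
        # e.g. "mlx4_0 port 1: InfiniBand" or "mlx4_0 port 2: Ethernet" (names are illustrative)
        print(f"{device} port {port}: {layer}")
```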

ConnectX-3 Pro Intelligent NIC — The Foundation of Cloud 2.0


FEATURE SUMMARY*

INFINIBAND
–– IBTA Specification 1.2.1 compliant
–– Hardware-based congestion control
–– 16 million I/O channels
–– 256 to 4Kbyte MTU, 1Gbyte messages

ENHANCED INFINIBAND
–– Hardware-based reliable transport
–– Collective operations offloads
–– GPU communication acceleration
–– Hardware-based reliable multicast
–– Extended Reliable Connected transport
–– Enhanced Atomic operations

ETHERNET
–– IEEE Std 802.3ae 10 Gigabit Ethernet
–– IEEE Std 802.3ba 40 Gigabit Ethernet
–– IEEE Std 802.3ad Link Aggregation
–– IEEE Std 802.3az Energy Efficient Ethernet
–– IEEE Std 802.1Q, .1P VLAN tags and priority
–– IEEE Std 802.1Qau Congestion Notification
–– IEEE Std 802.1Qbg
–– IEEE P802.1Qaz D0.2 ETS
–– IEEE P802.1Qbb D1.0 Priority-based Flow Control
–– IEEE 1588v2
–– Jumbo frame support (9600B)

OVERLAY NETWORKS
–– VXLAN and NVGRE - A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks; network virtualization hardware offload engines

HARDWARE-BASED I/O VIRTUALIZATION
–– Single Root IOV (see the configuration sketch following this summary)
–– Address translation and protection
–– Dedicated adapter resources
–– Multiple queues per virtual machine
–– Enhanced QoS for vNICs
–– VMware NetQueue support

ADDITIONAL CPU OFFLOADS
–– RDMA over Converged Ethernet
–– TCP/UDP/IP stateless offload
–– Intelligent interrupt coalescence

FLEXBOOT™ TECHNOLOGY
–– Remote boot over InfiniBand
–– Remote boot over Ethernet
–– Remote boot over iSCSI

PROTOCOL SUPPORT
–– Open MPI, OSU MVAPICH, Intel MPI, MS MPI, Platform MPI
–– TCP/UDP, EoIB, IPoIB, RDS
–– SRP, iSER, NFS RDMA
–– uDAPL

COMPATIBILITY

PCI EXPRESS INTERFACE
–– PCIe Base 3.0 compliant, 1.1 and 2.0 compatible
–– 2.5, 5.0, or 8.0GT/s link rate x8
–– Auto-negotiates to x8, x4, x2, or x1
–– Support for MSI/MSI-X mechanisms

CONNECTIVITY
–– Interoperable with InfiniBand or 10/40GbE Ethernet switches; interoperable with 56GbE Mellanox switches
–– Passive copper cable with ESD protection
–– Powered connectors for optical and active cable support
–– QSFP to SFP+ connectivity through QSA module

OPERATING SYSTEMS/DISTRIBUTIONS
–– Citrix XenServer 6.1
–– RHEL/CentOS 5.X and 6.X, Novell SLES10 SP4, SLES11 SP1, SLES11 SP2, OEL, Fedora 14, 15, 17, Ubuntu 12.04
–– Windows Server 2008/2012
–– FreeBSD
–– OpenFabrics Enterprise Distribution (OFED)
–– OpenFabrics Windows Distribution (WinOF)
–– VMware ESXi 4.x and 5.x

*This brief describes hardware features and capabilities. Please refer to the driver release notes on mellanox.com for feature availability. **Image depicts sample product only; actual product may differ.
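For the Single Root IOV capability listed in the feature summary above, the following sketch shows one common way to request virtual functions from a Linux host through sysfs. It is an illustrative example under stated assumptions (Linux, root privileges, a driver with SR-IOV support enabled); the interface name eth0 and the VF count are placeholders, and production deployments would normally use the operating system or hypervisor tooling instead.

```python
#!/usr/bin/env python3
"""Enable SR-IOV virtual functions on a PCIe NIC via Linux sysfs.

A minimal sketch under stated assumptions (Linux, root privileges, a
driver built with SR-IOV support); it is not a Mellanox tool.
"""
from pathlib import Path


def enable_vfs(iface: str, num_vfs: int) -> None:
    """Request `num_vfs` virtual functions, capped at what the device advertises."""
    dev = Path("/sys/class/net") / iface / "device"
    total = int((dev / "sriov_totalvfs").read_text())  # VFs the device/firmware exposes
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")
    # Reset to 0 first; the kernel rejects changing a non-zero VF count directly.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(num_vfs))
    print(f"{iface}: {num_vfs} of {total} VFs enabled")


if __name__ == "__main__":
    enable_vfs("eth0", 4)  # placeholder interface name and VF count
```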

Ordering Part Number   Description
MT27524A0-FCCR-FV      ConnectX®-3 Pro VPI, 1-Port IC, FDR/56GbE, PCIe 3.0 8GT/s (RoHS R6) with HW offloads for NVGRE and VxLAN
MT27528A0-FCCR-FV      ConnectX®-3 Pro VPI, 2-Port IC, FDR/56GbE, PCIe 3.0 8GT/s (RoHS R6) with HW offloads for NVGRE and VxLAN

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085 Tel: 408-970-3400 • Fax: 408-970-3403 www.mellanox.com

© Copyright 2013. Mellanox Technologies. All rights reserved. Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, IPtronics, Kotura, MLNX-OS, PhyX, SwitchX, UltraVOA, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, CoolBox, FabricIT, Mellanox Federal Systems, Mellanox Software Defined Storage, MetroX, MetroDX, Mellanox Open Ethernet, Mellanox Virtual Modular Switch, Open Ethernet, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
15-503PB Rev 1.2