CASE STUDY

Mellanox Virtual Protocol Interconnect® Creates the Ideal Gateway Between InfiniBand and Ethernet

OVERVIEW
Mellanox VPI provides all the benefits of InfiniBand while maintaining easy connectivity via the existing Ethernet fabric.

Background

In today's computing environment, it is critical for companies to get the most possible out of their networks: the highest performance, the most storage, seamless data transfers, maximum scalability, and the lowest latency. To remain static is to fall behind. Only by taking advantage of the latest advances in technology is it possible to maintain market supremacy or to gain on the competition.

One area in which data centers can realize this need for optimum network performance is in their interconnect. Whether connecting clustered database servers, native file storage, clouds of virtual machines, or any combination thereof, high-speed compute over InfiniBand enables increased throughput in less time, with lower latency and less need for CPU involvement.

However, it is also important that any solution be scalable, providing future-proofing by enabling growth whenever necessary without breaking the bank. The ability to add nodes to further increase compute capacity without sacrificing performance is crucial to the long-term viability of a company's data center.

Mellanox now offers the latest technology, both providing its customers with the best available interconnect and ensuring that they can deploy it affordably. Virtual Protocol Interconnect® (VPI) allows any port within a switch to run either InfiniBand or Ethernet, and to alternate between the two protocols as necessary. This creates a perfect out-of-the-box gateway, enabling integration between InfiniBand and Ethernet fabrics and clusters. Mellanox VPI offers both port flexibility and financial flexibility: instead of paying as you grow, ports can be adapted as needed to accommodate growth. Moreover, Mellanox VPI provides all the benefits of InfiniBand while maintaining easy connectivity via the existing Ethernet fabric.
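As a rough, host-side illustration of this dual-protocol model (not drawn from the case study, and assuming a Linux server with a VPI-capable ConnectX adapter and the libibverbs library from rdma-core), the short C sketch below walks the available RDMA devices and reports whether each port is currently presenting an InfiniBand or an Ethernet link layer. The point is that applications use the same verbs interface either way; only the reported link layer changes when a port's protocol is switched.

    /* Sketch: list each RDMA-capable port and its current link layer.
     * Assumes libibverbs (rdma-core); build with: gcc vpi_ports.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices)
            return 1;

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devices[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr dev_attr;
            if (ibv_query_device(ctx, &dev_attr) == 0) {
                /* Verbs numbers ports from 1. */
                for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
                    struct ibv_port_attr port_attr;
                    if (ibv_query_port(ctx, port, &port_attr))
                        continue;
                    const char *layer =
                        port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                            ? "Ethernet" : "InfiniBand";
                    printf("%s port %d: link layer %s\n",
                           ibv_get_device_name(devices[i]), port, layer);
                }
            }
            ibv_close_device(ctx);
        }
        ibv_free_device_list(devices);
        return 0;
    }

Note that this sketch runs against a host adapter; on a VPI switch such as the SX6036, port protocol assignment is handled through the switch management interface rather than through application code.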

Figure 1. Virtual Protocol Interconnect from Mellanox


Gateway to Network Management

Recently, Mellanox successfully deployed its VPI solution for two different customers from two different market segments, addressing two different needs.

One recent Mellanox VPI gateway deployment occurred with PensionsFirst, a leading solution provider to the global defined benefit pensions industry. PensionsFirst hosts its customers on a multitenant private cloud architecture that uses Mellanox RDMA-enabled InfiniBand and Ethernet interconnects to provide a high-performance, cost-effective solution that scales easily.

At PensionsFirst, InfiniBand was the overwhelming choice for its cloud computing platform, as it offered suitable performance and bandwidth to allow proper network convergence. 10GbE was not considered scalable enough for client, storage, and backup traffic combined, and by using RDMA, PensionsFirst gained strong VM scalability for its private cloud, enabling faster data analytics and a better return on investment. (A generic sketch of the underlying RDMA mechanism appears below.)

However, since PensionsFirst uses multiple data centers to create its private cloud, and since its client base accesses the cloud via IP traffic, PensionsFirst required an Ethernet solution above and beyond its InfiniBand network. Furthermore, as most network management software still interfaces via Ethernet, PensionsFirst needed the ability to combine InfiniBand and Ethernet architectures within its network.

Mellanox VPI provided a gateway for PensionsFirst to rely on InfiniBand between its compute, storage, and virtualization layers while interconnecting with its network management plane and external client IP traffic via Ethernet (Figure 2). This provided PensionsFirst with a flexible mixed fabric, yet without any associated performance penalty.

"By using Mellanox's innovative and cost-effective interconnect solutions, PensionsFirst was able to converge all of its data communications into a single, protected and cost-effective fabric," said Nick Francis, CTO at PensionsFirst. "We are thrilled with the performance and flexibility that RDMA affords us, and it's a superb platform to underpin our private cloud infrastructure."

Gateway to Storage

A second VPI deployment was with the Joint Supercomputer Center (JSCC) of the Russian Academy of Sciences, one of the top 100 most powerful computer centers in the world, with a current peak performance of 523.8 teraflops.

JSCC uses a pure InfiniBand network for its high-performance computing, and decided to use a Mellanox VPI SX6036 switch to maintain InfiniBand connectivity with the compute cluster while establishing Ethernet connectivity to its NetApp storage (Figure 3). The vast majority of storage interfaces use 10GbE to connect to compute clusters, but by using Mellanox VPI as a gateway to its storage, JSCC was able to seamlessly mix its fabric, maintaining its InfiniBand network while accessing its storage via Ethernet.
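The PensionsFirst results above rest on RDMA's ability to move data between machines without staging copies through the host CPU. As a minimal, generic sketch of that mechanism (not code from either deployment, and again assuming a Linux host with libibverbs from rdma-core), the program below registers an application buffer with the adapter; once a buffer is registered, RDMA work requests referencing it are carried out by the NIC itself.

    /* Sketch: register a buffer for direct RDMA access.
     * Assumes libibverbs (rdma-core); build with: gcc rdma_reg.c -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0)
            return 1;

        struct ibv_context *ctx = ibv_open_device(devices[0]);
        if (!ctx)
            return 1;
        struct ibv_pd *pd = ibv_alloc_pd(ctx);      /* protection domain */
        if (!pd)
            return 1;

        size_t len = 4 * 1024 * 1024;               /* 4 MB data buffer */
        void *buf = malloc(len);
        if (!buf)
            return 1;

        /* Pin and register the buffer so the adapter can access it directly. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            perror("ibv_reg_mr");
            return 1;
        }
        printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        /* A real application would now post RDMA work requests that
         * reference these keys; the adapter moves the data itself. */
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        return 0;
    }

Registration is a one-time setup cost; subsequent transfers against the buffer bypass the host CPU, which is a key reason a converged fabric can carry client, storage, and backup traffic without the performance penalties the case study calls out.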

Figure 2. Mellanox’s VPI Gateway connects PensionsFirst’s InfiniBand compute, storage and virtualization layers with its Ethernet network management plane and external client IP traffic

Figure 3. Mellanox’s VPI Gateway connects JSCC’s InfiniBand high-performance compute cluster to its Ethernet NetApp storage

“Mellanox’s unique gateway platform enabled us to interconnect our Ethernet-based storage to a fast InfiniBand scientific compute center,” said Boris Shabanov, Deputy Director, JSCC of RAS. “The use of the Mellanox gateway eliminated the need for additional infrastructure, saved money, and improved our network performance.”

JSCC now plans to increase its Ethernet bandwidth to add to its storage capability by moving more 10GbE links to its storage cluster, which can be accomplished without adding any further infrastructure, simply by allocating additional ports on the VPI SX6036 switch that is serving as the gateway.

Conclusion

Mellanox VPI offers all the benefits of a scalable InfiniBand data center with the ease of Ethernet connectivity to network management or storage. This allows a network to scale its existing infrastructure instead of paying for additional hardware as it grows, and it offers the most flexibility in allocating ports for the ideal architecture to meet networking requirements. Most of all, this is achievable with no performance penalty whatsoever, as Mellanox ensures the highest bandwidth and lowest latency in both its InfiniBand and Ethernet offerings.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085 Tel: 408-970-3400 • Fax: 408-970-3403 www.mellanox.com

© Copyright 2014 Mellanox Technologies. All rights reserved. Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, IPtronics, Kotura, MLNX-OS, PhyX, SwitchX, UltraVOA, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, CoolBox, FabricIT, Mellanox Federal Systems, Mellanox Software Defined Storage, MetroX, MetroDX, Mellanox Open Ethernet, Mellanox Virtual Modular Switch, Open Ethernet, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. 15-3504CS Rev 1.0