SUPERIOR PERFORMANCE FOR LOW-LATENCY, HIGH-THROUGHPUT CLUSTERS RUNNING WINDOWS® HPC SERVER 2008

Windows HPC Server 2008 and Mellanox InfiniBand interconnect adapters deliver maximum performance and scalability for HPC networks.

OVERVIEW

High-performance computing (HPC) has become a fundamental resource for researchers, scientists, analysts, and engineers in industries spanning manufacturing, computational science, life sciences, financial services, aerospace, oil and gas, and more. These experts require dedicated HPC capacity to run performance-sensitive applications that demand vast amounts of computing power to solve complex simulations and long-running calculations. Meeting this requirement allows professionals to accomplish more in less time, and thereby to improve their productivity.

HPC applications require the highest possible throughput and the lowest possible latency. HPC server clusters therefore require interconnect technology that provides high throughput, low latency, and direct data transfer (CPU offload) between compute nodes. The InfiniBand interconnect architecture is the only industry-standard technology that advances I/O connectivity for HPC clusters.

These HPC requirements demand an HPC platform that helps ensure compatibility, seamless integration, optimal performance, and scalability. Windows HPC Server 2008 combined with Mellanox InfiniBand interconnect adapters delivers computing performance that addresses the productivity and scalability needs of current and future HPC computing platforms.

PARTNER PROFILE

Mellanox™ Technologies, Ltd. is a leading supplier of semiconductor-based, high-performance InfiniBand and Ethernet connectivity products that facilitate data transmission between servers, communications infrastructure equipment, and storage systems.

Mellanox ConnectX IB InfiniBand adapter devices deliver low latency and high bandwidth for performance-driven server and storage clustering applications in enterprise data centers, high-performance computing, and embedded environments. Mellanox InfiniBand devices have been deployed in clusters scaling to thousands of nodes and are being deployed end-to-end in data centers and Top500 systems around the world.

WINDOWS HPC SERVER 2008

Windows HPC Server 2008 is built on proven Windows Server® 2008 x64 technology and can efficiently scale to thousands of processing cores. Windows HPC Server 2008 is simple to deploy, manage, and use with existing infrastructure, and enables a solution that delivers high performance and stability.

MS-MPI SIMPLIFIES NETWORK DRIVER MANAGEMENT

Windows HPC Server 2008 simplifies network driver management by supporting the Microsoft Message Passing Interface (MS-MPI), which is based on the Argonne National Laboratory implementation (MPICH2) of the MPI2 standard. MS-MPI can utilize any interconnect (such as InfiniBand) that is supported on Windows Server 2008.
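Because MS-MPI implements the standard MPI2 API, an application written against any MPICH2-derived MPI can move between interconnects without source changes. The following is a minimal sketch of such a program in C; building against the MS-MPI SDK's mpi.h header and launching with mpiexec are assumptions for illustration, not details taken from this datasheet.

    /* Minimal MPI "hello" sketch using only the standard MPI2 C API
     * that MS-MPI implements. Build and launch details are assumed. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                   /* start the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank    */
        MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of ranks  */
        MPI_Get_processor_name(name, &name_len);  /* compute node host name */

        printf("Rank %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();                           /* shut down cleanly      */
        return 0;
    }

The same binary would run over Gigabit Ethernet or InfiniBand, with the choice of interconnect made at deployment time rather than in application code.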
NETWORKDIRECT ENSURES HIGH-SPEED INTERCONNECTS

Windows HPC Server 2008 is designed to ensure high-speed interconnects by using NetworkDirect, Microsoft's new Remote Direct Memory Access (RDMA) interface for high-speed, low-latency networks such as those running on InfiniBand. By using an architecture that bypasses operating system (OS) and TCP/IP overhead, NetworkDirect takes advantage of advanced InfiniBand capabilities and achieves better performance for massively parallel programs that utilize low-latency, high-bandwidth networks; see Figure 1.

Figure 1: ANSYS CFX Solver Manager, running identical jobs on two different cluster groups: an InfiniBand-enabled cluster (left) and a Gigabit Ethernet-enabled cluster (right). Given equal time, the InfiniBand-enabled cluster converges faster than on the GigE compute nodes.

"We are excited to partner with Microsoft to bring a high-performance computing experience that offers industry-leading performance, while maintaining simplicity and ease of use for end users. Mellanox 10, 20 and 40Gb/s InfiniBand end-to-end solutions running on Windows HPC Server 2008 provide the stringent productivity requirements of current and future scientific and engineering simulations to reduce time to market, and to enable safer and more reliable products."

Gilad Shainer, Mellanox

INFINIBAND INTERCONNECT ADAPTERS

Clusters linked with the high-speed, industry-standard InfiniBand interconnect deliver maximum performance, scalability, flexibility, and ease of management. HPC applications achieve maximum performance over InfiniBand networks because CPU cycles are available to focus on critical application processing instead of networking functions. Node-to-node latency of less than 2µs and almost 3GByte/sec uni-directional bandwidth have been demonstrated on Mellanox InfiniBand clusters using the Microsoft MS-MPI protocol. With InfiniBand's proven scalability and efficiency, small and large clusters easily scale up to tens of thousands of nodes. InfiniBand drivers for Windows HPC Server 2008 are based on the OpenFabrics Alliance Windows driver set (WinOF), ensuring that certified, high-performance networking drivers, such as those for InfiniBand, are available both to end users and to major original equipment manufacturers (OEMs) creating HPC offerings.
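Latency and bandwidth figures like those above are typically measured with an MPI ping-pong microbenchmark: two ranks on different nodes bounce a message back and forth, and the averaged round-trip time is halved. The sketch below illustrates the technique only; it is not the specific test behind the numbers quoted in this datasheet.

    /* Sketch of an MPI ping-pong microbenchmark for node-to-node
     * latency and uni-directional bandwidth. Illustrative only. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define ITERS 1000

    int main(int argc, char *argv[])
    {
        int rank, size;
        int bytes = (argc > 1) ? atoi(argv[1]) : 8;  /* message size */
        char *buf = calloc(bytes, 1);
        double start, rtt;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2)                    /* needs two communicating ranks */
            MPI_Abort(MPI_COMM_WORLD, 1);

        MPI_Barrier(MPI_COMM_WORLD);
        start = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {             /* rank 0 sends, waits for echo */
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {      /* rank 1 echoes it back */
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        rtt = (MPI_Wtime() - start) / ITERS;  /* average round-trip time */

        if (rank == 0)
            printf("%d bytes: latency %.2f us, bandwidth %.2f MB/s\n",
                   bytes,
                   rtt / 2.0 * 1e6,            /* one-way latency   */
                   bytes / (rtt / 2.0) / 1e6); /* one-way bandwidth */

        free(buf);
        MPI_Finalize();
        return 0;
    }

Launched with one rank on each of two compute nodes, sweeping the message size shows small messages dominated by latency and large messages approaching the link's peak bandwidth.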

BENEFITS

The combination of Windows HPC Server 2008 and Mellanox InfiniBand adapters increases:

• Performance. Supports native InfiniBand performance for HPC applications.
• Productivity. Accomplish more in less time than before.
• Scalability. Small and large clusters easily scale up to thousands of nodes.

WINDOWS HPC SERVER 2008 ARCHITECTURE

The Windows HPC Server 2008 architecture is shown in Figure 2. The Windows HPC Server 2008 head node:

• Controls and mediates all access to the cluster resources.
• Is the single point of management, deployment, and job scheduling for the cluster.
• Can fail over to a backup head node.

Windows HPC Server 2008 uses the existing corporate infrastructure and Microsoft Active Directory® for security, account management, and operations management.

Figure 2: Windows HPC Server 2008 architecture.

FURTHER INFORMATION

For more information about Windows HPC Server 2008 and HPC, please visit: http://www.microsoft.com/hpc

For more information about Mellanox Technologies, please visit: http://www.mellanox.com

This data sheet is for informational purposes only. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without the express written permission of Microsoft Corporation. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY.