Interconnect Your Future: Enabling the Best Datacenter Return on Investment

TOP500, June 2017: InfiniBand Accelerates the Majority of New Systems on the TOP500

. InfiniBand connects 2.5 times more new systems than Omni-Path

. EDR InfiniBand solutions grew 2.5X in six months

. Mellanox accelerates the fastest supercomputer in the world

. InfiniBand provides 1.7X Higher ROI for Petascale Platforms

. Mellanox connects 39% of overall TOP500 systems (192 systems, InfiniBand and Ethernet)

. InfiniBand connects 36% of the total TOP500 systems (179 systems)

. InfiniBand connects 60% of the HPC TOP500 systems

. InfiniBand accelerates 48% of the Petascale systems

. Mellanox connects all of the 40G Ethernet systems and connects the first 100G Ethernet system on the list

InfiniBand is the Interconnect of Choice for HPC Infrastructures, Enabling Machine Learning, High-Performance Computing, Web 2.0, Cloud, Storage and Big Data Applications

Mellanox Connects the World’s Fastest Supercomputer

National Supercomputing Center in Wuxi, China #1 on the TOP500 List

. 93 Petaflop performance, 3X higher versus #2 on the TOP500

. 41K nodes, 10 million cores, 260 cores per CPU

. Mellanox adapter and switch solutions

* Source: “Report on the Sunway TaihuLight System”, Jack Dongarra (University of Tennessee), June 20, 2016 (Tech Report UT-EECS-16-742)

InfiniBand Accelerates Artificial Intelligence (AI) and Deep Learning

Facebook AI Supercomputer #31 on the TOP500 List

NVIDIA AI Supercomputer #32 on the TOP500 List

. EDR InfiniBand In-Network Computing technology is key for scalable Deep Learning systems

. RDMA accelerates Deep Learning performance by 2X and has become the de facto solution for AI
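For context, the communication pattern these claims refer to is the gradient exchange in data-parallel training, which is typically an MPI_Allreduce (or an equivalent collective) over the fabric. The sketch below is illustrative only: the buffer size and training-loop placeholders are assumptions, and it simply shows the collective that RDMA transports and in-network reduction can accelerate.

```c
/* Minimal sketch of the gradient-averaging step in data-parallel
 * deep learning training. The MPI_Allreduce below is the collective
 * that RDMA transports and In-Network Computing can accelerate;
 * the buffer size and loop placeholders are illustrative assumptions. */
#include <mpi.h>
#include <stdlib.h>

#define NUM_PARAMS (1 << 20)   /* hypothetical model size: 1M gradients */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    float *grads = calloc(NUM_PARAMS, sizeof(float));   /* local gradients */
    float *sum   = malloc(NUM_PARAMS * sizeof(float));  /* global sum      */
    int ranks;
    MPI_Comm_size(MPI_COMM_WORLD, &ranks);

    /* ... compute local gradients for this worker's mini-batch ... */

    /* Sum gradients across all workers; over InfiniBand this runs on
     * RDMA (zero-copy transfers) and can be offloaded to the switches
     * with SHARP-style in-network reduction. */
    MPI_Allreduce(grads, sum, NUM_PARAMS, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    for (int i = 0; i < NUM_PARAMS; i++)
        sum[i] /= ranks;        /* average = sum / number of workers */

    /* ... apply averaged gradients to the local model copy ... */

    free(grads);
    free(sum);
    MPI_Finalize();
    return 0;
}
```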

Mellanox In the TOP500

. Mellanox accelerates the fastest supercomputer on the list
. InfiniBand is the most used HPC interconnect in the first half of 2017
. InfiniBand connects 2.5X more new end-user projects versus Omni-Path, and 3X more versus other proprietary products
. Mellanox connects nearly 39 percent of overall TOP500 systems (192 systems, InfiniBand and Ethernet)
. InfiniBand connects 36 percent of the total TOP500 systems (179 systems)
. InfiniBand connects 60 percent of the HPC TOP500 systems
. InfiniBand accelerates 48 percent of the Petascale systems
. EDR InfiniBand installations grew 2.5X in six months
. InfiniBand provides 1.7X higher system efficiency for Petascale systems versus Omni-Path
. Mellanox connects all of the 40G Ethernet systems
. Mellanox connects the first 100G Ethernet system on the list
. InfiniBand is the most used interconnect on the TOP500 for TOP100, TOP200, and TOP300 systems
. InfiniBand is the preferred interconnect for Artificial Intelligence and Deep Learning systems
. Mellanox solutions enable the highest ROI for Machine Learning, High-Performance Computing, Cloud, Storage, Big Data and more applications

Paving The Road to Exascale Performance

TOP500 Interconnect Trends

The TOP500 List has Evolved to Include Both HPC and Cloud / Web2.0 Hyperscale Platforms. For the HPC Platforms, InfiniBand Continues its Leadership as the Most Used Interconnect Solution for High-Performance Compute and Storage Infrastructures

TOP500 Petascale-Performance Systems

InfiniBand is the Interconnect of Choice for Petascale Computing, Accelerating 48% of the Petaflop Systems

InfiniBand Solutions – TOP100, 200, 300, 400, 500

InfiniBand is the Most Used Interconnect of the TOP100, 200, 300 and 400 Supercomputers: Superior Performance, Scalability, Efficiency and Return on Investment

InfiniBand Solutions – TOP100, 200, 300, 400, 500 HPC Systems Only (Excluding Cloud, Web2.0 etc. Systems)

InfiniBand is the Most Used Interconnect for HPC Systems: Superior Performance, Scalability, Efficiency and Return on Investment

Maximum Efficiency and Return on Investment

. Mellanox smart interconnect solutions enable In-Network Computing and CPU-Offloading
. Critical with CPU accelerators and higher scale deployments
. Ensures highest system efficiency and overall return on investment

System efficiency comparison:
. InfiniBand: NASA 84%, NCAR 90%, US Army 94%
. Omni-Path: CINECA 57%, Barcelona 62%, TACC 53%

1.7X Higher System Efficiency with InfiniBand! 43% of System Resources not Utilized with Omni-Path!
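For reference, TOP500 system efficiency is delivered Linpack performance (Rmax) divided by theoretical peak (Rpeak). The slide does not state which pair of systems the 1.7X figure uses, but pairing the listed efficiencies reproduces it; the short sketch below just hard-codes the percentages quoted above and is an assumption about how the ratio was derived.

```c
/* Worked example: TOP500 system efficiency is Rmax / Rpeak (delivered
 * Linpack performance over theoretical peak). The percentages below are
 * the ones quoted on this slide; pairing NCAR (InfiniBand) with TACC
 * (Omni-Path) reproduces the ~1.7X ratio, and a 57%-efficient system
 * leaves 43% of its resources unused. */
#include <stdio.h>

int main(void)
{
    double ncar_ib    = 0.90;   /* InfiniBand-connected system efficiency */
    double tacc_opa   = 0.53;   /* Omni-Path-connected system efficiency  */
    double cineca_opa = 0.57;

    printf("efficiency ratio : %.2fx\n", ncar_ib / tacc_opa);         /* ~1.70x */
    printf("unused resources : %.0f%%\n", (1.0 - cineca_opa) * 100);  /*  43%   */
    return 0;
}
```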

InfiniBand Delivers Best Return on Investment

. 30-100% Higher Return on Investment
. Up to 50% Saving on Capital and Operation Expenses
. Highest Applications Performance, Scalability and Productivity

Application performance advantage with InfiniBand:
. Molecular Dynamics: 1.3X Better
. Genomics: 2X Better
. Weather: 1.4X Better
. Automotive: 1.7X Better
. Chemistry: 1.3X Better

InfiniBand Delivers Better Price/Performance Over Competition

Example: supercomputer for vehicle development

. System cost compared across CPUs, interconnect and other components (memory, chassis etc.)
. InfiniBand configuration: 42% reduction in system cost versus the Omni-Path configuration
. 10 of the top 10 automotive manufacturers use Mellanox

~2X Lower Cost for Similar System Performance!

The Advantages of HDR InfiniBand Solutions

Maximum Performance and Highest Return on Investment

World’s First 200G Switch, World’s First 200G Adapter

The Generation of 200G HDR InfiniBand

. Switch radix: 40 ports of 200G HDR, or 80 ports of 100G HDR100
. Scalability: 128K nodes in a 3-level fat tree, 4.6X higher scalability
. 200G data speed: 2X higher throughput, 15.6 billion messages per second
. In-Network Computing: collective operations 5-10X faster, MPI tag matching 2X faster
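The 128K-node figure follows from standard fat-tree arithmetic: a 3-level non-blocking fat tree built from radix-k switches supports up to k^3/4 end nodes. The sketch below reproduces the numbers; the 48-port baseline used for the 4.6X comparison is an assumption (it matches the radix of competing 100G switches), not something stated on the slide.

```c
/* Sketch: maximum end nodes in a 3-level non-blocking fat tree built
 * from radix-k switches is k^3 / 4. With an HDR switch split into
 * 80 HDR100 ports that gives 128,000 nodes; dividing by a 48-port
 * baseline (an assumed competing 100G switch radix) gives ~4.6X. */
#include <stdio.h>

static long fat_tree_nodes(long radix)
{
    return radix * radix * radix / 4;   /* 3-level folded-Clos maximum */
}

int main(void)
{
    long hdr100 = fat_tree_nodes(80);   /* 80 x 100G HDR100 ports      */
    long base48 = fat_tree_nodes(48);   /* assumed 48-port 100G switch */

    printf("HDR100 fat tree : %ld nodes\n", hdr100);                /* 128,000 */
    printf("48-port fat tree: %ld nodes\n", base48);                /*  27,648 */
    printf("scalability     : %.1fx\n", (double)hdr100 / base48);   /*  ~4.6x  */
    return 0;
}
```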

World’s First 200G Switch, World’s First 200G Adapter

Highest-Performance 100Gb/s and 200Gb/s Interconnect Solutions

. Adapters: 200Gb/s, 0.6us latency, 200 million messages per second (10 / 25 / 40 / 50 / 56 / 100 / 200Gb/s)
. InfiniBand switch: 40 HDR (200Gb/s) InfiniBand ports or 80 HDR100 InfiniBand ports, throughput of 16Tb/s, <90ns latency
. Ethernet switch: 32 100GbE ports or 64 25/50GbE ports (10 / 25 / 40 / 50 / 100GbE), throughput of 6.4Tb/s
. Cables: transceivers, active optical and copper cables (10 / 25 / 40 / 50 / 56 / 100 / 200Gb/s), VCSELs, silicon photonics and copper

Interconnect Technology: The Need for Speed and Intelligence

[Chart: interconnect speed (40G QDR, 56G FDR, 100G EDR, 200G HDR, 400G NDR) versus system size (100 to 1,000,000 nodes), with application examples ranging from Weather, Genome, OpenFOAM (CFD) and LS-DYNA (FEA) up to Human Brain Mapping, Cosmological Simulations, Homeland Security and the Large Hadron Collider (CERN)]

The Intelligent Interconnect Overcomes Performance Bottlenecks

. CPU-Centric (Onload): must wait for the data, creates performance bottlenecks
. Data-Centric (Offload): analyzes data as it moves!

Faster Data Speeds and In-Network Computing Enable Higher Performance and Scale
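One concrete way the offload model shows up in application code is communication/computation overlap: with a non-blocking collective, the network can progress the operation while the host keeps computing. The sketch below is a minimal illustration of that pattern; the buffer size and the "other work" loop are assumptions, not part of the slide.

```c
/* Sketch of offload-enabled overlap: MPI_Iallreduce starts the
 * collective, the CPU keeps computing on other data, and MPI_Wait
 * completes it. On an offloading interconnect the reduction can
 * progress in the HCA/network instead of consuming host cycles. */
#include <mpi.h>
#include <stdio.h>

#define N 4096   /* illustrative buffer size */

int main(int argc, char **argv)
{
    static double send[N], recv[N], other_work[N];
    MPI_Request req;

    MPI_Init(&argc, &argv);

    for (int i = 0; i < N; i++) send[i] = 1.0;

    /* Start the reduction; control returns immediately. */
    MPI_Iallreduce(send, recv, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    /* Overlap: do unrelated computation while the network works. */
    for (int i = 0; i < N; i++) other_work[i] = i * 0.5;

    /* Block only when the reduced result is actually needed. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    printf("recv[0] = %f\n", recv[0]);

    MPI_Finalize();
    return 0;
}
```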

In-Network Computing Enables Data-Centric Data Center

Faster Data Speeds and In-Network Computing Enable Higher Performance and Scale

In-Network Computing Advantages with SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) Technology

Critical for High Performance Computing and Machine Learning Applications

SHIELD – Self Healing Interconnect Technology

Mellanox to Connect Future #1 HPC Systems (CORAL)

“Summit” System “Sierra” System

Paving the Path to Exascale

End-to-End Interconnect Solutions for All Platforms

Highest Performance and Scalability for X86, Power, GPU, ARM and FPGA-based Compute and Storage Platforms. 10, 20, 25, 40, 50, 56, 100 and 200Gb/s Speeds


Smart Interconnect to Unleash The Power of All Compute Architectures

InfiniBand: The Smart Choice for HPC Platforms and Applications

. “We chose a co-design approach supporting in the best possible manner our key applications. The only interconnect that really could deliver that was Mellanox’s InfiniBand.”
. “In HPC, the processor should be going 100% of the time on a science question, not on a communications question. This is why the offload capability of Mellanox‘s network is critical.”
. “One of the big reasons we use InfiniBand and not an alternative is that we’ve got backwards compatibility with our existing solutions.”


. “InfiniBand is the most advanced interconnect technology in the world, with dramatic communication overhead reduction that fully unleashes cluster performance.”
. “We have users that move tens of terabytes of data and this needs to happen very, very rapidly. InfiniBand is the way to do it.”
. “InfiniBand is the best that is required for our applications. It enhances and unlocks the potential of the system.”


Proven Advantages

. Scalable, intelligent, flexible, high performance, end-to-end connectivity

. Standards-based (InfiniBand, Ethernet), supported by large eco-system

. Supports all compute architectures: x86, Power, ARM, GPU, FPGA etc.

. Offloading architecture: RDMA, application acceleration engines, etc.

. Flexible topologies: Fat Tree, Mesh, 3D Torus, Dragonfly+, etc.

. Converged I/O: compute, storage, management on single fabric

. Backward and future compatible

The Future Depends On Smart Interconnect

TOP500 Mellanox Accelerated Supercomputers

Examples

Wuxi Supercomputing Center – World’s Fastest Supercomputer

. 93 Petaflop performance, 3X higher versus #2 on the TOP500

. 41K nodes, 10 million cores, 260 cores per CPU

. Mellanox adapter and switch solutions

Petaflop Mellanox Connected

NASA

Pleiades system
. 20K Mellanox InfiniBand nodes
. 241K CPU cores
. 6 Petaflops (sustained performance)
. HPE SGI 8600
. Supports a variety of scientific and engineering projects:
• Coupled atmosphere-ocean models
• Future space vehicle design
• Large-scale dark matter halos and galaxy evolution

Petaflop Mellanox Connected

Total Exploration Production

“Pangea” system
. HPE SGI 8600, 220K cores
. Mellanox InfiniBand
. 5.3 Petaflops (sustained performance)
. 80% efficiency

Petaflop Mellanox Connected

Texas Advanced Computing Center / Univ. of Texas

“Stampede” system
. Mellanox InfiniBand
. 5.2 Petaflops
. 6,000+ Dell nodes
. 462,462 cores, Intel Xeon Phi co-processors

Petaflop Mellanox Connected

NCAR (National Center for Atmospheric Research)

. “Cheyenne” system
. HPE SGI 8600
. Mellanox EDR InfiniBand
. 4.8 sustained Petaflop performance
. 145K processor cores, 4K nodes

Petaflop Mellanox Connected

Exploration & Production ENI S.p.A.

. “HPC2” system
. IBM iDataPlex DX360M4
. NVIDIA K20x GPUs
. 3.2 Petaflops
. Mellanox InfiniBand

Petaflop Mellanox Connected

Meteo France

. “Beaufix-2” system
. Atos Bullx DLC 720 system
. Mellanox InfiniBand solutions
. 2.2 sustained Petaflop performance
. 85% efficiency

Petaflop Mellanox Connected

Commissariat a l'Energie Atomique (CEA)

. Tera 100, first Petaflop system in Europe
. Mellanox InfiniBand
. 1.05 PF performance
. 4,300 Bull S Series servers
. 140,000 Intel® Xeon® 7500 processing cores
. 300TB of central memory, 20PB of storage

Petaflop Mellanox Connected

Thank You