InfiniBand Strengthens Leadership as the High-Speed Interconnect of Choice

Top500, Nov 2008

Top500 Performance Trends

[Charts: Total # of CPUs on the Top500 (38% CAGR); Total Performance of the Top500 (87% CAGR)]

- Explosive computing market growth
- Clusters continue to dominate, with 82% of the Top500 list
- Petaflop barrier shattered with the appearance of the LANL Roadrunner cluster
  • Based on Mellanox ConnectX HCA and switch technology
- Mellanox 40Gb/s InfiniBand technology enables the ever-growing performance demands
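The CAGR figures quoted on these charts follow the standard compound-growth formula. A minimal sketch, using hypothetical sample values rather than actual Top500 data:

```python
# Compound annual growth rate (CAGR): the annualized rate that turns a
# starting value into an ending value over a number of years.
# Sample values below are hypothetical, not Top500 figures.

def cagr(start: float, end: float, years: float) -> float:
    """Annualized growth rate implied by start and end values."""
    return (end / start) ** (1.0 / years) - 1.0

# A metric that doubles over two years grows ~41.4% per year
rate = cagr(100.0, 200.0, 2.0)
print(f"{rate:.1%}")
```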

Mellanox Technologies

InfiniBand in the Top500

- InfiniBand is the only growing standard interconnect technology
  • 142 clusters, a 16% increase versus the June 2008 list
  • GigE and proprietary interconnects show decline; no 10GigE clusters on the list
- Mellanox end-to-end 40Gb/s InfiniBand is the only proven 40Gb/s technology on the list
  • Virginia Tech cluster
- InfiniBand connects the most powerful system in the world, #1 on the list
  • The first system to achieve sustained Petaflop performance
  • Los Alamos National Lab "Roadrunner"
  • ConnectX InfiniBand, the world-leading scalable interconnect
  • Mellanox-based InfiniScale III switches
- InfiniBand makes the most powerful clusters in the Top10
  • 4 of the top 10 (#1, #3, #6, #10), both Linux-based and Windows-based
- The most-used interconnect in the Top200
  • 54% of the Top100, 37% of the Top200
- InfiniBand clusters are responsible for 35% of the total Top500 performance
- InfiniBand enables the most power-efficient clusters
  • Best power/performance results versus systems based on other interconnects
- Diverse set of applications
  • High-end HPC, commercial HPC, and enterprise

Roadrunner – #1 and First Petaflop System

- The most powerful system in the world
  • Los Alamos National Lab, #1 on the Nov 2008 Top500 list
  • Nearly 3x faster than the leading contenders on the Nov 2007 list
  • Usage: national nuclear weapons, astronomy, human genome science, and climate change
- Breaking through the "Petaflop barrier"
  • More than 1,000 trillion operations per second
  • 12,960 IBM PowerXCell CPUs, 3,456 tri-blade units
  • Mellanox ConnectX 20Gb/s InfiniBand adapters
  • Mellanox-based InfiniScale III 20Gb/s switches
- The Mellanox interconnect is the only scalable high-performance solution for Petascale computing

Virginia Tech 40Gb/s InfiniBand QDR Cluster

- Center for High-End Computing Systems (CHECS)
  • CHECS research activities are the foundation for the development of next-generation, power-aware high-end computing resources
- Mellanox end-to-end 40Gb/s solution
  • Mellanox 40Gb/s, the only 40Gb/s technology on the Top500 list
- 324 Apple Mac Pro servers
- 2,592 CPU cores in total (quad-core CPUs)
- 22.3TF, 80% efficiency
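The 80% figure is Linpack efficiency: sustained performance (Rmax) as a fraction of theoretical peak (Rpeak). A sketch that back-computes the implied peak from the slide's numbers; the per-core figure is an illustration derived from those numbers, not a published spec:

```python
# Linpack efficiency = Rmax / Rpeak. Rpeak below is inferred from the
# slide's 22.3 TF at 80% efficiency; the per-core number is illustrative.

def linpack_efficiency(rmax_tf: float, rpeak_tf: float) -> float:
    """Fraction of theoretical peak sustained on the HPL benchmark."""
    return rmax_tf / rpeak_tf

rmax_tf = 22.3                             # sustained (Rmax), from the slide
rpeak_tf = rmax_tf / 0.80                  # implied peak: ~27.9 TF
gflops_per_core = rpeak_tf * 1000 / 2592   # ~10.8 GFLOPS peak per core
```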

Interconnect Trends – Top500

Top InfiniBand Trends

- InfiniBand is the only growing high-speed clustering interconnect
- InfiniBand is the only growing standard interconnect technology
- 16% increase since June 2008

Interconnect Trends – Top100

Top Interconnect Trends

- InfiniBand is the leading interconnect in the Top100
  • 54 clusters, 42% higher than the Nov 2007 list
  • More than 5x higher than GigE, 9x higher than all proprietary high-speed interconnects
- InfiniBand is the only growing high-speed interconnect

Top100 Interconnect Share Over Time

Top100 Clustering Interconnect Share Over Time

- InfiniBand is the natural choice for large-scale computing
  • All based on Mellanox InfiniBand technology

Top500 Interconnect Placement

Top100 Interconnect Placement

- InfiniBand is the high-performance interconnect of choice
  • Connecting the most powerful clusters
  • Most of the GigE clusters will fall from the Nov 2008 list
- InfiniBand is the best price/performance connectivity for clusters
  • For all cluster sizes, for all applications
- All InfiniBand clusters use Mellanox switch silicon
  • 139 out of the 142 clusters use Mellanox InfiniBand HCA adapters

Top500 Interconnect Comparison

InfiniBand maximizes the cluster’s compute power

InfiniBand Performance Trends

[Charts: InfiniBand Clusters – CPU Count (173% CAGR); InfiniBand Clusters – Performance (256% CAGR)]

- Mellanox InfiniBand is the most efficient and scalable interconnect
- Driving factors: performance, multi-core, productivity, consolidation

InfiniBand Performance Leadership

Clusters Performance: InfiniBand versus GigE

- InfiniBand = Maximum Scalability, Efficiency and Performance

InfiniBand Performance Growth

InfiniBand Cluster Performance as a % of the Total Top500 Performance

- InfiniBand performance growth is higher than that of the total Top500

Top500 Interconnect/Performance Share

[Charts: Top500 Interconnect Share; Top500 Interconnect Aggregate Performance (TF)]

- InfiniBand provides 34% higher aggregate performance
  • While connecting only ~half the number of GigE-based clusters
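Those two claims combine into a rough per-cluster comparison. A sketch in which the GigE cluster count is an assumption derived from the "~half" statement, not a figure taken from the list:

```python
# Average per-cluster performance, InfiniBand relative to GigE, implied by
# the slide: 34% more aggregate performance from ~half as many clusters.
# The GigE cluster count is assumed from the "~half" claim.

ib_clusters = 142                    # InfiniBand clusters on the list
gige_clusters = 2 * ib_clusters      # assumed: roughly twice as many
aggregate_ratio = 1.34               # IB aggregate perf / GigE aggregate perf

# An average InfiniBand cluster delivers roughly 2.7x the performance
per_cluster_ratio = aggregate_ratio * (gige_clusters / ib_clusters)
```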

Top500 InfiniBand-based Clusters

[Table: the 142 InfiniBand-based clusters on the Nov 2008 Top500 list, by rank and site, from #1 DOE/NNSA/LANL, #3 NASA/Ames Research Center/NAS, #6 Texas Advanced Computing Center/Univ. of Texas, and #10 Shanghai Supercomputer Center down to #500 Lawrence Berkeley National Laboratory]
