HPC with the HP Unified Cluster Portfolio

[email protected], Hewlett Packard India

© 2005 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

HPC: A Market in Transition

Key HPC application segments:
• Scientific research
• Mechanical engineering / virtual prototyping
• Geosciences
• High-end film and video
• Finance / securities
• Life and materials sciences
• Electronic design automation
• Product lifecycle management / informatics
• Government classified / defense

Trends: dramatic price/performance improvement, driven by Linux, industry standard servers, and clustered computing.

Growth of Linux Clusters

(Chart: total cluster revenue by OS, 2003-2008, in $K; source IDC, 2004. Segments: Linux, Windows, Unix, Other. CAGR annotations on the chart: 24.9%, Linux 18.2%, Unix 2.1%.)

HP offers a choice of operating systems, including Linux, Windows, and HP-UX 11i: HP = CHOICE.

HP Market Leadership

HPC Revenue Share, IDC Q1 CY2005

(Pie chart: HP 33.6%, IBM 27.8%, Sun 12.1%, Dell 11.7%; the remaining share is split among SGI, NEC, Fujitsu, Hitachi, Bull, and others, each under 10%. Q1 CY05 total: $1,923.2M.)

Source: Q1 2005 qView (IDC, June 2005)

HPC platforms

• Choice
• Performance
• Manageability

HP Cluster Platform 3000, HP Cluster Platform 4000, HP Cluster Platform 6000

Server building blocks: HP Integrity Superdome, rx8620, rx7620, rx4640, rx2620, rx1620; HP ProLiant DL585, DL380, DL385, DL360, DL145, DL140; HP ProLiant BL20p/BL25p, BL30p/BL35p, BL45p. Operating systems include HP-UX 11i.

Architecture Comparison: Intel Xeon DP vs. AMD Opteron

Intel Xeon DP:
• Each CPU gets less than half of the maximum bus bandwidth
• Memory and I/O must share the same bus
• Adding CPUs compounds the problems
• Not highly scalable past 2-way

AMD Opteron:
• Bottlenecks reduced or eliminated
• Adding CPUs adds memory and I/O bandwidth
• 5.3 GB/s of dedicated memory bandwidth per CPU
• CPU-to-CPU cHT links offer 3.2 GB/s of bandwidth in each direction
• Each PCI-X bus has 3.2 GB/s of bandwidth
• I/O is independent of memory access

(A simple memory-bandwidth microbenchmark sketch follows.)
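To make figures like "5.3 GB/s dedicated memory bandwidth" concrete, sustained memory bandwidth is usually measured with a STREAM-style kernel that streams arrays much larger than cache through the memory system. The sketch below is only an illustration of that technique, not an HP or AMD benchmark; the array size, repetition count, and triad constant are arbitrary choices, and a real measurement would use the official STREAM benchmark with compiler and thread tuning.

```c
/* Minimal STREAM-style "triad" sketch: c[i] = a[i] + 3.0 * b[i].
 * Arrays are sized well beyond cache so the loop is limited by memory
 * bandwidth rather than by the core. Build e.g. with: cc -O2 triad.c -lrt */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N       (16 * 1024 * 1024)   /* 16M doubles per array (~128 MB each) */
#define NTRIES  10

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) { perror("malloc"); return 1; }

    for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

    double best = 1e30;
    for (int t = 0; t < NTRIES; t++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < N; i++)
            c[i] = a[i] + 3.0 * b[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        if (sec < best) best = sec;    /* keep the best of several tries */
    }

    /* The triad moves three arrays of N doubles per pass. */
    double gbytes = 3.0 * N * sizeof(double) / 1e9;
    printf("triad: %.2f GB/s sustained\n", gbytes / best);

    free(a); free(b); free(c);
    return 0;
}
```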

Intel Montecito Dual-Core Processor

• Excellent progress on first silicon
  − Running multiple OSs in dual-core and multi-threading mode
  − First demo at IDF showed dual core and multi-threading
  − First sample deliveries to OEMs Sept '04
  − Volume samples began in late Q4 '04

• Montecito production targets Q1 '06
  − Three bus speeds supported: 400, 533, 667
  − Targeting 2.0 GHz, 24 MB L3 cache
  − Targeting 100 W

• Performance and new technologies
  − ~1.5-2x higher performance than Madison 9M (Intel estimate)
  − Multiple new CPU and platform technologies:
    • Dual core plus multi-threading, for 4 threads per socket
    • 24 MB on-die L3 cache
    • Foxton Technology boosts performance dynamically based on power consumption
    • Pellston Technology improves system uptime by reducing the risk of ECC errors in the L3 cache
    • Demand Based Switching reduces power consumption dynamically based on CPU utilization
    • Silvervale Technology enables easier and more robust virtual partitioning

(Diagram: two cores, each with its own L3 cache, sharing an arbiter and the system bus.)

Clustered Computing: The Use of Open Source Software

HP Unified Cluster Portfolio: Strategy and Vision for HPC

(Diagram: three solution pillars on a common foundation of HP Integrity and ProLiant servers: computation (HP Cluster Platforms), data management (HP StorageWorks Scalable File Share and storage grid), and visualization (HP Scalable Visualization Array).)

Advancing the power of clusters with:
• Integrated solutions spanning computation, storage and visualization
• Choice of industry standard platforms, operating systems, interconnects, etc.
• HP engineered and supported solutions that are easy to manage and use
• Scalable performance on complex workloads
• Extensive use of, and contribution to, open source software
• Extensive portfolio of qualified development tools and applications

HP delivering the HPC vision

(Diagram: users connect through inbound connections to login and service nodes; compute/application nodes, visualization nodes, and admin/service nodes share a high-speed interconnect; object storage servers (OSTs) and metadata servers (MDSs) form a scalable, highly available storage farm; the visualization nodes drive a multi-panel display device.)

High performance interconnects

• InfiniBand
  − Emerging industry standard (PCI-e)
  − IB 4x: 1.8 GB/s, <5 µs MPI latency
  − 24-port and 288-port switches
  − Scalable topologies with federation of switches (24-port node-level switches, each connecting 12 nodes, under 288-port top-level switches)
• Myrinet
  − Rev D: 489 MB/s, <7 µs MPI latency
  − Rev E: 800 MB/s, <6 µs MPI latency
  − 16-port, 128-port, and 256-port switches
  − Scalable topologies with federation of switches (128-port node-level switches, each connecting 64 nodes, under 264-port top-level switches)
• Elan
  − Elan4: 800 MB/s, <3 µs MPI latency
  − 8-port, 32-port, 64-port, and 128-port switches
  − Scalable topologies with federation of switches (128-port node-level switches, each connecting 64 nodes)
• GigE
  − 60-80 MB/s, >40 µs MPI latency

These figures can be sanity-checked with an MPI ping-pong test, sketched below.
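The bandwidth and latency numbers above are the kind of figures a simple MPI ping-pong microbenchmark reports: two ranks bounce messages of increasing size and time the round trip. The sketch below is purely illustrative, not a vendor benchmark; the message sizes and repetition count are arbitrary, and production measurements would normally use an established suite such as the Intel/Pallas MPI benchmarks.

```c
/* Illustrative MPI ping-pong: rank 0 and rank 1 exchange messages of
 * growing size; one-way latency and bandwidth are derived from the
 * round-trip time. Run with at least two ranks, one per node. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define REPS 1000

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    /* Message sizes from 0 bytes (latency) up to 4 MB (bandwidth). */
    for (long bytes = 0; bytes <= (4L << 20); bytes = bytes ? bytes * 4 : 1) {
        char *buf = malloc(bytes ? bytes : 1);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {
                MPI_Send(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t = (MPI_Wtime() - t0) / (2.0 * REPS);   /* one-way time */
        if (rank == 0)
            printf("%8ld bytes: %8.2f us, %8.2f MB/s\n",
                   bytes, t * 1e6, bytes ? bytes / t / 1e6 : 0.0);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}
```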

Unified Cluster Portfolio for HPC

Software stack (top to bottom):
• HPC Cluster Services
• HPC Application, Development and Grid Software Portfolio
• HP Scalable Visualization
• HP Scalable File Share
• Scalable Cluster Management: XC System Software, CMU/Partner Software, Windows HPC Suite, ClusterPack
• Operating Environment & OS Extensions: HP-UX, Linux, Windows
• HPC Cluster Platforms: ProLiant and Integrity servers, multiple interconnects

Unified Cluster Portfolio: Data Management Options

(Diagram: data management options positioned along two axes: cluster scalability vs. SMP and FC scalability, and higher bandwidth and capacity vs. greater HA and transactions. Options shown: HP SFS ("HPC class", Lustre and NFS), Clustered Gateway NAS ("Enterprise class", NFS and CIFS), StorNext filesystem, SAN and managed storage, Storage Foundation SAN filesystem, an HP fabric with guaranteed bandwidth, and "Enterprise class" tape silos.)

Scalable data management: HP StorageWorks Scalable File Share (HP SFS)

Customer challenge

• I/O performance limitations

HP SFS provides:
• Scalable performance
  − Aggregate parallel read or write bandwidth from >1 GB/s to "tens of GB/s"
  − 100-fold increase over NFS
• Scalable access
  − Shared, coherent, parallel access across a huge number of clients: thousands today, "tens of thousands" in the future (see the MPI-IO sketch below)
• Scalable capacity
  − Multiple terabytes to multiple petabytes
• Based on breakthrough Lustre technology
  − Open source, industry-standards based

(Diagram: a Linux cluster drawing scalable bandwidth from a scalable storage grid of smart cells.)
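Lustre-based file systems such as HP SFS are mounted on the client nodes and accessed as ordinary POSIX file systems, so applications can exploit the parallel bandwidth with plain file I/O or, as sketched below, with MPI-IO collective writes in which every rank writes its own disjoint region of one shared file. This is an illustrative sketch only, not an SFS-specific API; the file path and block size are hypothetical choices made for the example.

```c
/* Illustrative MPI-IO sketch: many compute nodes write disjoint blocks of
 * one shared file on a parallel file system (e.g., a Lustre-backed mount).
 * The path "/sfs/demo.dat" is hypothetical. */
#include <mpi.h>
#include <stdlib.h>

#define BLOCK (1 << 20)   /* 1 MiB per rank, an arbitrary example size */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank fills a private buffer with its own data. */
    char *buf = malloc(BLOCK);
    for (int i = 0; i < BLOCK; i++) buf[i] = (char)rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "/sfs/demo.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Rank r writes bytes [r*BLOCK, (r+1)*BLOCK). The collective call lets
     * the MPI-IO layer aggregate requests before they reach the object
     * storage servers. */
    MPI_Offset offset = (MPI_Offset)rank * BLOCK;
    MPI_File_write_at_all(fh, offset, buf, BLOCK, MPI_BYTE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```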

Scalable visualization

Customer challenge

• Visualization solutions are too expensive, proprietary, and not scalable

HP Scalable Visualization Array (SVA)

• Open, scalable, affordable high-end visualization solution based on industry-standard Sepia technology

• Innovative approach combining
  − standard graphics adapters
  − accelerated compositing (illustrated in the sketch after this list)

• Yields a system that scales to clusters capable of displaying 100 million pixels or more

• Early pilots underway; general availability in Q4 CY2005
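"Accelerated compositing" here means merging the partial images rendered by many standard graphics adapters into one final picture. A common scheme is sort-last depth compositing, where for each pixel the fragment nearest the viewer wins; compositing hardware performs merges like this in the data path, while the sketch below shows only the per-pixel rule in plain software, purely as an illustration. The types and function names are hypothetical, not part of the SVA or Sepia software.

```c
/* Generic sort-last depth compositing, shown in software for illustration:
 * each render node produces a color + depth tile for its share of the scene,
 * and the compositor keeps, per pixel, whichever fragment is closer. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t *color;   /* packed RGBA per pixel */
    float    *depth;   /* depth per pixel, smaller = closer to the viewer */
    size_t    npixels;
} tile_t;

/* Merge tile 'src' into 'dst' in place. */
void composite_depth(tile_t *dst, const tile_t *src)
{
    for (size_t i = 0; i < dst->npixels; i++) {
        if (src->depth[i] < dst->depth[i]) {   /* src fragment is closer */
            dst->depth[i] = src->depth[i];
            dst->color[i] = src->color[i];
        }
    }
}
```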

TIFR – Tata Institute of Fundamental Research, Pune
Computational Mathematics Laboratory (CML)
Industry: Scientific Research (India), www.tifr.res.in

• HP Solution
  − 1 teraflop peak HP XC, based on:
    • CP6000: (77) 2-CPU/4 GB Integrity rx1620 1.6 GHz compute nodes, Integrity rx2620 service node
    • 288-port InfiniBand switch
  − HP Math Libraries for Linux on Itanium
  − New CCN for collaboration on algorithms
• Results
  − First step toward a massive supercomputer
  − Improved ability to solve computationally demanding algorithms

"We need partners who complement our core competency in areas like complex hardware system design, microelectronics, nanotechnology and system software. This is where HP steps in, as it has been investigating HPC concepts for more than a decade and this has led to the creation of Itanium processors jointly with Intel. There is a need to build a giant hardware accelerator to address fundamental questions in computer science, which could not be answered until now, either by theory or experiment, to influence future development of the subject, facilitate scientific discoveries and solve grand challenges in various disciplines. This supercomputer, which will help us understand how to structure our algorithms for a larger system, is only a first step in that direction."
Professor Naren Karmarkar, Head, CML, TIFR (Dr Karmarkar is a Bell Labs Fellow)

IGIB – Institute of Genomics & Integrative Biology
Industry: Life Sciences Research (India), www.igib.res.in

• Challenges
  − Currently AlphaServer based; need to increase computational power
  − Explosive growth in new research; massive increase in performance required
  − Partnership for support
  − Improve cost efficiencies
• HP Solution
  − 4.5 teraflop peak HP XC systems, based on:
    • CP3000: (288) 2-CPU/4 GB ProLiant DL140 G2 3.6 GHz nodes using InfiniBand
    • CP3000: (24) 2-CPU/4 GB ProLiant DL140 G2 test cluster
    • Superdome, 12 TB StorageWorks EVA SAN, MSL6060
  − Single point support service
  − IGIB research staff collaboration
• Results
  − India's largest supercomputer
  − One of the world's most powerful research systems dedicated to life sciences

"HP's Cluster Platform provides a scalable architecture that allows us to complete large, complex simulation experiments such as molecular interactions and dynamics, virtual drug screening, protein folding, etc. much more quickly. This technology, combined with HP's experience and expertise in life sciences, helps IGIB speed access to information, knowledge, and new levels of efficiency, which we hope will ultimately culminate in the discovery of new drug targets and predictive medicine for complex disorders with minimum side effects."
Dr. Samir Brahmachari, Director, IGIB

HP Collaboration and Competency Network
www.hp.com/go/collaboration

• HP CCN is a forum to facilitate wide-ranging collaboration, innovation, discovery, and competency sharing between HP and our HPC customers and partners

• Initial collaboration targets:
  − Computational and data grids
  − Global file system for Linux (Lustre™)
  − Scientific visualization
  − Linux SMP scaling
  − Linux on Itanium® (Gelato)

+ hp = everything is possible

Thank You & Questions

www.hp.com/go/hptc www.hp.com/go/collaboration
