Industry-Standard Green HPC Systems
HPC Advisory Council Brazil Conference 2014
May 26, 2014, University of São Paulo
Attila A. Nagy, Senior IT Consultant
HPC systems: What the industry is doing
Architecture (system share, %):
- Cluster: 84.6
- MPP: 15.4

Processor (system share, %):
- Xeon: 82.4
- Opteron: 8.6
- Power: 8.0
- Sparc: 0.4
- Other: 0.6
Source: The Top500 list of November 2013. http://www.top500.org

HPC systems: What the industry is doing
Interconnect (system share, %):
- Infiniband: 41.4
- GbE: 27.0
- 10GbE: 15.4
- Custom: 10.0
- Cray: 4.0
- Other: 2.2

Operating system (system share, %):
- Linux: 96.4
- Unix: 2.2
- Other: 1.4
Source: The Top500 list of November 2013. http://www.top500.org

Industry-standard HPC clusters
Standard x86 servers throughout
- Compute nodes
- Head/Management/Control nodes
- Storage nodes

Infiniband and/or Ethernet networks
- Main interconnect
- Cluster management and administration
- Out-of-band management

Linux OS environment
- Comprehensive software stack for HPC
- Large availability of HPC software tools
- Large collaboration community (see the MPI sketch below)
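As an illustration of that software stack, here is a minimal MPI "hello world" in C, the kind of job such clusters run day to day. It is a generic sketch, not taken from the slides; compiler and launcher names vary by MPI distribution (e.g. mpicc hello.c -o hello, then mpirun -np 4 ./hello).

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks */
        MPI_Get_processor_name(host, &len);      /* compute node hostname */

        printf("rank %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }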
Typical HPC Cluster

[Diagram: a typical HPC cluster. Head and management nodes (x86 servers, Linux) connect the campus Ethernet fabric to the cluster's internal networks: the Infiniband fabric (main interconnect), an Ethernet network, and an out-of-band (OOB) management network. Compute nodes and storage nodes (x86 servers, Linux, parallel file system on the storage nodes) attach to these networks.]

The Beowulf cluster concept is as solid as ever!

Accelerator/Coprocessor usage
All top ten systems in the latest Green500 list are accelerator/coprocessor based,* including two petaflop systems.

Up to 5x improvements in:
- Power consumption
- Physical space
- Cost

Accelerator/coprocessor (system share, %):
- N/A: 89.4
- Nvidia: 7.4
- Xeon Phi: 2.4
- Other: 0.8

GPUs/MICs: 80% of HPC users are at least testing them.
Source: The Top500 list of November 2013. http://www.top500.org
(*) The Green500 list of November 2013. http://www.green500.org

Green HPC?
Data centers consumed approximately 1.3% of worldwide electricity production in 2010 (2% in the US) ¹
Data center power usage growth, 2012-2013 (%): ²
- Brazil: 15.1
- LatAm: 10.9
- US: 7.2
- World: 6.7

Power is not the only cost (OpEx) issue:
- Cooling
- Real estate
- Management
(1) Koomey, Jonathan. 2011. Growth in Data center electricity use 2005 to 2010. Oakland, CA: Analytics Press. August 1. http://www.analyticspress.com/datacenters.html
(2) DCD Industry Census 2013: Data Center Power. DCD Intelligence, January 31, 2014. http://www.datacenterdynamics.com/focus/archive/2014/01/dcd-industry-census-2013-data-center-power

The roadmap to Green HPC
Processing efficiency
- Heterogeneous computing

Server design
- Power
- Thermal
- Mechanical

[Chart: power consumption breakdown for a GPU compute node (2x Xeon E5 v2, 2x Nvidia Kepler, 128 GB memory, no HDD): GPU 50%, CPU 25%, Memory 9%, Fans 6%, Power supply 6%, Other 4%.]

[Chart: power supply efficiency vs. loading (0-100%), comparing the PWS-920P-1R (Platinum) and PWS-721P-1R (Gold) against the previous generation; efficiency axis 60-100%.]
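To see why the supply's efficiency curve matters, consider an illustrative calculation (the wattages are assumptions, not from the slide): a node drawing 900 W of DC load pulls about 900/0.95 ≈ 947 W at the wall through a 95%-efficient supply, but 900/0.90 = 1,000 W at 90% efficiency. That is roughly 53 W of waste heat saved per node, multiplied across every node in the cluster.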
Data center design
- PUE (Power Usage Effectiveness)
- Layout
- AC/cooling
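PUE is defined as total facility power divided by the power delivered to IT equipment. For example (the figures are illustrative, not from the slide), a facility that draws 1.5 MW in total to run 1.0 MW of IT load has PUE = 1.5/1.0 = 1.5; the ideal value is 1.0.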
Management
- Server, rack/system & data center level
- Core on/off & speed control
- Power monitoring/capping/policy setting (see the sketch below)
- Agent based & OOB
- Integrated data center mgmt & control
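As a minimal sketch of agent-based power monitoring, the following C program samples the Linux powercap (RAPL) energy counter to estimate average CPU package power. The sysfs path is the standard kernel location for the first CPU package, but it is an assumption that may differ per system, and the counter wraps at max_energy_range_uj, which this sketch does not handle; production tools would also monitor out-of-band, e.g. via IPMI.

    #include <stdio.h>
    #include <unistd.h>

    #define RAPL_ENERGY "/sys/class/powercap/intel-rapl:0/energy_uj"

    static long long read_energy_uj(void)
    {
        long long uj = -1;
        FILE *f = fopen(RAPL_ENERGY, "r");
        if (f) {
            if (fscanf(f, "%lld", &uj) != 1)
                uj = -1;
            fclose(f);
        }
        return uj;
    }

    int main(void)
    {
        long long e0 = read_energy_uj();
        sleep(1);                               /* 1 s sampling window */
        long long e1 = read_energy_uj();
        if (e0 < 0 || e1 < 0) {
            fprintf(stderr, "cannot read %s\n", RAPL_ENERGY);
            return 1;
        }
        /* counter is in microjoules; the delta over 1 s is average watts */
        printf("CPU package power: %.1f W\n", (e1 - e0) / 1e6);
        return 0;
    }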
Cost efficiency

[Chart: cooling cost efficiency vs. kW per rack: air cooling is the more cost-efficient choice at lower densities, liquid cooling above roughly 25 kW per rack.]

Supermicro HPC
Innovation – HPC Optimized H/W Design and Manufacturing
- Offer the broadest line of x86 HPC server building blocks
- Design to fit any HPC application – all about choices
- Focus on high efficiency (performance per watt) to enable green HPC and scalability

Channel Enablement – First to Market with Building Block Solutions
- Provide first-to-market competitiveness to our partners: HPC solution providers
- Enable channel partners with fully validated HPC building blocks
- End users get a solution that is optimized for their applications

Commoditization
- Driving the technology curve from proprietary to open architecture and commodity
- Commitment to the HPC community to make high-performance solutions affordable in order to accelerate science & engineering research and development
Thank you!
Attila A. Nagy
[email protected]
www.supermicro.com