
Making a Case for a Green500 List∗

Sushant Sharma¹, Chung-Hsing Hsu¹, and Wu-chun Feng²
¹Advanced Computing Lab., Los Alamos National Laboratory, Los Alamos, NM 87545 USA
²Dept. of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061 USA
{sushant, chunghsu}@lanl.gov, [email protected]

∗ This work was supported by the DOE ASC Program through Los Alamos National Laboratory contract W-7405-ENG-36. Available as LANL technical report LA-UR 06-0793. 1-4244-0054-6/06/$20.00 ©2006 IEEE.

Abstract

For decades now, the notion of "performance" has been synonymous with "speed" (as measured in FLOPS, short for floating-point operations per second). Unfortunately, this particular focus has led to the emergence of supercomputers that consume egregious amounts of electrical power and produce so much heat that extravagant cooling facilities must be constructed to ensure proper operation. In addition, the emphasis on speed as the performance metric has caused other performance metrics to be largely ignored, e.g., reliability, availability, and usability. As a consequence, all of the above has led to an extraordinary increase in the total cost of ownership (TCO) of a supercomputer. Despite the importance of the TOP500 List, we argue that the list makes it much more difficult for the high-performance computing (HPC) community to focus on performance metrics other than speed. Therefore, to raise awareness of other performance metrics of interest, e.g., energy efficiency for improved reliability, we propose a Green500 List and discuss the potential metrics that would be used to rank supercomputing systems on such a list.

1 Motivation

Would it be correct to say that supercomputers today have reached efficiency levels that no one could have ever imagined decades ago? Depending on the perspective, one could argue that the answer might be yes as well as no. "Yes" if one considers efficiency as only the ability to perform a certain number of instructions per second on a given supercomputer. "No" if one starts to consider other factors such as reliability, availability, and total cost of ownership (TCO) [3], just to name a few.

Currently, the focus of the TOP500 List (http://www.top500.org/) is solely on the performance metric of speed, as defined by FLOPS, short for floating-point operations per second. While this focus has led to supercomputers that can complete hundreds of trillions of floating-point operations per second, it has also led to supercomputers that consume egregious amounts of electrical power and produce so much heat that extravagant cooling facilities must be constructed to ensure proper operation.

For instance, Seager of Lawrence Livermore National Laboratory (LLNL) notes that the large consumption of electricity to power and cool his supercomputers leads to exorbitant energy bills, e.g., $14M/year ($8M to power and $6M to cool) [15]. Meanwhile, at Los Alamos National Laboratory (LANL), the building for the ASC Q supercomputer cost nearly $100M to construct. Even with such extravagant facilities in place, the excessive heat generation impacts the reliability and availability of such systems, as shown in Table 1 [14].¹ Therefore, not too surprisingly, all of the above results in an astronomical increase in the total cost of ownership (TCO).

¹ Arrhenius' equation, as applied to microelectronics, projects that the failure rate of a compute node in a supercomputer doubles with every 10°C (18°F) rise in temperature.

  System        CPUs    MTB(I/F)               Power    Space
                        (Hours)                (kW)     (Sq Ft)
  ASC Q         8,192   6.5                    3,800    20,000
  ASC White     8,192   40 ('03), 5.0 ('01)    1,000    10,000
  PSC Lemieux   3,016   9.7                    N/A      N/A

  MTB(I/F): Mean Time Between (Interrupts/Failures)

  Table 1. Reliability and Availability of HPC Systems.

With the above considerations in mind, we argue for the need to maintain a list where the performance metric of interest is not only speed but also energy efficiency as it relates to reliability and availability. Therefore, we propose a Green500 List and discuss the potential metrics that would be used to rank supercomputing systems on such a list.

2 Background

Efforts towards building energy-efficient supercomputers include Green Destiny [3, 17], a 240-processor supercomputer that consumed just 3.2 kilowatts (kW) of power when booted diskless.² Although this low-power supercomputer was criticized for its computing ineptitude, Green Destiny with its customized high-performance code-morphing software produced a Linpack rating, i.e., 101 Gflops, that was equal to that of a contemporary 256-processor SGI Origin 2000 at the time. Furthermore, the extraordinarily low power consumption of Green Destiny resulted in an extremely reliable supercomputer that had no unscheduled downtime in its 24-month existence. It is also important to note here that Green Destiny never required any special cooling or air filtration in order to keep it running.

² 3.2 kW is roughly equivalent to the power draw of two hairdryers.

With efforts such as Green Destiny from 2001-2002, microprocessor vendors have been slowly giving up on the power-hungry, clock-speed race and focusing more on efficient processor design. For example, in October 2004, Intel announced that after years of promoting clock speed as the most important indicator of processor performance, it now believes that introducing multicore products and new silicon features is the best way to improve processor performance [11]. A month later, in November 2004, the energy-efficient IBM BlueGene/L debuted at #1 on the TOP500 Supercomputer List using slowly clocked 700-MHz PowerPC processors, in spite of the availability of PowerPC processors with much higher clock speeds and, hence, more power-hungry appetites. More recently, PA Semi announced its PWRficient™ Processor Family, which is based on the Power Architecture™ (licensed from IBM). As noted by the company's renowned CEO, Dan Dobberpuhl, PA Semi is aiming to "really drive a breakthrough in performance per watt" [16]. Thus, all of the above evidence indicates that the commercial industry is moving towards lower-power and more energy-efficient (but still high-performing) microprocessors.

An alternative approach towards energy-efficient HPC is to use existing power-hungry microprocessors but to leverage an interface to the microprocessor that allows for the dynamic scaling of the microprocessor's clock frequency and supply voltage, as the power consumption of a microprocessor is directly proportional to the clock frequency and the square of the supply voltage. Such research has gained significant traction in the HPC community [2, 4, 5, 6, 7, 9].

Irrespective of the approach towards energy-efficient supercomputing, we believe that there exists a need to develop an alternative to the TOP500 Supercomputer List: the Green500 Supercomputer List. But creating such a list means determining what metric(s) to use to rank the supercomputers. The purpose of this paper is to decide on such a metric and to use that metric to rank supercomputers relative to energy efficiency.

2.1 Which Metric?

Supercomputers on the TOP500 List use FLOPS — short for floating-point operations per second — as the evaluation metric for performance relative to speed. However, the HPC community now understands that supercomputers should not be evaluated solely on the basis of speed but should also consider metrics related to usability, availability, and energy efficiency. With respect to the latter, researchers have borrowed the ED^n metric from the circuit-design domain in order to quantify the energy-performance efficiency of different systems [1, 8, 12, 13].³

³ E is the energy used by a system while running a benchmark, and D is the time taken to complete that same benchmark.

In [7], Cameron et al. propose a variant to the ED^n metric. Specifically, they introduce a weighting variable called ∂ that can be used to put more emphasis on energy E or on performance D, depending on what is of interest to the end user. In short, the end user is allowed to choose the value of ∂. What this means is that the end user can ultimately choose from an infinite number of variants of the ED^n metric, but it still leaves the problem of which value of ∂ the end user should choose and which value, if any, should be used to order the Green500 Supercomputer List. On the other hand, Hsu and Feng demonstrate how various ED^n metrics are arguably biased towards massively parallel supercomputing systems [10]. Rather than use an ED^n-based metric, they ultimately "fall back" to using the FLOPS/watt metric for energy efficiency. All this suggests that there is still no consensus amongst HPC researchers on which metric to choose for calculating the energy efficiency of a supercomputer.

Rather than simply adopt an energy-efficiency metric and apply it to our tested systems (and even the systems on the TOP500 List), we first present the results of various energy-efficiency metrics across a multitude of parallel-computing systems. Next, we provide some analysis and insight into what factors should be considered when comparing the energy efficiency of different supercomputers, using the currently available metrics. Based on our analysis, we then make a case for a Green500 Supercomputer List, an energy-efficiency list that will implicitly capture the performance metrics of speed and energy usage. In addition, we will also discuss (1) how the results from a particular efficiency metric vary when only CPU power consumption is used instead of total system power and (2) when CPU power consumption should be used in calculating energy efficiency.

Figure 1. Experimental Set-Up for Benchmark Tests. (Diagram: a profiling computer records readings from a digital power meter, which sits between the parallel computing system and its power strip.)

For the purposes of comparing results, we kept the software configuration across all the parallel-computing systems as similar as possible.
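To make the disagreement between the ED^n family and FLOPS/watt concrete, the sketch below computes both metrics for two invented machines. All system figures (power draws, runtimes, benchmark size) are hypothetical illustrations, not measurements from this paper; the point is only the ranking flip that Hsu and Feng's bias argument predicts.

```python
# Hypothetical illustration of the metrics in Section 2.1.
# E = energy (joules) consumed running a fixed benchmark,
# D = time (seconds) to complete that same benchmark.
# ED^n: lower is better; FLOPS/watt: higher is better.

WORK_FLOP = 1.0e15  # total floating-point operations in the benchmark (assumed)

systems = {
    # name: (average power in watts, completion time D in seconds) -- made-up numbers
    "massively-parallel": (2_000_000.0, 100.0),    # 2 MW, finishes fast
    "low-power-cluster":  (5_000.0,     20_000.0), # 5 kW, finishes slowly
}

def metrics(power_w, d_sec, n=2):
    """Return (ED^n, FLOPS/watt) for a system with the given power and runtime."""
    e = power_w * d_sec                     # energy E = P * D
    ed_n = e * d_sec ** n                   # ED^n (here n = 2)
    flops_per_watt = (WORK_FLOP / d_sec) / power_w
    return ed_n, flops_per_watt

for name, (p, d) in systems.items():
    ed2, fpw = metrics(p, d)
    print(f"{name:20s} ED^2 = {ed2:.3e}   FLOPS/watt = {fpw:.3e}")

# The metrics disagree: ED^2 rewards the machine with the tiny runtime,
# while FLOPS/watt rewards the machine doing more work per joule.
```

Because D is cubed in E·D² (once inside E and twice in D²), a machine that throws massive parallelism at the problem can win on ED^n even while doing far less useful work per joule, which is the bias toward massively parallel systems noted above.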