The Green500 List: Escapades to Exascale

Total Pages: 16

File Type: PDF, Size: 1020 KB

The Green500 List: Escapades to Exascale

Tom Scogland, Balaji Subramaniam, and Wu-chun Feng
{tom.scogland, balaji, feng} [email protected]
Department of Computer Science, Virginia Tech, 2202 Kraft Drive, Blacksburg, VA 24060
(This work was supported in part by NSF grant CCF-0848670, a DoD National Defense Science & Engineering Graduate Fellowship (NDSEG), and Supermicro.)

Abstract  Energy efficiency is now a top priority. The first four years of the Green500 have seen the importance of energy efficiency in supercomputing grow from an afterthought to the forefront of innovation, as we near a point where systems will be forced to stop drawing more power. Even so, the landscape of efficiency in supercomputing continues to shift, with new trends emerging and unexpected shifts in previous predictions. This paper offers an in-depth analysis of the new and shifting trends in the Green500. In addition, the analysis offers early indications of the track we are taking toward exascale and what an exascale machine in 2018 is likely to look like. Lastly, we discuss the new efforts and collaborations toward designing and establishing better metrics, methodologies, and workloads for the measurement and analysis of energy-efficient supercomputing.

1 Introduction

As with all the great races in modern history, the supercomputing race has been myopically focused on a single metric of success. With the space race, the metric was which country could reach the moon first. In supercomputing, the race is more open-ended, but it focuses on maximum achievable performance. This persistent drive toward more speed at any cost has brought about an age of supercomputers that consume enormous quantities of energy, resulting in the need for extensive and costly cooling facilities to operate these supercomputers (Markoff and Hansell, 2006; Atwood and Miner, 2008; Belady, 2007).

In 2007, we created the Green500 (Feng and Cameron, 2007) to bring awareness to this issue and to provide a venue for supercomputers to compete on efficiency as a complement to the TOP500's (Meuer, 2008) focus on speed. Since that time, many milestones have been achieved, most notably (1) the first petaflop supercomputer and (2) the first GFLOPS/watt supercomputer. Concurrently, efficiency has entered the consciousness of the supercomputing community at large and is now a primary concern in the design of new supercomputers. In this paper, we

– Explore the emergence of green high-performance computing (HPC).
– Track trends across the past four years of the Green500.
– Discuss the results of the shift toward thinking green.
– Investigate the implications of the above as we move into the future.

Of particular interest are the implications of past and current trends on the feasibility of exascale computing systems in the timeframe discussed in the DARPA IPTO exascale study (Bergman et al, 2008). The study itself indicates that power will be the determining factor in the success or failure of any exascale program. Past trends in the Green500 have led us to believe that at least one of the targets listed in the study may be feasible, but as we look at the progression in total power use and the aggregate progression of the list, some questions are raised as to whether the optimistic figure of 20 megawatts (MW) will be possible by 2018.
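To put the 20 MW target in perspective, a back-of-the-envelope calculation using only the figures quoted in this paper (an illustration, not a number from the DARPA study itself): an exaflop machine held to a 20 MW envelope must deliver

\[ \frac{10^{18}\ \text{FLOPS}}{20\ \text{MW}} \;=\; \frac{10^{18}\ \text{FLOPS}}{2\times10^{7}\ \text{W}} \;=\; 5\times10^{10}\ \text{FLOPS/W} \;=\; 50\ \text{GFLOPS/W}, \]

roughly 25 times the approximately 2 GFLOPS/W achieved by the most efficient machines on the November 2011 list (see Table 1).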
The rest of the paper is organized as follows. First, Section 2 discusses the background of and motivation behind the Green500. Section 3 offers a high-level analysis of efficiency, along with power, and a more in-depth analysis of the efficiency characteristics of the different types of machines that achieve high energy efficiency. Projections from current and past lists to exascale are discussed, along with their implications, in Section 4. Section 5 presents the innovations and collaborations behind the continuing evolution of the Green500, as well as directions for future growth. Finally, Section 6 presents concluding remarks along with future work.

2 Background

Large-scale, high-performance computing (HPC) has reached a turning point. Prior to 2001, the cost of purchasing a 1U server exceeded the annualized infrastructure and energy (I&E) cost for that server, as depicted in Figure 1. By 2001, this annualized I&E cost for a server matched the cost of purchasing the server. By 2004, the annualized infrastructure cost by itself matched the cost of purchasing the server, and by 2008, the annualized energy cost by itself matched the cost of purchasing the server. Ignoring power and energy efficiency to pursue performance at any cost is no longer feasible. Data centers and HPC centers are already feeling the pinch across the industry, from Yahoo! (Filo, 2009) to Google (Markoff and Hansell, 2006) to the National Security Agency (Gorman, 2006; cryptome.org, 2008).

[Fig. 1 Annual amortized costs in the data center (Belady, 2007): annual I&E cost, infrastructure cost, energy cost, and server cost in dollars per year, 1990 to 2014.]

The supercomputing community has been particularly guilty of seeking speed at the exclusion of all else. When we released the first Green500 in November of 2007, the computers that resulted from the event known colloquially as "Computenik" were still in the TOP500, namely the Earth Simulator supercomputer from Japan and ASCI Q from the U.S. The former outstripped the performance of the latter five-fold. Furthermore, the Earth Simulator had an efficiency of only 5.60 MFLOPS/W, while ASCI Q had an even lower score of 3.65 MFLOPS/W.

In 2004, the first U.S. machine to regain the number-one position on the TOP500 was an IBM BlueGene/L prototype. It delivered two orders of magnitude better energy efficiency than the Earth Simulator and ASCI Q, and it debuted at 205 MFLOPS/W on the first Green500. Clearly, there was an inkling of the need for efficiency to achieve the greatest possible performance, but even so, it continues to be an uphill battle.

The original goal of the Green500 was to raise awareness of the state of energy efficiency in supercomputing and to bring the importance of energy efficiency to the community on par with the importance of performance. To measure and rank supercomputers in terms of their energy efficiency, the Green500 employs the LINPACK benchmark, as provided by the TOP500 list, combined with power measurements. LINPACK solves a dense linear algebra problem using LU factorization and backward substitution, and it is designed and tuned for load balancing and scalability.

While the Green500 remains a ranking of the efficiency of the 500 fastest supercomputers in the world, we launched three additional lists in 2009, based on feedback from the HPC community: the Little Green500, the HPCC Green500, and the Open Green500. The HPCC Green500 and Open Green500 have since been discontinued, due in part to a lack of participation and interest from the community. The Little Green500 continues to operate and broadens the definition of a supercomputer to help guide purchasing decisions for smaller institutions. To be eligible for the Little Green500, a supercomputer must be as "fast" as the 500th-ranked supercomputer on the TOP500 list released 18 months prior to the release of the Little Green500.
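The ranking metric itself is simple to state: reported LINPACK performance (Rmax) divided by measured power. A minimal sketch of the computation, using hypothetical system names and numbers purely for illustration, might look like this:

```python
# Illustrative sketch of the Green500 ranking metric: reported LINPACK
# performance (Rmax) divided by measured power draw, i.e., MFLOPS/W.
# The systems below are hypothetical examples, not official list entries.

def mflops_per_watt(rmax_gflops: float, power_kw: float) -> float:
    """Convert Rmax in GFLOPS and power in kW to MFLOPS per watt."""
    return (rmax_gflops * 1000.0) / (power_kw * 1000.0)

systems = [
    # (name, Rmax in GFLOPS, measured power in kW) -- made-up values
    ("System A", 85_000.0, 1_200.0),
    ("System B", 42_000.0, 480.0),
    ("System C", 1_100_000.0, 9_800.0),
]

ranked = sorted(
    ((name, mflops_per_watt(rmax, power)) for name, rmax, power in systems),
    key=lambda entry: entry[1],
    reverse=True,  # most energy-efficient system first
)

for rank, (name, efficiency) in enumerate(ranked, start=1):
    print(f"{rank:>2}  {efficiency:7.1f} MFLOPS/W  {name}")
```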
3 Efficiency

Across these last four years, we have observed a steady climb in the energy efficiency of the Green500. We have been tracking the average efficiency as compared to Moore's Law and finding that the average does track it closely, while the maximum surges ahead, improving at a rate faster than Moore's Law. The extreme weight at the high end of the list, exemplified by IBM's BlueGene/Q machines occupying the top five slots of the current Green500, draws the average up to closely track Moore's Law while the median lags well behind, as shown in Figure 2. With each release, we see the distance between the high-efficiency machines and the average grow wider, to the point where many of the top machines can be considered outliers, as they are more than 1.5 times the interquartile range above the median.

While on the topic of the high end of the list, we have a first this release: for the first time, the maximum efficiency of the list actually decreased, from the June 2011 list to the November 2011 list. The BlueGene/Q supercomputer at #1 grew in size and evidently lost efficiency as a result. Even so, the four production BlueGene/Q computers hold the top four slots with an energy efficiency of approximately 2 GFLOPS/W, as shown in Table 1. The orig- [...]

Table 1  The most energy-efficient machines on the November 2011 Green500

 #   GFLOPS/W   Computer
 1   2.026      BlueGene/Q Custom (IBM Rochester)
 2   2.026      BlueGene/Q Custom (IBM Watson)
 3   1.996      BlueGene/Q Custom2 (IBM Rochester)
 4   1.988      BlueGene/Q Custom (DOE/NNSA/LLNL)
 5   1.689      Blue Gene/Q Prototype 1 (NNSA/SC)
 6   1.378      DEGIMA: ATI Radeon GPU (Nagasaki U.)
 7   1.266      Bullx B505, NVIDIA 2090 (BSC-CNS)
 8   1.010      Curie: Bullx B505, NVIDIA M2090 (GENCI)

[... The supercom]puter currently at #1 on the TOP500 draws a whopping 12 megawatts of power, more than half the optimistic estimate for the power required to run an exaflop supercomputer in the latter part of the decade, at 1/100th of the performance. We will discuss trends toward exascale in greater detail in Section 4.
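The outlier criterion mentioned in Section 3 (efficiency more than 1.5 times the interquartile range above the median) can be sketched as follows; this is a generic illustration using made-up values, not the analysis code behind the list.

```python
# Sketch of the outlier test described in Section 3: a machine counts as
# an outlier if its efficiency exceeds the median by more than 1.5 times
# the interquartile range (IQR). Standard library only; values are made up.
import statistics

def efficiency_outliers(efficiencies: list[float]) -> list[float]:
    q1, _, q3 = statistics.quantiles(efficiencies, n=4)  # quartile cut points
    threshold = statistics.median(efficiencies) + 1.5 * (q3 - q1)
    return [e for e in efficiencies if e > threshold]

# A hypothetical spread of MFLOPS/W values, skewed by a few very efficient machines.
sample = [150, 180, 200, 210, 230, 250, 260, 280, 300, 1400, 1700, 2026]
print(efficiency_outliers(sample))
```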
Recommended publications
  • Performance Analysis on Energy Efficient High-Performance Architectures
    Performance Analysis on Energy Efficient High-Performance Architectures. Roman Iakymchuk and François Trahay, Institut Mines-Télécom / Télécom SudParis, 9 Rue Charles Fourier, 91000 Évry, France. CC'13: International Conference on Cluster Computing, Jun 2013, Lviv, Ukraine. HAL Id: hal-00865845, https://hal.archives-ouvertes.fr/hal-00865845, submitted on 25 Sep 2013.
    Abstract. With the shift in high-performance computing (HPC) towards energy-efficient hardware architectures such as accelerators (NVIDIA GPUs) and embedded systems (ARM processors) arose the need to adapt existing performance analysis tools to these new systems. We present EZTrace, a performance analysis framework for parallel applications. EZTrace relies on several core components, in particular a mechanism for instrumenting functions, a lightweight tool for recording events, and a generic interface for writing traces. To support EZTrace on energy-efficient HPC systems, we developed a CUDA module and ported EZTrace to ARM processors.
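    The instrument-record-write pipeline described in the abstract can be illustrated with a small toy sketch; the code below is illustrative Python only, not EZTrace's actual interface or trace format.

```python
# Toy illustration of the three pieces such a tracing framework combines:
# (1) instrumenting functions, (2) recording timestamped events, and
# (3) writing the events out as a trace. This is NOT the EZTrace API.
import functools
import json
import time

events = []  # in-memory event buffer

def traced(func):
    """Instrument a function so that entry and exit events are recorded."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        events.append({"event": "enter", "name": func.__name__, "t": time.time()})
        try:
            return func(*args, **kwargs)
        finally:
            events.append({"event": "exit", "name": func.__name__, "t": time.time()})
    return wrapper

def write_trace(path):
    """Write the recorded events to a simple JSON trace file."""
    with open(path, "w") as f:
        json.dump(events, f, indent=2)

@traced
def compute_kernel(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    compute_kernel(100_000)
    write_trace("trace.json")
```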
  • Supercomputer Fugaku
    Supercomputer Fugaku. Toshiyuki Shimizu, Feb. 18th, 2020, FUJITSU LIMITED.
    Outline: Fugaku project overview; co-design (approach, design results); performance & energy consumption evaluation (Green500, OSS apps); Fugaku priority issues; summary.
    Supercomputer "Fugaku", formerly known as Post-K. Focus and approach:
    ◼ Application performance: co-design with application developers and a Fujitsu-designed CPU core with high memory bandwidth utilizing HBM2.
    ◼ Power efficiency: leading-edge Si technology, Fujitsu's proven low-power and high-performance logic design, and power-controlling knobs.
    ◼ Usability: Arm v8-A ISA with Scalable Vector Extension ("SVE"), and Arm-standard Linux.
    [Project schedule chart, 2011-2022: feasibility study; architecture selection and co-design/sizing with apps groups; basic design; detailed design & implementation; manufacturing, installation, and tuning; general operation. Fugaku development & delivery.]
    Fugaku co-design:
    ◼ Co-design goals: obtain the best performance (100x the application performance of the K computer) within a power budget of 30-40 MW; design applications, compilers, libraries, and hardware together.
    ◼ Approach: estimate performance and power using application information, performance counts of the Fujitsu FX100, and a cycle-based simulator. Computation time: brief and precise estimation. Communication time: bandwidth and latency, with some attributes for communication patterns. I/O time. Then optimize apps, compilers, etc. and resolve bottlenecks.
    ◼ Estimation of performance and power: precise performance estimation for primary kernels (make and run Fugaku objects on the Fugaku cycle-based simulator); brief performance estimation for other sections (replace performance counts of the FX100 with Fugaku parameters: # of inst. commit/cycle, wait cycles of barrier, inst. [...]
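    A highly simplified sketch of the kind of first-order estimate such a co-design loop produces, following the slide's breakdown into computation, communication, and I/O time; every parameter value below is a placeholder, not a real Fugaku or FX100 figure.

```python
# First-order performance-estimation sketch in the spirit of the co-design
# flow outlined above: total time = computation + communication + I/O.
# Every number below is a placeholder, not a real Fugaku or FX100 parameter.

def comm_time(messages, bytes_per_msg, latency_s, bandwidth_bps):
    """Latency/bandwidth model of communication time."""
    return messages * (latency_s + bytes_per_msg / bandwidth_bps)

def estimate_runtime(flops, sustained_flops,
                     messages, bytes_per_msg, latency_s, bandwidth_bps,
                     io_bytes, io_bandwidth_bps):
    t_comp = flops / sustained_flops                              # computation time
    t_comm = comm_time(messages, bytes_per_msg, latency_s, bandwidth_bps)
    t_io = io_bytes / io_bandwidth_bps                            # I/O time
    return t_comp + t_comm + t_io

# Example: a hypothetical kernel on a hypothetical node.
seconds = estimate_runtime(flops=1e15, sustained_flops=2e12,
                           messages=100_000, bytes_per_msg=1e6,
                           latency_s=2e-6, bandwidth_bps=1e10,
                           io_bytes=1e11, io_bandwidth_bps=5e9)
print(f"estimated runtime: {seconds:.1f} s")
```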
  • Measuring Power Consumption on IBM Blue Gene/P
    Measuring power consumption on IBM Blue Gene/P. Michael Hennecke, Wolfgang Frings, Willi Homberg, Anke Zitz, Michael Knobloch, and Hans Böttiger. Comput Sci Res Dev, DOI 10.1007/s00450-011-0192-y, Special Issue Paper. © The Author(s) 2011. This article is published with open access at Springerlink.com.
    Abstract. Energy efficiency is a key design principle of the IBM Blue Gene series of supercomputers, and Blue Gene systems have consistently gained top GFlops/Watt rankings on the Green500 list. The Blue Gene hardware and management software provide built-in features to monitor power consumption at all levels of the machine's power distribution network. This paper presents the Blue Gene/P power measurement infrastructure and discusses the operational aspects of using this infrastructure on Petascale machines. We also describe the integration of Blue Gene power monitoring capabilities into system-level tools like LLview, and highlight some results of analyzing the production workload at Research Center Jülich (FZJ).
    The Top10 supercomputers on the November 2010 Top500 list [1] alone (which coincidentally are also the 10 systems with an Rpeak of at least one PFlops) are consuming a total power of 33.4 MW [2]. These levels of power consumption are already a concern for today's Petascale supercomputers (with operational expenses becoming comparable to the capital expenses for procuring the machine), and addressing the energy challenge clearly is one of the key issues when approaching Exascale. While the Flops/Watt metric is useful, its emphasis on LINPACK performance and thus computational load neglects the fact that the energy costs of memory references [...]
  • Compute System Metrics (BoF)
    Setting Trends for Energy-Efficient Supercomputing. Natalie Bates, EE HPC Working Group; Tahir Cader, HP & The Green Grid; Wu Feng, Virginia Tech & Green500; John Shalf, Berkeley Lab & NERSC; Horst Simon, Berkeley Lab & TOP500; Erich Strohmaier, Berkeley Lab & TOP500. ISC BoF; May 31, 2010; Hamburg, Germany.
    Why we are here: "Can only improve what you can measure." Context: power consumption of HPC and facilities cost are increasing. What is needed? Converge on a common basis between different research and industry groups for metrics, methodologies, and workloads for energy-efficient supercomputing, so we can make progress towards solutions.
    Current technology roadmaps will depart from historical gains; power is the leading design constraint (from Peter Kogge, DARPA Exascale Study), and the power costs will still be staggering: $1M per megawatt per year (with CHEAP power). [Charts: absolute power levels, power consumption, power efficiency.]
    What we have done. Stages of green supercomputing: denial, awareness, hype, substance.
    The Denial Phase (2001-2004): Green Destiny, a 240-node supercomputer in 5 sq. ft. built from embedded processors; LINPACK performance of 101 Gflops; power consumption of 3.2 kW. Prevailing views: "Green Destiny is so low power that it runs just as fast when it is unplugged." "In HPC, no one cares about power & cooling, and no one ever will." "Moore's Law for Power will stimulate the economy by creating a new market in cooling technologies."
    The Awareness Phase (2004-2008): green movements and studies, including the Workshop on High-Performance, Power-Aware Computing (HPPAC) at the IEEE Int'l Parallel & Distributed Processing Symp. (2005); the Green500, with metrics such as the energy-delay product and FLOPS/watt; the Green Grid (2007), an industry-driven consortium of all the top system vendors, with its Power Usage Effectiveness (PUE) metric; and Kogge et al., "ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems," DARPA IPTO, AFRL, 2008.
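    The "$1M per megawatt per year" figure follows from typical utility rates; assuming an electricity price of roughly $0.11 per kWh (an assumption, not a number from the slides):

\[ 1\ \text{MW} \times 8760\ \text{h/yr} = 8.76\times10^{6}\ \text{kWh/yr}, \qquad 8.76\times10^{6}\ \text{kWh} \times \$0.11/\text{kWh} \approx \$0.96\text{M per year}. \]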
  • 2010 IBM and the Environment Report
    2010 IBM and the Environment Report: committed to environmental leadership across all of IBM's business activities.
    IBM has long maintained an unwavering commitment to environmental protection, which was formalized by a corporate environmental policy in 1971. The policy calls for IBM to be an environmental leader across all of our business activities, from our research, operations and products to the services and solutions we provide our clients to help them be more protective of the environment. This section of IBM's Corporate Responsibility Report describes IBM's programs and performance in the following environmental areas: a commitment to environmental leadership; global governance and management system (global environmental management system, stakeholder engagement, voluntary partnerships and initiatives, environmental investment and return); process stewardship (environmentally preferable substances and materials, nanotechnology); pollution prevention (hazardous waste, nonhazardous waste, chemical use and management); energy and climate programs (a five-part strategy, conserving energy, CO2 emissions reduction, PFC emissions reduction, renewable energy, voluntary climate partnerships, transportation and logistics initiatives, energy and climate protection in the supply chain); remediation; audits and compliance (accidental releases, fines and penalties); and awards and recognition (internal and external recognition).
  • 101117 - Green Computing and GPUs
    Wu Feng, Ph.D., Department of Computer Science, Department of Electrical & Computer Engineering, NSF Center for High-Performance Reconfigurable Computing. © W. Feng, SC 2010.
    We have spent decades focusing on performance, performance, performance (and price/performance).
    The TOP500: a ranking of the fastest 500 supercomputers in the world. Benchmark: LINPACK, which solves a (random) dense system of linear equations in double-precision (64-bit) arithmetic. Evaluation metric: performance (i.e., speed), in floating-point operations per second (FLOPS). Web site: http://www.top500.org/
    Metrics for evaluating supercomputers: performance (i.e., speed), measured in FLOPS, and price/performance (cost efficiency), measured as acquisition cost per FLOPS. Performance and price/performance are important metrics, but electrical power costs $$$$ (source: IDC, 2006).
    Examples of power, cooling, and infrastructure costs: the Japanese Earth Simulator's power and cooling came to 12 MW, roughly $10M/year.
    Too much power affects efficiency, reliability, and availability. Anecdotal evidence from a "machine room" in 2001-2002: in winter, at a machine-room temperature of 70-75°F, failures occurred approximately once per week; in summer, at 85-90°F, failures occurred approximately twice per week. Arrhenius' equation (applied to microelectronics): for every 10°C (18°F) increase in temperature, the failure rate of a system doubles.* (* W. Feng, M. Warren, and E. Weigle, "The Bladed Beowulf: A Cost-Effective Alternative to Traditional Beowulfs," IEEE Cluster, Sept. 2002.)
    The Green500 debuted at SC 2007 with the goal of raising awareness of the energy efficiency of supercomputing and driving energy efficiency as a first-order design constraint (on par with performance).
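    The rule of thumb attributed to Arrhenius above can be written as a simple doubling law (a simplification of the full Arrhenius equation, stated here only to formalize the slide's claim):

\[ \lambda(T_0 + \Delta T) \;\approx\; \lambda(T_0) \cdot 2^{\Delta T / 10\,^{\circ}\mathrm{C}}, \]

    so, for example, a machine room running 20°C hotter would be expected to fail roughly four times as often.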
  • Tibidabo: Making the Case for an ARM-Based HPC System
    Tibidabo†: Making the Case for an ARM-Based HPC System. Nikola Rajovic (a,b,*), Alejandro Rico (a,b), Nikola Puzovic (a), Chris Adeniyi-Jones (c), Alex Ramirez (a,b). (a) Computer Sciences Department, Barcelona Supercomputing Center, Barcelona, Spain; (b) Departament d'Arquitectura de Computadors, Universitat Politècnica de Catalunya - BarcelonaTech, Barcelona, Spain; (c) ARM Ltd., Cambridge, United Kingdom. († Tibidabo is a mountain overlooking Barcelona. * Corresponding author. Email addresses: [email protected] (Nikola Rajovic), [email protected] (Alejandro Rico), [email protected] (Nikola Puzovic), [email protected] (Chris Adeniyi-Jones), [email protected] (Alex Ramirez).) Preprint of the article accepted for publication in Future Generation Computer Systems, Elsevier.
    Abstract. It is widely accepted that future HPC systems will be limited by their power consumption. Current HPC systems are built from commodity server processors, designed over years to achieve maximum performance, with energy efficiency being an afterthought. In this paper we advocate a different approach: building HPC systems from low-power embedded and mobile technology parts, over time designed for maximum energy efficiency, which now show promise for competitive performance. We introduce the architecture of Tibidabo, the first large-scale HPC cluster built from ARM multicore chips, and a detailed performance and energy efficiency evaluation. We present the lessons learned for the design and improvement in energy efficiency of future HPC systems based on such low-power cores. Based on our experience with the prototype, we perform simulations to show that a theoretical cluster of 16-core ARM Cortex-A15 chips would increase the energy efficiency of our cluster by 8.7x, reaching an energy efficiency of 1046 MFLOPS/W.
    Keywords: high-performance computing, embedded processors, mobile processors, low power, cortex-a9, cortex-a15, energy efficiency.
  • Supermicro “Industry-Standard Green HPC Systems”
    Industry-Standard Green HPC Systems. HPC Advisory Council Brazil Conference 2014, May 26, 2014, University of São Paulo. Attila A. Nagy, Senior IT Consultant. © Supermicro 2014.
    HPC systems: what the industry is doing. [Charts of TOP500 system share by architecture (cluster vs. MPP), processor (Xeon, Opteron, Power, Sparc, other), interconnect (Infiniband, GbE, 10GbE, custom, Cray, other), and operating system (Linux, Unix, other). Source: the TOP500 list of November 2013, http://www.top500.org]
    Industry-standard HPC clusters: standard x86 servers throughout (compute nodes; head/management/control nodes; storage nodes); Infiniband and/or Ethernet networks (main interconnect; cluster management and administration; out-of-band management); Linux OS environment (comprehensive software stack for HPC; large availability of HPC software tools; large collaboration community).
    Typical HPC cluster: [diagram showing head & management nodes, compute nodes, and storage nodes (x86 servers running Linux, with a parallel file system on the storage nodes), connected by an Infiniband fabric, an Ethernet network, an out-of-band management network, and a campus Ethernet fabric]. The Beowulf cluster concept is as solid as ever!
    Accelerator/coprocessor usage: all top ten systems in the latest Green500 list are coprocessor based, including two petaflop systems. Up to 5x improvements in power consumption, physical space, and cost. [Chart of accelerator/coprocessor system share: NVIDIA GPUs, Xeon Phi, other, N/A. Source: the TOP500 list of November 2013.] GPUs/MICs: 80% of HPC users are at least testing them.
  • NVIDIA Powers the World's Top 13 Most Energy Efficient Supercomputers
    ISC -- Advancing the path to exascale computing, NVIDIA today announced that the NVIDIA® Tesla® AI supercomputing platform powers the top 13 measured systems on the new Green500 list of the world's most energy-efficient high performance computing (HPC) systems. All 13 use NVIDIA Tesla P100 data center GPU accelerators, including four systems based on the NVIDIA DGX-1™ AI supercomputer.
    • As Moore's Law slows, NVIDIA Tesla GPUs continue to extend computing, improving performance 3X in two years.
    • Tesla V100 GPUs projected to provide U.S. Energy Department's Summit supercomputer with 200 petaflops of HPC, 3 exaflops of AI performance.
    • Major cloud providers commit to bring NVIDIA Volta GPU platform to market.
    NVIDIA today also released performance data illustrating that NVIDIA Tesla GPUs have improved performance for HPC applications by 3X over the Kepler architecture released two years ago. This significantly boosts performance beyond what would have been predicted by Moore's Law, even before it began slowing in recent years. Additionally, NVIDIA announced that its Tesla V100 GPU accelerators -- which combine AI and traditional HPC applications on a single platform -- are projected to provide the U.S. [...]
  • TOP500 Supercomputer Sites
    TOP500 News (top500.org), 04/27/18: UK Commits a Billion Pounds to AI Development.
    The British government and the private sector are investing close to £1 billion to boost the country's artificial intelligence sector. The investment, which was announced on Thursday, is part of a wide-ranging strategy to make the UK a global leader in AI and big data.
    Under the investment, known as the "AI Sector Deal," government, industry, and academia will contribute £603 million in new funding, adding to the £342 million already allocated in existing budgets. That brings the grand total to £945 million, or about $1.3 billion at the current exchange rate. The UK government is also looking to increase R&D spending across all disciplines by 2.4 percent, while also raising the R&D tax credit from 11 to 12 percent. This is part of a broader commitment to raise government spending in this area from around £9.5 billion in 2016 to £12.5 billion in 2021.
    The UK government policy paper that describes the sector deal meanders quite a bit, describing a lot of programs and initiatives that intersect with the AI investments, but are otherwise free-standing.
  • Industry Insights | HPC and the Future of Seismic
    INDUSTRY INSIGHTS, August 20. By Andrew Long ([email protected]).
    HPC and the Future of Seismic. I briefly profile the world's largest commercial computer installations and consider their relevance for the future of high-end seismic imaging and AI pursuits by the oil and gas industry.
    Introduction. When I began my career in seismic geophysics over 20 years ago, we were regularly told that seismic processing and imaging was one of the largest users of computing, and indeed, computing capability had to mature much further before applications such as Full Waveform Inversion (FWI) and Least-Squares Migration (LSM) became commercially commonplace in the last 5-10 years. Although the oil and gas industry still has several representatives in the top-100 ranked supercomputers (discussed below), applications now more commonly include modeling and simulations essential for nuclear security, weather and climate forecasting, computational fluid dynamics, financial forecasting and risk management, drug discovery and biochemical modeling, clean energy development, exploring the fundamental laws of the universe, and so on. The dramatic growth in High Performance Computing (HPC) services in the cloud (HPCaaS: HPC-as-a-Service) in recent years has in principle made HPC accessible 'on demand' to anyone with an appropriate budget, and key selling points are generally the scalable and flexible capabilities, combined with a variety of vendor-specific Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) offerings. This new vocabulary can be confronting, and due to the complexity of cloud-based solutions, I only focus here on the standalone supercomputer infrastructures.
  • High Performance Computing (HPC)
    Vol. 20 | No. 2 | 2013. Globe at a Glance | Pointers | Spinouts. Editor's column, Managing Editor.
    "End of the road for Roadrunner," Los Alamos National Laboratory, March 29, 2013.
    Five years after becoming the fastest supercomputer in the world, Roadrunner was decommissioned by the Los Alamos National Lab on March 31, 2013. It was the first supercomputer to reach the petaflop barrier: one million billion calculations per second. In addition, Roadrunner's unique design combined two different kinds of processors, making it the first "hybrid" supercomputer. And it still held the number 22 spot on the TOP500 list when it was turned off.
    Essentially, Roadrunner became too power inefficient for Los Alamos to keep running. As of November 2012, Roadrunner required 2,345 kilowatts to hit 1.042 petaflops, or 444 megaflops per watt. In contrast, Oak Ridge National Laboratory's Titan, which was number one on the [...]
    [...] important. According to the developers of the Graph500 benchmarks, these data-intensive applications are "ill-suited for platforms designed for 3D physics simulations," the very purpose for which Roadrunner was designed. New supercomputer architectures and software systems must be designed to support such applications.
    These questions of power efficiency and changing computational models are at the core of moving supercomputers toward exascale computing, which industry experts estimate will occur sometime between 2020 and 2030. They are also the questions that are addressed in this issue of The Next Wave (TNW). Look for articles on emerging technologies in supercomputing centers and the development of new supercomputer architectures, as well as a brief introduction to quantum computing.
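    The 444 megaflops-per-watt figure follows directly from the two numbers quoted for Roadrunner (a simple check, shown here for clarity):

\[ \frac{1.042\ \text{PFLOPS}}{2345\ \text{kW}} \;=\; \frac{1.042\times10^{9}\ \text{MFLOPS}}{2.345\times10^{6}\ \text{W}} \;\approx\; 444\ \text{MFLOPS/W}. \]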