Overview of the K Computer System


Hiroyuki Miyazaki, Yoshihiro Kusano, Naoki Shinjou, Fumiyoshi Shoji, Mitsuo Yokokawa, Tadashi Watanabe

RIKEN and Fujitsu have been working together to develop the K computer, with the aim of beginning shared use by the fall of 2012, as part of the High-Performance Computing Infrastructure (HPCI) initiative led by Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT). Since the K computer involves over 80 000 compute nodes, building it with low power consumption and high reliability was important from the availability point of view. This paper describes the K computer system and the measures taken for reducing power consumption and achieving high reliability and high availability. It also presents the results of implementing those measures.

1. Introduction

Fujitsu has been actively developing and providing advanced supercomputers for over 30 years since its development of the FACOM 230-75 APU, Japan's first supercomputer, in 1977 (Figure 1). As part of this effort, it has been developing its own hardware, including original processors, as well as its own software, building up its technical expertise in supercomputers along the way.

The sum total of this technical expertise has been applied to developing a massively parallel computer system, the K computer,1) note)i which has been ranked as the top-performing supercomputer in the world.

The K computer was developed jointly by RIKEN and Fujitsu as part of the High-Performance Computing Infrastructure (HPCI) initiative overseen by Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT). As the name "Kei" in Japanese implies, one objective of this project was to achieve a computing performance of 10^16 floating-point operations per second (10 PFLOPS). The K computer, moreover, was developed not just to achieve peak performance in benchmark tests but also to ensure high effective performance in the applications used in actual research. Furthermore, to enable the entire system to be installed and operated at one location, it was necessary to reduce power consumption and provide a level of reliability that could ensure the total operation of a large-scale system.

To this end, four development targets were established:
• A high-performance CPU for scientific computation
• A new interconnect architecture for massively parallel computing
• Low power consumption
• High reliability and high availability

The CPU and interconnect architecture are described in detail in other papers of this special issue.2),3) In this paper, we present an overview of the K computer system, describe the measures taken for reducing power consumption and achieving high reliability and high availability at the system level of the K computer, and present the results of implementing those measures.

note)i "K computer" is the English name that RIKEN has been using for the supercomputer of this project since July 2010. "K" comes from the Japanese word "Kei," which means ten peta, or 10 to the 16th power.

Figure 1: History of supercomputer development at Fujitsu (timeline, ~1985-2015, from the FACOM 230-75 APU through Fujitsu's vector machines, scalar/MPP systems, and x86 clusters to the peta-scale K computer).
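As a quick consistency check on the "Kei" target, the sketch below multiplies the per-CPU peak quoted later in this paper (128 GFLOPS for the SPARC64 VIIIfx, one CPU per compute node) by the 80 000-node lower bound from the abstract. This is a back-of-the-envelope calculation, not a figure stated in the paper itself.

```python
# Back-of-the-envelope check of the "Kei" (10^16 FLOPS) goal.
# 80 000 is the lower bound quoted in the abstract ("over 80 000
# compute nodes"), not the exact production node count.
PEAK_PER_CPU_FLOPS = 128e9   # SPARC64 VIIIfx theoretical peak, one CPU per node
NODES = 80_000

system_peak = NODES * PEAK_PER_CPU_FLOPS
print(f"System peak: {system_peak / 1e15:.2f} PFLOPS")  # 10.24 PFLOPS
assert system_peak >= 1e16   # clears the 10 PFLOPS target
```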
2. Compute-node configuration in the K computer

We first provide an overview of the compute nodes, which lie at the center of the K computer system. A compute node consists of a CPU, memory, and an interconnect.

1) CPU

We developed an 8-core CPU with a theoretical peak performance of 128 GFLOPS, called the "SPARC64 VIIIfx," as the CPU for the K computer (Figure 2).4) The SPARC64 VIIIfx features Fujitsu's advanced 45-nm semiconductor process technology and a world-class power performance of 2.2 GFLOPS/W, achieved by implementing power-reduction measures from both the process-technology and design points of view. This CPU applies High Performance Computing–Arithmetic Computational Extensions (HPC-ACE) applicable to scientific computing and analysis. It also uses a 6-MB/12-way sector cache as an L2 cache and uses Virtual Single Processor by Integrated Multicore Architecture (VISIMPACT), whose effectiveness has previously been demonstrated on Fujitsu's FX1 high-end technical computing server. As a result of these features, the SPARC64 VIIIfx achieves high execution performance in the HPC field. In addition, the system-control and memory-access-control (MAC) functions, which were previously implemented on separate chips, are now integrated in the CPU, resulting in high memory throughput and low latency.

Figure 2: SPARC64 VIIIfx (die layout: eight cores, L2 cache data and control, MACs, DDR3 interfaces, and system bus interface).

2) Memory

Commercially available DDR3-SDRAM DIMMs are used as main memory. The dual in-line memory module (DIMM) is a commodity module that is also used in servers and PC clusters, which provides a number of advantages. For example, it can be multi-sourced, a stable supply of about 700 000 modules with a stable level of quality can be obtained in a roughly one-year manufacturing period, and modules with superior power-consumption specifications can be selected. Besides this, an 8-channel memory interface per CPU provides a peak memory bandwidth of 64 GB/s, surpassing that of competitors' computers and providing the high memory throughput deemed necessary for scientific computing.

3) Interconnect

To provide a computing network (interconnect), we developed and implemented an interconnect architecture called "Tofu" for massively parallel computers in excess of 80 000 nodes.5) The Tofu interconnect (Figure 3) constitutes a direct interconnection network that provides scalable connections with low latency and high throughput for a massively parallel group of CPUs, and it achieves high availability and operability by using a 6D mesh/torus configuration. It employs an extended dimension-order routing algorithm that enables the provision of a CPU group joined in a one- to three-dimensional torus, so that a faulty node in the system becomes non-existent as far as the user is concerned. The Tofu interconnect also makes it possible to allocate to the user a complete CPU group joined in a one- to three-dimensional torus even if part of the system has been extracted for use.

Figure 3: Tofu interconnect (6D mesh/torus of node groups).
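To make the 6D addressing concrete, here is a minimal sketch of neighbor enumeration and minimal hop counts on a six-dimensional torus. The dimension sizes, and the treatment of all six axes as full tori, are assumptions made purely for illustration; the actual Tofu topology and its extended dimension-order routing are specified in reference 5), not here.

```python
# Illustrative 6D torus: sizes chosen for the sketch, not the
# production Tofu configuration described in the paper.
DIMS = (4, 4, 4, 3, 3, 3)

def neighbors(coord):
    """Enumerate the nearest neighbors of a node, wrapping around
    in every axis (torus links)."""
    out = []
    for axis, size in enumerate(DIMS):
        for step in (-1, +1):
            n = list(coord)
            n[axis] = (n[axis] + step) % size
            out.append(tuple(n))
    return out

def hop_distance(a, b):
    """Minimal hop count between two nodes, taking the shorter way
    around each ring; dimension-order routing resolves one axis at a time."""
    return sum(min((x - y) % s, (y - x) % s)
               for x, y, s in zip(a, b, DIMS))

origin = (0, 0, 0, 0, 0, 0)
print(len(neighbors(origin)))                    # 12: two links per dimension
print(hop_distance(origin, (3, 2, 0, 1, 0, 2)))  # 1 + 2 + 0 + 1 + 0 + 1 = 5
```

The redundancy that lets the system hide a faulty node follows from this structure: with multiple rings per node, traffic can be steered around a failed coordinate while the user is still handed an intact lower-dimensional torus.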
A compute node of the K computer consists of the CPU/DIMMs described above and an InterConnect Controller (ICC) chip for implementing the interconnect. Since the Tofu interconnect forms a direct interconnection network, the ICC includes a function for relaying packets between other nodes (Figure 4). If the node section of a compute node should fail, only the job using that compute node will be affected, but if the router section of that compute node should fail, all jobs using that compute node as a packet relay point will be affected. The failure rate of the router section is nearly proportional to the amount of circuitry that it contains, and it needs to be made significantly lower than the failure rate of the node section. ICC-chip faults are therefore categorized in accordance with their impact, which determines the action taken at the time of a fault occurrence. The end result is a mechanism for continuing system operation in the event of a fault in a node section.

Figure 4: Conceptual diagram of node and router sections (node section: CPU cores, L2 cache, MACs, DIMMs, and system bus interface; router section: ICC with Tofu network interfaces, crossbar, and ports, plus PCIe. MAC: memory access controller; ICC: InterConnect Controller; TNI: Tofu network interface; IF: interface; PCIe: Peripheral Component Interconnect Express).

3. Rack configuration in the K computer

Compute nodes of the K computer are installed in specially developed racks. A compute rack can accommodate 24 system boards (SBs), each of which mounts 4 compute nodes (Figure 5), and 6 I/O system boards (IOSBs), each of which mounts an I/O node for system booting and file-system access. A total of 102 nodes can therefore be mounted in one rack.

Figure 5: System board and I/O system board (CPUs, ICCs, and memory; the IOSB adds a PCIe riser).

Water cooling is applied to the CPUs and ICCs used for the compute nodes and I/O nodes and to the on-board power-supply devices. Specifically, chilled water is supplied to each SB and IOSB via water-cooling pipes and hoses mounted on the side of the rack. Water cooling is used to maintain system reliability, enable a high-density implementation, and lower power consumption (by decreasing leakage current). IOSBs are air cooled since they make use of commercially available components. Since these boards are mounted horizontally inside the rack, air must be blown in a horizontal direction. To enable the racks themselves to be arranged in a high-density configuration, the SBs are mounted …
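Stepping back to the rack totals given above, the per-rack node count is simple arithmetic; the sketch below merely restates the board counts from the text.

```python
# Nodes per compute rack, from the board counts given in the text.
SB_PER_RACK = 24       # system boards per rack
NODES_PER_SB = 4       # compute nodes mounted on each SB
IOSB_PER_RACK = 6      # I/O system boards per rack
NODES_PER_IOSB = 1     # one I/O node per IOSB

compute_nodes = SB_PER_RACK * NODES_PER_SB   # 96 compute nodes
io_nodes = IOSB_PER_RACK * NODES_PER_IOSB    # 6 I/O nodes
print(compute_nodes + io_nodes)              # 102 nodes per rack
```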