Programming on K Computer

Total Pages: 16

File Type: PDF, Size: 1020 KB

Programming on K computer
Koh Hotta, Next Generation Technical Computing, Fujitsu Limited (2010)

System overview of the K computer
Target performance: 10 PFLOPS over more than 80,000 processors, with over 640K cores and over 1 petabyte of memory. Cutting-edge technologies: the SPARC64 VIIIfx CPU (8 cores, 128 GFLOPS, an extension of SPARC V9), the "Tofu" interconnect (a 6-D mesh/torus), and a parallel programming environment.

I have a dream that one day you just compile your programs and enjoy high performance on your high-end supercomputer. So we must provide an easy hybrid parallel programming method, including compiler and run-time system support.

User interface for programming the K computer
The client system connects through front-end nodes to the back end (80,000 nodes) and provides an IDE interface, command-based job control, an interactive debugger, data converters and samplers, and a visualized profiler, with a debugging partition, an official operation partition, and stage-out over InfiniBand. [Figure: architecture of the programming user interface.]

Parallel programming

Hybrid parallelism on over 640K cores
A flat process model would give too many processes to manipulate, so hybrid thread-process programming is required to reduce the number of processes. Hybrid parallel programming, however, is annoying for programmers: even for multi-threading, procedure-level or outer-loop parallelism was desired, yet there is little opportunity for such coarse-grain parallelism. System support for fine-grain parallelism is therefore required.

Targeting inner-most loop parallelization
Automatic vectorization technology has become mature, and vector tuning is easy for programmers. Inner-most loop parallelism, which is fine grain, should be an important portion of peta-scale parallelization: inner-most loops are accelerated by multi-threading technology.

Inner-most loop acceleration by multi-threading technology
The CPU architecture is designed to reuse the vectorization methodology efficiently, targeting automatic parallelization of the inner-most loop across the cores of a multi-core processor.

VISIMPACT™: you need not think about multi-cores
VISIMPACT provides efficient multi-thread execution on multiple cores that are tightly coupled with each other. Collaboration between the hardware architecture and compiler optimization yields high efficiency: a shared L2 cache on the chip, a high-speed hardware barrier on the chip, and automatic parallelization. The automatic parallelization facility makes the multiple cores behave like a single high-speed core, so you need not think about the cores inside a CPU chip.

VISIMPACT™ performance on DAXPY (Euroben kernel 8)
  Do i = 1, n
    y(i,jsw) = y(i,jsw) + c0*x1(i)
  End Do
Measured systems: SPARC64™ VIIIfx at 2.0 GHz (SIMD, 8 threads), FX1 (SPARC64™ VII) at 2.52 GHz (4 threads), and BX900 (Nehalem, 2.93 GHz, 8 threads on 2 chips). The shared cache provides about twice the performance of the Nehalem 2.93 GHz system. [Figure: DAXPY rate in Mflop/s versus problem size.]
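To make the fine-grain inner-most-loop threading concrete, here is a minimal sketch of the same DAXPY-style kernel written as an explicitly threaded Fortran subroutine. This is an illustration only: on the K computer the Fujitsu compiler's automatic parallelization (VISIMPACT) is meant to thread such loops without any directive, and the subroutine name daxpy_kernel, the explicit OpenMP directive, and the assumed shape of y are additions made for the example, not part of the original slides.

! Illustrative sketch only: VISIMPACT's automatic parallelization is
! intended to thread this inner-most loop across the cores of a chip
! without directives; the explicit OpenMP form just shows the idea.
subroutine daxpy_kernel(n, jsw, c0, x1, y)
  implicit none
  integer, intent(in)    :: n, jsw
  real(8), intent(in)    :: c0
  real(8), intent(in)    :: x1(n)
  real(8), intent(inout) :: y(n, 2)   ! assumption: y holds two planes selected by jsw
  integer :: i

  !$omp parallel do
  do i = 1, n
     y(i, jsw) = y(i, jsw) + c0 * x1(i)   ! the slide's DAXPY update
  end do
  !$omp end parallel do
end subroutine daxpy_kernel

The point of the sketch is only the granularity: the work shared among threads is a single inner-most loop iteration space, which is exactly the level VISIMPACT targets automatically.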
MPI
The MPI library is based on Open MPI and tuned to the "Tofu" interconnect.

MPI approach for the K computer
Open MPI was chosen because it is an open standard, open source, and multi-platform, including PC clusters. Fujitsu adds extensions to Open MPI for the Tofu interconnect to reach high performance: a short-cut message path for low-latency communication, a torus-oriented protocol that is sensitive to message size, location and hop count, and trunking communication that utilizes multi-dimensional network links through Tofu selective routing.

Goals for MPI on the K system
High performance: low latency, high bandwidth, high scalability, and collective performance optimized for the Tofu interconnect. High availability, flexibility and ease of use: a logical 3-D torus is provided for each job while failed nodes are eliminated, and newly standardized MPI functions are provided as soon as possible.

MPI software stack
The stack extends the original Open MPI software stack (COLL, the ob1 PML, the r2 BML, and the openib BTL over OpenFabrics Verbs). Special Bcast, Allgather, Alltoall and Allreduce implementations are provided for Tofu in the COLL layer, the r2 BML is adapted to Tofu, a hardware-dependent tofu BTL sits on the Tofu library, and an LLP (Low Latency Path) provides common data processing and structures shared by the BTL, LLP and COLL layers, together with rendezvous-protocol optimization for the Tofu interconnect. [Figure: Open MPI software stack with the Fujitsu Tofu extensions.]

Flexible process mapping to the Tofu environment
You can allocate your processes as you like by specifying, for each rank, coordinates in one dimension (x), two dimensions (x, y) or three dimensions (x, y, z). [Figure: example 1-D, 2-D and 3-D rank-to-coordinate mappings.] A minimal sketch of this kind of 3-D mapping using standard MPI calls appears after the conclusion below.

Performance tuning
Performance is not determined by compiler optimization alone; you can also manipulate it yourself. Compiler directives tune programs, and tools support your tuning effort, for example by watching your program with event counters.

Performance tuning (event counter example)
A 3-D job example displays 4,096 processes as 16 x 16 x 16 cells, painted in colors according to process status (e.g. CPU time); you can cut a slice of the job along the x-, y- or z-axis to view it.

Conclusion: automatic and transparent performance
VISIMPACT™ lets you treat the 8-core CPU as a single high-speed core through collaboration between the CPU architecture and the compiler: the high-speed hardware barrier reduces synchronization overhead, the shared L2 cache improves memory access, and automatic parallelization recognizes parallelism and accelerates your program. The Open MPI based MPI utilizes the Tofu interconnect, and the tuning facility shows the activity of parallel programs.
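As referenced in the process-mapping paragraph above, the sketch below builds a 3-D rank-to-coordinate mapping with standard MPI topology calls only. On the K computer the actual mapping onto Tofu coordinates is specified through the job-submission and rank-mapping facilities rather than in user code, so this portable version is an approximation for illustration: the program name, the torus periodicity, and the use of MPI_Dims_create are assumptions, while MPI_Cart_create and MPI_Cart_coords are generic MPI, not the Fujitsu-specific interface.

! Sketch: build a 3-D Cartesian communicator and report each rank's
! (x, y, z) coordinates. Portable MPI only; the real Tofu mapping on
! the K computer is configured at job submission, not in user code.
program cart_map_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, cart_comm
  integer :: dims(3), coords(3)
  logical :: periods(3)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  dims = 0                                   ! let MPI factor nprocs into 3 dimensions
  call MPI_Dims_create(nprocs, 3, dims, ierr)
  periods = .true.                           ! torus-like wrap-around in x, y, z

  call MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, .true., cart_comm, ierr)
  call MPI_Cart_coords(cart_comm, rank, 3, coords, ierr)

  print '(a,i6,a,3i4)', 'rank ', rank, ' -> (x,y,z) =', coords

  call MPI_Comm_free(cart_comm, ierr)
  call MPI_Finalize(ierr)
end program cart_map_sketch

With 8 ranks, for instance, MPI_Dims_create typically yields a 2 x 2 x 2 grid, which corresponds to the eight-rank mapping examples shown on the process-mapping slide.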
Recommended publications
  • Supercomputer Fugaku
    Supercomputer Fugaku. Toshiyuki Shimizu, Feb. 18th, 2020, FUJITSU LIMITED.
    Outline: Fugaku project overview; co-design (approach, design results); performance and energy consumption evaluation (Green500, OSS apps, Fugaku priority issues); summary.
    Supercomputer "Fugaku", formerly known as Post-K; focus and approach: application performance through co-design with application developers and a Fujitsu-designed CPU core with high memory bandwidth utilizing HBM2; power efficiency through leading-edge Si technology, Fujitsu's proven low-power and high-performance logic design, and power-controlling knobs; usability through the Arm®v8-A ISA with the Scalable Vector Extension ("SVE") and Arm-standard Linux.
    Fugaku project schedule (2011-2022): feasibility study; selection of architecture and application sizing; basic design; detailed design and implementation; co-design with application groups; manufacturing, installation and tuning, with an apps review; then general operation (Fugaku development and delivery).
    Fugaku co-design. Goals: obtain the best performance, 100x the application performance of the K computer, within a power budget of 30-40 MW, by designing applications, compilers, libraries and hardware together. Approach: estimate performance and power using application information, performance counts of the Fujitsu FX100, and a cycle-based simulator (computation time with brief and precise estimation; communication time from bandwidth and latency with some attributes for communication patterns; I/O time), then optimize the applications, compilers and so on and resolve bottlenecks.
    Estimation of performance and power: precise performance estimation for primary kernels by making and running Fugaku objects on the Fugaku cycle-based simulator, and brief performance estimation for other sections by replacing the FX100 performance counts with Fugaku parameters: number of instruction commits per cycle, wait cycles of barrier, inst.
  • Computer Architectures an Overview
    Computer Architectures: An Overview. PDF generated using the open source mwlib toolkit (see http://code.pediapress.com/ for more information) on Sat, 25 Feb 2012 22:35:32 UTC.
    Contents: Microarchitecture; x86; PowerPC; IBM POWER; MIPS architecture; SPARC; ARM architecture; DEC Alpha; AlphaStation; AlphaServer; Very long instruction word; Instruction-level parallelism; Explicitly parallel instruction computing; article sources and contributors; image sources, licenses and contributors; article licenses.
    Microarchitecture. In computer engineering, microarchitecture (sometimes abbreviated to µarch or uarch), also called computer organization, is the way a given instruction set architecture (ISA) is implemented on a processor. A given ISA may be implemented with different microarchitectures.[1] Implementations might vary due to different goals of a given design or due to shifts in technology.[2] Computer architecture is the combination of microarchitecture and instruction set design.
    Relation to instruction set architecture. The ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer. The ISA includes the execution model, processor registers, and address and data formats, among other things. [Figure: the Intel Core microarchitecture.] The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA. The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be everything from single gates and registers to complete arithmetic logic units (ALUs) and even larger elements.
  • 40 Jahre Erfolgsgeschichte BS2000 (40 Years of BS2000 Success Story), Presentation from 2014
    Fujitsu NEXT working group BS2000. 40 years of BS2000: a success story. Achim Dewor, Fujitsu NEXT AK BS2000, May 5-6, 2014.
    What was actually going on 40 years ago, in 1974?
    1974: the Apollo-Soyuz docking. The sitting presidents Richard Nixon and Leonid Brezhnev sign off on the project; two years are spent preparing for a new joint docking mission.
    1974: the first VW Golf is delivered.
    1974: Willy Brandt resigns. On May 6, 1974, Chancellor Willy Brandt resigns in the course of the espionage affair surrounding his personal aide, the GDR spy Günter Guillaume.
    1974: Muhammad Ali versus George Foreman. "I will float like a butterfly and sting like a bee", "I am the greatest who has ever lived", "I don't always know what I am talking about. But I know that I am right" (quotes from Ali).
    1974: Germany becomes football world champion. Breitner's equalizing penalty and the timeless, trademark Müller winning goal out of the turn: it was the sporting high point of probably the best German team of all time.
    What actually counts as "old"? Thesis: standing still makes you old; what is continuously developed stays forever young.
    What is "old", and what has been developed further? Will the bicycle soon turn 200? Which one, the one on the left or the one on the right? (Bicycle, early 1817; bicycle, 2014.)
    What is "old", and what has been developed further? The soroban (Japan): in the East, from the Balkans to China, it is still used here and there as an inexpensive calculating machine in smaller shops.
  • Toward Automated Cache Partitioning for the K Computer
    IPSJ SIG Technical Report. Toward Automated Cache Partitioning for the K Computer. Swann Perarnau, Mitsuhisa Sato.
    Abstract: The processor architecture available on the K computer (SPARC64 VIIIfx) features a hardware cache partitioning mechanism called the sector cache. This facility enables software to split the memory cache into two independent sectors and to select which one will receive a line when it is retrieved from memory. Such control over the cache by an application enables significant performance optimization opportunities for memory-intensive programs, as several studies on software-controlled cache partitioning environments have already demonstrated. Most of these previous studies share the same implementation idea: the use of page coloring and an overriding of the operating system's virtual memory manager. However, the sector cache differs in several key points from these implementations, making the existing analysis and optimization strategies impractical while enabling new ones. For example, while most studies overlooked the issue of phase changes in an application because of the prohibitive cost of repartitioning, the sector cache provides this feature without any cost, allowing multiple cache distribution strategies to alternate during an execution and providing the best allocation for the current locality pattern. Unfortunately, for most application programmers, this hardware cache partitioning facility is not easy to take advantage of. It requires intricate knowledge of the memory access patterns of a code and the ability to identify data structures that would benefit from being isolated in cache. This optimization process can be tedious, with multiple code modifications and countless application runs necessary to achieve a good optimization scheme.
  • Fujitsu Standard Tool
    Toward Building up ARM HPC Ecosystem. Shinji Sumimoto, Ph.D., Next Generation Technical Computing Unit, FUJITSU LIMITED, Sept. 12th, 2017.
    Outline: Fujitsu's supercomputer development history and Post-K processor overview; compiler development for ARMv8 with SVE; towards building up an ARM HPC ecosystem.
    Fujitsu supercomputers: Fujitsu has been providing high-performance supercomputers for 40 years, increasing application performance while maintaining application compatibility; Post-K is under development with RIKEN. [Figure: product timeline from the F230-75APU, Japan's first vector (array) supercomputer (1977), through the VP series, AP1000, VPP500, NWT (Numerical Wind Tunnel, developed with NAL, No. 1 in the Top500 in Nov. 1993, Gordon Bell Prize 1994, 95, 96), VPP300/700, AP3000, VPP5000 (world's fastest vector processor, 1999), PRIMEPOWER HPC2500 (world's most scalable supercomputer, 2003), PRIMERGY RX200 cluster node (Japan's largest cluster in the Top500, July 2004), FX1 (most efficient performance in the Top500, Nov. 2008), SPARC Enterprise, PRIMEQUEST, HX600 and BX900 cluster nodes, PRIMERGY CX400 skinless server, the K computer (No. 1 in the Top500, June and Nov. 2011), and PRIMEHPC FX10 and FX100, up to Post-K. © JAXA]
    Fujitsu high-end supercomputer development, 2011-2020: the K computer and PRIMEHPC FX10/FX100 are in operation; the CPU and interconnect of FX10/FX100 inherit the K
  • World's Fastest Computer
    HISTORY OF FUJITSU LIMITED SUPERCOMPUTING. "FUGAKU" – World's Fastest Computer.
    Birth of Fugaku: Fujitsu has been developing supercomputer systems since 1977, equating to over 40 years of direct hands-on experience. Fujitsu determined to further expand its HPC thought leadership and technical prowess by entering into a dedicated partnership with Japan's leading research institute, RIKEN, in 2006. This technology development partnership led to the creation of the "K Computer," successfully released in 2011. The "K Computer" raised the bar for processing capabilities (over 10 PetaFLOPS) to successfully take on even larger and more complex challenges, consistent with Fujitsu's corporate charter of tackling societal challenges, including global climate change and sustainability. Fujitsu and RIKEN continued their collaboration by setting aggressive new targets for HPC solutions to further extend the ability to solve complex challenges, including applications to deliver improved energy efficiency, better weather and environmental disaster prediction, and next-generation manufacturing. Fugaku, another name for Mt. Fuji, was announced May 23, 2019.
    Achieving the Peak of Performance: Fujitsu Ltd. and RIKEN partnered to design the next-generation supercomputer with the goal of achieving exascale performance, to enable HPC and AI applications to address the challenges facing mankind in various scientific and social fields. Goals of the project included building and delivering the highest peak performance, but also wider adoption in applications and broader applicability across industries, cloud and academia. The Fugaku name symbolizes the achievement of these objectives.
    Ruling the rankings: first place on the Top500 list; first place on HPCG; first place on Graph 500; first place on HPL-AI (real-world applications).
    Fugaku's designers also recognized the need for massively scalable systems to be power efficient, thus the team selected the ARM Instruction Set Architecture (ISA) along with the Scalable Vector Extensions (SVE) as the base processing design.
  • Mainframes in Tomorrow's Data Center
    Mainframes in Tomorrow's Data Center. Achim Dewor, Fujitsu BS2000/OSD Mainframe Summit, Munich, 28th June 2012.
    Yesterday's, today's and tomorrow's data center.
    The origins of IT: the "programming language" was cuneiform script and the data medium the clay tablet; branch solutions covered administration and inventory management; the users were the Sumerians, Babylonians, Akkadians, Assyrians and Persians, from approximately 3000 BC.
    The origins of IT: the programming language was hieroglyphics, the state-of-the-art IT support the cubit rule, the branch (individual) application the building of the pyramids, such as the Pyramid of Cheops at Giza (approx. 2750 BC), and the data and storage medium papyrus.
    Further development of IT: the medieval abacus "data center": the abacus (Roman version) and, from 1100 BC, the suan pan or abacus (Chinese version); around 1500 AD, the abacus and the calculating aids of Adam Riese (Germany) and John Neper (Scotland).
    The first data centers: Konrad Zuse's Z1 (1938) and Z3 (1941), and ENIAC, the Electronic Numerical Integrator and Computer (1946).
    Later milestones: the FACOM M-200 (1978) and the H120 Quadro (1988).
    Today's data center: the K computer, world's No. 1 again on the 2012 TOP500 list, with a performance of over 10 petaflops (floating-point operations per second; 1 peta = 10^15 arithmetic operations per second).
    Today's installations: large hosters accommodate up to 70,000 servers in their data centers; Google is estimated at more than 500,000 servers and Microsoft (Chicago) at more than 300,000 servers; other members of this "club" are, e.g.
  • Introduction of Fujitsu's Next-Generation Supercomputer
    Introduction of Fujitsu's next-generation supercomputer. MATSUMOTO Takayuki, July 16, 2014.
    HPC Platform Solutions: Fujitsu has a long history of supercomputing, over 30 years. The technologies and experience of providing several types of supercomputers, such as vector, MPP and cluster, are inherited and implemented in the latest supercomputer product line, from PRIMEHPC FX10 to the upcoming Post-FX10. [Figure: product timeline from the F230-75APU, Japan's first vector (array) supercomputer (1977), through the VP series, AP1000, VPP500, NWT (Numerical Wind Tunnel, developed with NAL, No. 1 in the Top500 in Nov. 1993, Gordon Bell Prize 1994, 95, 96), VPP300/700, AP3000, VPP5000 (world's fastest vector processor, 1999), PRIMEPOWER HPC2500 (world's most scalable supercomputer, 2003), PRIMERGY RX200 cluster node (Japan's largest cluster in the Top500, July 2004), FX1 (highest performance efficiency in the Top500, Nov. 2008), SPARC Enterprise, PRIMEQUEST, HX600 and BX900 cluster nodes, PRIMERGY CX400 skinless server, the K computer (No. 1 in the Top500, June and Nov. 2011), PRIMEHPC FX10 and Post-FX10. © JAXA]
    K computer and Fujitsu PRIMEHPC series: a single-CPU-per-node architecture for multicore, with good bytes/flop and scalability. Key technologies for massively parallel supercomputers: an original CPU and interconnect, and support for tens of millions of cores (VISIMPACT, collective communication hardware). Key technologies across the generations include VISIMPACT, the HPC-ACE and HPC-ACE2 SIMD extensions, collective communication hardware, the Tofu direct network and Tofu interconnect 2, and HMC and optical connections. CPU generations: CY2008~, 40 GF, 4-core CPU; CY2010~, 128 GF, 8-core CPU; CY2012~, 236.5 GF, 16-core CPU; coming soon, 1 TF~, 32-core CPU.
  • Post-K Computer Development
    Post-K Computer Development. In collaboration with RIKEN, Fujitsu is developing the world's top-level supercomputer, capable of realizing high effective performance for a broad range of application software. Advanced Technology: Fujitsu contributions to the Post-K computer national project.
    System characteristics: we are aiming to achieve the development goals of an effective application performance that is up to 100 times greater than that of the K computer and a power consumption of 30-40 MW (the K computer's power consumption is 12.7 MW), and to create the world's top-performing general-purpose supercomputer. More precisely, our aim is to balance various factors, namely i) power consumption, ii) computational performance, iii) user convenience, and iv) the ability to produce ground-breaking results, so that the system is characterized by its all-around capabilities compared to any other system in the world.
    Fujitsu's efforts: through efficient use of not only cutting-edge technology but also the know-how and performance experience cultivated with the K computer and the supercomputer PRIMEHPC series, Fujitsu is developing the entire system, from the processor to the software. In particular, for the processor, Fujitsu chose the Arm instruction set architecture with supercomputing extensions co-created with Arm, and is working hard to extract the maximum potential performance from it.
    Schedule 2011-2022: K computer development, then operation; Post-K basic design, prototype and detailed design, mass production, installation and tuning, then operation. The K computer is used in various fields, from advanced research to manufacturing. (Pictured: the K computer and PRIMEHPC FX100.)
  • Overview of the K Computer System
    Overview of the K computer System. Hiroyuki Miyazaki, Yoshihiro Kusano, Naoki Shinjou, Fumiyoshi Shoji, Mitsuo Yokokawa, Tadashi Watanabe.
    RIKEN and Fujitsu have been working together to develop the K computer, with the aim of beginning shared use by the fall of 2012, as a part of the High-Performance Computing Infrastructure (HPCI) initiative led by Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT). Since the K computer involves over 80,000 compute nodes, building it with lower power consumption and high reliability was important from the availability point of view. This paper describes the K computer system and the measures taken for reducing power consumption and achieving high reliability and high availability. It also presents the results of implementing those measures.
    1. Introduction. Fujitsu has been actively developing and providing advanced supercomputers for over 30 years since its development of the FACOM 230-75 APU, Japan's first supercomputer, in 1977 (Figure 1). As part of this effort, it has been developing its own hardware, including original processors, and software too, building up its technical expertise in supercomputers along the way. The sum total of this technical expertise has been applied to developing a massively parallel computer system, the K computer, which has been ranked as the top performing [...] Technology (MEXT). As the name "Kei" in Japanese implies, one objective of this project was to achieve a computing performance of 10^16 floating-point operations per second (10 PFLOPS). The K computer, moreover, was developed not just to achieve peak performance in benchmark tests but also to ensure high effective performance in applications used in actual research. Furthermore, to enable the entire system to be installed and operated at one location, it was necessary to reduce power consumption and provide a level of reliability that could ensure the total operation of a large-scale system.
  • “Exascale” Supercomputer Fugaku & Beyond
    The first “exascale” supercomputer Fugaku 富岳 & beyond. Satoshi Matsuoka, Director, RIKEN Center for Computational Science. Modsim presentation, 2019-08-15.
    A64FX & Fugaku 富岳 / Post-K are: a Fujitsu-RIKEN designed A64FX ARM v8.2 (SVE) 48/52-core CPU; HPC optimized, with extremely high package memory bandwidth (1 TByte/s), on-die Tofu-D network bandwidth (~400 Gbps), high SVE FLOPS (~3 teraflops), and various AI support (FP16, INT8, etc.); a general-purpose CPU running Linux, Windows (Word), and other SCs/clouds; extremely power efficient, with more than 10x power/performance efficiency for a CFD benchmark over current mainstream x86 CPUs; the largest and fastest supercomputer ever built circa 2020, with more than 150,000 nodes (superseding LLNL's Sequoia), more than 150 PetaByte/s of memory bandwidth, a Tofu-D 6-D torus network with 60 Petabps injection bandwidth (10x global IDC traffic), 25-30 PB of NVMe L1 storage, and a many-endpoint 100 Gbps I/O network into Lustre; the first 'exascale' machine (not exa 64-bit flops, but in application performance).
    Brief history of R-CCS towards Fugaku: July 2010, RIKEN AICS established; August 2010, HPCI project start; September 2010, K computer installation starts, first meeting of SDHPC (Post-K); April 2012, Post-K feasibility study start with 3 architecture teams and 1 applications team; June 2012, K computer construction complete; September 2012, K computer production start; November 2012, ACM Gordon Bell Award; April 2018, AICS renamed to RIKEN R-CCS, Satoshi Matsuoka becomes new Director; Aug 2018, Arm A64FX announced at Hot Chips; Oct 2018, NEDO 100x processor project start; Nov 2018, Post-K manufacturing approval by the Prime Minister's CSTI Committee.
  • Supercomputer "Fugaku"
    Supercomputer “Fugaku”, formerly known as Post-K.
    Supercomputer Fugaku project: RIKEN and Fujitsu are currently developing Japan's next-generation flagship supercomputer, the successor to the K computer, as the most advanced general-purpose supercomputer in the world. [Figure: K computer, PRIMEHPC FX10, PRIMEHPC FX100 and Supercomputer Fugaku, with rankings No. 1 (2017), No. 1 (2018) and finalist (2016). © RIKEN] RIKEN and Fujitsu announced that manufacturing started in March 2019, and RIKEN announced on May 23, 2019 that the supercomputer is named "Fugaku".
    Goals and approaches for Fugaku. Goals: high application performance, good usability and a wide range of uses, and keeping application compatibility. RIKEN announced predicted performance of more than 100x that of the K computer for GENESIS and NICAM+LETKF, and a geometric-mean speedup over the K computer across the 9 Priority Issues greater than 37x (https://postk-web.r-ccs.riken.jp/perf.html). Approaches: 1. a high-performance Arm CPU, the A64FX, for the HPC and AI areas; 2. cutting-edge hardware design; 3. the system software stack; achieving high performance in real applications and high efficiency in key features for AI applications.
    1. High-performance Arm CPU A64FX in HPC and AI areas. Architecture features: CMG (Core Memory Group); ISA Armv8.2-A (AArch64 only); SVE (Scalable