Mainframes in Tomorrow's Data Center

Achim Dewor, Fujitsu
BS2000/OSD Mainframe Summit, Munich, 28th June 2012

Slide 1: Agenda
- Tomorrow's Data Center
- Today's Data Center
- Yesterday's Data Center

Slide 2: The origins of IT
- "Programming language": cuneiform script
- Data medium: clay tablet
- Branch solutions: administration, inventory management
- Users: Sumerians, Babylonians, Akkadians, Assyrians, Persians (approx. 3000 BC)

Slide 3: The origins of IT
1. Programming language: hieroglyphics
2. State-of-the-art IT support: cubit rule
3. Data medium / storage medium: papyrus
4. Application: branch application (individual application) - building of the pyramids
Pyramid of Cheops at Giza (approx. 2750 BC)

Slide 4: Further development of IT
- The abacus, approx. 1100 BC: Roman version, Chinese version (Suan Pan), and the medieval abacus "data center"

Slide 5: Further development of IT
- Abacus and calculator, around 1500 AD: Adam Riese (Germany), John Napier (Scotland)

Slide 6: First data center
- 1938: Z1 and 1941: Z3, built by Konrad Zuse

Slide 7: First data center
- 1946: ENIAC (Electronic Numerical Integrator and Computer)

Slide 8: Data center - further developments
- 1978: FACOM M-200

Slide 9: Data center - further developments
- 1988: H120 Quadro

Slide 10: Today's data center
- 2012: K computer, world's No. 1, again on top of the TOP 500 list
- Performance of over 10 petaflops (FLOPS = floating-point operations per second; 1 peta = 10^15), i.e. more than 10^16 arithmetic operations per second

Slide 11: Data center - today's installations
- Large hosters accommodate up to 70,000 servers in their data centers
- Google is estimated at > 500,000 servers, Microsoft (Chicago) at > 300,000 servers
- Other members of this "club" are e.g. Amazon, eBay, Yahoo, Facebook, etc.
- 1. Lakeside Technology Center (Chicago), 102,000 sqm; 2. Metro Data Center (Atlanta), 92,000 sqm; 3. NAP of the Americas (Miami), 70,000 sqm

Slide 12: IT is constantly changing
- Timeline of milestones: Sumerians (approx. 3000 BC), abacus (approx. 1100 BC), papyrus, Z3 (Konrad Zuse, 1941), Colossus MARK II, ENIAC (1946), FACOM 100 (1954, relay-based mainframe, first in Japan), FACOM 230-10 (1965), Amdahl 470 V/6, FACOM M-200 (1978, LSI-based computer, fastest in the world at the time), H120 Quadro (1988), S175/S210, SQ210 (2012) and the K computer

Slide 13: Tomorrow's Data Center
- What will tomorrow's data center look like?

Slide 14: (image only)

Slide 15: René Descartes - "I think, therefore I am"

Slide 16: HAL 9000 - "I compute, therefore I am"

Slide 17: Tomorrow's Data Center …

Slide 18: Agenda
- Tomorrow's Data Center
- Today's Data Center
- Yesterday's Data Center

Slide 19: (image only)

Slide 20: IT challenges are alike …
- Flexibility, Quality, Efficiency

Slide 21: IT challenges have been alike to the present day …
- The IT challenges triangle: Flexibility (CEO), Quality (user), Efficiency (CFO)

Slide 22: … they are becoming more varied and faster …

Slide 23: … their complexity is growing increasingly …
- Complex applications, a whole range of workloads, increase in complexity
- Big Data, business intelligence
- As fast as can be, more efficient operations
- The "never-ending story" of server consolidation, eroding the differences between system worlds, increase in integration

Slide 24: Reduce complexity with mainframes
- Open world, heterogeneous components: high administration expenditure and additional integration software; cluster management, security, backup, accounting, performance, VM management, data management and job management are separate add-ons around servers, kernel and applications
- Mainframe, all from a single source: coordinated, ready and consistent data center infrastructure; low administration expenditure; pre-integrated components (cluster management, security, backup, accounting, performance, VM management, data management, job management)
Slide 25: Reduce complexity with mainframes
- Distributed applications: the complexity must be handled by the administration (servers, applications)
- Mainframe systems: self-organization encapsulates the complexity

Slide 26: IT challenges of today and tomorrow
- Flexibility (CEO): fast reaction to business requirements, services adapted at short notice, development of new services
- Efficiency (CFO): reduction in complexity and costs, transparency, cash flow, reduced risks, conformity with the law
- Quality (user): better service levels, productivity, satisfaction of the end users

Slide 27: IT challenges of today and tomorrow
- The CxO view of the IT challenges (CEO - flexibility, CFO - efficiency, user - quality) maps onto two aspects: Technology & Provision, and Applications

Slide 28: Mainframes from a CxO viewpoint
- Availability / continuity: high availability (99.999%) with the least downtimes in the normal IT lifecycle as well as in the case of an error or a disaster; maximum degree of virtualization and automation; full automation of daily IT operations; investment protection through source and object compatibility over very long periods and different hardware technologies
- TCO / efficiency / cost control: maximum economic viability with a high number of end users (TCO per user); optimal consolidation platform
- Flexibility / agility: scalability over very wide performance ranges without any leaps in technology
- Proven quality and customer support: full control of the system via the software configuration; pre-tested hardware configurations including peripherals; third-level support from the manufacturer and customer proximity

Slide 29: Mainframes from an application viewpoint
- Continuity, reliability, longevity: investment protection over longer periods in case of technology innovations and/or changes of platform
- Flexible and fast adaptation to changed business requirements
- Further use of legacy applications as well as reliable use and further development of core applications
- Connectivity with new or changed applications: mobility, web-enabling, business intelligence, offload scenarios, SOA / SOI, openSEAS

Slide 30: Challenges cause trends … … trends cause challenges

Slide 31: Agenda
- Tomorrow's Data Center: Mainframes & IT Trends
- Today's Data Center
- Yesterday's Data Center

Slide 32: Data center trends (Technology & Provision)
- From central mainframes with "dumb" terminals (all-purpose computers / main computers / mainframes) to client server / distributed IT / multi-station systems / PCs / Solaris / Unix / Linux …

Slide 33: Data center trends
- A period of "wars of religion" between mainframes, multi-station systems and single-station systems

Slide 34: Data center trends
- What counts in future is the method of provision: the requirements decide, and the appropriate technologies have to be provided

Slide 35: Technology and Provision
- (Diagram: all-purpose computers / main computers / mainframes alongside client server / distributed IT / multi-station systems / PCs / Solaris / Unix / Linux …)

Slide 36: Trend: Pre-configured systems … @ BS2000
- Integration and interaction of different components is one of the core strengths of mainframes
- BS2000 innovation packages & pre-configured systems: the interaction of all BS2000 infrastructure components is guaranteed
- Further current examples from the up-to-date FTS portfolio (infrastructure solutions / appliances): Fujitsu HANA Appliance, FlexFrame for SAP, FlexFrame on Windows for SAP, FlexFrame Compact for SAP, ManageNow, DI Blocks, Dynamic Infrastructures for VMware vCloud, Virtual Client Computing Solutions, ETERNUS CS, ETERNUS CS 800
Slide 37: Trend: Green IT … @ BS2000
- Joint use of resources for all applications & services: maximum utilization has always been a feature of mainframes
- Idling servers consume almost the same amount of current as under normal load (plus the additionally required cooling); shared storage systems and disk allocation (better disk use supports energy efficiency)
- Consolidation and automation save > 75% of the energy costs; mainframes are world champions when it comes to consolidation and automation
- Power (kWh): high energy efficiency if new mainframes are used - approx. 20% less energy consumption than with S165 / S200 (server and cooling), while system performance is at the same time up to 20% higher
- Eco friendliness (CO2): the significantly lower energy consumption of the new servers protects the environment and climate; new environmentally conscious powder coating of the housing finish
- Surface (m²): small footprint and maintenance space of the mainframes - minimum space requirement, yet at the same time high performance and functionality

Slide 38: Trend: Automation & Efficient Management … @ BS2000
- Maximum degree of virtualization and automation; full automation of first-level support possible
- Over 20 years of experience with virtualization (VM2000)
- Optimized and absolutely secure resource management and resource sharing
- Developed for the highest demands of multi-user and mixed-mode operations; sophisticated user and role concept
- Maximum know-how with dynamic and flexible resource management (multiprogramming)
- Lowest personnel costs …

Slide 39: Trend: Cloud … @ BS2000
- "A cloud is actually nothing more than a very highly scalable, globally accessible, constantly available, high-performance and centrally managed hardware platform … a great many people also call it a mainframe!"
- Clouds can be implemented with the most varied concepts; mainframes are only one alternative here, but they are already available today in the role of an in-house cloud (flexible cloud infrastructure, application connectivity)
- According to Gartner, availability is one important criterion for cloud services: leading-edge providers must ensure an availability of 99.95% … mainframes offer 99.999%!
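To make the two availability figures on the cloud slide tangible, here is a minimal sketch (Python, chosen only for illustration) that converts an availability percentage into the implied maximum downtime per year. Only the 99.95% and 99.999% values come from the deck; the function name and output format are illustrative.

```python
# Minimal sketch: convert an availability percentage into maximum downtime per year.
# The 99.95% (Gartner criterion for leading cloud providers) and 99.999% (mainframe)
# figures are taken from the slide; everything else is illustrative.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(availability_percent: float) -> float:
    """Maximum downtime per year implied by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for label, availability in [("Cloud SLA (99.95%)", 99.95), ("Mainframe (99.999%)", 99.999)]:
    print(f"{label}: about {downtime_minutes_per_year(availability):.1f} minutes of downtime per year")

# Expected output (approximately):
#   Cloud SLA (99.95%): about 262.8 minutes of downtime per year
#   Mainframe (99.999%): about 5.3 minutes of downtime per year
```

In other words, the slide's "five nines" claim corresponds to roughly five minutes of downtime per year versus roughly four and a half hours at 99.95%.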