Comparative HPC Performance PowerPoint

Total Pages: 16

File Type: PDF, Size: 1020 KB

Comparative HPC Performance

Contents:
• TOP500 Top Ten, Graph and Detail
• FX700 Actual Customer Benchmarks
• GRAPH500 Top Ten, Graph and Detail
• HPCG Top Ten, Graph and Detail
• HPL-AI Top Five, Graph and Detail

Top 500 HPC Rankings – November 2020

[Bar chart: Rmax (k TFlop/s) for Fugaku, Summit, Sierra, TaihuLight, Selene, Tianhe, Juwels, HPC5, Frontera, and Dammam-7. Source: TOP500]

Rank | System | Site, Country | Cores | Rmax (TFlop/s) | Rpeak (TFlop/s) | Power (kW)
1 | Supercomputer Fugaku - A64FX 48C 2.2GHz, Tofu interconnect D, Fujitsu | RIKEN Center for Computational Science, Japan | 7,630,848 | 442,010 | 537,212 | 29,899
2 | Summit - IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband, IBM | DOE/SC/Oak Ridge National Laboratory, United States | 2,414,592 | 148,600 | 200,795 | 10,096
3 | Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband, IBM / NVIDIA / Mellanox | DOE/NNSA/LLNL, United States | 1,572,480 | 94,640 | 125,712 | 7,438
4 | Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway, NRCPC | National Supercomputing Center in Wuxi, China | 10,649,600 | 93,015 | 125,436 | 15,371
5 | Selene - NVIDIA DGX A100, AMD EPYC 7742 64C 2.25GHz, NVIDIA A100, Mellanox HDR Infiniband, Nvidia | NVIDIA Corporation, United States | 555,520 | 63,460 | 79,215 | 2,646
6 | Tianhe-2A - TH-IVB-FEP Cluster, Intel Xeon E5-2692v2 12C 2.2GHz, TH Express-2, Matrix-2000, NUDT | National Super Computer Center in Guangzhou, China | 4,981,760 | 61,445 | 100,679 | 18,482
7 | JUWELS Booster Module - Bull Sequana XH2000, AMD EPYC 7402 24C 2.8GHz, NVIDIA A100, Mellanox HDR InfiniBand/ParTec ParaStation ClusterSuite, Atos | Forschungszentrum Juelich (FZJ), Germany | 449,280 | 44,120 | 70,980 | 1,764
8 | HPC5 - PowerEdge C4140, Xeon Gold 6252 24C 2.1GHz, NVIDIA Tesla V100, Mellanox HDR Infiniband, Dell EMC | Eni S.p.A., Italy | 669,760 | 35,450 | 51,721 | 2,252
9 | Frontera - Dell C6420, Xeon Platinum 8280 28C 2.7GHz, Mellanox InfiniBand HDR, Dell EMC | Texas Advanced Computing Center / Univ. of Texas, United States | 448,448 | 23,516 | 38,746 | -
10 | Dammam-7 - Cray CS-Storm, Xeon Gold 6248 20C 2.5GHz, NVIDIA Tesla V100 SXM2, InfiniBand HDR 100, HPE | Saudi Aramco, Saudi Arabia | 672,520 | 22,400 | 55,424 | -

Actual Optimized Customer Run Benchmark Results – October 2020

FX700 Gflop/s, processor-to-processor comparison (237% geomean performance for the FX700 with the Fujitsu compiler versus Intel Cascade Lake):

Benchmark | FX700 FJ Compiler | FX700 GNU Compiler | Intel Cascade Lake
Geomean | 202.7 | 105.0 | 85.7
SP.C [b] | 208.4 | 111.7 | 86.4
MG.C | 406.2 | 153.1 | 67.0
LU.C [b] | 250.0 | 140.0 | 120.4
FT.C | 190.4 | 100.3 | 103.9
CG.C | 41.8 | 28.6 | 20.8
BT.C [b] | 412.4 | 194.1 | 262.7
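The "Geomean" row and the "237% Performance" callout are consistent with taking the geometric mean over the six listed kernels (which look like NAS Parallel Benchmarks class-C kernels) and dividing the Fujitsu-compiler result by the Cascade Lake result. A minimal Python check of that reading; the interpretation, not the numbers, is my assumption:

```python
# Reproduce the "Geomean" row and the 237% figure from the FX700 table above.
# Values are copied from the slide (Gflop/s per kernel).
from math import prod

gflops = {
    "FX700 FJ Compiler":  [208.4, 406.2, 250.0, 190.4, 41.8, 412.4],
    "FX700 GNU Compiler": [111.7, 153.1, 140.0, 100.3, 28.6, 194.1],
    "Intel Cascade Lake": [ 86.4,  67.0, 120.4, 103.9, 20.8, 262.7],
}

def geomean(xs):
    return prod(xs) ** (1.0 / len(xs))

for name, vals in gflops.items():
    print(f"{name}: geomean = {geomean(vals):.1f} Gflop/s")   # ~202.7, ~105.0, ~85.7

ratio = geomean(gflops["FX700 FJ Compiler"]) / geomean(gflops["Intel Cascade Lake"])
print(f"FX700 (FJ) vs. Cascade Lake: {ratio:.0%}")            # ~237%
```

The reproduced geomeans match all three columns of the slide to within rounding, which is what supports the column assignment used in the table above.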
GRAPH500 HPC Rankings – November 2020

[Bar charts: TOP500 Rmax (k TFlop/s), repeated for comparison, and GRAPH500 GTEPS for the top-ranked systems. Sources: TOP500; Graph 500 | large-scale benchmarks]

GRAPH500 Benchmark Results – November 2020

Rank | Machine | Vendor | Installation Site | Location | Country | Year | Nodes | Cores | Scale | GTEPS
1 | Supercomputer Fugaku | Fujitsu | RIKEN Center for Computational Science (R-CCS) | Kobe, Hyogo | Japan | 2020 | 158976 | 7630848 | 41 | 102956
2 | Sunway TaihuLight | NRCPC | National Supercomputing Center in Wuxi | Wuxi | China | 2015 | 40768 | 10599680 | 40 | 23755.7
3 | TOKI-SORA | Fujitsu | Japan Aerospace eXploration Agency (JAXA) | Tokyo | Japan | 2020 | 5760 | 276480 | 36 | 10813
4 | OLCF Summit (CPU-only) | IBM | Oak Ridge National Laboratory | Oak Ridge, TN | United States | 2018 | 2048 | 86016 | 40 | 7665.7
5 | SuperMUC-NG | Lenovo | Leibniz Rechenzentrum | Garching | Germany | 2018 | 4096 | 196608 | 39 | 6279.47
6 | NERSC Cori - 1024-node Haswell partition | Cray | NERSC/LBNL (DOE/SC/LBNL/NERSC) | | United States | 2017 | 1024 | 32768 | 37 | 2562.16
7 | Tianhe-2 (MilkyWay-2) | National University of Defense Technology | Changsha, China | Changsha, China | China | 2013 | 8192 | 196608 | 36 | 2061.48
8 | Nurion | Cray | Korea Institute of Science and Technology Information | Daejeon | Republic of Korea | 2018 | 1024 | 65536 | 37 | 1456.46
9 | Turing | IBM | CNRS/IDRIS-GENCI | Orsay | France | 2012 | 4096 | 65536 | 36 | 1427
9 | Blue Joule | IBM | Science and Technology Facilities Council - Daresbury Laboratory | | UK | 2012 | 4096 | 65536 | 36 | 1427
9 | DIRAC | IBM | University of Edinburgh | | UK | 2012 | 4096 | 65536 | 36 | 1427
9 | Zumbrota | IBM | EDF R&D | | France | 2012 | 4096 | 65536 | 36 | 1427
9 | Avoca | IBM | Victorian Life Sciences Computation Initiative | | Australia | 2012 | 4096 | 65536 | 36 | 1427
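To read the Scale and GTEPS columns: a scale of s means the Kronecker input graph has 2^s vertices, and with the Graph500 default edge factor of 16 it has roughly 16 x 2^s edges; TEPS is traversed edges per second. A rough sanity-check sketch (the edge-factor value and the "edges / time" approximation are assumptions on my part, not taken from the slide):

```python
# Rough sanity check of the GRAPH500 numbers above.
# Assumptions (not from the slide): edge factor 16 (the Graph500 default) and
# GTEPS ~= input edges / BFS time / 1e9, which glosses over benchmark details.

def approx_bfs_time_seconds(scale: int, gteps: float, edge_factor: int = 16) -> float:
    vertices = 2 ** scale
    edges = edge_factor * vertices
    return edges / (gteps * 1e9)

# Fugaku: scale 41, 102956 GTEPS -> a single BFS sweep on the order of a third of a second.
print(f"Fugaku:     ~{approx_bfs_time_seconds(41, 102956):.2f} s per BFS")
# TaihuLight: scale 40, 23755.7 GTEPS
print(f"TaihuLight: ~{approx_bfs_time_seconds(40, 23755.7):.2f} s per BFS")
```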
HPCG Rankings – November 2020

[Bar chart: HPCG (Pflop/s) for the top ten systems. Source: https://www.hpcg-benchmark.org/]

Rank | Site | Computer | Cores | HPL Rmax (Pflop/s) | HPCG (Pflop/s) | Fraction of Peak
1 | RIKEN Center for Computational Science | Supercomputer Fugaku - A64FX 48C 2.2GHz, Tofu Interconnect D | 7,630,848 | 442.01 | 16.000 | 3.00%
2 | DOE/SC/ORNL | Summit - IBM POWER9 22C 3.07GHz, Dual-rail Mellanox EDR Infiniband, NVIDIA Volta V100 | 2,414,592 | 148.6 | 2.926 | 1.50%
3 | DOE/NNSA/LLNL | Sierra - IBM POWER9 22C 3.1GHz, Dual-rail Mellanox EDR Infiniband, NVIDIA Volta V100 | 1,572,480 | 94.64 | 1.796 | 1.40%
4 | NVIDIA Corporation | Selene - AMD EPYC 7742 64C 2.25GHz, Mellanox HDR Infiniband, NVIDIA Tesla A100 40GB | 555,520 | 63.46 | 1.623 | 2.00%
5 | Forschungszentrum Juelich (FZJ) | JUWELS Booster Module - AMD EPYC 7402 24C 2.8GHz, Mellanox HDR InfiniBand/ParTec ParaStation ClusterSuite, NVIDIA Ampere A100 | 449,280 | 44.12 | 1.275 | 1.80%
6 | Saudi Aramco | Dammam-7 - Xeon Gold 6248 20C 2.5GHz, InfiniBand HDR 100, NVIDIA Volta V100 SXM2 | 672,520 | 22.4 | 0.881 | 1.60%
7 | Eni S.p.A. | HPC5 - Xeon Gold 6252 24C 2.1GHz, Mellanox HDR Infiniband 100, NVIDIA Volta V100 PCIe 32GB | 669,760 | 35.45 | 0.860 | 1.70%
8 | Japan Aerospace eXploration Agency | TOKI-SORA - A64FX 48C 2.2GHz, Tofu interconnect D | 276,480 | 16.59 | 0.614 | 3.20%
9 | DOE/NNSA/LANL/SNL | Trinity - Intel Xeon E5-2698v3 16C 2.3GHz, Aries, Intel Xeon Phi 7250 68C 1.4GHz | 979,072 | 20.16 | 0.546 | 1.30%
10 | National Institute for Fusion Science (NIFS) | Plasma Simulator - Vector Engine Type10AE 8C 1.58GHz, InfiniBand HDR 200 Gbps | 34,560 | 7.893 | 0.529 | 5.00%

HPL-AI Rankings – November 2020

[Bar chart: HPL-AI (Eflop/s) for Fugaku, Summit, Selene, JUWELS, and Flow. Source: https://icl.bitbucket.io/hpl-ai]

Rank | Site | Computer | Cores | HPL-AI (Eflop/s)
1 | RIKEN | Fugaku | 7,299,072 | 2
2 | ORNL | Summit | 2,414,592 | 0.55
3 | NVIDIA, USA | Selene | 277,760 | 0.25
4 | FZJ | JUWELS_BM | 449,280 | 0.11
5 | Nagoya | Flow | 110,592 | 0.03

THANK YOU.
Recommended publications
  • Interconnect Your Future Enabling the Best Datacenter Return on Investment
    Interconnect Your Future: Enabling the Best Datacenter Return on Investment. TOP500 Supercomputers, November 2016. Mellanox accelerates the world's fastest supercomputers: it accelerates the #1 supercomputer; 39% of the overall TOP500 systems (194 systems) use Mellanox; InfiniBand connects 65% of the TOP500 HPC platforms and 46% of the total Petascale systems; Mellanox connects all of the 40G Ethernet systems and the first 100G Ethernet system on the list (Mellanox end-to-end); and it was chosen for 65 end-user TOP500 HPC projects in 2016, 3.6x more than Omni-Path and 5x more than Cray Aries. InfiniBand is the interconnect of choice for HPC infrastructures, enabling machine learning, high-performance, Web 2.0, cloud, storage, and big data applications. Mellanox connects the world's fastest supercomputer at the National Supercomputing Center in Wuxi, China, #1 on the TOP500 list: 93 Petaflops of performance, 3x higher than #2 on the TOP500; 41K nodes, 10 million cores, 256 cores per CPU; Mellanox adapter and switch solutions. (Source: "Report on the Sunway TaihuLight System", Jack Dongarra, University of Tennessee, June 20, 2016, Tech Report UT-EECS-16-742.) Mellanox in the TOP500: connects the world's fastest supercomputer (93 Petaflops, 41 thousand nodes, more than 10 million CPU cores); fastest interconnect solution (100Gb/s throughput, 200 million messages per second, 0.6 µs end-to-end latency); broadest adoption in HPC platforms (65% of the HPC platforms and 39% of the overall TOP500 systems); preferred solution for Petascale systems (46% of the Petascale systems on the list); connects all the 40G Ethernet systems and the first 100G Ethernet system on the list (Mellanox end-to-end).
  • Towards Exascale Computing
    Towards Exascale Computing: The ECOSCALE Approach. Dirk Koch, The University of Manchester, UK ([email protected]). Motivation: let's build a 1,000,000,000,000,000,000 FLOPS computer (exascale computing: 10^18 FLOPS = one quintillion, or a billion billion, floating-point calculations per second). For scale, 1975: MOS 6502 (Commodore 64, BBC Micro). Sunway TaihuLight supercomputer (fully operational 2016): 125,436,000,000,000,000 FLOPS (125.436 petaFLOPS); architecture: Sunway SW26010 260C (Digital Alpha clone) at 1.45GHz, 10,649,600 cores; power: "The cooling system for TaihuLight uses a closed-coupled chilled water outfit suited for 28 MW with a custom liquid cooling unit" (https://www.nextplatform.com/2016/06/20/look-inside-chinas-chart-topping-new-supercomputer/); cost: ~US$270 million. TOP500 performance development: we need more than the performance of all TOP500 machines together! TaihuLight for exascale computing? We would need 8x the world's fastest supercomputer: architecture: Sunway SW26010 260C (Digital Alpha clone) at 1.45GHz, more than 85M cores; power: 224 MW (including cooling), costing ~US$40K/hour or US$340M/year, and 2,302,195 tons of CO2 per year if generated from coal; cost: US$2.16 billion. We have to get at least 10x better in energy efficiency and 2-3x better in cost; we also need scalable programming models. Alternative: the Green500. Shoubu supercomputer (#1 on the Green500 in 2015): 1,181,952 cores; theoretical peak: 1,535.83 TFLOP/s; memory: 82 TB; processor: Xeon E5-2618Lv3 8C 2.3GHz.
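The "8x TaihuLight" estimate in this excerpt is simple linear arithmetic; a minimal Python sketch that reproduces the slide's figures (the perfectly linear scaling of cores, power, and cost is the slide's assumption, not a real system design):

```python
# Back-of-the-envelope check of the "8x TaihuLight" exascale estimate above.
# Assumes perfectly linear scaling of cores, power, and cost, as the slide does.

taihulight = {
    "peak_flops": 125.436e15,   # 125.436 petaFLOPS
    "cores": 10_649_600,
    "power_mw": 28,             # including cooling
    "cost_usd": 270e6,
}

exaflop_target = 1e18
factor = exaflop_target / taihulight["peak_flops"]   # ~7.97, i.e. roughly "8x"

print(f"scale factor: {factor:.2f}x")
print(f"cores: {factor * taihulight['cores'] / 1e6:.0f} M")     # > 85M cores
print(f"power: {factor * taihulight['power_mw']:.0f} MW")       # ~223 MW; the slide rounds 8 x 28 MW = 224 MW
print(f"cost:  ${factor * taihulight['cost_usd'] / 1e9:.2f} B") # ~$2.15B; the slide rounds 8 x $270M = $2.16B
```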
  • Computational PHYSICS Shuai Dong
    Computational Physics. Shuai Dong. Evolution: is this our final end-result? Outline: a brief history of computers; supercomputers; a brief introduction to computational science; some basic concepts, tools, and examples. Birth of computational science (physics): the first electronic general-purpose computer was constructed in the Moore School of Electrical Engineering, University of Pennsylvania, in 1946. ENIAC: Electronic Numerical Integrator And Computer. Its design and construction were financed by the United States Army, and it was designed to calculate artillery firing tables for the Army's Ballistic Research Laboratory. It was heralded in the press as a "Giant Brain" and had a speed one thousand times that of electro-mechanical machines. ENIAC was named an IEEE Milestone in 1987. ENIAC contained 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors, and around 5 million hand-soldered joints. It weighed more than 27 tons, took up 167 m2, and consumed 150 kW of power; this led to the rumor that whenever the computer was switched on, lights in Philadelphia dimmed. Input was from an IBM card reader, and an IBM card punch was used for output. Development of micro-computers: the 1981 IBM PC 5150 (8088 CPU at 5 MHz, floppy disk or cassette) versus a modern PC (Intel i3/i5/i7 CPU at 3 GHz, solid state disk); the 1984 Macintosh (Steve Jobs) versus a modern iMac. Supercomputers: the CDC (Control Data Corporation) 6600, released in 1964, is generally considered the first supercomputer. Seymour Roger Cray (1925-1996), the father of supercomputing, who created the supercomputer industry: the Cray-1; Cray Inc.
  • FCMSSR Meeting 2018-01 All Slides
    Federal Committee for Meteorological Services and Supporting Research (FCMSSR). Dr. Neil Jacobs, Assistant Secretary for Environmental Observation and Prediction and FCMSSR Chair, April 30, 2018. Office of the Federal Coordinator for Meteorological Services and Supporting Research (OFCM). Agenda: 2:30 Opening Remarks (Dr. Neil Jacobs, NOAA); 2:40 Action Item Review (Dr. Bill Schulz, OFCM); 2:45 Federal Coordinator's Update (OFCM); 3:00 Implementing Section 402 of the Weather Research and Forecasting Innovation Act of 2017 (OFCM); 3:20 Federal Meteorological Services and Supporting Research Strategic Plan and Annual Report (OFCM); 3:30 Qualification Standards for Civilian Meteorologists (Mr. Ralph Stoffler, USAF A3-W); 3:50 National Earth System Prediction Capability (ESPC) High Performance Computing Summary (ESPC Staff); 4:10 Open Discussion (All); 4:20 Wrap-Up (Dr. Neil Jacobs, NOAA). FCMSSR Action Items: AI 2017-2.1 (OFCM, FCMSSR, ICMSSR; status: working; due 04/30/18): reconvene JAG/ICAWS to develop options to broaden FCMSSR chairmanship beyond the Undersecretary of Commerce for Oceans and Atmosphere, and draft a modified FCMSSR charter to include ICAWS duties as outlined in Section 402 of the Weather Research and Forecasting Innovation Act of 2017 and secure ICMSSR concurrence. Comments: JAG/ICAWS convened; options presented to ICMSSR, then to FCMSSR with a revised charter; draft charter reviewed by ICMSSR; pending FCMSSR and OSTP approval to finalize the charter for signature; recommend new due date of 30 June 2018. AI 2017-2.2 (OFCM; status: closed; due 11/03/17): publish the Strategic Plan for Federal Weather Coordination as presented during the 24 October 2017 FCMSSR meeting. Comment (1/12/18): plan published on the OFCM website.
  • This Is Your Presentation Title
    Introduction to GPU/Parallel Computing. Ioannis E. Venetis, University of Patras (www.prace-ri.eu). Introduction to High Performance Systems. Wait, what? Aren't we here to talk about GPUs? And how to program them with CUDA? Yes, but we need to understand their place and their purpose in modern High Performance Systems; this will make it clear when it is beneficial to use them. Top 500 (June 2017):
    Rank | Site | System | CPU Cores | Accel. Cores | Rmax (TFlop/s) | Rpeak (TFlop/s) | Power (kW)
    1 | National Supercomputing Center in Wuxi, China | Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway (NRCPC) | 10,649,600 | - | 93,014.6 | 125,435.9 | 15,371
    2 | National Super Computer Center in Guangzhou, China | Tianhe-2 (MilkyWay-2) - TH-IVB-FEP Cluster, Intel Xeon E5-2692 12C 2.200GHz, TH Express-2, Intel Xeon Phi 31S1P (NUDT) | 3,120,000 | 2,736,000 | 33,862.7 | 54,902.4 | 17,808
    3 | Swiss National Supercomputing Centre (CSCS) | Piz Daint - Cray XC50, Xeon E5-2690v3 12C 2.6GHz, Aries interconnect, NVIDIA Tesla P100 (Cray Inc.) | 361,760 | 297,920 | 19,590.0 | 25,326.3 | 2,272
    4 | DOE/SC/Oak Ridge National Laboratory, United States | Titan - Cray XK7, Opteron 6274 16C 2.200GHz, Cray Gemini interconnect, NVIDIA K20x (Cray Inc.) | 560,640 | 261,632 | 17,590.0 | 27,112.5 | 8,209
    5 | DOE/NNSA/LLNL, United States | Sequoia - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom (IBM) | 1,572,864 | - | 17,173.2 | 20,132.7 | 7,890
    How do …
  • The Sunway TaihuLight Supercomputer: System and Applications
    SCIENCE CHINA Information Sciences . RESEARCH PAPER . July 2016, Vol. 59 072001:1–072001:16 doi: 10.1007/s11432-016-5588-7 The Sunway TaihuLight supercomputer: system and applications Haohuan FU1,3 , Junfeng LIAO1,2,3 , Jinzhe YANG2, Lanning WANG4 , Zhenya SONG6 , Xiaomeng HUANG1,3 , Chao YANG5, Wei XUE1,2,3 , Fangfang LIU5 , Fangli QIAO6 , Wei ZHAO6 , Xunqiang YIN6 , Chaofeng HOU7 , Chenglong ZHANG7, Wei GE7 , Jian ZHANG8, Yangang WANG8, Chunbo ZHOU8 & Guangwen YANG1,2,3* 1Ministry of Education Key Laboratory for Earth System Modeling, and Center for Earth System Science, Tsinghua University, Beijing 100084, China; 2Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; 3National Supercomputing Center in Wuxi, Wuxi 214072, China; 4College of Global Change and Earth System Science, Beijing Normal University, Beijing 100875, China; 5Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 6First Institute of Oceanography, State Oceanic Administration, Qingdao 266061, China; 7Institute of Process Engineering, Chinese Academy of Sciences, Beijing 100190, China; 8Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China Received May 27, 2016; accepted June 11, 2016; published online June 21, 2016 Abstract The Sunway TaihuLight supercomputer is the world’s first system with a peak performance greater than 100 PFlops. In this paper, we provide a detailed introduction to the TaihuLight system. In contrast with other existing heterogeneous supercomputers, which include both CPU processors and PCIe-connected many-core accelerators (NVIDIA GPU or Intel Xeon Phi), the computing power of TaihuLight is provided by a homegrown many-core SW26010 CPU that includes both the management processing elements (MPEs) and computing processing elements (CPEs) in one chip.
  • It's a Multi-Core World
    It's a Multicore World. John Urbanic, Pittsburgh Supercomputing Center, Parallel Computing Scientist. Moore's Law abandoned serial programming around 2004 (courtesy Liberty Computer Architecture Research Group), but Moore's Law is not to blame: Intel process technology capabilities kept scaling.
    High Volume Manufacturing year: 2004, 2006, 2008, 2010, 2012, 2014, 2016, 2018
    Feature Size: 90nm, 65nm, 45nm, 32nm, 22nm, 16nm, 11nm, 8nm
    Integration Capacity (Billions of Transistors): 2, 4, 8, 16, 32, 64, 128, 256
    (Figure: a transistor at the 90nm process, source Intel, next to an influenza virus, about 50nm, source CDC.) At the end of the day, we keep using all those new transistors. But that power and clock inflection point in 2004 did not get better. Fun fact: at 100+ Watts and <1V, currents are beginning to exceed 100A at the point of load! (Source: Kogge and Shalf, IEEE CISE; courtesy Horst Simon, LBNL.) Not a new problem, just a new scale: CPU power in W (figure: Cray-2 with cooling tower in foreground, circa 1985). So how do we get more performance from more transistors with the same power? Rule of thumb: a 15% reduction in voltage yields a 15% frequency reduction, a 45% power reduction, and a 10% performance reduction. Single core: area = 1, voltage = 1, freq = 1, power = 1, perf = 1. Dual core: area = 2, voltage = 0.85, freq = 0.85, power = 1, perf = ~1.8.
    Single socket parallelism:
    Processor | Year | Vector | Bits | SP FLOPs/cycle/core | Cores | FLOPs/cycle
    Pentium III | 1999 | SSE | 128 | 3 | 1 | 3
    Pentium IV | 2001 | SSE2 | 128 | 4 | 1 | 4
    Core | 2006 | SSE3 | 128 | 8 | 2 | 16
    Nehalem | 2008 | SSE4 | 128 | 8 | 10 | 80
    Sandybridge | 2011 | AVX | 256 | 16 | 12 | 192
    Haswell | 2013 | AVX2 | 256 | 32 | 18 | 576
    KNC | 2012 | AVX512 | 512 | 32 | 64 | 2048
    KNL | 2016 | AVX512 | 512 | 64 | 72 | 4608
    Skylake | 2017 | AVX512 | 512 | 96 | 28 | 2688
    Putting it all together. Prototypical application: a serial weather model (one CPU, one memory). The first parallel weather modeling algorithm: Richardson in 1917 (courtesy John Burkhardt, Virginia Tech). Weather model with shared memory (OpenMP), "four meteorologists in the same room sharing the map":
    Fortran:
        !$omp parallel do
        do i = 1, n
           a(i) = b(i) + c(i)
        enddo
    C/C++:
        #pragma omp parallel for
        for (i = 1; i <= n; i++)
           a[i] = b[i] + c[i];
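A quick arithmetic check of the single-core versus dual-core rule of thumb above, using the slide's own percentages (the two-line model below is illustrative, not the slide's exact derivation):

```python
# Check of the single-core vs. dual-core rule of thumb above.
# Uses the slide's own rule: a 15% voltage reduction gives roughly a 45% per-core
# power reduction and a 10% per-core performance reduction.

cores = 2
per_core_power = 1.0 - 0.45   # each slower, lower-voltage core burns ~55% of the original power
per_core_perf  = 1.0 - 0.10   # and delivers ~90% of the original performance

total_power = cores * per_core_power   # ~1.1, i.e. roughly the single-core power budget
total_perf  = cores * per_core_perf    # ~1.8x the single-core performance

print(f"dual-core power: ~{total_power:.1f}x, performance: ~{total_perf:.1f}x")
```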
  • Challenges in Programming Extreme Scale Systems, William Gropp, wgropp.cs.illinois.edu
    Challenges in Programming Extreme Scale Systems. William Gropp, wgropp.cs.illinois.edu. Towards exascale architectures. [Figure: abstract machine model of an exascale node, showing a core group per node with fat cores, thin cores / accelerators, 3D stacked memory (low capacity, high bandwidth), DRAM (high capacity, low bandwidth), NVRAM, an integrated NIC for off-chip communication, and a coherence domain; from "Abstract Machine Models and Proxy Architectures for Exascale Computing", June 19, 2016.] Examples: Sunway TaihuLight (heterogeneous processors, MPE and CPE), Adapteva Epiphany-V (1024 RISC processors), DOE Sierra (POWER9 with 4 NVIDIA Volta GPUs). 2.1 Overarching Abstract Machine Model: We begin with a single model that highlights the anticipated key hardware architectural features that may support exascale computing. Figure 2.1 pictorially presents this as a single model, while the next subsections describe several emerging technology themes that characterize more specific hardware design choices by commercial vendors. In Section 2.2, we describe the most plausible set of realizations of the single model that are viable candidates for future supercomputing architectures. 2.1.1 Processor: It is likely that future exascale machines will feature heterogeneous nodes composed of a collection of more than a single type of processing element. The so-called fat cores that are found in many contemporary desktop and server processors, characterized by deep pipelines, multiple levels of the memory hierarchy, instruction-level parallelism …
  • Joaovicentesouto-Tcc.Pdf
    Universidade Federal de Santa Catarina, Centro Tecnológico, Departamento de Informática e Estatística, Ciências da Computação. João Vicente Souto. An Inter-Cluster Communication Facility for Lightweight Manycore Processors in the Nanvix OS. Florianópolis, December 6, 2019. Undergraduate thesis (Trabalho de Conclusão de Curso) submitted to the Computer Science program of the Centro Tecnológico of the Universidade Federal de Santa Catarina as a requirement for the degree of Bachelor in Computer Science. Advisor: Prof. Márcio Bastos Castro, Dr. Co-advisor: Pedro Henrique Penna, Me. Cataloguing record (generated by the author through the UFSC university library system): Souto, João Vicente. An Inter-Cluster Communication Facility for Lightweight Manycore Processors in the Nanvix OS / João Vicente Souto; advisor Márcio Bastos Castro, co-advisor Pedro Henrique Penna, 2019. 92 p. Undergraduate thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Computer Science program, Florianópolis, 2019. Includes references. 1. Computer Science. 2. Distributed Operating System. 3. Hardware Abstraction Layer. 4. Lightweight Manycore Processor. 5. Kalray MPPA-256. This thesis was judged adequate for obtaining the title of Bachelor in Computer Science and approved in its final form by the Computer Science undergraduate program.
  • Parallel Processing with the MPPA Manycore Processor
    Parallel Processing with the MPPA Manycore Processor. Kalray MPPA® Massively Parallel Processor Array. Benoît Dupont de Dinechin, CTO, November 14, 2018. Outline: presentation; manycore processors; manycore programming; symmetric parallel models; untimed dataflow models; Kalray MPPA® hardware; Kalray MPPA® software; model-based programming; deep learning inference; conclusions. Kalray in a nutshell: we design processors at the heart of new intelligent systems; ~80 people and ~70 engineers across 4 offices (Grenoble and Sophia in France, Silicon Valley (Los Altos, USA), Yokohama (Japan)); a unique technology, the result of 10 years of development; financial and industrial shareholders (Pengpai). Kalray, pioneer of manycore processors: #1 scalable computing power; #2 data processing in real time; #3 completion of dozens of critical tasks in parallel; #4 low power consumption; #5 programmable / open system; #6 security & safety. Outsourced production (a fabless business model): partnership with the world leader in processor manufacturing; sub-contracted production under a signed framework agreement with GUC, a subsidiary of TSMC (world top-3 in semiconductor manufacturing); limited investment, no expansion costs, production on the basis of purchase orders. Intelligent data center, key competitive advantages: first "NVMe-oF all-in-one" certified solution*; 8x more powerful than the latest products announced by our competitors**
  • Optimizing High-Resolution Community Earth System
    https://doi.org/10.5194/gmd-2020-18. Preprint, discussion started 21 February 2020. © Author(s) 2020, CC BY 4.0 License. Optimizing High-Resolution Community Earth System Model on a Heterogeneous Many-Core Supercomputing Platform (CESM-HR_sw1.0). Shaoqing Zhang1,4,5, Haohuan Fu*2,3,1, Lixin Wu*4,5, Yuxuan Li6, Hong Wang1,4,5, Yunhui Zeng7, Xiaohui Duan3,8, Wubing Wan3, Li Wang7, Yuan Zhuang7, Hongsong Meng3, Kai Xu3,8, Ping Xu3,6, Lin Gan3,6, Zhao Liu3,6, Sihai Wu3, Yuhu Chen9, Haining Yu3, Shupeng Shi3, Lanning Wang3,10, Shiming Xu2, Wei Xue3,6, Weiguo Liu3,8, Qiang Guo7, Jie Zhang7, Guanghui Zhu7, Yang Tu7, Jim Edwards1,11, Allison Baker1,11, Jianlin Yong5, Man Yuan5, Yangyang Yu5, Qiuying Zhang1,12, Zedong Liu9, Mingkui Li1,4,5, Dongning Jia9, Guangwen Yang1,3,6, Zhiqiang Wei9, Jingshan Pan7, Ping Chang1,12, Gokhan Danabasoglu1,11, Stephen Yeager1,11, Nan Rosenbloom1,11, and Ying Guo7. Affiliations: 1 International Laboratory for High-Resolution Earth System Model and Prediction (iHESP), Qingdao, China; 2 Ministry of Education Key Lab. for Earth System Modeling, and Department of Earth System Science, Tsinghua University, Beijing, China; 3 National Supercomputing Center in Wuxi, Wuxi, China; 4 Laboratory for Ocean Dynamics and Climate, Qingdao Pilot National Laboratory for Marine Science and Technology, Qingdao, China; 5 Key Laboratory of Physical Oceanography, the College of Oceanic and Atmospheric Sciences & Institute for Advanced Ocean Study, Ocean University of China, Qingdao, China; 6 Department of Computer Science & Technology, Tsinghua …
  • Performance Tuning of Graph500 Benchmark on Supercomputer Fugaku
    Performance tuning of the Graph500 benchmark on Supercomputer Fugaku. Masahiro Nakao (RIKEN R-CCS). Outline: the Graph500 benchmark; Supercomputer Fugaku; tuning the Graph500 benchmark on Supercomputer Fugaku. Graph500 (https://graph500.org) has run since 2010 as a competition for evaluating the performance of large-scale graph processing; the ranking is updated twice a year (June and November), and Fugaku won the award twice in 2020. One of the kernels in Graph500 is BFS (Breadth-First Search). An artificial graph called the Kronecker graph is used: some vertices are connected to many other vertices while numerous others are connected to only a few, a property social networks are known to share. Overview of BFS: the input is a graph and a root vertex, the output is a BFS tree, and the data structure and BFS algorithm are free. Hybrid BFS [Beamer, 2012] (Scott Beamer et al., "Direction-optimizing breadth-first search", SC '12) is suitable for the small-diameter graphs used in Graph500: it performs BFS while switching between top-down and bottom-up passes, because in the middle of the BFS the number of vertices being visited increases explosively, so running only top-down is inefficient. Top-down searches for unvisited vertices from visited vertices; bottom-up searches for visited vertices from unvisited vertices. 2D Hybrid BFS [Beamer, 2013] (Scott Beamer et al., "Distributed Memory Breadth-First Search Revisited: Enabling Bottom-Up Search", IPDPSW '13) distributes the adjacency matrix over a 2D process grid (R x C), so communication happens only within column processes and within row processes (Allgatherv, Alltoallv, …)
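The top-down/bottom-up switch described in this excerpt can be sketched in a few lines. A minimal single-node Python sketch of direction-optimizing BFS; the adjacency-list representation and the frontier-size switching threshold are illustrative choices of mine, not the Fugaku-tuned implementation:

```python
# Minimal sketch of direction-optimizing (hybrid) BFS, per Beamer et al.
# Not the Fugaku-tuned code: the graph representation and the switching
# heuristic below are illustrative simplifications.
from collections import defaultdict

def hybrid_bfs(adj: dict, root, switch_fraction: float = 0.25):
    n = len(adj)
    parent = {root: root}
    frontier = {root}
    while frontier:
        next_frontier = set()
        if len(frontier) < switch_fraction * n:
            # Top-down: scan edges out of the (small) frontier.
            for u in frontier:
                for v in adj[u]:
                    if v not in parent:
                        parent[v] = u
                        next_frontier.add(v)
        else:
            # Bottom-up: each unvisited vertex looks for any parent in the frontier.
            for v in adj:
                if v not in parent:
                    for u in adj[v]:
                        if u in frontier:
                            parent[v] = u
                            next_frontier.add(v)
                            break
        frontier = next_frontier
    return parent  # BFS tree as a child -> parent map

# Tiny example graph (undirected, as an adjacency dict).
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
adj = defaultdict(list)
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)
print(hybrid_bfs(adj, root=0))   # e.g. {0: 0, 1: 0, 2: 0, 3: 1, 4: 3}
```

The bottom-up pass pays off when the frontier is large, since each unvisited vertex can stop as soon as it finds any neighbor in the frontier instead of the frontier scanning all of its edges.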