Solving Mesoscale Atmospheric Dynamics Using a Reconfigurable Dataflow Architecture

Architectures for the Post-Moore Era

This article presents an efficient dataflow methodology for solving the Euler atmospheric dynamic equations. The authors map a complex Euler stencil kernel into a single field-programmable gate array chip and develop a long streaming pipeline that can perform 956 mixed-precision operations per cycle. Their dataflow design outperforms traditional multicore and many-core counterparts in both time to solution and energy to solution.

Lin Gan and Haohuan Fu, Tsinghua University
Wayne Luk, Imperial College London
Chao Yang, Chinese Academy of Sciences
Wei Xue and Guangwen Yang, Tsinghua University

Numerical atmospheric simulation is an essential method for studying the climate system and a key tool for verifying climate-change mechanisms, making predictions into the future, and providing guidance for protecting the planet from severe issues such as climate change and weather disasters. Because of atmospheric models' high demand for computing power, a lot of effort has been made to study and simulate the atmospheric model using the world's most powerful supercomputers. However, traditional high-performance computing (HPC) systems based on multicore and many-core architectures face architectural constraints from data representation, memory access patterns, communications, and bandwidth, as well as challenges from the extremely complex atmospheric algorithm. Thus, novel architectures and revolutionary optimization techniques are in great demand for achieving better results in both time to solution and energy to solution.

In recent years, dataflow engines (DFEs) based on reconfigurable field-programmable gate arrays (FPGAs) have developed quickly and achieved many inspiring results in various applications such as deep learning [1]. Unlike traditional architectures, reconfigurable DFEs [2] obtain high performance by deploying a long pipeline of concurrent operations corresponding to the target algorithm, while maintaining high power efficiency due to the lower clock frequency. Their customizable features in the hardware circuit, data representation, and on-chip memory provide great flexibility and optimization space for tackling complex numerical algorithms.

Figure 1. The dataflow engine (DFE) architectures and algorithmic computing mechanisms. (a) General architecture of the hybrid CPU-DFE system using PCI Express as a bridge. (b) Dataflow computing model; each dataflow core refers to a specific operation. (c) The 25-point Euler stencil. (d) The hybrid domain decomposition methodology assigns the inner stencil to the DFE and the outer area to the CPU. (e) Workflow of the hybrid methodology, in which C2D and D2C refer to the data exchange between CPU and DFE.
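To make the hybrid decomposition and workflow of Figures 1d and 1e concrete, the following is a minimal host-side sketch in C. Every function and type below is a hypothetical placeholder with an empty stub body (a real system would issue PCIe DMA transfers and launch the FPGA dataflow kernel), and the step ordering is one plausible reading of Figure 1e rather than a statement of the authors' exact schedule.

```c
/*
 * Sketch only: hybrid CPU-DFE sweep following Figures 1d/1e.
 * All names are hypothetical placeholders, not the authors' API.
 */
#include <stddef.h>

typedef struct {
    double *q;          /* prognostic variables, flattened 3D array */
    size_t  nk, nj, ni; /* subdomain dimensions                     */
} Field;

static void exchange_halos_mpi(Field *f)    { (void)f; /* halo updating with neighbor subdomains   */ }
static void copy_cpu_to_dfe(const Field *f) { (void)f; /* C2D: stream inner subdomain over PCIe    */ }
static void dfe_run_inner_stencil(Field *f) { (void)f; /* 25-point Euler stencil on the DFE pipeline */ }
static void cpu_outer_stencil(Field *f)     { (void)f; /* CPU updates the outer area                */ }
static void copy_dfe_to_cpu(Field *f)       { (void)f; /* D2C: merge DFE results back on the host   */ }

/* one Euler stencil sweep under the hybrid decomposition */
void euler_stencil_sweep_hybrid(Field *f)
{
    exchange_halos_mpi(f);
    copy_cpu_to_dfe(f);
    dfe_run_inner_stencil(f);   /* inner stencil on the DFE ...              */
    cpu_outer_stencil(f);       /* ... while the CPU handles the outer area  */
    copy_dfe_to_cpu(f);
}
```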
In this article, we present an efficient dataflow computing model to solve the 3D Euler atmospheric equations, which is the essential step in describing mesoscale atmospheric dynamics. Through fully exploiting the customizable features of the selected FPGA platforms, we propose a set of novel optimizing techniques such as a customizable window buffer, algorithmic offsetting arithmetics, and a mixed-precision mechanism; manage to map the complex Euler kernel into a single FPGA chip; and build a long computing pipeline that can perform 956 mixed-precision operations per cycle. We also present our efforts to fully optimize the Euler algorithm on traditional platforms including the CPU, Many Integrated Core (MIC) coprocessor, GPU, and SW26010 CPU.

Our dataflow design outperforms traditional multicore and many-core counterparts in terms of both time to solution and energy to solution. The performance comparisons with other major multicore and many-core processors prove dataflow computing to be a promising method in the post-Moore era to break the architectural constraints and bring inspiring achievements.

Background

This section introduces the hardware architecture, equations, and algorithm, and summarizes the existing efforts.

Dataflow Engines

Figure 1a shows the general architecture of a hybrid CPU-DFE system. Similar to a traditional hybrid system, it also contains CPUs to handle issues such as DFE initialization, task scheduling, and data exchange.

Compared with traditional multicore and many-core architectures such as the CPU, MIC, and GPU, the DFE achieves high performance by developing, within a single FPGA chip, a long streaming pipeline that comprises many concurrent operations. Different hardware resources, such as logic units and digital signal processors (DSPs), are customized to deploy a specific hardware circuit that corresponds exactly to the targeted algorithm (see Figure 1b). Data originally stored in the onboard memory (DRAM) streams through the different dataflow cores on the pipeline and gets computed. By streaming different dataflows through different dataflow cores, we can execute multiple instructions on multiple data (MIMD) and thus boost the overall performance.
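As a simplified software analogue of this streaming model, and of the window buffer mentioned in the design overview, the sketch below streams a 1D array through a short chain of operations while keeping the stencil neighbors in a small window instead of re-reading memory. This is an illustration of the concept only, not the authors' 25-point Euler kernel; in the hardware pipeline the chained operations run concurrently, producing one result per cycle once the pipeline is full.

```c
/*
 * Software analogue of the dataflow model in Figure 1b: one element
 * enters per "cycle", neighbor values come from a small window buffer
 * (standing in for on-chip BRAM), and a fixed chain of operations
 * produces the output stream. Simplified 1D, 3-point illustration.
 */
#include <stddef.h>

void stream_3pt_stencil(const double *in, double *out, size_t n)
{
    double window[3] = {0.0, 0.0, 0.0};   /* holds x[i-1], x[i], x[i+1] */

    for (size_t cycle = 0; cycle < n; cycle++) {
        /* shift the window and stream one new element in per cycle */
        window[0] = window[1];
        window[1] = window[2];
        window[2] = in[cycle];

        /* chain of "dataflow cores": in hardware these run concurrently,
           here they execute sequentially for illustration              */
        if (cycle >= 2) {
            double sum   = window[0] + window[2];   /* core A: add neighbors */
            double mixed = 0.5 * window[1];         /* core B: scale center  */
            out[cycle - 1] = sum * 0.25 + mixed;    /* core C: combine, emit */
        }
        /* boundary elements out[0] and out[n-1] are left untouched here */
    }
}
```

The DFE design generalizes this idea to the 3D Euler stencil, with much deeper on-chip buffering and 956 mixed-precision operations in flight per cycle.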
Euler Equations

We can write the Euler equations [3] as the following set of conservation laws:

\[
\frac{\partial Q}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} + \frac{\partial H}{\partial z} + S = 0, \tag{1}
\]

in which

\[
\begin{aligned}
Q &= (\rho',\ \rho u,\ \rho v,\ \rho w,\ (\rho\theta)')^{T},\\
F &= (\rho u,\ \rho u u + p',\ \rho u v,\ \rho u w,\ \rho u \theta)^{T},\\
G &= (\rho v,\ \rho v u,\ \rho v v + p',\ \rho v w,\ \rho v \theta)^{T},\\
H &= (\rho w,\ \rho w u,\ \rho w v,\ \rho w w + p',\ \rho w \theta)^{T},\\
S &= (0,\ 0,\ 0,\ \rho' g,\ 0)^{T},
\end{aligned} \tag{2}
\]

where ρ is the density, v = (u, v, w) is the velocity, p is the pressure, and θ is the potential temperature of the atmosphere. The system is closed with the equation of state

\[
p = p_{00}\left(\frac{\rho R \theta}{p_{00}}\right)^{\gamma}, \tag{3}
\]

in which p00 = 1013.25 hPa is the ground-level pressure, R = 287.04 J/(kg·K) is the gas constant for dry air, and γ = 1.4. We remark here that, because of the conservative property of the finite volume scheme [3], the numerical fluxes on a common edge of two consecutive mesh elements are identical.

We used a cell-centered finite volume scheme plus an explicit Runge-Kutta time-stepping method [3], after which each time step in solving the Euler equations (that is, the Euler algorithm) required two Euler stencil sweeps applied at all mesh elements. Figure 1c shows the Euler stencil in the 3D channel: to update a mesh element (for example, the central one in Figure 1c), we need to access 25 elements. After the original channel is decomposed into different subdomains corresponding to different MPI processes (see Figure 1d), the halo area needs to exchange data with neighboring subdomain halos, which is marked as halo updating.

Figure 2 shows the Euler algorithm, which contains two major steps: halo updating and 3D Euler stencil computing. Unlike an ordinary stencil kernel, the 3D Euler stencil has to perform over 2,100 double-precision operations to update one element per sweep [4], and it is more complex in the communication model, resulting in tough computational challenges for traditional HPC platforms.

for (k, j, i) ← (0, 0, 0) to (Nk−1, Nj−1, Ni−1) do
  if (k, j, i) ∈ Halo then
    Halo Updating
  end if
  3D Euler Stencil
end for

Figure 2. Euler algorithm per stencil sweep.

Computational Challenges

Traditional HPC systems based on many-core and multicore architectures face tough challenges, in part from the complex algorithmic kernel. The more realistic 3D model built on the Euler equations brings more complex algorithms and data communication patterns, and thus puts great pressure on memory bandwidth. The neighboring points of a 25-point stencil are stored in different rows, columns, or slices, and can be far apart in the memory space. Accessing all 25 points therefore incurs many cache misses.

Although complex atmospheric stencils have been widely studied on traditional HPC platforms, the achieved efficiency is relatively low (generally less than 10 percent). Therefore, novel architectures and revolutionary optimization techniques are in great demand to achieve better efficiency.

Existing Efforts

With increased computing power in the form of many-core processors and accelerators (for example, the Sunway many-core processor, GPU, MIC, and so on), we see a lot of effort at porting and refactoring atmospheric models on the most powerful supercomputers to achieve ultra-high scalability and resolution. A recent work [5] that won the 2016 ACM Gordon Bell Prize scales a fully implicit Euler solver to over 10.5 million cores of the Sunway TaihuLight supercomputer [6] and sustains a double-precision performance of 7.95 Pflops with 488-m horizontal resolution. Although these many-core architectures demonstrate potential for improving the performance of existing models, we still see a gap of several orders of magnitude to advance to the next level of science.

On the other hand, preliminary work through FPGAs has achieved promising results in recent years. […] over CPUs and GPUs [11]. Koji Okina and his colleagues selected two essential 3D stencil kernels from the heat conduction and electromagnetic fields to evaluate their performance on stream-oriented FPGAs [12]. Comparatively, the 3D Euler stencil in this work is much more challenging to solve, because more than 2,100 double-precision floating-point operations must be performed.

Dataflow-Oriented Euler Designs

This section focuses on dataflow designs, including the essential optimizations to boost the overall performance.

Design Overview

Efficiently mapping a given algorithm into a …
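Returning to the Euler algorithm of Figure 2, the sketch below is a schematic CPU reference for one sweep, not the authors' optimized kernel. The 25-point offset set (Figure 1c) and the roughly 2,100 per-element floating-point operations are collapsed into hypothetical placeholders; only the equation of state (Equation 3) is written out, using the constants quoted in the text.

```c
/* Schematic CPU sketch of one Euler stencil sweep (Figure 2); placeholders
   stand in for the halo exchange and the full 25-point update.            */
#include <math.h>
#include <stddef.h>

#define P00   101325.0  /* ground-level pressure: 1013.25 hPa, expressed in Pa */
#define R_DRY 287.04    /* gas constant for dry air, J/(kg*K)                  */
#define GAMMA 1.4

/* Equation 3: p = p00 * (rho * R * theta / p00)^gamma */
double equation_of_state(double rho, double theta)
{
    return P00 * pow(rho * R_DRY * theta / P00, GAMMA);
}

#define IDX(k, j, i, Nj, Ni) (((k) * (Nj) + (j)) * (Ni) + (i))

/* placeholder: the real kernel gathers 25 neighbors and performs over 2,100
   double-precision operations per element (including pressure recovery via
   equation_of_state); here it only copies the center value                 */
static void euler_stencil_25pt(const double *q, double *q_new,
                               size_t k, size_t j, size_t i,
                               size_t Nj, size_t Ni)
{
    q_new[IDX(k, j, i, Nj, Ni)] = q[IDX(k, j, i, Nj, Ni)];
}

/* placeholder for the MPI halo exchange of Figure 1d */
static void halo_update(double *q, size_t k, size_t j, size_t i,
                        size_t Nj, size_t Ni)
{
    (void)q; (void)k; (void)j; (void)i; (void)Nj; (void)Ni;
}

static int in_halo(size_t k, size_t j, size_t i,
                   size_t Nk, size_t Nj, size_t Ni, size_t width)
{
    return k < width || j < width || i < width ||
           k >= Nk - width || j >= Nj - width || i >= Ni - width;
}

/* one sweep of the Euler algorithm, following the structure of Figure 2 */
void euler_sweep(double *q, double *q_new,
                 size_t Nk, size_t Nj, size_t Ni, size_t halo_width)
{
    for (size_t k = 0; k < Nk; k++)
        for (size_t j = 0; j < Nj; j++)
            for (size_t i = 0; i < Ni; i++) {
                if (in_halo(k, j, i, Nk, Nj, Ni, halo_width))
                    halo_update(q, k, j, i, Nj, Ni);
                euler_stencil_25pt(q, q_new, k, j, i, Nj, Ni);
            }
}
```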