Venetis University of Patras


Introduction to GPU/Parallel Computing
Ioannis E. Venetis, University of Patras
www.prace-ri.eu

Introduction to High Performance Systems

Wait, what?
▶ Aren't we here to talk about GPUs?
▶ And how to program them with CUDA?
▶ Yes, but we need to understand their place and their purpose in modern High Performance Systems
▶ This will make it clear when it is beneficial to use them

Top 500 (June 2017)

Rank | Site | System | CPU Cores | Accel. Cores | Rmax (TFlop/s) | Rpeak (TFlop/s) | Power (kW)
1 | National Supercomputing Center in Wuxi, China (NRCPC) | Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway | 10.649.600 | - | 93.014,6 | 125.435,9 | 15.371
2 | National Super Computer Center in Guangzhou, China (NUDT) | Tianhe-2 (MilkyWay-2) - TH-IVB-FEP Cluster, Intel Xeon E5-2692 12C 2.200GHz, TH Express-2, Intel Xeon Phi 31S1P | 3.120.000 | 2.736.000 | 33.862,7 | 54.902,4 | 17.808
3 | Swiss National Supercomputing Centre (CSCS), Switzerland (Cray Inc.) | Piz Daint - Cray XC50, Xeon E5-2690v3 12C 2.6GHz, Aries interconnect, NVIDIA Tesla P100 | 361.760 | 297.920 | 19.590,0 | 25.326,3 | 2.272
4 | DOE/SC/Oak Ridge National Laboratory, United States (Cray Inc.) | Titan - Cray XK7, Opteron 6274 16C 2.200GHz, Cray Gemini interconnect, NVIDIA K20x | 560.640 | 261.632 | 17.590,0 | 27.112,5 | 8.209
5 | DOE/NNSA/LLNL, United States (IBM) | Sequoia - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom | 1.572.864 | - | 17.173,2 | 20.132,7 | 7.890

How do we build an HPC system?
▶ Limitations in technology
▶ It is impossible to fit all the computational resources we require into a single chip
▶ We have to build our system hierarchically

Processor
▶ All modern processors are "multi-core"
▶ Multiple, independent processors are placed on the same chip
▶ They might also support Simultaneous Multi-Threading (SMT)
▶ Every core is capable of executing multiple instruction streams (threads)
▶ However, these threads share most of the functional units of each core
▶ 1st level of parallelism
▶ Typically 4, 8 or 16 cores

Compute card
▶ 1 or more processors are placed on a compute card
▶ Typically, a single compute card operates as a shared-memory system
▶ It usually contains 1, 2 or 4 processors

Node
▶ Multiple compute cards are placed in a node
▶ There is no shared memory among compute cards
▶ The interconnection network among compute cards can be implemented in many different ways
▶ Usually there are 1 or more additional compute cards dedicated to managing communication with the rest of the nodes

Rack
▶ Multiple nodes are placed in a rack
▶ There is no shared memory among the nodes of a rack
▶ The interconnection network among nodes can be implemented in many different ways
▶ Not necessarily in the same way that compute cards are connected within a single node

The whole system
▶ Multiple racks are connected
▶ Typically there are dedicated nodes that handle I/O

Hierarchical parallelism
▶ Example: IBM BlueGene/P

Examples of modern High Performance Systems

Sunway TaihuLight (No 1, Top 500 list, June 2017)
▶ Computing node
▶ Basic element of the architecture
▶ 256 computing nodes create a super node
▶ Super nodes are connected through the central switch network

Sources of images:
• The Sunway TaihuLight supercomputer: system and applications. Fu, H., Liao, J., Yang, J. et al. Sci. China Inf. Sci. (2016) 59: 072001. doi:10.1007/s11432-016-5588-7
• Report on the Sunway TaihuLight System. Dongarra, J., Tech Report UT-EECS-16-742, June 2016.

Processor
▶ SW26010
▶ One of the few systems that rely on a custom-made processor
▶ Designed by the Shanghai High Performance IC Design Center
▶ Characteristic example of a heterogeneous many-core processor
▶ Composed of 2 different types of cores

▶ Contains 4 Core Groups (CGs)
▶ Connected through a Network on Chip (NoC)
▶ Each CG is composed of:
▶ 1 Management Processing Element (MPE)
▶ 64 Computing Processing Elements (CPEs)
▶ Placed on an 8x8 grid

▶ Each CG has a distinct address space
▶ Connected to the MPE and the CPEs through a Memory Controller (MC)
▶ Each processor connects to the rest of the system through the System Interface (SI)

The two types of cores
▶ Management Processing Element (MPE)
▶ Complete 64-bit RISC core
▶ Executes instructions in user and system modes, handles interrupts, memory management, superscalar, out-of-order execution, …
▶ Performs all management and communication tasks
▶ Computing Processing Element (CPE)
▶ Reduced-capability 64-bit RISC core
▶ Executes instructions only in user mode, does not handle interrupts, …
▶ Objectives of the design: maximum overall performance, reduced design complexity
▶ Placed on an 8x8 grid
▶ Allows for fast exchange of data directly between registers

Compute card
▶ 2 processors
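The core count and the per-chip peak performance quoted in these slides can be cross-checked with a few lines of arithmetic. A minimal sketch in Python; the per-cycle double-precision FLOP rates of the MPE and CPE (16 and 8, respectively) are assumptions taken from the Dongarra report cited above, not stated in the slides themselves:

```python
# Reconstruct the SW26010's core count and peak performance from the
# figures in the slides.
GHZ = 1.45                      # clock frequency in GHz
CGS = 4                         # Core Groups per chip
MPE_PER_CG, CPE_PER_CG = 1, 64  # one management core, an 8x8 grid of CPEs

cores_per_chip = CGS * (MPE_PER_CG + CPE_PER_CG)
print(cores_per_chip)           # 260 -> the "260C" in "SW26010 260C"

# Double-precision FLOPs per cycle (assumed values, from Dongarra's report)
MPE_FLOPS, CPE_FLOPS = 16, 8
flops_per_cycle = CGS * (MPE_PER_CG * MPE_FLOPS + CPE_PER_CG * CPE_FLOPS)
peak_tflops = flops_per_cycle * GHZ / 1000
print(round(peak_tflops, 2))    # 3.06 TFlops per chip, as in the overview
```

With 40,960 chips in the full machine (the 40 cabinets described below), this per-chip figure scales to the 125,436 PFlops Rpeak and 10.649.600 cores in the Top 500 table.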
Node
▶ 4 compute cards
▶ 2 on each side

Supernode
▶ 32 nodes (256 processors)

Cabinet
▶ 4 supernodes (1024 processors)

Sunway TaihuLight
▶ 40 cabinets

Overview
Cores: 10.649.600
Peak performance: 125,436 PFlops
Linpack performance: 93,015 PFlops
CPU frequency: 1,45 GHz
Peak performance of a CPU: 3,06 TFlops
Total memory: 1310,72 TB
Total memory bandwidth: 5591,5 TB/s
Network link bandwidth: 16 GB/s
Network bisection bandwidth: 70 TB/s
Network diameter: 7
Total storage: 20 PB
Total I/O bandwidth: 288 GB/s
Power consumption when running the Linpack test: 15,371 MW
Performance/power ratio: 6,05 GFlops/W

Tianhe-2 (No 2, Top 500 list, June 2017)
▶ In contrast to Sunway TaihuLight, it uses typical commercial processors
▶ Intel Xeon E5-2692: 12 cores, 2.2 GHz
▶ To achieve high performance it uses coprocessors
▶ Intel Xeon Phi 31S1P: 57 cores, 4-way SMT, 1.1 GHz
▶ PCI-E 2.0 interconnect with the host system

Compute card
▶ Contains 2 processors and 3 Xeon Phi

Node
▶ Contains 2 compute cards
▶ Special interconnection

Frame
▶ 16 nodes

Rack
▶ 4 frames

Tianhe-2
▶ 125 racks

Overview
Cores: 3.120.000
Peak performance: 54,902 PFlops
Linpack performance: 33,863 PFlops
CPU frequency: 2,2 GHz / 1,1 GHz
Total memory: 1.404 TB
Total storage: 12,4 PB
Total I/O bandwidth: 100 GB/s
Power consumption when running Linpack:
17,808 MW
Performance/power ratio: 1,9 GFlops/W

Titan (No 4, Top 500 list, June 2017)
▶ Also consists of typical commercial processors
▶ AMD Opteron 6274: 16 cores, 2.2 GHz
▶ To achieve high performance it uses coprocessors
▶ NVIDIA K20x: 2688 cores, 732 MHz
▶ PCI-E 2.0 interconnect with the host system

Compute card / Node
▶ Contains 1 processor + 1 GPU
▶ 2 nodes share the router of the interconnection network

Blade / Cabinet
▶ Each blade contains 4 nodes
▶ Each cabinet contains 24 blades

Titan
▶ 200 cabinets

Overview
Cores: 560.640
Peak performance: 27,113 PFlops
Linpack performance: 17,590 PFlops
CPU frequency: 2,2 GHz / 2,2 GHz
Total memory: 710 TB
Total storage: 40 PB
Total I/O bandwidth: 1,4 TB/s
Power consumption when running Linpack: 8,209 MW
Performance/power ratio: 2,1 GFlops/W

Comparison

 | Sunway TaihuLight | Tianhe-2 | Titan
Cores | 10.649.600 | 3.120.000 | 560.640
Peak performance | 125,436 PFlops | 54,902 PFlops | 27,113 PFlops
Linpack performance | 93,015 PFlops | 33,863 PFlops | 17,590 PFlops
CPU frequency | 1,45 GHz | 2,2 GHz / 1,1 GHz | 2,2 GHz / 2,2 GHz
Total memory | 1310,72 TB | 1.404 TB | 710 TB
Total storage | 20 PB | 12,4 PB | 40 PB
Total I/O bandwidth | 288 GB/s | 100 GB/s | 1,4 TB/s
Power consumption for Linpack | 15,371 MW | 17,808 MW | 8,209 MW
Performance/power ratio | 6,05 GFlops/W | 1,9 GFlops/W | 2,1 GFlops/W

Power consumption
▶ Average daily power consumption per household: 11 KWh
▶ http://www.cres.gr/pepesec/apotelesmata.html
▶ Small study, but it gives a picture
▶ Tianhe-2: 17.808 KW * 24 hours = 427.392 KWh
▶ Consumes as much as 38.854 households per day!
▶ If on average 3 people live in a household:
▶ 38.854 * 3 = 116.562 people
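The household comparison above is simple arithmetic and can be reproduced step by step; the 11 kWh/household/day figure is the slides' own, taken from the cited CRES study:

```python
# Reproduce the Tianhe-2 power-consumption comparison from the slides.
tianhe2_kw = 17808          # power draw while running Linpack, in kW
household_kwh_per_day = 11  # average daily consumption per household (CRES study)

daily_kwh = tianhe2_kw * 24
print(daily_kwh)            # 427392 kWh consumed per day

households = round(daily_kwh / household_kwh_per_day)
print(households)           # 38854 households' worth of electricity
print(households * 3)       # 116562 people, at 3 people per household
```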