PDF Download

Total Pages: 16

File Type: PDF, Size: 1020 KB

E-Infrastructures H2020-EINFRA-2014-2015
EINFRA-4-2014: Pan-European High Performance Computing Infrastructure and Services

PRACE-4IP: PRACE Fourth Implementation Phase Project
Grant Agreement Number: EINFRA-653838

D5.2 Market and Technology Watch Report Year 2. Final summary of results gathered
Version: 1.0 (Final)
Author(s): Ioannis Liabotis, GRNET
Date: 21.04.2017

Project and Deliverable Information Sheet

Project Ref. №: EINFRA-653838
Project Title: PRACE Fourth Implementation Phase Project
Project Web Site: http://www.prace-project.eu
Deliverable ID: D5.2
Deliverable Nature: Report
Dissemination Level: PU *
Contractual Date of Delivery: 30/April/2017
Actual Date of Delivery: 30/April/2017
EC Project Officer: Leonardo Flores Añover

* The dissemination levels are indicated as follows: PU – Public; CO – Confidential, only for members of the consortium (including the Commission Services); CL – Classified, as referred to in Commission Decision 2001/844/EC.

Document Control Sheet

Title: Market and Technology Watch Report Year 2. Final summary of results gathered
ID: D5.2
Version: 1.0
Status: Final
Available at: http://www.prace-project.eu
Software Tool: Microsoft Word 2010
File(s): D5.2.docx
Written by: Ioannis Liabotis, GRNET

Authorship

Contributors: Felip Moll, BSC; Oscar Yerpes, BSC; Francois Robin, CEA; Jean-Philippe Nominé, CEA; Guillaume Colin de Verdiere, CEA; Carlo Cavazzoni, CINECA; Bertrand Cirou, CINES; Samuli Saarinen, CSC; Susanna Salminen, CSC; Dirk Pleiter, FZJ; Eric Boyer, GENCI; Philippe Segers, GENCI; Ioannis Liabotis, GRNET; Dimitrios Dellis, GRNET; Branislav Jansik, IT4I-VSB; Filip Stanek, IT4I-VSB; Gert Svensson, KTH; Andreas Johansson, LiU; Torsten Wilde, LRZ; Radek Januszewski, PSNC; Norbert Meyer, PSNC; Huub Stoffers, SURFsara; Walter Lioen, SURFsara; Damian Podareanu, SURFsara

Reviewed by: Thomas Bönisch, HLRS; Florian Berberich, FZJ
Approved by: MB/TB

Document Status Sheet

• Version 0.1, 15/February/2017, Draft: Initial draft with TOC and 1st set of contributions
• Version 0.2, 03/March/2017, Draft: 1st almost complete draft with all contributions received before 02 March included
• Version 0.3, 05/March/2017, Draft: Added CRAY contribution; fixed references, acronyms, etc.
• Version 0.3, 07/March/2017, Draft: Added new section in Heterogeneous systems
• Version 0.4, 08/March/2017, Draft: Minor corrections
• Version 0.5, 14/March/2017, Draft: Added new section on HPC and Cloud; corrections to glossary and conclusions
• Version 0.6, 19/March/2017, Draft: Incorporated extra comments and corrections from WP5 partners
• Version 1.0, 21/April/2017, Final: Addressed internal review comments

Document Keywords

Keywords: PRACE, HPC, Research Infrastructure, Market Survey, Technology Watch

Disclaimer

This deliverable has been prepared by the responsible Work Package of the Project in accordance with the Consortium Agreement and the Grant Agreement n° EINFRA-653838. It solely reflects the opinion of the parties to such agreements on a collective basis in the context of the Project and to the extent foreseen in such agreements.
Please note that even though all participants to the Project are members of PRACE AISBL, this deliverable has not been approved by the Council of PRACE AISBL and therefore does not emanate from it, nor should it be considered to reflect PRACE AISBL's individual opinion.

Copyright notices

© 2017 PRACE Consortium Partners. All rights reserved. This document is a project document of the PRACE project. All contents are reserved by default and may not be disclosed to third parties without the written consent of the PRACE partners, except as mandated by the European Commission contract EINFRA-653838 for reviewing and dissemination purposes. All trademarks and other rights on third party products mentioned in this document are acknowledged as owned by the respective holders.

Table of Contents

Project and Deliverable Information Sheet
Document Control Sheet
Document Status Sheet
Document Keywords
Table of Contents
List of Figures
List of Tables
References and Applicable Documents
List of Acronyms and Abbreviations
List of Project Partner Acronyms
Executive Summary
1 Introduction
2 Worldwide HPC landscape and market overview
  2.1 A quick snapshot of HPC worldwide
    2.1.1 TOP 500
  2.2 Exascale plans China, Japan, USA and Europe
    2.2.1 Exascale plans China
    2.2.2 Exascale plans Japan
    2.2.3 Exascale plans USA
    2.2.4 Exascale plans Europe
  2.3 Business analysis
3 Core technology and components
  3.1 Processors
    3.1.1 AMD
    3.1.2 ARM
    3.1.3 IBM
    3.1.4 Intel
  3.2 Highly parallel components/compute engines
    3.2.1 NVIDIA Tesla P100
    3.2.2 Intel Knights Landing
    3.2.3 SW26010
  3.3 Memory and storage technologies
    3.3.1 DRAM
    3.3.2 Non-volatile Memory
    3.3.3 Tapes
  3.4 Interconnect
Recommended publications
  • Towards Exascale Computing
    Towards Exascale Computing: The ECOSCALE Approach
    Dirk Koch, The University of Manchester, UK ([email protected])

    Motivation: let's build a 1,000,000,000,000,000,000 FLOPS computer (exascale computing: 10^18 FLOPS = one quintillion, or a billion billion, floating-point calculations per second). For contrast: 1975, MOS 6502 (Commodore 64, BBC Micro).

    Sunway TaihuLight supercomputer:
    • Fully operational in 2016
    • 125,436,000,000,000,000 FLOPS (125.436 petaFLOPS)
    • Architecture: Sunway SW26010 260C (Digital Alpha clone), 1.45 GHz, 10,649,600 cores
    • Power: "The cooling system for TaihuLight uses a closed-coupled chilled water outfit suited for 28 MW with a custom liquid cooling unit" (https://www.nextplatform.com/2016/06/20/look-inside-chinas-chart-topping-new-supercomputer/)
    • Cost: ~US$270 million

    TOP500 performance development: we need more than the performance of all TOP500 machines together!

    TaihuLight for exascale computing? We would need 8x the world's fastest supercomputer:
    • Architecture: Sunway SW26010 260C (Digital Alpha clone) @ 1.45 GHz: more than 85M cores
    • Power: 224 MW (including cooling), costing ~US$40K/hour or US$340M/year; generated from coal, that is 2,302,195 tons of CO2 per year
    • Cost: US$2.16 billion

    We have to get at least 10x better in energy efficiency and 2-3x better in cost. Also needed: scalable programming models.

    Alternative: the Green500. Shoubu supercomputer (#1 on the Green500 in 2015):
    • Cores: 1,181,952
    • Theoretical peak: 1,535.83 TFLOP/s
    • Memory: 82 TB
    • Processor: Xeon E5-2618Lv3 8C 2.3 GHz
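    The 8x and 224 MW figures follow from the numbers above; a quick check (our arithmetic, not in the original slides):

    \[
    \frac{10^{18}\ \mathrm{FLOPS}}{125.436 \times 10^{15}\ \mathrm{FLOPS}} \approx 7.97 \approx 8,
    \qquad
    8 \times 28\ \mathrm{MW} = 224\ \mathrm{MW}.
    \]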
  • Computational Physics, Shuai Dong
    Computational Physics, Shuai Dong. ("Evolution: Is this our final end-result?")

    Outline:
    • Brief history of computers
    • Supercomputers
    • Brief introduction of computational science
    • Some basic concepts, tools, examples

    Birth of computational science (physics): the first electronic general-purpose computer, ENIAC (Electronic Numerical Integrator And Computer), was constructed in the Moore School of Electrical Engineering, University of Pennsylvania, in 1946.

    ENIAC:
    • Design and construction were financed by the United States Army.
    • Designed to calculate artillery firing tables for the United States Army's Ballistic Research Laboratory.
    • It was heralded in the press as a "Giant Brain".
    • Its speed was one thousand times that of electro-mechanical machines.
    • ENIAC was named an IEEE Milestone in 1987.

    The "Giant Brain" contained 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints. It weighed more than 27 tons, took up 167 m², and consumed 150 kW of power. This led to the rumor that whenever the computer was switched on, lights in Philadelphia dimmed. Input was from an IBM card reader, and an IBM card punch was used for output.

    Development of micro-computers:
    • 1981: IBM PC 5150 (CPU: 8088 at 5 MHz; floppy disk or cassette) vs. the modern PC (CPU: Intel i3/i5/i7 at ~3 GHz; solid state disk)
    • 1984: Macintosh (Steve Jobs) vs. the modern iMac

    Supercomputers: the CDC (Control Data Corporation) 6600, released in 1964, is generally considered the first supercomputer. Seymour Roger Cray (1925-1996), the father of supercomputing, created the supercomputer industry (Cray Inc.).
  • CS 110 Computer Architecture Lecture 5: Intro to Assembly Language, MIPS Intro
    CS 110 Computer Architecture, Lecture 5: Intro to Assembly Language, MIPS Intro
    Instructor: Sören Schwertfeger, http://shtech.org/courses/ca/
    School of Information Science and Technology (SIST), ShanghaiTech University. Slides based on UC Berkeley's CS61C.

    Using memory you don't own: what's wrong with this code?

        char *append(const char* s1, const char *s2) {
            const int MAXSIZE = 128;
            char result[128];   /* result is a local array name – stack memory allocated */
            int i = 0, j = 0;
            for (j = 0; i < MAXSIZE - 1 && j < strlen(s1); i++, j++) {
                result[i] = s1[j];
            }
            for (j = 0; i < MAXSIZE - 1 && j < strlen(s2); i++, j++) {
                result[i] = s2[j];
            }
            result[++i] = '\0';
            return result;      /* returns a pointer to stack memory – won't be valid after the function returns */
        }

    The problem goes beyond a stack read/write: result is a local array, so its storage is allocated on the stack, and the function returns a pointer to that stack memory, which is no longer valid once the function returns.

    Managing the heap with realloc(p, size):
    • Resizes a previously allocated block at p to a new size.
    • If p is NULL, then realloc behaves like malloc.
    • If size is 0, then realloc behaves like free, deallocating the block from the heap.
    • Returns the new address of the memory block; NOTE: it is likely to have moved!

    Example: allocate an array of 10 elements, expand to 20 elements later:

        int *ip;
        ip = (int *) malloc(10 * sizeof(int));
        /* always check for ip == NULL */
        ...
        ip = (int *) realloc(ip, 20 * sizeof(int));
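    One standard fix for the append bug above is to allocate the result on the heap so it outlives the call. A minimal sketch (append_heap and its caller-frees contract are our illustration, not from the slides):

        #include <stdlib.h>
        #include <string.h>

        /* Concatenate s1 and s2 into a heap-allocated buffer.
           The buffer remains valid after the function returns;
           the caller is responsible for free()ing it. */
        char *append_heap(const char *s1, const char *s2) {
            size_t n1 = strlen(s1), n2 = strlen(s2);
            char *result = malloc(n1 + n2 + 1);   /* +1 for '\0' */
            if (result == NULL) {
                return NULL;                      /* allocation can fail */
            }
            memcpy(result, s1, n1);
            memcpy(result + n1, s2, n2);
            result[n1 + n2] = '\0';
            return result;
        }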
  • FCMSSR Meeting 2018-01 All Slides
    Federal Committee for Meteorological Services and Supporting Research (FCMSSR)
    Dr. Neil Jacobs, Assistant Secretary for Environmental Observation and Prediction and FCMSSR Chair. April 30, 2018.
    Office of the Federal Coordinator for Meteorology Services and Supporting Research

    Agenda:
    • 2:30 – Opening Remarks (Dr. Neil Jacobs, NOAA)
    • 2:40 – Action Item Review (Dr. Bill Schulz, OFCM)
    • 2:45 – Federal Coordinator's Update (OFCM)
    • 3:00 – Implementing Section 402 of the Weather Research and Forecasting Innovation Act of 2017 (OFCM)
    • 3:20 – Federal Meteorological Services and Supporting Research Strategic Plan and Annual Report (OFCM)
    • 3:30 – Qualification Standards for Civilian Meteorologists (Mr. Ralph Stoffler, USAF A3-W)
    • 3:50 – National Earth System Prediction Capability (ESPC) High Performance Computing Summary (ESPC Staff)
    • 4:10 – Open Discussion (All)
    • 4:20 – Wrap-Up (Dr. Neil Jacobs, NOAA)

    FCMSSR action items:
    • AI 2017-2.1 (office responsible: OFCM; status: working; due 04/30/18): Reconvene JAG/ICAWS to develop options to broaden FCMSSR chairmanship beyond the Undersecretary of Commerce for Oceans and Atmosphere. Draft a modified FCMSSR charter to include ICAWS duties as outlined in Section 402 of the Weather Research and Forecasting Innovation Act of 2017 and secure ICMSSR concurrence. Comments: JAG/ICAWS convened; options presented to the agencies and ICMSSR, then FCMSSR, with a revised draft charter; draft charter reviewed by ICMSSR; pending FCMSSR and OSTP approval to finalize the charter for signature. Recommended new due date: 30 June 2018.
    • AI 2017-2.2 (office responsible: OFCM; status: closed; due 11/03/17): Publish the Strategic Plan for Federal Weather Coordination as presented during the 24 October 2017 FCMSSR meeting. Comment (1/12/18): plan published on the OFCM website.
  • Introduction to GPU/Parallel Computing
    Introduction to GPU/Parallel Computing
    Ioannis E. Venetis, University of Patras (www.prace-ri.eu)

    Introduction to high performance systems. Wait, what? Aren't we here to talk about GPUs, and how to program them with CUDA? Yes, but we first need to understand their place and their purpose in modern high performance systems; this makes it clear when it is beneficial to use them.

    TOP500 (June 2017), top five systems:
    • 1. Sunway TaihuLight, National Supercomputing Center in Wuxi, China (Sunway MPP, Sunway SW26010 260C 1.45 GHz; NRCPC). CPU cores: 10,649,600; accelerator cores: none; Rmax: 93,014.6 TFlop/s; Rpeak: 125,435.9 TFlop/s; power: 15,371 kW.
    • 2. Tianhe-2 (MilkyWay-2), National Super Computer Center in Guangzhou, China (TH-IVB-FEP Cluster, Intel Xeon E5-2692 12C 2.200 GHz, TH Express-2, Intel Xeon Phi 31S1P; NUDT). CPU cores: 3,120,000; accelerator cores: 2,736,000; Rmax: 33,862.7 TFlop/s; Rpeak: 54,902.4 TFlop/s; power: 17,808 kW.
    • 3. Piz Daint, Swiss National Supercomputing Centre (CSCS) (Cray XC50, Xeon E5-2690v3 12C 2.6 GHz, Aries interconnect, NVIDIA Tesla P100; Cray Inc.). CPU cores: 361,760; accelerator cores: 297,920; Rmax: 19,590.0 TFlop/s; Rpeak: 25,326.3 TFlop/s; power: 2,272 kW.
    • 4. Titan, DOE/SC/Oak Ridge National Laboratory, United States (Cray XK7, Opteron 6274 16C 2.200 GHz, Cray Gemini interconnect, NVIDIA K20x; Cray Inc.). CPU cores: 560,640; accelerator cores: 261,632; Rmax: 17,590.0 TFlop/s; Rpeak: 27,112.5 TFlop/s; power: 8,209 kW.
    • 5. Sequoia, DOE/NNSA/LLNL, United States (BlueGene/Q, Power BQC 16C 1.60 GHz, custom interconnect; IBM). CPU cores: 1,572,864; accelerator cores: none; Rmax: 17,173.2 TFlop/s; Rpeak: 20,132.7 TFlop/s; power: 7,890 kW.
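    A useful derived figure, computed from the Rmax and Rpeak columns above (our arithmetic, not in the original slides), is LINPACK efficiency:

    \[
    \mathrm{efficiency} = \frac{R_{\max}}{R_{\mathrm{peak}}}, \qquad
    \text{TaihuLight: } \frac{93{,}014.6}{125{,}435.9} \approx 74\%, \qquad
    \text{Tianhe-2: } \frac{33{,}862.7}{54{,}902.4} \approx 62\%.
    \]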
  • It's a Multi-Core World
    It's a Multicore World
    John Urbanic, Pittsburgh Supercomputing Center, Parallel Computing Scientist

    Moore's Law abandoned serial programming around 2004 (courtesy Liberty Computer Architecture Research Group), but Moore's Law itself is not to blame: Intel process technology kept scaling. High-volume manufacturing by year (feature size; integration capacity in billions of transistors):
    • 2004: 90 nm; 2
    • 2006: 65 nm; 4
    • 2008: 45 nm; 8
    • 2010: 32 nm; 16
    • 2012: 22 nm; 32
    • 2014: 16 nm; 64
    • 2016: 11 nm; 128
    • 2018: 8 nm; 256

    [Figure: a transistor on the 90 nm process (~50 nm) next to an influenza virus; sources: CDC, Intel]

    At the end of the day, we keep using all those new transistors. But that power and clock inflection point in 2004 didn't get better. Fun fact: at 100+ watts and <1 V, currents are beginning to exceed 100 A at the point of load! (Source: Kogge and Shalf, IEEE CISE; courtesy Horst Simon, LBNL.)

    Not a new problem, just a new scale... [Figure: CPU power (W) over time; Cray-2 with cooling tower in foreground, circa 1985]

    How to get more performance from more transistors with the same power? Rule of thumb: a 15% reduction in voltage yields a 15% frequency reduction, a 45% power reduction, and a 10% performance reduction. Applied to cores:
    • Single core: area = 1, voltage = 1, freq = 1, power = 1, perf = 1
    • Dual core: area = 2, voltage = 0.85, freq = 0.85, power = 1, perf = ~1.8

    Single-socket parallelism (processor; year; vector ISA; vector bits; SP FLOPs/core/cycle; cores; FLOPs/cycle):
    • Pentium III; 1999; SSE; 128; 3; 1; 3
    • Pentium IV; 2001; SSE2; 128; 4; 1; 4
    • Core; 2006; SSE3; 128; 8; 2; 16
    • Nehalem; 2008; SSE4; 128; 8; 10; 80
    • Sandybridge; 2011; AVX; 256; 16; 12; 192
    • Haswell; 2013; AVX2; 256; 32; 18; 576
    • KNC; 2012; AVX512; 512; 32; 64; 2048
    • KNL; 2016; AVX512; 512; 64; 72; 4608
    • Skylake; 2017; AVX512; 512; 96; 28; 2688

    Putting it all together, the prototypical application is a serial weather model: one CPU attached to one memory. The first parallel weather modeling algorithm: Richardson in 1917 (courtesy John Burkhardt, Virginia Tech). A weather model with shared memory (OpenMP) instead runs on several cores sharing one memory: "four meteorologists in the same room sharing the map."

    Fortran:

        !$omp parallel do
        do i = 1, n
            a(i) = b(i) + c(i)
        enddo

    C/C++:

        #pragma omp parallel for
        for (i = 1; i <= n; i++) {
            a[i] = b[i] + c[i];
        }
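    For readers who want to run the C/C++ fragment above, here is a minimal self-contained version (our sketch; the array size and fill values are arbitrary). Compile with, e.g., cc -fopenmp vecadd.c:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            const int n = 1000000;
            double *a = malloc(n * sizeof(double));
            double *b = malloc(n * sizeof(double));
            double *c = malloc(n * sizeof(double));
            if (!a || !b || !c) return 1;

            /* fill the inputs with sample values */
            for (int i = 0; i < n; i++) { b[i] = i; c[i] = 2.0 * i; }

            /* The iteration space is split across threads; a, b and c
               are shared, and each thread gets a private copy of i. */
            #pragma omp parallel for
            for (int i = 0; i < n; i++) {
                a[i] = b[i] + c[i];
            }

            printf("a[n-1] = %f\n", a[n - 1]);
            free(a); free(b); free(c);
            return 0;
        }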
  • An Inter-Cluster Communication Facility for Lightweight Manycore Processors in the Nanvix OS (João Vicente Souto, undergraduate thesis)
    Universidade Federal de Santa Catarina (Federal University of Santa Catarina), Technology Center, Department of Informatics and Statistics, Computer Science.

    João Vicente Souto. An Inter-Cluster Communication Facility for Lightweight Manycore Processors in the Nanvix OS. Florianópolis, 6 December 2019.

    Undergraduate thesis (Trabalho de Conclusão de Curso) submitted to the Computer Science degree program of the Technology Center of the Universidade Federal de Santa Catarina as a requirement for the title of Bachelor in Computer Science. Advisor: Prof. Márcio Bastos Castro, Dr. Co-advisor: Pedro Henrique Penna, Me.

    Catalog record (prepared by the author through the UFSC University Library's automatic generation program): Souto, João Vicente. An Inter-Cluster Communication Facility for Lightweight Manycore Processors in the Nanvix OS / João Vicente Souto; advisor Márcio Bastos Castro; co-advisor Pedro Henrique Penna. 2019. 92 p. Undergraduate thesis, Universidade Federal de Santa Catarina, Technology Center, Computer Science program, Florianópolis, 2019. Includes references. Subjects: 1. Computer Science. 2. Distributed Operating System. 3. Hardware Abstraction Layer. 4. Lightweight Manycore Processor. 5. Kalray MPPA-256.

    This undergraduate thesis was judged adequate for obtaining the title of Bachelor in Computer Science and was approved in its final form by the undergraduate program in Computer Science.
  • Parallel Processing with the MPPA Manycore Processor
    Parallel Processing with the MPPA Manycore Processor
    Kalray MPPA® Massively Parallel Processor Array
    Benoît Dupont de Dinechin, CTO. 14 November 2018.

    Outline: presentation; manycore processors; manycore programming; symmetric parallel models; untimed dataflow models; Kalray MPPA® hardware; Kalray MPPA® software; model-based programming; deep learning inference; conclusions.

    Kalray in a nutshell: we design processors at the heart of new intelligent systems. ~80 people (~70 engineers) across 4 offices: Grenoble and Sophia (France), Silicon Valley (Los Altos, USA), and Yokohama (Japan). A unique technology, the result of 10 years of development. Financial and industrial shareholders, including Pengpai.

    Kalray, pioneer of manycore processors:
    • #1 Scalable computing power
    • #2 Data processing in real time
    • #3 Completion of dozens of critical tasks in parallel
    • #4 Low power consumption
    • #5 Programmable / open system
    • #6 Security & safety

    Outsourced production (a fabless business model): partnership with the world leader in processor manufacturing. Production is sub-contracted under a signed framework agreement with GUC, a subsidiary of TSMC (world top-3 in semiconductor manufacturing): limited investment, no expansion costs, and production on the basis of purchase orders.

    Intelligent data center, key competitive advantages: first "NVMe-oF all-in-one" certified solution*; 8x more powerful than the latest products announced by our competitors**.
  • Optimizing High-Resolution Community Earth System Model on a Heterogeneous Many-Core Supercomputing Platform (CESM-HR_sw1.0)
    Optimizing High-Resolution Community Earth System Model on a Heterogeneous Many-Core Supercomputing Platform (CESM-HR_sw1.0)

    https://doi.org/10.5194/gmd-2020-18. Preprint, discussion started 21 February 2020. © Author(s) 2020, CC BY 4.0 License.

    Shaoqing Zhang (1,4,5), Haohuan Fu* (2,3,1), Lixin Wu* (4,5), Yuxuan Li (6), Hong Wang (1,4,5), Yunhui Zeng (7), Xiaohui Duan (3,8), Wubing Wan (3), Li Wang (7), Yuan Zhuang (7), Hongsong Meng (3), Kai Xu (3,8), Ping Xu (3,6), Lin Gan (3,6), Zhao Liu (3,6), Sihai Wu (3), Yuhu Chen (9), Haining Yu (3), Shupeng Shi (3), Lanning Wang (3,10), Shiming Xu (2), Wei Xue (3,6), Weiguo Liu (3,8), Qiang Guo (7), Jie Zhang (7), Guanghui Zhu (7), Yang Tu (7), Jim Edwards (1,11), Allison Baker (1,11), Jianlin Yong (5), Man Yuan (5), Yangyang Yu (5), Qiuying Zhang (1,12), Zedong Liu (9), Mingkui Li (1,4,5), Dongning Jia (9), Guangwen Yang (1,3,6), Zhiqiang Wei (9), Jingshan Pan (7), Ping Chang (1,12), Gokhan Danabasoglu (1,11), Stephen Yeager (1,11), Nan Rosenbloom (1,11), and Ying Guo (7)

    Affiliations:
    1. International Laboratory for High-Resolution Earth System Model and Prediction (iHESP), Qingdao, China
    2. Ministry of Education Key Lab. for Earth System Modeling, and Department of Earth System Science, Tsinghua University, Beijing, China
    3. National Supercomputing Center in Wuxi, Wuxi, China
    4. Laboratory for Ocean Dynamics and Climate, Qingdao Pilot National Laboratory for Marine Science and Technology, Qingdao, China
    5. Key Laboratory of Physical Oceanography, the College of Oceanic and Atmospheric Sciences & Institute for Advanced Ocean Study, Ocean University of China, Qingdao, China
    6. Department of Computer Science & Technology, Tsinghua
  • A Preliminary Port and Evaluation of the Uintah AMT Runtime on Sunway TaihuLight
    2018 IEEE International Parallel and Distributed Processing Symposium Workshops

    A Preliminary Port and Evaluation of the Uintah AMT Runtime on Sunway TaihuLight

    Zhang Yang (Institute of Applied Physics and Computational Mathematics; [email protected]), Damodar Sahasrabudhe (Scientific Computing and Imaging Institute; [email protected]), Alan Humphrey (Scientific Computing and Imaging Institute; [email protected]), Martin Berzins (Scientific Computing and Imaging Institute; [email protected])

    Abstract: The Sunway TaihuLight is the world's fastest supercomputer at the present time, with a low power consumption per flop and a unique set of architectural features. Application performance depends heavily on being able to adapt codes to make best use of these features. Porting large codes to novel architectures such as Sunway is both time-consuming and expensive, as modifications throughout the code may be needed. One alternative to conventional porting is to consider an approach based upon Asynchronous Many Task (AMT) runtimes, such as the Uintah framework considered here. Uintah structures the problem as a series of tasks that are executed by the runtime via a task scheduler.

    The processor is organized into core groups (CGs): each CG is made up of one Management Processing Element (MPE) and 64 Computing Processing Elements (CPEs) sharing the same main memory, as described below. Each CPE is equipped with a small user-controlled scratchpad memory instead of data caches. This architecture has made it possible to run many real-world applications at substantial fractions of peak performance, such as the three applications selected as Gordon Bell finalists at SC16 [2]–[4]. However, these performance levels were obtained through extensive
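    To make the AMT idea concrete: the runtime maintains a pool of tasks and a scheduler that dispatches them. The sketch below is our own single-threaded illustration of that task-queue pattern in C; it is not Uintah's API, and a real AMT runtime would dispatch tasks to many workers and track dependencies between them.

        #include <stdio.h>

        /* A task is a unit of work: a function plus its argument. */
        typedef struct Task {
            void (*run)(void *arg);
            void *arg;
            struct Task *next;    /* simple linked queue */
        } Task;

        static Task *head = NULL, *tail = NULL;

        static void submit(Task *t) {   /* enqueue a task */
            t->next = NULL;
            if (tail) tail->next = t; else head = t;
            tail = t;
        }

        static void run_all(void) {     /* the "scheduler": drain the queue */
            while (head) {
                Task *t = head;
                head = t->next;
                if (!head) tail = NULL;
                t->run(t->arg);
            }
        }

        static void hello(void *arg) { printf("task %d\n", *(int *)arg); }

        int main(void) {
            int ids[3] = {0, 1, 2};
            Task tasks[3];
            for (int i = 0; i < 3; i++) {
                tasks[i].run = hello;
                tasks[i].arg = &ids[i];
                submit(&tasks[i]);
            }
            run_all();
            return 0;
        }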
  • Power and Energy Characterization of an Open Source 25-Core Manycore Processor
    Power and Energy Characterization of an Open Source 25-core Manycore Processor

    Michael McKeown, Alexey Lavrov, Mohammad Shahrad, Paul J. Jackson, Yaosheng Fu (now at NVIDIA), Jonathan Balkind, Tri M. Nguyen, Katie Lim, Yanqi Zhou (now at Baidu), David Wentzlaff. Princeton University. {mmckeown, alavrov, mshahrad, pjj, yfu, jbalkind, trin, kml4, yanqiz, wentzlaf}@princeton.edu

    Abstract: The end of Dennard scaling and the looming power wall have made power and energy primary design goals for modern processors. Further, new applications such as cloud computing and the Internet of Things (IoT) continue to necessitate increased performance and energy efficiency. Manycore processors show potential in addressing some of these issues. However, there is little detailed power and energy data on manycore processors. In this work, we carefully study the detailed power and energy characteristics of Piton, a 25-core modern open source academic processor, including voltage versus frequency scaling, energy per instruction (EPI), memory system energy, network-on-chip (NoC) energy, thermal characteristics, and application performance and power consumption. This is the first detailed power and energy characterization of an open source manycore design implemented in silicon. The open source nature of the processor provides increased value

    [Figure 1: Piton die, wirebonds, and package without epoxy encapsulation (a), and annotated CAD tool layout screenshot (b); the layout shows the chip bridge (CB), PLL, and a 5x5 grid of tiles (Tile 0 through Tile 24).]
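    For reference, EPI as used above is the standard metric (our gloss, not the paper's wording): total energy divided by the number of instructions executed, or equivalently average power divided by instruction throughput:

    \[
    \mathrm{EPI} = \frac{E_{\mathrm{total}}}{N_{\mathrm{instr}}} = \frac{P_{\mathrm{avg}}}{\mathrm{IPS}},
    \]

    where IPS is instructions retired per second.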
  • Eithne: a Framework for Benchmarking Micro-Core Accelerators
    Eithne: A framework for benchmarking micro-core accelerators
    Maurice Jamieson (EPCC, University of Edinburgh; [email protected]), Nick Brown (EPCC, University of Edinburgh; [email protected])

    1 INTRODUCTION. The free lunch is over, and the HPC community is acutely aware of the challenges that the end of Moore's Law and Dennard scaling [4] impose on the implementation of exascale architectures, due to the end of significant generational performance improvements of traditional processor designs such as x86 [5]. Power consumption and energy efficiency are also major concerns when scaling the core count of traditional CPU designs. Therefore, other technologies need to be investigated, with micro-cores and FPGAs, which are somewhat related, being considered by the community. Micro-core architectures look to address this issue by implementing a large number of simple cores running in parallel on a single chip, and they have been used in successful HPC architectures such as the Sunway SW26010 of the Sunway TaihuLight (#3 June 2019).

    Table 1. LINPACK performance of the Xilinx MicroBlaze on the Zynq-7020 @ 100 MHz:
    • MicroBlaze (integer only): 0.120 MFLOPs/core
    • MicroBlaze (floating point): 5.905 MFLOPs/core

    Configuring the soft-core without hardware floating point support has the benefit of reduced chip resource usage, but there is a 50 times performance impact on LINPACK due to the software emulation library required to perform floating point arithmetic. By understanding the implications of different configuration decisions, the user can make the most appropriate choice, in this case trading off how much floating point arithmetic is in their code vs the saving in chip resource.
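    The "50 times" figure matches Table 1 (our arithmetic):

    \[
    \frac{5.905\ \mathrm{MFLOPs/core}}{0.120\ \mathrm{MFLOPs/core}} \approx 49.2 \approx 50.
    \]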