Contact: [email protected] :@Etp4H www.etp4hpc.eu/euexascale www.etp4hpc.eu

EUROPEAN HIGH-PERFORMANCE COMPUTING HANDBOOK 2019

Dear Reader,

This Handbook has become a recognised mechanism to provide an overview of the European HPC projects. Its increasing size is a sign of the vitality of our European HPC landscape. We have included a number of projects besides those financed by HPC cPPP-related calls, as they are closely related to HPC. If you know of an EU-funded project related to HPC that is not mentioned in this Handbook, do report it to our team; we will be happy to include it in the next edition.

EuroHPC – a joint initiative of the European Commission and 29 European member states – has taken over the task of defining and funding European HPC projects. The objective of EuroHPC is also to fund European HPC systems, and that is where we hope the projects presented in this publication will be able to find a venue for the application of their results.

The projects started in 2015 – at the beginning of the H2020 multiannual research programme – have reached their end. They have resulted in an impressive outcome, with over 150 applicable pieces of technology produced by 19 projects: packages, APIs, demonstrators, libraries, memory types, etc. We hope these results will be reused, leading to synergies and collaborations between European (and overseas) players. (The full list of outcomes and their analysis is available on the website of the EXDCI project: https://exdci.eu/activities/fethpc-results).

On the other hand, there is new work on the horizon. The first EuroHPC call for projects was issued during the summer, taking into account the suggestions presented by our ecosystem (represented in the EuroHPC JU by ETP4HPC and BDVA, the Big Data Value Association). The projects selected in this process will form part of next year's Handbook.

See you all then!

Serge Bogaerts, EXDCI-2 Coordinator and PRACE Director
Jean-Pierre Panziera, ETP4HPC Chairman

Table of Contents

6 European Exascale Projects
9 Technology Projects
10 ASPIDE
12 DEEP projects
14 ECOSCALE
16 EPEEC
18 EPiGRAM-HS
20 ESCAPE-2
22 EuroExa
24 EXA2PRO
26 ExaHyPE
28 ExaNeSt
30 ExaNoDe
32 ExaQUte
34 MAESTRO
36 MANGO
38 NEXTGenIO
40 NLAFET
42 RECIPE
44 Sage2
46 VECMA
48 VESTEC
51 Centres of Excellence in computing applications
52 FocusCoE
54 BioExcel
56 ChEESE
58 CompBioMed2
60 E-cam
62 EoCoE-II
64 ESiWACE
66 ESiWACE2
68 EXCELLERAT
70 HiDALGO
72 MaX
74 POP2
77 European Processor
78 EPI
80 Mont-Blanc 2020
83 HPC and Big Data testbeds
84 CYBELE
86 DeepHealth
88 EVOLVE
90 LEXIS
93 International Cooperation
94 ENERXICO
96 HPCWE
99 Other projects
100 ARGO
102 EXA MODE
104 M2DC
106 OPRECOMP
108 PROCESS
111 Ecosystem development
112 EUROLAB-4-HPC-2
114 EXDCI-2
116 HPC-EUROPA3
118 Appendix
120 Completed projects

EUROPEAN EXASCALE PROJECTS

This edition of our Handbook now features a number of projects which, though not financed by FET-HPC calls, are closely related to HPC and have their place here. For ease of use, we have grouped projects by scope or objective, instead of by H2020 call. However, we also provide in the table below an overview of the different calls, indicating where you will find the corresponding projects in our Handbook.

Finally, this Handbook only features on-going projects – including those that end in 2019. Projects that ended before January 2019 are listed in the Annex, and described in detail in previous editions of the Handbook (available on our website at https://www.etp4hpc.eu/european-hpc-handbook.html).

Most of the projects in this Handbook have been funded in the scope of the H2020 HPC contractual Public-Private Partnership (cPPP). Notable particularities are:
• "HPC and Big Data testbeds" projects, which were funded by a call sitting between the HPC and Big Data cPPPs;
• "Other projects", which encompasses a selection of HPC projects funded in calls that are not under the umbrella of the HPC cPPP;
• HPC-Europa3, which is not directly related to the cPPP either, but was added besides the cPPP FETHPC Coordination and Support Actions because of its ecosystem development dimension.

CALLS TO WHICH THE PROJECTS BELONG

Technology Projects (all projects in this chapter are hardware and/or software oriented):
• FETHPC-1-2014, HPC Core Technologies, Programming Environments and Algorithms for Extreme Parallelism and Extreme Data Applications – 19 projects starting in 2015 with a duration of around 3 years
• FETHPC-01-2016, Co-design of HPC systems and applications – 2 projects starting in 2017 with a duration of around 3 years
• FETHPC-02-2017, Transition to Exascale – 11 projects starting in 2018 with a duration of around 3 years

Centres of Excellence in computing applications:
• E-INFRA-5-2015, Centres of Excellence for computing applications – 9 projects starting in 2016 with a duration of around 3 years
• INFRAEDI-02-2018, Centres of Excellence for computing applications – 10 projects and 1 Coordination and Support Action starting in 2018

European Processor:
• ICT-42-2017, Framework Partnership Agreement in European low-power microprocessor technologies – the European Processor Initiative (EPI)
• SGA-LPMT-01-2018 – Specific Grant Agreement under the Framework Partnership Agreement "EPI FPA"
• ICT-05-2017, Customised and low energy computing (including low power processor technologies) – prequel to EPI

HPC and Big Data testbeds:
• ICT-11-2018-2019 Subtopic (a), HPC and Big Data enabled Large-scale Test-beds and Applications – 4 Innovation Actions starting in 2018

International Cooperation:
• FETHPC-01-2018, International Cooperation on HPC – 2 projects starting June 2019

Other projects – a selection of technology projects related to HPC (hardware/software) from various ICT, FET and INFRA calls:
• ICT-04-2015, Customised and low power computing
• ICT-12-2018-2020, Big Data technologies and extreme-scale analytics
• FETPROACT-01-2016, Proactive: emerging themes and communities
• EINFRA-21-2017, Platform-driven e-infrastructure innovation

Ecosystem development:
• FETHPC-2-2014, HPC Ecosystem Development – 2 Coordination and Support Actions
• FETHPC-03-2017, Exascale HPC ecosystem development – 2 Coordination and Support Actions
• INFRAIA-01-2016-2017, Integrating Activities for Advanced Communities – HPC-EUROPA3


Technology Projects

ASPIDE
exAScale ProgramIng models for extreme Data processing
www.aspide-project.eu | @ASPIDE_PROJECT

COORDINATING ORGANISATION Universidad Carlos III de Madrid, Spain

OTHER PARTNERS
• Institute e-Austria Timisoara, Romania
• University of Calabria, Italy
• Universität Klagenfurt, Austria
• Institute of Bioorganic Chemistry of Polish Academy of Sciences, Poland
• Servicio Madrileño de Salud, Spain
• INTEGRIS SA, Italy
• Bull SAS (Atos Group), France

CONTACT
Javier GARCIA-BLAS
[email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
15/06/2018 - 14/12/2020

OBJECTIVE
Extreme Data is an incarnation of the Big Data concept distinguished by the massive amounts of data that must be queried, communicated and analysed in (near) real-time by using a very large number of memory/storage elements and Exascale computing systems. Immediate examples are the scientific data produced at a rate of hundreds of gigabits-per-second that must be stored, filtered and analysed, the millions of images per day that must be mined (analysed) in parallel, and the one billion social data posts queried in real-time on an in-memory components database. Traditional disks or commercial storage cannot handle the extreme scale of such application data nowadays.

Following the need to improve current concepts and technologies, ASPIDE's activities focus on data-intensive applications running on systems composed of up to millions of computing elements (Exascale systems). Practical results will include the methodology and software prototypes that will be designed and used to implement Exascale applications. The ASPIDE project will contribute with the definition of new programming paradigms, APIs, runtime tools and methodologies for expressing data-intensive tasks on Exascale systems, which can pave the way for the exploitation of massive parallelism over a simplified model of the system architecture, promoting high performance and efficiency, and offering powerful operations and mechanisms for processing extreme data sources at high speed and/or in real-time.

DEEP projects
Dynamical Exascale Entry Platform

www.deep-projects.eu | @DEEPprojects

COORDINATING ORGANISATION
Forschungszentrum Jülich (Jülich Supercomputing Centre)

OTHER PARTNERS
Current partners in DEEP-EST:
• Forschungszentrum Jülich
• Intel
• Leibniz Supercomputing Centre
• Barcelona Supercomputing Centre (BSC), Spain
• Megware Computer Vertrieb und Service GmbH
• Heidelberg University
• EXTOLL
• The University of Edinburgh
• Fraunhofer ITWM
• Astron
• KU Leuven
• National Center for Supercomputing Applications (Bulgaria)
• Norges Miljo-Og Biovitenskaplige Universitet
• Haskoli Islands
• European Organisation for Nuclear Research (CERN)
• ParTec

Partners in DEEP and DEEP-ER:
• CERFACS
• CGG
• CINECA
• Eurotech
• EPFL
• Seagate
• INRIA, Institut National de Recherche en Informatique et Automatique
• Mellanox
• The Cyprus Institute
• Universität Regensburg

CONTACT
Dr. Estela Suarez
[email protected]

CALL
FETHPC-01-2016

PROJECT TIMESPAN (DEEP-EST)
01/07/2017 - 31/03/2021

There is more than one way to build a supercomputer, and meeting the diverse demands of modern applications, which increasingly combine data analytics and artificial intelligence (AI) with simulation, requires a flexible system architecture. Since 2011, the DEEP series of projects (DEEP, DEEP-ER, DEEP-EST) has pioneered an innovative concept known as the modular supercomputer architecture, whereby multiple modules are coupled like building blocks. Each module is tailored to the needs of a specific class of applications, and all modules together behave as a single machine.

Connected by a high-speed, federated network and programmed in a uniform system software and programming environment, the supercomputer allows an application to be distributed over several hardware modules, running each code component on the one which best suits its particular needs. Specifically, DEEP-EST, the latest project in the series, is building a prototype with three modules: a general-purpose cluster for low or medium scalable codes, a highly scalable booster comprising a cluster of accelerators, and a Data Analytics Module (DAM), which will be tested with six applications combining high-performance computing (HPC) with high-performance data analytics (HPDA) and machine learning (ML). The Cluster Module and the Data Analytics Module are already installed.

One important aspect that makes the DEEP architecture stand out is the co-design approach, which is a key component of the project. In DEEP-EST, six ambitious HPC/HPDA applications will be used to define and evaluate the hardware and software technologies developed. Careful analysis of the application codes allows a fuller understanding of their requirements, which informed the prototype's design and configuration.

In addition to traditional compute-intensive HPC applications, the DEEP-EST DAM includes leading-edge memory and storage technology tailored to the needs of data-intensive workloads, which occur in data analytics and ML. Based on general-purpose processors with a huge amount of memory per core, this module is boosted by powerful general-purpose graphics processing units (GPGPU) and field-programmable gate array (FPGA) accelerators.

The DEEP approach is part of the trend towards using accelerators to improve performance and overall energy efficiency – but with a twist. Traditionally, heterogeneity is handled within the node, combining a central processing unit (CPU) with one or more accelerators. In DEEP-EST, those resources are segregated and pooled into compute modules, as this makes it possible to flexibly adapt the system to very diverse application requirements. In addition to usability and flexibility, the sustained performance made possible by following this approach aims to reach exascale levels.

Through the DEEP projects, researchers have shown that combining resources in compute modules efficiently serves applications from multi-physics simulations, to simulations integrating HPC with HPDA, to complex heterogeneous workflows such as those in artificial intelligence applications.

[Photo: Forschungszentrum Jülich / Ralf-Uwe Limbach]

ECOSCALE
Energy-efficient Heterogeneous COmputing at exaSCALE

www.ecoscale.eu | @ECOSCALE_H2020

COORDINATING ORGANISATION Telecommunication Systems Institute (TSI)

OTHER PARTNERS
• Telecommunication Systems Institute
• Queen's University Belfast
• STMicroelectronics
• Acciona
• University of Manchester
• Politecnico di Torino
• Chalmers University of Technology, Sweden
• Synelixis Solutions

CONTACT
Yannis Papaefstathiou
[email protected]
+30 694427772

CALL
FETHPC-1-2014

PROJECT TIMESPAN
01/10/2015 - 31/07/2019

ECOSCALE tackles the exascale challenge by providing a novel heterogeneous energy-efficient hierarchical architecture, a hybrid distributed OpenCL programming environment and a runtime system. The ECOSCALE architecture, programming model and runtime system follow a hierarchical approach in which the system is partitioned into multiple autonomous Workers (i.e. compute nodes). Workers are interconnected in a tree-like structure to form larger Partitioned Global Address Space (PGAS) partitions. To further increase the energy efficiency of the system, the Workers employ reconfigurable accelerators implemented on FPGAs that perform coherent memory accesses in the virtual address space while being programmed through OpenCL.

The novel UNILOGIC (Unified Logic) architecture, introduced within ECOSCALE, is an extension of the UNIMEM architecture. UNIMEM provides a shared partitioned global address space, while UNILOGIC provides shared partitioned reconfigurable resources. The UNIMEM architecture gives the user the option to move tasks and processes close to the data instead of moving data around, which significantly reduces data traffic and the related energy consumption and delays. The proposed UNILOGIC+UNIMEM architecture and the ECOSCALE design tools seamlessly partition the application across several Worker nodes that communicate through a fat-tree communication infrastructure. The final FPGA-based ECOSCALE prototype can execute real-world applications with 10x higher energy efficiency than conventional CPU- and/or GPU-based parallel systems.

[Photo: ECOSCALE prototype]
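The UNIMEM principle of moving computation to data, rather than data to computation, can be sketched with a toy owner-computes scheduler. This is an illustration only: all class and function names below are invented for the sketch and are not UNIMEM or ECOSCALE APIs.

```python
# Toy sketch of "move tasks to data" scheduling over a partitioned
# global address space (names invented for illustration only).
class Node:
    def __init__(self, node_id, data):
        self.node_id = node_id
        self.data = data          # this node's partition of the global data
        self.executed = []        # names of tasks this node ran locally

    def run(self, task):
        # The task travels to the node; only its (small) result leaves it.
        self.executed.append(task.__name__)
        return task(self.data)

def owner_of(key, nodes):
    # Trivial cyclic distribution: partition k is owned by node k mod N.
    return nodes[key % len(nodes)]

def run_on_owner(key, task, nodes):
    # Dispatch the task to whichever node owns the data partition,
    # instead of copying that partition to a central node.
    return owner_of(key, nodes).run(task)
```

In this sketch only the task function and its scalar result cross node boundaries; the data partitions never move, which is the traffic (and energy) saving the UNIMEM description above points at.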

EPEEC
European joint Effort toward a Highly Productive Programming Environment for Heterogeneous Exascale Computing

www.epeec-project.eu

COORDINATING ORGANISATION Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
• Fraunhofer Gesellschaft zur Förderung der angewandten Forschung E.V., Germany
• INESC ID - Instituto de Engenharia de Sistemas e Computadores, Investigacao e Desenvolvimento em Lisboa, Portugal
• INRIA, Institut National de Recherche en Informatique et Automatique, France
• Appentra Solutions SL, Spain
• CINECA Consorzio Interuniversitario, Italy
• ETA SCALE AB, Sweden
• CERFACS, France
• Interuniversitair Micro-Electronica Centrum, Belgium
• Uppsala Universitet, Sweden

CONTACT
Antonio J. Peña
[email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/10/2018 - 30/09/2021

OBJECTIVE
EPEEC's main goal is to develop and deploy a production-ready parallel programming environment that turns upcoming overwhelmingly-heterogeneous exascale supercomputers into manageable platforms for domain application developers. The consortium will significantly advance and integrate existing state-of-the-art components based on European technology (programming models, runtime systems, and tools) with key features enabling 3 overarching objectives: high coding productivity, high performance, and energy awareness.

An automatic generator of compiler directives will provide outstanding coding productivity from the very beginning of the application developing/porting process. Developers will be able to leverage either shared memory or distributed-shared memory programming flavours, and code in their preferred language: C, Fortran, or C++. EPEEC will ensure the composability and interoperability of its programming models and runtimes, which will incorporate specific features to handle data-intensive and extreme-data applications. Enhanced leading-edge performance tools will offer integral profiling, performance prediction, and visualisation of traces. Five applications representative of different relevant scientific domains will serve as part of a strong inter-disciplinary co-design approach and as technology demonstrators.

EPEEC exploits results from past FET projects that led to the cutting-edge software components it builds upon, and pursues influencing the most relevant parallel programming standardisation bodies. The consortium is composed of European institutions and individuals with the highest expertise in their field, including not only leading research centres and universities but also SME/start-up companies, all of them recognised as high-tech innovators worldwide. Adhering to the Work Programme's guidelines, EPEEC features the participation of young and high-potential researchers, and includes careful dissemination, exploitation, and public engagement plans.

EPiGRAM-HS
Exascale Programming Models for Heterogeneous Systems
https://epigram-hs.eu | @EpigramHs

COORDINATING ORGANISATION KTH, Sweden

OTHER PARTNERS
• EPCC, UK
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• Fraunhofer, Germany
• Cray, Switzerland
• ECMWF, UK

CONTACT
Stefano Markidis
[email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/09/2018 - 31/08/2021

MOTIVATION
• Increasing presence of heterogeneous technologies on pre-exascale supercomputers
• Need to port key HPC and emerging applications to these systems in time for exascale

OBJECTIVE
• Extend the programmability of large-scale heterogeneous systems with GPUs, FPGAs, HBM and NVM
• Introduce new concepts and functionalities, and implement them in two widely-used HPC programming systems for large-scale supercomputers: MPI and GASPI
• Maximize the productivity of application development on heterogeneous supercomputers by providing:
  - auto-tuned collective communication
  - a framework for automatic code generation for FPGAs
  - a memory abstraction device comprised of APIs
  - a runtime for automatic data placement on diverse memories, and a DSL for large-scale deep-learning frameworks

ESCAPE-2
Energy-efficient SCalable Algorithms for weather and climate Prediction at Exascale
www.ecmwf.int/escape

COORDINATING ORGANISATION ECMWF - European Centre for Medium-range Weather Forecasts, UK

OTHER PARTNERS
• DKRZ - Deutsches Klimarechenzentrum GmbH, Germany
• Max-Planck-Gesellschaft zur Förderung der Wissenschaften EV, Germany
• Eidgenössisches Departement des Innern, Switzerland
• Barcelona Supercomputing Centre (BSC), Spain
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• Loughborough University, United Kingdom
• Institut Royal Météorologique de Belgique, Belgium
• Politecnico di Milano, Italy
• Danmarks Meteorologiske Institut, Denmark
• Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici, Italy
• Bull SAS (Atos group), France

CONTACT
Peter Bauer
[email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/10/2018 - 30/09/2021

OBJECTIVE
ESCAPE-2 will develop world-class, extreme-scale computing capabilities for European operational numerical weather and climate prediction, and provide the key components for weather and climate domain benchmarks to be deployed on extreme-scale demonstrators and beyond. This will be achieved by developing bespoke and novel mathematical and algorithmic concepts, combining them with proven methods, and thereby reassessing the mathematical foundations forming the basis of Earth system models. ESCAPE-2 also invests in significantly more productive programming models for the weather-climate community, through which novel algorithm development will be accelerated and future-proofed. Eventually, the project aims at providing exascale-ready production benchmarks to be operated on extreme-scale demonstrators (EsD) and beyond.

ESCAPE-2 combines cross-disciplinary uncertainty quantification tools (URANIE) for high-performance computing, originating from the energy sector, with ensemble-based weather and climate models to quantify the effect of model- and data-related uncertainties on forecasting – a capability which weather and climate prediction has pioneered since the 1960s.

The mathematics and algorithmic research in ESCAPE-2 will focus on implementing data structures and tools supporting parallel computation of dynamics and physics on multiple scales and multiple levels. Highly-scalable spatial discretization will be combined with proven large time-stepping techniques to optimize both time-to-solution and energy-to-solution. Connecting multi-grid tools, iterative solvers, and overlapping computations with flexible-order spatial discretization will strengthen algorithm resilience against soft or hard failure. In addition, machine learning techniques will be applied for accelerating complex sub-components. The sum of these efforts will aim at achieving, at the same time, performance, resilience, accuracy and portability.

EuroEXA
The Largest Group of ExaScale Projects and Biggest Co-Design Effort
www.euroexa.eu | @EuroEXA

COORDINATING ORGANISATION
ICCS (Institute of Communication and Computer Systems), Greece

OTHER PARTNERS
• Arm, UK
• The University of Manchester, UK
• Barcelona Supercomputing Centre (BSC), Spain
• FORTH (Foundation for Research and Technology Hellas), Greece
• The Hartree Centre of STFC, UK
• IMEC, Belgium
• ZeroPoint Technologies, SE
• Iceotope, UK
• Synelixis Solutions Ltd., Greece
• Maxeler Technologies, UK
• Neurasmus, Netherlands
• INFN (Istituto Nazionale Di Fisica Nucleare), Italy
• INAF (Istituto Nazionale Di Astrofisica), Italy
• ECMWF (European Centre for Medium-Range Weather Forecasts), International
• Fraunhofer, Germany

CONTACT
Peter Hopton (Dissemination & Exploitation Lead)
[email protected]
+44 1142 245500

CALL
FETHPC-01-2016

PROJECT TIMESPAN
01/09/2017 - 28/02/2021

VISION:
• A first testbed architecture will be shown to be capable of scaling to world-class peak performance in excess of 400 PFLOPS, with an estimated system power of around 30 MW peak
• A compute-centric 250 PFLOPS per 15 MW by 2019
• Show that an exascale machine could be built in 2020 within 30 shipping containers, with an edge-to-edge distance of less than 40 m

The Horizon 2020 EuroEXA project proposes a ground-breaking design for mind-blowing results: over four times higher performance and four times higher energy efficiency than today's high-performance platforms. "EuroEXA" was originally the informal name for a group of H2020 research projects – ExaNeSt, EcoScale and ExaNoDe; EuroEXA now has its own EU investment as a co-design project to further develop technologies from the project group and to support the EU in its bid to deliver EU-based ExaScale supercomputers.

The project has a €20m investment over a 42-month period and is part of a total €50m investment made by the European Commission across the EuroEXA group of projects, supporting research, innovation and action across applications, system software, hardware, networking, storage, liquid cooling and data centre technologies. The project is also supported by a high-value donation of IP from ARM and Synopsys.

Funded under H2020-EU.1.2.2. FET Proactive (FETHPC-2016-01) as a result of a competitive selection process, the consortium partners bring a rich mix of key applications from across climate/weather, physics/energy and life-science/bioinformatics. The project objectives include developing and deploying an ARM Cortex technology processing system with FPGA acceleration at peta-flop level, and an exa-scale procurement for deployment in 2022/23.

[Photo: The EuroEXA Blade]

EXA2PRO
Enhancing Programmability and boosting Performance Portability for Exascale Computing Systems
https://exa2pro.eu

COORDINATING ORGANISATION ICCS (Institute of Communication and Computer Systems), Greece

OTHER PARTNERS
• Linkopings Universitet, Sweden
• CERTH - Ethniko Kentro Erevnas kai Technologikis Anaptyxis, Greece
• INRIA, Institut National de Recherche en Informatique et Automatique, France
• Forschungszentrum Jülich GmbH, Germany
• Maxeler Technologies Ltd, United Kingdom
• CNRS - Centre National de la Recherche Scientifique, France
• UoM, University of Macedonia, Greece

CONTACTS
Prof. Dimitrios Soudris
[email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/05/2018 - 30/04/2021

OBJECTIVE
The vision of EXA2PRO is to develop a programming environment that will enable the productive deployment of highly parallel applications in exascale computing systems. The EXA2PRO programming environment will integrate tools that will address significant exascale challenges. It will support a wide range of scientific applications, provide tools for improving source code quality, enable efficient exploitation of exascale systems' heterogeneity, and integrate tools for data and memory management optimization. Additionally, it will provide fault-tolerance mechanisms, both user-exposed and at runtime system level, and performance monitoring features. EXA2PRO will be evaluated using 4 applications from 3 different domains, which will be deployed in the JUELICH supercomputing centre: high energy physics, materials and supercapacitors.

The applications will leverage the EXA2PRO toolchain, and we expect:
1. Increased programmability that enables the efficient exploitation of the heterogeneity of modern supercomputing systems, which allows the evaluation of more complex problems.
2. Effective deployment in an environment that provides features critical for exascale computing systems, such as fault tolerance, flexibility of execution and performance monitoring based on EXA2PRO optimization tools.
3. Identification of trade-offs between design qualities (source code maintainability/reusability) and run-time constraints (performance/energy consumption).

The EXA2PRO outcome is expected to have major impact on:
1. The scientific and industrial community that focuses on application deployment in supercomputing centres: the EXA2PRO environment will allow efficient application deployment with reduced effort.
2. Application developers that target exascale systems: EXA2PRO will provide tools for improving source code maintainability/reusability, which will allow application evolution with reduced developers' effort.
3. The scientific community and the industry relevant to the EXA2PRO applications: significant impact is expected on the materials and processes design for CO2 capture and on the supercapacitors industry.

ExaHyPE
An Exascale Hyperbolic PDE Engine
www.exahype.eu

COORDINATING ORGANISATION Technical University of Munich

OTHER PARTNERS
• Universita degli Studi di Trento
• Durham University
• Frankfurt Institute for Advanced Studies
• Ludwig-Maximilians-Universität München
• RSC Technologies
• Bavarian Research Alliance

CONTACTS
Michael Bader
[email protected]

CALL
FETHPC-1-2014

PROJECT TIMESPAN
01/10/2015 - 30/09/2019

EXAHYPE VISION
Hyperbolic systems of PDE resulting from conservation laws are the basis to simulate a wide range of problems, ranging from complex earthquake physics, via atmospheric processes and all kinds of flow problems, up to the most catastrophic events in the universe. The ExaHyPE engine shall enable teams of computational scientists to more quickly realize grand challenge simulations on exascale hardware based on hyperbolic PDE models.

EXAHYPE ALGORITHMS
The engine implements a high-order discontinuous Galerkin approach with ADER time stepping and a-posteriori finite-volume limiting, which combines high-order accuracy with the robustness of finite volumes. Problems are discretised on tree-structured, fully adaptive Cartesian meshes, for which advanced parallelisation approaches and load-balancing algorithms are available.

EXAHYPE DEMONSTRATOR APPLICATIONS
The ExaHyPE project evaluates the engine's capabilities on two exascale candidate scenarios stemming from computational seismology and astrophysics, respectively: regional earthquake simulation, particularly in alpine areas, and the fully relativistic simulation of (systems of) neutron stars and black holes. Demonstrator scenarios from these application areas and further use-cases are available on exahype.org.

EXAHYPE ENGINE
The hyperbolic PDE engine is available as open source software, hosted at www.exahype.org. The consortium provides a guidebook accompanying the released code, which contains documentation, rationale, and links to further documentation.

[Figures: High-order ADER-DG with finite-volume limiting; ADER-DG solution for the 3D Michel accretion test; code generation; tree-structured AMR; seismic wave propagation in a complex alpine topography.]
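The a-posteriori limiting that ExaHyPE's algorithms rely on falls back from the high-order DG update to a robust low-order finite-volume update in troubled cells. As a toy illustration of that finite-volume robustness only – a minimal first-order scheme under assumed simplifications, not ExaHyPE's actual implementation or API – one explicit finite-volume step for a 1D scalar conservation law u_t + f(u)_x = 0 can be sketched as:

```python
# Illustrative sketch only (not ExaHyPE code): a first-order
# finite-volume step for a 1D scalar conservation law u_t + f(u)_x = 0
# with periodic boundaries, using the Rusanov (local Lax-Friedrichs) flux.
def rusanov_flux(ul, ur, f, max_wave_speed):
    # Numerical flux between left/right cell averages ul and ur.
    return 0.5 * (f(ul) + f(ur)) - 0.5 * max_wave_speed * (ur - ul)

def fv_step(u, dx, dt, f, max_wave_speed):
    # One explicit update of all cell averages; flux[i] sits on the
    # interface between cell i and cell i+1 (periodic wrap-around).
    n = len(u)
    flux = [rusanov_flux(u[i], u[(i + 1) % n], f, max_wave_speed)
            for i in range(n)]
    return [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]
```

With f(u) = u (linear advection, wave speed 1) the Rusanov flux reduces to upwinding, and because each interface flux is added to one cell and subtracted from its neighbour, the sum of the cell averages is conserved exactly – the robustness property the limiter trades high-order accuracy for in troubled cells.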

ExaNeSt

www.exanest.eu | @ExaNeSt_H2020

COORDINATING ORGANISATION
FORTH

OTHER PARTNERS
• Iceotope
• Allinea
• EnginSoft
• eXact lab
• MonetDB
• Virtual Open Systems
• Istituto Nazionale di Astrofisica (INAF)
• National Institute for Nuclear Physics (INFN)
• The University of Manchester
• Universitat Politècnica de València
• Fraunhofer

CONTACT
Manolis Katevenis (Project Coordinator)
[email protected]
+30 2811 39 1078
Peter Hopton (Dissemination & Exploitation Lead)
[email protected]
+44 7437 01 2186

CALL
FETHPC-1-2014

PROJECT TIMESPAN
01/12/2015 - 31/05/2019

The European Exascale System Interconnect and Storage (ExaNeSt) project has successfully built a prototype for Exascale. ExaNeSt has taken a sensible integrated approach to co-design the hardware and software that is needed to enable the prototype to run in real-life evaluations. The energy-efficient solutions are expected to spread to standard off-the-shelf computing systems, assisting the efforts of the EU to reduce carbon dioxide emissions and reach environmental targets.

ExaNeSt has contributed to a new generation of European Supercomputers. The project's advances in performance and efficiency will enable SMEs in several sectors of the economy to utilize HPC and data analytics, with an attractive trade-off between usability and affordability.

ACHIEVEMENTS:
• Unified, Optimized, Low-power, Hierarchical Interconnect
  - Unimem (zero-copy, user-level), low-cost; Top-of-Rack; Optical
• Distributed, in-node NVMs, scalable Storage
  - Modified BeeGFS parallel FS, ported database; Resilience; Virtualization
• Scalable dense packaging based on Liquid Cooling
  - Ultra-dense daughter boards; next-generation cooling technology
• Not only simulations for scalability, but also built a rack prototype
• Design tailored to entire Applications
  - Ported, re-engineered; integrated, evaluated (WP6)
• Exploitation Objective: new products; follow-on projects

INTERCONNECT:
• Multi-tiered 4D topology
• Custom, low-cost, resilient, virtualised, RDMA-capable interconnect
• Sub-microsecond process-to-process communication
• Low-latency packet-based prototype (ExaNet)
• Running real HPC apps (ExaNet MPI port), 10x or more improvement over Ethernet

STORAGE:
• UNIMEM – PGAS
• BeeGFS Filesystem
• HPC Virtualization
• Profiling Tools
• Checkpointing
• Status Monitoring
• FPGA Acceleration

[Photo: ExaNeSt Prototype]

ExaNoDe
European Exascale Processor & Memory Node Design

EXANODE
www.exanode.eu | Twitter: @ExanodeProject

ExaNoDe designed core technologies for a highly energy-efficient and highly integrated heterogeneous compute node directed towards Exascale computing. ExaNoDe has built a compute node prototype available for collaborative working comprising:
• Technology and design solutions for an interposer-based computing device targeting HPC applications;
• Integration of devices in a Multi-Chip-Module (MCM) to increase compute density and take advantage of heterogeneity;
• System and middleware SW stack.

ExaNoDe's main achievements cover all aspects of heterogeneous integration from silicon technology to system software, including:
• Design of an innovative, high-speed and low-power interconnect for the heterogeneous integration of chiplets via a silicon interposer.
• Design of a Convolutional Neural Network (CNN) accelerator hardware IP for a use case demonstrating heterogeneous integration.
• Design and manufacturing of a chiplet System-on-Chip (SoC) in the 28FDSOI technology node.
• 3D integration of chiplets on an active silicon interposer with about 50,000 high-density (20 µm pitch) connections.
• Advanced package integration with two FPGA bare dies including ARMv8 cores, one interposer and 43 decoupling capacitors in a 68.5 mm × 55 mm Multi-Chip-Module (MCM).
• Integration of two MCMs on a 260 mm × 120 mm daughter board.
• Development of a complete SW stack including UNIMEM-based system software and middleware; runtime libraries optimized for the UNIMEM architecture (OmpSs, MPI, OpenStream, GPI); checkpointing technology for virtualisation; and a set of mini-applications for benchmarking purposes.
• A performance projection of ExaNoDe technologies onto a strawman architecture representative of upcoming HPC processors.

The ExaNoDe prototype was delivered at the end of June 2019. It has been designed to demonstrate and validate all technologies developed within the project. The methodology, know-how and building blocks related to advanced packaging (MCM and interposer) and system software for heterogeneous integration provide an IP baseline for the next generation of European processors to be used in future Exascale systems.

CONTRIBUTION TO EXASCALE COMPUTING
The ExaNoDe final prototype consists of an integrated HW/SW daughter board with two MCMs, each including two chiplets stacked on one silicon interposer and two Xilinx Zynq UltraScale+ FPGAs. It targets prototype-level use by system integrators and software teams, and its subsequent evaluation through industrial deployment. ExaNoDe collaborated closely with the ExaNeSt and EcoScale projects on the basis of a joint MoU.

[Figure: ExaNoDe prototype with Multi-Chip-Modules, SW stack, 3D Integrated Circuit and FPGA bare dies]

COORDINATING ORGANISATION
CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France

OTHER PARTNERS
• Arm Limited, UK
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• FORTH, Greece
• Fraunhofer ITWM, Germany
• Scapos AG, Germany
• The University of Manchester, UK
• Bull SAS (Atos Group), France
• Virtual Open Systems, France
• Barcelona Supercomputing Centre (BSC), Spain
• Forschungszentrum Jülich, Germany
• Kalray SA, France
• CNRS - Centre National de la Recherche Scientifique, France

CONTACTS
Denis Dutoit
[email protected] | +33 6 77 12 66 76
Guy Lonsdale
[email protected] | +49 2241 14 2818

CALL
FETHPC-1-2014

PROJECT TIMESPAN
01/10/2015 - 30/06/2019

ExaQUte

www.exaqute.eu | Twitter: @ExaQUteEU

Exascale Quantification of Uncertainties for Technology and Science Simulation

OBJECTIVE
The ExaQUte project aims at constructing a framework to enable Uncertainty Quantification (UQ) and Optimization Under Uncertainties (OUU) in complex engineering problems, using computational simulations on Exascale systems.

The stochastic problem of quantifying uncertainties will be tackled using a Multilevel Monte Carlo (MLMC) approach that allows a high number of stochastic variables. New theoretical developments will be carried out to enable its combination with adaptive mesh refinement, considering both octree-based and anisotropic mesh adaptation. Gradient-based optimization techniques will be extended to consider uncertainties by developing methods to compute stochastic sensitivities. This requires new theoretical and computational developments. With a proper definition of risk measures and constraints, these methods allow high-performance robust designs that also maximize the solution reliability. The description of complex geometries will be made possible by employing embedded methods, which guarantee high robustness in the mesh generation and adaptation steps while preserving the exact geometry representation. The efficient exploitation of Exascale systems will be addressed by combining state-of-the-art dynamic task-scheduling technologies with space-time accelerated solution methods, where parallelism is harvested both in space and time.

The methods and tools developed in ExaQUte will be applicable to many fields of science and technology. The chosen application focuses on wind engineering, a field of notable industrial interest for which currently no reliable solution exists. This will include the quantification of uncertainties in the response of civil engineering structures to the wind action, and shape optimization taking into account uncertainties related to wind loading, structural shape and material behaviour. All developments in ExaQUte will be open source and will follow a modular approach, thus maximizing future impact.

COORDINATING ORGANISATION
CIMNE - Centre Internacional de Metodes Numerics en Enginyeria, Spain

OTHER PARTNERS
• Barcelona Supercomputing Centre (BSC), Spain
• Technische Universität München, Germany
• INRIA - Institut National de Recherche en Informatique et Automatique, France
• Vysoka Skola Banska - Technicka Univerzita Ostrava, Czech Republic
• Ecole Polytechnique Fédérale de Lausanne, Switzerland
• Universitat Politecnica de Catalunya, Spain
• str.ucture GmbH, Germany

CONTACTS
Cecilia Soriano
[email protected]
Riccardo Rossi
[email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/06/2018 - 31/05/2021
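To make the MLMC idea concrete, the sketch below estimates an expectation by telescoping level corrections, spending many samples on cheap coarse levels and few on expensive fine ones. The toy "solver" `toy_q`, the level count and the sample counts are all invented for illustration; they stand in for ExaQUte's actual simulations.

```python
import random

def mlmc_estimate(q, levels, samples, seed=1):
    """Multilevel Monte Carlo: E[Q_L] ~ E[Q_0] + sum_l E[Q_l - Q_(l-1)].
    Each correction term gets its own sample count, so most work is spent
    on cheap, coarse levels."""
    rng = random.Random(seed)
    estimate = 0.0
    for level, n in zip(levels, samples):
        correction = 0.0
        for _ in range(n):
            omega = rng.gauss(0.0, 1.0)          # one realisation of the random input
            fine = q(level, omega)
            coarse = q(level - 1, omega) if level > 0 else 0.0
            correction += fine - coarse          # Y_l = Q_l - Q_(l-1), same omega
        estimate += correction / n
    return estimate

def toy_q(level, omega):
    """Stand-in 'solver': the exact answer E[omega^2] = 1 plus a
    discretisation bias 2**-level that shrinks on finer levels."""
    return omega ** 2 + 2.0 ** -level

# Few fine-level samples suffice because the corrections have tiny variance.
est = mlmc_estimate(toy_q, levels=[0, 1, 2], samples=[4000, 400, 40])
```

In this toy the level-1 and level-2 corrections are deterministic, so the estimate converges to roughly 1.25: the true value 1 plus the finest level's residual bias of 0.25.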

MAESTRO
Middleware for memory and data-awareness in workflows

www.maestro-data.eu | Twitter: @maestrodata

Maestro will build a data-aware and memory-aware middleware framework that addresses ubiquitous problems of data movement in complex memory hierarchies and at many levels of the HPC software stack. Though HPC and HPDA applications pose a broad variety of efficiency challenges, it would be fair to say that the performance of both has become dominated by data movement through the memory and storage systems, as opposed to floating-point computational capability. Despite this shift, current software technologies remain severely limited in their ability to optimise data movement. The Maestro project addresses what it sees as the two major impediments of modern HPC software:

1. Moving data through memory was not always the bottleneck. The software stack that HPC relies upon was built through decades of a different situation – when the cost of performing floating-point operations (FLOPS) was paramount. Several decades of technical evolution built a software stack and programming models highly fit for optimising floating-point operations but lacking in basic data handling functionality. We characterise this set of technical issues as missing data-awareness.

2. Software rightfully insulates users from hardware details, especially as we move higher up the software stack. But HPC applications, programming environments and systems software cannot make key data movement decisions without some understanding of the hardware, especially the increasingly complex memory hierarchy. With the exception of runtimes, which treat memory in a domain-specific manner, software typically must make hardware-neutral decisions which can often leave performance on the table. We characterise this issue as missing memory-awareness.

Maestro proposes a middleware framework that enables memory- and data-awareness. Maestro has developed new data-aware abstractions that can be used at any level of software, e.g. compiler, runtime or application. Core elements are the Core Data Objects (CDOs), which are handed to the middleware through give/take semantics. The middleware is being designed such that it enables modelling of memory and storage hierarchies, allowing it to reason about data movement and placement based on the costs of moving data objects. The middleware will support automatic movement and promotion of data in memories and storage and allow for data transformations and optimisation.

Maestro follows a co-design methodology using a set of applications and workflows from diverse areas including numerical weather forecasting, earth-system modelling, materials sciences and in-situ data analysis pipelines for computational fluid dynamics simulations.

COORDINATING ORGANISATION
Forschungszentrum Jülich GmbH, Germany

OTHER PARTNERS
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• Appentra Solutions SL, Spain
• CSCS - Swiss National Supercomputing Centre, Switzerland
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• ECMWF - European Centre for Medium-Range Weather Forecasts, United Kingdom
• Seagate Systems UK Ltd, United Kingdom
• Cray Computer GmbH, Switzerland

CONTACT
Dirk Pleiter
[email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/09/2018 - 31/08/2021
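As a toy illustration of the give/take idea and cost-based placement described above (this is not the real Maestro API; the class, tier names and cost numbers are all invented), the sketch below takes ownership of a data object and chooses a tier by trading a one-time move-in cost against an expected per-access cost:

```python
class ToyMiddleware:
    """Sketch of give/take semantics: the application hands a data object
    to the middleware, which owns placement decisions from then on."""

    def __init__(self, tiers):
        # tiers: name -> (one-time move-in cost, per-access cost), arbitrary units
        self.tiers = tiers
        self.cdos = {}

    def give(self, name, data, expected_accesses):
        """Take ownership of a CDO; pick the tier with the lowest modelled
        total cost for the declared access pattern."""
        cost = lambda t: self.tiers[t][0] + self.tiers[t][1] * expected_accesses
        tier = min(self.tiers, key=cost)
        self.cdos[name] = (tier, data)
        return tier

    def take(self, name):
        """Return ownership of the CDO to the application."""
        tier, data = self.cdos.pop(name)
        return data

mw = ToyMiddleware({"hbm": (100.0, 1.0), "dram": (10.0, 5.0), "nvram": (1.0, 50.0)})
hot = mw.give("stencil-field", b"...", expected_accesses=100)  # frequent reuse: fast tier pays off
cold = mw.give("checkpoint", b"...", expected_accesses=0)      # write-once data: cheapest tier wins
```

Because the application declares intent instead of a location, the same call can land data in different tiers as costs or hierarchies change, which is the point of pushing placement decisions into the middleware.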

MANGO
Exploring Manycore Architectures for Next-GeneratiOn HPC systems
www.mango-project.eu | Twitter: @mangoEU

COORDINATING ORGANISATION Universitat Politecnica de Valencia, Spain

OTHER PARTNERS
• Centro Regionale Information Communication Technology, Italy
• Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia
• Politecnico di Milano / Dipartimento di Elettronica, Informazione e Bioingegneria, Italy
• École polytechnique fédérale de Lausanne, Switzerland
• Pro Design Electronic GmbH, Germany
• Eaton Corporation, France
• Thales Group, France
• Philips, Netherlands

CONTACT
José Flich
[email protected] | +34 96 387 70 07 (ext. 75753)

CALL
FETHPC-1-2014

The MANGO project has focused on designing and building a large-system prototype for efficient exploration of future HPC manycore architectures including CPU cores, GPU cores and FPGA/HW accelerator cores. The prototype embeds highly heterogeneous accelerators in a common infrastructure (a network) which provides a 3P design approach (Performance/Power/Predictability) to the applications. MANGO is deploying the full software stack (at both server side and accelerator side) to let users easily adapt and port their applications to new emerging scenarios with a multitude of highly heterogeneous accelerators. For all components, an API and the resource manager are provided. The MANGO system also includes innovative cooling techniques using thermosyphons. The final MANGO prototype offers a versatile exploration platform for future HPC system architectures, as well as HPC HW/SW building components that could be included in future commercial designs.

PROJECT TIMESPAN 01/10/2015 - 31/03/2019

The Mango prototype

NEXTGenIO
Next Generation I/O for the Exascale

The NEXTGenIO project addresses the I/O performance challenge not only for Exascale, but also for HPC and data-intensive computing in general.

NEXTGenIO is developing a prototype computing platform that uses on-node non-volatile memory, bridging the latency gap between DRAM and disk. In addition to the hardware that will be built as part of the project, NEXTGenIO is developing the software stack that goes hand-in-hand with this new hardware architecture, and is testing the developments using a set of applications that include both traditional HPC (e.g. CFD and weather) and novel workloads (e.g. machine learning and genomics).

In addition to a detailed understanding of the wide range of use cases for non-volatile memory, key outcomes of NEXTGenIO are:
• Prototype compute nodes with non-volatile memory
• Profiling and debugging tools
• Data and power/energy aware job scheduling system
• Filesystem and object store
• Library for persistent data structures
• Workload generator & I/O workflow simulator

[Figure: Memory & Storage Latency Gaps – relative access latencies from CPU registers (1x) through cache, DRAM, SSD, spinning disk and backup tape, comparing HPC systems today (a roughly 100,000x gap between DRAM and storage) with HPC systems of the future, where NVRAM DIMMs narrow that gap.]

www.nextgenio.eu | Twitter: @nextgenio

COORDINATING ORGANISATION
EPCC – The University of Edinburgh, UK

OTHER PARTNERS
• Intel Deutschland GmbH, Germany
• Fujitsu
• Barcelona Supercomputing Center (BSC), Spain
• Technische Universität Dresden
• Allinea, now part of Arm
• European Centre for Medium Range Weather Forecasts
• ARCTUR

CONTACTS
Prof Mark Parsons (Project Coordinator)
[email protected]
Dr Michèle Weiland (Project Manager)
[email protected] | +44 (0)131 651 3580

CALL
FETHPC-1-2014

PROJECT TIMESPAN
01/10/2015 - 30/09/2019
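The benefit of slotting NVRAM between DRAM and disk can be sketched with a back-of-envelope average-access-cost calculation. The latency ratios below are rounded, illustrative values in the spirit of the latency-gap figure, and the hit fractions are made up for the example:

```python
def average_cost(tiers):
    """Expected access cost of a tiered hierarchy.
    tiers: list of (fraction of accesses served, cost); fractions sum to 1."""
    assert abs(sum(f for f, _ in tiers) - 1.0) < 1e-9
    return sum(f * c for f, c in tiers)

# Illustrative relative latencies (DRAM = 100, disk ~ 100,000x DRAM) and
# invented hit fractions for a data-intensive workload.
today = average_cost([(0.99, 100), (0.01, 10_000_000)])                      # DRAM, then disk
with_nvram = average_cost([(0.90, 100), (0.099, 10_000), (0.001, 10_000_000)])  # DRAM, NVRAM, disk

speedup = today / with_nvram   # most former disk accesses now hit NVRAM
```

Even though NVRAM is far slower than DRAM, absorbing most of the accesses that previously fell through to disk cuts the expected access cost by close to an order of magnitude in this toy setting.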

NLAFET
Parallel Numerical Linear Algebra for Future Extreme Scale Systems
www.nlafet.eu

NLAFET MISSION
Today's largest HPC systems have a serious gap between the peak capabilities of the hardware and the performance realized by high-performance computing applications. NLAFET is a direct response to this challenge. NLAFET will enable a radical improvement in the performance and scalability of a wide range of real-world applications by developing novel architecture-aware algorithms, and the supporting runtime capabilities to achieve scalable performance and resilience on heterogeneous architectures. The validation and dissemination of results will be done by integrating new software solutions into challenging scientific applications in materials science, power systems, study of energy solutions, and data analysis in astrophysics. The software will be packaged into open-source library modules.

NLAFET SAMPLE RESULTS
The main scientific and technological results of NLAFET have been presented at several international conferences and in scientific journals (see www.nlafet.eu/publications). Scientific progress and achievements include advances in the development of parallel numerical linear algebra algorithms for dense and sparse linear systems and eigenvalue problems, the development of communication-avoiding algorithms, an initial assessment of application use cases, and an evaluation of different runtime systems and auto-tuning infrastructures.

The novel software developed and deployed is available via the NLAFET website (www.nlafet.eu/software) and 13 associated public GitHub repositories, structured in five groups: dense matrix factorizations and solvers; solvers and tools for standard and generalized eigenvalue problems; sparse direct factorizations and solvers; communication-optimal algorithms for iterative methods; and cross-cutting tools. The parallel programming models used include MPI, OpenMP, PaRSEC and StarPU. NLAFET software implementations are tested and evaluated on small- to large-scale problems executing on homogeneous systems, and some on heterogeneous systems with accelerator hardware.

NLAFET IMPACT
The main impact is the software made available, via the GitHub NLAFET repositories, to the scientific community and industry, thereby providing novel tools for their computational challenges. The work on the batched BLAS specification has already achieved considerable impact with industry in reaching a community standard. The idea is to group multiple independent BLAS operations on small matrices into a single routine (see the figure). The graph shows performance on an Nvidia P100 GPU for 50 to 1000 GEMM operations on matrices of size 16-by-16 up to 4000-by-4000. For example, for matrices of size 256-by-256, around 30 times speedup is obtained compared to cuBLAS GEMM. For large matrix sizes we switch to non-batched BLAS. Sample applications for batched BLAS can be found in structural mechanics, astrophysics, direct sparse solvers, high-order FEM simulations, and machine learning.

[Figure: Batched computations – grouping multiple independent BLAS operations into a single routine; graph of batched GEMM performance on an Nvidia P100 GPU]

COORDINATING ORGANISATION
Umeå University, Sweden

OTHER PARTNERS
• University of Manchester, UK
• Institut National de Recherche en Informatique et en Automatique (Inria), France
• Science and Technology Facilities Council, UK

CONTACT
Prof. Bo Kågström
[email protected] | [email protected] | +46 90 786 5419

CALL
FETHPC-1-2014

PROJECT TIMESPAN
01/11/2015 - 30/04/2019
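The batched-BLAS idea – many small independent GEMMs grouped into one call so the library can schedule them together – can be mimicked in NumPy, whose `matmul` broadcasts over a leading batch dimension. The real specification is a C/Fortran BLAS-level interface; the sizes below just echo the small-matrix regime mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n = 50, 16                      # 50 independent 16-by-16 problems
A = rng.standard_normal((batch, n, n))
B = rng.standard_normal((batch, n, n))

# One "batched" call covering all problems...
C_batched = np.matmul(A, B)

# ...equivalent to looping over individual GEMMs, but exposed to the
# library as a single schedulable unit.
C_loop = np.stack([A[i] @ B[i] for i in range(batch)])
```

The payoff of batching comes from amortising launch and scheduling overheads that dominate when each matrix is tiny, which is exactly the regime where the project reports large speedups over per-matrix GEMM calls.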

RECIPE
REliable power and time-ConstraInts-aware Predictive management of heterogeneous Exascale systems
www.recipe-project.eu | Twitter: @EUrecipe

COORDINATING ORGANISATION Politecnico di Milano, Italy

OTHER PARTNERS
• Universitat Politecnica de Valencia, Spain
• Centro Regionale Information Communication Technology scrl, Italy
• Barcelona Supercomputing Centre (BSC), Spain
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, Poland
• EPFL - Ecole Polytechnique Fédérale de Lausanne, Switzerland
• Intelligence behind Things Solutions SRL, Italy
• Centre Hospitalier Universitaire Vaudois, Switzerland

CONTACT
Prof. William Fornaciari
[email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/05/2018 - 30/04/2021

OBJECTIVE
Current HPC facilities will need to grow by an order of magnitude in the next few years to reach the Exascale range. The dedicated middleware needed to manage the enormous complexity of future HPC centres, where deep heterogeneity is needed to handle the wide variety of applications within reasonable power budgets, will be one of the most critical aspects in the evolution of HPC infrastructure towards Exascale. This middleware will need to address the critical issue of reliability in the face of an increasing number of resources, and therefore a decreasing mean time between failures.

To close this gap, RECIPE provides: a hierarchical runtime resource management infrastructure optimizing energy efficiency and ensuring reliability for both time-critical and throughput-oriented computation; a predictive reliability methodology to support the enforcement of QoS guarantees in the face of both transient and long-term hardware failures, including thermal, timing and reliability models; and a set of integration layers allowing the resource manager to interact with both the application and the underlying deeply heterogeneous architecture, addressing them in a disaggregated way.

Quantitative goals for RECIPE include: a 25% increase in energy efficiency (performance/watt), with a 15% MTTF improvement due to proactive thermal management; an energy-delay product improved by up to 25%; and a 20% reduction of faulty executions.

The project will assess its results against a set of real-world use cases, addressing key application domains ranging from well-established HPC applications such as geophysical exploration and meteorology to emerging application domains such as biomedical machine learning and data analytics. To this end, RECIPE relies on a consortium composed of four leading academic partners (POLIMI, UPV, EPFL, CeRICT); two supercomputing centres, BSC and PSNC; a research hospital, CHUV; and an SME, IBTS, which provide effective exploitation avenues through industry-based use cases.
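The reliability pressure mentioned above – more resources means a shorter mean time between failures – follows from failure rates adding up. A minimal sketch, assuming independent components with exponentially distributed failure times (the numbers are illustrative, not RECIPE measurements):

```python
def system_mtbf(component_mtbf_hours, n_components):
    """With independent, exponentially distributed failures, component
    failure rates add, so the system MTBF shrinks as 1/N."""
    failure_rate = n_components / component_mtbf_hours
    return 1.0 / failure_rate

# A component MTBF of one million hours sounds comfortable...
node_mtbf = system_mtbf(1_000_000, 1)
# ...but spread over 100,000 components the system fails every ~10 hours,
# which is why proactive, predictive management becomes necessary at scale.
machine_mtbf = system_mtbf(1_000_000, 100_000)
```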

SAGE2
Percipient Storage for Exascale Data Centric Computing 2

www.sagestorage.eu | Twitter: @SageStorage

COORDINATING ORGANISATION Seagate Systems UK Ltd, United Kingdom

OTHER PARTNERS
• Bull SAS (Atos Group), France
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• United Kingdom Atomic Energy Authority, United Kingdom
• Kungliga Tekniska Hoegskolan (KTH), Sweden
• Forschungszentrum Jülich GmbH, Germany
• The University of Edinburgh, United Kingdom
• Kitware SAS, France
• Arm Ltd, United Kingdom

CONTACT
Sai Narasimhamurthy
[email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/09/2018 - 31/08/2021

OBJECTIVE
The landscape for High Performance Computing is changing with the proliferation of enormous volumes of data created by scientific instruments and sensors, in addition to data from simulations. This data needs to be stored, processed and analysed, and existing storage system technologies in the realm of extreme computing need to be adapted to achieve reasonable efficiency in achieving higher scientific throughput. We started on the journey to address this problem with the SAGE project. The HPC use cases and the technology ecosystem are now further evolving, and new requirements and innovations are being brought to the forefront. It is extremely critical to address them today without "reinventing the wheel", leveraging existing initiatives and know-how to build the pieces of the Exascale puzzle as quickly and efficiently as we can.

The SAGE paradigm already provides a basic framework to address the extreme-scale data aspects of High Performance Computing on the path to Exascale. Sage2 (Percipient StorAGe for Exascale Data Centric Computing 2) intends to validate a next-generation storage system, building on top of the already existing SAGE platform, to address new use case requirements in the areas of extreme-scale computing scientific workflows and AI/deep learning, leveraging the latest developments in storage infrastructure software and the storage technology ecosystem.

Sage2 aims to provide significantly enhanced scientific throughput, improved scalability, and improved time and energy to solution for the use cases at scale. Sage2 will also dramatically increase the productivity of developers and users of these systems. Sage2 will provide a highly performant and resilient, QoS-capable, multi-tiered storage system, with data layouts across the tiers managed by the Mero object store, which is capable of handling in-transit/in-situ processing of data within the storage system, accessible through the Clovis API.

VECMA
Verified Exascale Computing for Multiscale Applications
www.vecma.eu | Twitter: @VECMA4

COORDINATING ORGANISATION University College London, United Kingdom

OTHER PARTNERS
• Max-Planck-Gesellschaft zur Förderung der Wissenschaften EV, Germany
• Brunel University London, United Kingdom
• Bayerische Akademie der Wissenschaften, Germany
• Bull SAS (Atos Group), France
• Stichting Nederlandse Wetenschappelijk Onderzoek Instituten, Netherlands
• CBK Sci Con Ltd, United Kingdom
• Universiteit van Amsterdam, Netherlands
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, Poland

CONTACT
Xuanye Gu
[email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
15/06/2018 - 14/06/2021

The purpose of this project is to enable a diverse set of multiscale, multiphysics applications – from fusion and advanced materials through climate and migration, to drug discovery and the sharp end of clinical decision making in personalised medicine – to run on current multi-petascale computers and emerging exascale environments with high fidelity, such that their output is "actionable". That is, the calculations and simulations are certifiable as validated (V), verified (V) and equipped with uncertainty quantification (UQ) by tight error bars, such that they may be relied upon for making important decisions in all the domains of concern. The central deliverable is an open-source toolkit for multiscale VVUQ based on generic multiscale VV and UQ primitives, to be released in stages over the lifetime of this project, fully tested and evaluated in emerging exascale environments, actively promoted, and made widely available in European HPC centres.

The project includes a fast track that will ensure applications are able to apply available multiscale VVUQ tools as soon as possible, while guiding the deep-track development of new capabilities and their integration into a wider set of production applications by the end of the project. The deep track includes the development of more disruptive and automated algorithms, and their exascale-aware implementation in a more intrusive way with respect to the underlying and pre-existing multiscale modelling and simulation schemes.

The potential impact of these certified multiscale simulations is enormous, and we have already been promoting the VVUQ toolkit (VECMAtk) across a wide range of scientific and social-scientific domains, as well as within computational science more broadly. We have made six releases of the toolkit, with a dedicated website at https://www.vecma-toolkit.eu/, available for public users to access, download and use.

To further develop and disseminate VECMAtk, we have been working with the Alan Turing Institute to jointly run the planned event "Reliability and Reproducibility in Computational Science: Implementing Verification, Validation and Uncertainty Quantification in silico", which will include the first VECMA training workshop in January 2020. Ahead of this event, we ran a VECMA hackathon in association with VECMAtk in September 2019. These events attract participation from some of our users in industry and government who use the tools we are developing on HPC systems at supercomputer centres in Europe.
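The "tight error bars" notion above can be shown in miniature: run an ensemble of a model over sampled uncertain inputs and report the mean with a confidence half-width. This generic sketch is not the VECMAtk API; the toy model and input distribution are invented for illustration:

```python
import random
import statistics

def uq_ensemble(model, draw_input, n, seed=42):
    """Monte Carlo UQ: sample uncertain inputs, run the model per sample,
    and return the output mean with a 95% confidence half-width."""
    rng = random.Random(seed)
    outputs = [model(draw_input(rng)) for _ in range(n)]
    mean = statistics.fmean(outputs)
    half_width = 1.96 * statistics.stdev(outputs) / n ** 0.5
    return mean, half_width

# Toy model y = 3k + 1 with an uncertain coefficient k ~ N(2, 0.1)
mean, hw = uq_ensemble(lambda k: 3.0 * k + 1.0,
                       lambda rng: rng.gauss(2.0, 0.1), n=200)
```

A result quoted as mean ± half-width is "actionable" in the sense used above: a decision maker can see not just the prediction but how far it can be trusted, and tighten the bars by running a larger ensemble.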

VESTEC
Visual Exploration and Sampling Toolkit for Extreme Computing
www.vestec-project.eu | Twitter: @VESTECproject

COORDINATING ORGANISATION
DLR - Deutsches Zentrum für Luft- und Raumfahrt EV, Germany

OTHER PARTNERS
• The University of Edinburgh, United Kingdom
• Kungliga Tekniska Hoegskolan (KTH), Sweden
• Sorbonne Université, France
• Kitware SAS, France
• Intel Deutschland GmbH, Germany
• Fondazione Bruno Kessler, Italy
• Université Paul Sabatier Toulouse III, France
• Tecnosylva SL, Spain

CONTACT
Andreas Gerndt
[email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/09/2018 - 31/08/2021

OBJECTIVE
Technological advances in high performance computing are creating exciting new opportunities that move well beyond improving the precision of simulation models. The use of extreme computing in real-time applications with high-velocity data and live analytics is within reach. The availability of fast-growing social and sensor networks raises new possibilities in monitoring, assessing and predicting environmental, social and economic incidents as they happen. Add in grand challenges in data fusion, analysis and visualization, and extreme computing hardware has an increasingly essential role in enabling efficient processing workflows for huge heterogeneous data streams.

VESTEC will create the software solutions needed to realise this vision for urgent decision making in various fields with high impact for the European community. VESTEC will build a flexible toolchain to combine multiple data sources, efficiently extract essential features, enable flexible scheduling and interactive supercomputing, and realise 3D visualization environments for interactive exploration by stakeholders and decision makers. VESTEC will develop and evaluate methods and interfaces to integrate high-performance data analytics processes into running simulations and real-time data environments. Interactive ensemble management will launch new simulations as new data arrive, building up statistically more and more accurate pictures of emerging, time-critical phenomena. Innovative data compression approaches, based on topological feature extraction and data sampling, will result in considerable reductions in storage and processing demands by discarding domain-irrelevant data.

Three emerging use cases will demonstrate the immense benefit for urgent decision making: wildfire monitoring and forecasting; analysis of risk associated with mosquito-borne diseases; and the effects of space weather on technical supply chains. VESTEC brings together experts in each domain to address these challenges holistically.


Centres of Excellence in computing applications

FocusCoE
Concerted action for the European HPC CoEs

FOCUSCOE
www.focus-coe.eu

COORDINATING ORGANISATION Scapos AG, Germany

OTHER PARTNERS
• Kungliga Tekniska Högskolan (KTH), Sweden
• Höchstleistungsrechenzentrum der Universität Stuttgart (HLRS), Germany
• Barcelona Supercomputing Center (BSC), Spain
• University College London, UK
• Agenzia nazionale per le nuove tecnologie, l'energia e lo sviluppo economico sostenibile (ENEA), Italy
• National University of Ireland, Galway, Ireland
• Teratec, France
• Forschungszentrum Jülich GmbH, Germany
• Partnership for Advanced Computing in Europe (PRACE), Belgium
• CEA - Commissariat à l'Energie Atomique et aux Energies Alternatives, France

CONTACT
Guy Lonsdale
[email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021

OBJECTIVE
FocusCoE contributes to the success of the EU HPC Ecosystem and the EuroHPC Initiative by supporting the EU HPC Centres of Excellence (CoEs) to more effectively fulfil their role within the ecosystem and initiative: ensuring that extreme-scale applications result in tangible benefits for addressing scientific, industrial or societal challenges. It achieves this by creating an effective platform for the CoEs to coordinate strategic directions and collaboration (addressing possible fragmentation of activities across the CoEs and coordinating interactions with the overall HPC ecosystem) and by providing support services for the CoEs in relation to both industrial outreach and promotion of their services and competences, acting as a focal point for users to discover those services.

The specific objectives and achievements of FocusCoE are:
• The creation of the HPC CoE Council (HPC3) in May 2019 in Poznań, which allows all European HPC CoEs to collectively define an overriding strategy and its collaborative implementation for interactions with, and contributions to, the EU HPC Ecosystem.
• To support the HPC CoEs in achieving enhanced interaction with industry, and SMEs in particular, through concerted outreach and business development actions. FocusCoE has already initiated contacts between CoEs and industry. The focus in the coming period will be on sectorial industrial events and industry associations. A first tranche of industrial success stories from the CoEs is in preparation.
• To instigate concerted action on training by and for the complete set of HPC CoEs: providing a consolidating vehicle for user training offered by the CoEs and by PRACE (PATCs), and providing cross-area training to the CoEs (e.g. on sustainable business development). An HPC training stakeholder workshop was organised in October 2019 in Brussels, defining training and education needs and assessing the requirements for HPC training programmes in the EU.
• To promote and concert the capabilities of, and services offered by, the HPC CoEs, and to develop the EU HPC CoE "brand", raising awareness with stakeholders and both academic and industrial users. First steps have been regular newsletters1 and highlighting CoE achievements in social media. Next steps will be a handbook and a dedicated website presenting the set of service offerings by CoEs to industrial and academic users.

1. www.focus-coe.eu/index.php/newsletter/

BioExcel – Centre of Excellence for Computational Biomolecular Research

https://bioexcel.eu
: @BioExcelCoE
: https://goo.gl/5dBzmw

COORDINATING ORGANISATION
KTH Royal Institute of Technology, Sweden

OTHER PARTNERS
• The University of Manchester, UK
• University of Utrecht, the Netherlands
• Institute of Research in Biomedicine (IRB), Spain
• European Molecular Biology Laboratory (EMBL-EBI), UK
• Forschungszentrum Jülich, Germany
• University of Jyväskylä, Finland
• The University of Edinburgh, UK
• Max Planck Gesellschaft, Germany
• Across Limits, Malta
• Barcelona Supercomputing Centre (BSC), Spain
• Ian Harrow Consulting, UK
• Norman Consulting, Norway
• Nostrum Biodiscovery, Spain

CONTACT
Erwin Laure
[email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/01/2019 - 31/12/2021

BioExcel is the leading European Centre of Excellence for Computational Biomolecular Research. We develop advanced software, tools and full solutions for high-end and extreme computing for biomolecular research. The centre supports academic and industrial research by providing expertise and advanced training to end-users and promoting best practices in the field. The centre aspires to operate in the long term as the core facility for advanced biomolecular modelling and simulations in Europe.

OVERVIEW
Much of current Life Sciences research relies on intensive biomolecular modelling and simulation. As a result, both academia and industry face significant challenges when applying best practices for optimal usage of compute infrastructures. At the same time, increasing the productivity of researchers is of high importance for achieving reliable scientific results at a faster pace. High-performance computing (HPC) and high-throughput computing (HTC) techniques have now reached a level of maturity for many of the widely used codes and platforms, but taking full advantage of them requires that researchers have the necessary training and access to guidance by experts. The necessary ecosystem of services in the field is presently inadequate, so a suitable infrastructure needed to be set up in a sustainable, long-term operational fashion.

The BioExcel CoE was thus established as the go-to provider of a full range of software applications and training services. These cover fast and scalable software, user-friendly automation workflows and a support base of experts. The main services offered include hands-on training, tailored customization of code and personalized consultancy support. BioExcel actively works with:
• academic and non-profit researchers,
• industrial researchers,
• software vendors and academic code providers,
• non-profit and commercial resource providers, and
• related international projects and initiatives.
The centre was established and operates through funding by the EC Horizon 2020 programme (Grant agreements 675728 and 823830).

ChEESE – Centre of Excellence for Exascale and Solid Earth

www.cheese-coe.eu

COORDINATING ORGANISATION Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
• Istituto Nazionale di Geofisica e Vulcanologia (INGV), Italy
• Icelandic Meteorological Office (IMO), Iceland
• Swiss Federal Institute of Technology (ETH), Switzerland
• Universität Stuttgart - High Performance Computing Center (HLRS), Germany
• CINECA Consorzio Interuniversitario, Italy
• Technical University of Munich (TUM), Germany
• Ludwig-Maximilians-Universität München (LMU), Germany
• University of Málaga (UMA), Spain
• Norwegian Geotechnical Institute (NGI), Norway
• Institut de Physique du Globe de Paris (IPGP), France
• CNRS - Centre National de la Recherche Scientifique, France
• Bull SAS (Atos Group), France

CONTACT
Arnau Folch
[email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/11/2018 - 31/10/2021

OBJECTIVE
ChEESE will address extreme computing scientific and societal challenges by harnessing European institutions in charge of operational monitoring networks, tier-0 supercomputing centres, academia, hardware developers and third parties from SMEs, industry and public governance. Its ambitious scientific challenge is to prepare 10 open-source flagship codes to solve Exascale problems in computational seismology, magnetohydrodynamics, physical volcanology, tsunamis, and data analysis and predictive techniques, including machine learning, for monitoring earthquake and volcanic activity. The selected codes will be audited and optimized at both the intranode level (including heterogeneous computing nodes) and the internode level on heterogeneous hardware prototypes for the upcoming Exascale architectures, thereby ensuring commitment to a co-design approach. Preparation for Exascale will also consider inter-kernel aspects of simulation workflows such as data management and sharing, I/O, post-processing and visualization.

In parallel with these transversal activities, ChEESE will rest on three vertical pillars. First, it will develop pilot demonstrators for challenging scientific problems requiring Exascale computing, in alignment with the vision of the European Exascale roadmaps. These include near real-time seismic simulations and full-wave inversion, ensemble-based volcanic ash dispersal forecasts, faster-than-real-time tsunami simulations and physics-based hazard assessments for seismics, volcanoes and tsunamis. Second, pilots are also intended for enabling operational services requiring extreme HPC: urgent computing, early warning forecasting of geohazards, hazard assessment and data analytics. Selected pilots will be tested in an operational environment to make them available to a broader user community. Additionally, in collaboration with the European Plate Observing System (EPOS), ChEESE will promote and facilitate the integration of HPC services to widen access to codes and foster the transfer of know-how to Solid Earth user communities. Finally, the third pillar of ChEESE aims at acting as a hub to foster HPC across the Solid Earth community and related stakeholders, and at providing specialized training on services and capacity-building measures.

CompBioMed2 – Centre of Excellence in Computational Biomedicine

www.compbiomed.eu : @bio_comp

COORDINATING ORGANISATION
University College London, UK

OTHER PARTNERS
• University of Amsterdam, Netherlands
• The University of Edinburgh, UK
• Barcelona Supercomputing Centre (BSC), Spain
• SURFsara, Netherlands
• University of Oxford, UK
• University of Geneva, Switzerland
• The University of Sheffield, UK
• CBK Sci Con Ltd, UK
• Universitat Pompeu Fabra, Spain
• Bayerische Akademie der Wissenschaften, Germany
• Acellera, Spain
• Evotec Ltd, UK
• Bull SAS (Atos Group), France
• Janssen Pharmaceutica, Belgium

CONTACT
Emily Lumley
[email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/10/2019 - 30/09/2023

CompBioMed is a user-driven Centre of Excellence (CoE) in Computational Biomedicine, designed to nurture and promote the uptake and exploitation of high performance computing within the biomedical modelling community. Our user communities come from academia, industry and clinical practice.

The first phase of the CompBioMed CoE has already achieved notable successes in the development of applications, training and efficient access mechanisms for using HPC machines and clouds in computational biomedicine. We have brought together a growing list of HPC codes relevant for biomedicine, which have been enhanced and scaled up to larger machines. Our codes (such as Alya, HemeLB, BAC, Palabos and HemoCell) are now running on several of the world's fastest supercomputers and investigating challenging applications ranging from defining clinical biomarkers of arrhythmic risk to the impact of mutations on cancer treatment. Our work has provided the ability to integrate clinical datasets with HPC simulations through fully working computational pipelines designed to provide clinically relevant patient-specific models. The reach of the project beyond the funded partners is manifested by our highly effective Associate Partner Programme (all of whom have played an active role in our activities), with a cost-free, lightweight joining mechanism, and by an Innovation Exchange Programme that has brought upwards of thirty scientists, industrialists and clinicians into the project from the wider community. Furthermore, we have developed and implemented a highly successful training programme targeting the full range of medical students, biomedical engineers, biophysicists and computational scientists. This programme contains a mix of tailored courses for specific groups, webinars and winter schools, and is now being packaged into an easy-to-use training package.

In CompBioMed2 we are extending the CoE to serve the community for a total of 7 years. CompBioMed has established itself as a hub for practitioners in the field, successfully nucleating a substantial body of research, education, training, innovation and outreach within the nascent field of Computational Biomedicine. This emergent technology will enable clinicians to develop and refine personalised medicine strategies ahead of their clinical delivery to the patient. Medical regulatory authorities are currently embracing the prospect of using in silico methods in the area of clinical trials, and we intend to be in the vanguard of this activity, laying the groundwork for the application of HPC-based Computational Biomedicine approaches to a greater number of therapeutic areas. The HPC requirements of our users are as diverse as the communities we represent. We support both monolithic codes, potentially scaling to the exascale, and complex workflows requiring support for advanced execution patterns. Understanding the complex outputs of such simulations requires both rigorous uncertainty quantification and the embrace of the convergence of HPC and high-performance data analytics (HPDA). CompBioMed2 seeks to combine these approaches with the large, heterogeneous datasets from medical records and from the experimental laboratory to underpin clinical decision support systems. CompBioMed2 will continue to support, nurture and grow our community of practitioners, delivering incubator activities to prepare our most mature applications for wider usage and providing avenues that will sustain CompBioMed2 well beyond the proposed funding period.

58 59 E-cam EUROPEAN HIGH-PERFORMANCE COMPUTING HANDBOOK 2019 E-CAM • Our Extended Software Development www.e-cam2020.eu Workshops (ESDWs) trained 230 people : @ECAM2020 so far, in advanced computational me- thods, good practices for code develop- COORDINATING ORGANISATION ment, documentation and maintenance • Ecole Polytechnique Fédérale de Lausanne, • An online training infrastructure is in place Switzerland to support the develop software for ex- OTHER PARTNERS treme-scale hardware • University College Dublin, Ireland • Several industrially relevant pilot projects • Freie Universität Berlin, Germany have started in E-CAM. Developments are • Universita degli Studi di Roma La Sapienza, Italy reported on our website • CNRS - Centre National The E-CAM Centre of Excellence is an e-in- interest, training courses for industrialists, • Our State of the Art and Scoping de la Recherche Scientifique, France frastructure for software development, on specific families of methods, software Workshops are important access points • Technische Universität Wien, AT • University of Cambridge, UK training and industrial discussion in simula- and HPC tools, as well as the possibility to for companies and SMEs to expertise and • Max-Planck-Institut tion and modelling. E-CAM is based around engage in direct discussions. discussions on leading-edge simulation für Polymerforschung, Germany • ENS - École Normale CECAM’s distributed European Network of and modelling techniques Supérieure de Lyon, France simulation science nodes and the physical Our approach is focused on four scientific • E-CAM works on software development • Forschungszentrum Jülich, Germany • Universitat de Barcelona, Spain computational e-infrastructure of PRACE. areas, critical for HPC simulations relevant projects that enable an HPC practice with • Daresbury Laboratory, Scientific We are a partnership of 16 CECAM Nodes, to key societal and industrial challenges. potential for transfer into industry. 
Exa- and Technology Facilities Council, UK 4 PRACE centres, 12 industrial partners These areas are classical molecular dy- mples are: GPU re-write of DL_MESO code • Scuola Internazionale Superiore Di Trieste, Italy and one Centre for Industrial Computing namics, electronic structure, quantum for mesoscale simulations; the develop- • Universiteit van Amsterdam, Netherlands (Hartree Centre). dynamics and meso- and multi-scale mo- ment of a load-balancing library specifi- • Scuola Normale Superiore Pisa, Italy • Aalto University, s Our goals are to: delling. E-CAM develops new scientific cally targeting MD applications, and of an • CSC-IT Centre for Science Ltd, Finland • Develop software modules to be used in ideas and transfers them to algorithm deve- HTC library; among other efforts. • Irish Centre for High-End Computing, Ireland academia and industry to solve simulation lopment, optimisation, and parallelization and modelling problems. Interface those in these four respective areas, and delivers CONTACT software modules with standard codes the related training. Postdoctoral resear- Luke Drury and tune them to run on the next genera- chers are employed under each scientific (Project Chair) tion of exascale computers; area, working closely with the scientific [email protected] +353 1 6621333 ext 321 • Train scientists from industry and aca- programmers to create, oversee and imple- Ana Catarina Mendonça demia on the development of methods ment the different software codes, in colla- (Project Coordinator) and software scaling towards the high end boration with our industrial partners. 
[email protected] +41 21 693 0395 of HPC systems; • Provide multidisciplinary, coordinated, The impact of E-CAM so far: CALL top level applied discussions to support • More than one hundred certified software EINFRA-5-2015 industrial end-users (both large multina- modules that are open access and easily tionals and SMEs) in their use of simulation available for the industrial and academic PROJECT TIMESPAN and modelling. This includes workshops communities through our software repo- 01/10/2015 - 30/09/2020 with industry to identify areas of mutual sitory. Positioning of the e-CAM infrastructure

EoCoE-II – Energy-oriented Centre of Excellence, Phase II

www.eocoe.eu
: @EoCoE

COORDINATING ORGANISATION Maison de la Simulation (MdlS) and Institut de Recherche sur la Fusion par confinement Magnétique (IRFM) at CEA, France

OTHER PARTNERS
• Forschungszentrum Jülich (FZJ), with RWTH Aachen University, Germany
• Agenzia nazionale per le nuove tecnologie, l'energia e lo sviluppo economico sostenibile (ENEA), Italy
• Barcelona Supercomputing Centre (BSC), Spain
• Centre National de la Recherche Scientifique (CNRS), with Inst. Nat. Polytechnique Toulouse (INPT), France
• Institut National de Recherche en Informatique et Automatique (INRIA), France
• Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), France
• Max-Planck Gesellschaft (MPG), Germany
• Fraunhofer Gesellschaft, Germany
• Friedrich-Alexander Univ. Erlangen-Nürnberg (FAU), Germany
• Consiglio Nazionale delle Ricerche (CNR), with Univ. Rome Tor Vergata (UNITOV), Italy
• Università degli Studi di Trento (UNITN), Italy
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk (PSNC), Poland
• Université Libre de Bruxelles (ULB), Belgium
• University of Bath (UBAH), United Kingdom
• Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Spain
• IFP Energies Nouvelles (IFPEN), France
• DDN Storage

CONTACT
Edouard Audit
[email protected]
+33 1 69 08 96 35

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/01/2019 - 31/12/2021

The Energy-oriented Centre of Excellence (EoCoE) applies cutting-edge computational methods in its mission to accelerate the transition to the production, storage and management of clean, decarbonized energy. EoCoE is anchored in the High Performance Computing (HPC) community and targets research institutes, key commercial players and SMEs who develop and enable energy-relevant numerical models to be run on exascale supercomputers, demonstrating their benefits for low-carbon energy technology. The present project draws on a successful proof-of-principle phase, EoCoE-I, in which a large set of diverse computer applications from four such energy domains achieved significant efficiency gains thanks to its multidisciplinary expertise in applied mathematics and supercomputing. During this second round, EoCoE-II channels its efforts into 5 scientific Exascale challenges in the low-carbon sectors of Energy Meteorology, Materials, Water, Wind and Fusion. This multidisciplinary effort harnesses innovations in computer science and mathematical algorithms within a tightly integrated co-design approach to overcome performance bottlenecks and to anticipate future HPC hardware developments. A world-class consortium of 18 complementary partners from 7 countries forms a unique network of expertise in energy science, scientific computing and HPC, including 3 leading European supercomputing centres. New modelling capabilities in selected energy sectors will be created at unprecedented scale, demonstrating the potential benefits to the energy industry, such as accelerated design of storage devices, high-resolution probabilistic wind and solar forecasting for the power grid, and quantitative understanding of plasma core-edge interactions in ITER-scale tokamaks. These flagship applications will provide a high-visibility platform for high-performance computational energy science, cross-fertilized through close working connections to the EERA and EUROfusion consortia.

EoCoE is structured around a central Franco-German hub coordinating a pan-European network, gathering a total of 7 countries and 21 teams. Its partners are strongly engaged in both the HPC and energy fields. The primary goal of EoCoE is to create a new, long-lasting and sustainable community around computational energy science. EoCoE resolves current bottlenecks in application codes; it develops cutting-edge mathematical and numerical methods, and tools to foster the usage of Exascale computing. Dedicated services for laboratories and industries are established to leverage this expertise and to develop an ecosystem around HPC for energy.

We are interested in collaborations in the area of HPC (e.g. programming models, exascale architectures, linear solvers, I/O) and also with people working in the energy domain who need expertise for carrying out ambitious simulations. See our service page for more details: www.eocoe.eu/services

ESiWACE – Centre of Excellence in Simulation of Weather and Climate in Europe

www.esiwace.eu
: @ESIWACE

COORDINATING ORGANISATION Deutsches Klimarechenzentrum (DKRZ), Germany

OTHER PARTNERS
• CNRS - Centre National de la Recherche Scientifique, France
• Barcelona Supercomputing Centre (BSC), Spain
• European Centre for Medium-Range Weather Forecasts, United Kingdom
• Max-Planck-Institut für Meteorologie, Germany
• Sveriges Meteorologiska och Hydrologiska Institut, Sweden
• CERFACS - Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique, France
• National University of Ireland Galway (Irish Centre for High End Computing), Ireland
• Met Office, UK
• Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici, Italy
• The University of Reading, UK
• STFC - Science and Technology Facilities Council, UK
• Bull SAS (Atos Group), France
• Seagate Systems UK Limited, UK
• DWD - Deutscher Wetterdienst, Germany
• Arm Limited, UK

OBJECTIVE
The science driver for ESiWACE is the establishment of global weather and climate models that allow simulating convective clouds and small-scale ocean eddies, to enable reliable simulation of high-impact regional events. This is not affordable today, considering the required compute power, data loads and tight production schedules (for weather forecasts), and will require exascale computing. We address and quantify the computational challenges in achieving this science case. The centre of excellence ESiWACE further leverages two European networks for this purpose: ENES (European Network for Earth System Modelling) and ECMWF (European Centre for Medium-Range Weather Forecasts).

Weather and climate models are being pushed towards global 1-2.5 km resolution (cf. the figure "2.5-km global simulation using the ICON model"). Performance predictions concerning the scalability of the models at exascale are made, and efficient tools for data and workflow management are developed. Other contributions of ESiWACE are handbooks on the system and software stacks required for installation and operation of the various complex models on HPC platforms. This will substantially improve the usability and portability of the weather and climate models. Future work in the scope of ESiWACE comprises very high-resolution runs of coupled atmosphere-ocean models based on the models ICON and EC-Earth.

CONTACT Joachim Biercamp [email protected]

CALL EINFRA-5-2015

PROJECT TIMESPAN 01/09/2015 - 31/08/2019

2.5-km global simulation using the ICON model

ESiWACE2 – Centre of Excellence in Simulation of Weather and Climate in Europe, Phase 2

www.esiwace.eu
: @ESIWACE

COORDINATING ORGANISATION Deutsches Klimarechenzentrum (DKRZ), Germany

OTHER PARTNERS
• CNRS - Centre National de la Recherche Scientifique, France
• ECMWF - European Centre for Medium-Range Weather Forecasts, United Kingdom
• Barcelona Supercomputing Center (BSC), Spain
• Max-Planck-Institut für Meteorologie, Germany
• Sveriges meteorologiska och hydrologiska institut, Sweden
• CERFACS - Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique, France
• National University of Ireland Galway (Irish Centre for High End Computing), Ireland
• Met Office, United Kingdom
• Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici, Italy
• The University of Reading, United Kingdom
• STFC - Science and Technology Facilities Council, United Kingdom
• Bull SAS (Atos Group), France
• Seagate Systems UK Limited, United Kingdom
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• The University of Manchester, United Kingdom
• Netherlands eScience Centre, Netherlands
• Federal Office of Meteorology and Climatology, Switzerland
• DataDirect Networks, France
• Mercator Océan, France

ESiWACE2 will leverage the potential of the envisaged EuroHPC pre-exascale systems to push European climate and weather codes to world-leading spatial resolution for production-ready simulations, including the associated data management and data analytics workflows. To this end, the portfolio of climate models supported by the project has been extended with respect to the prototype-like demonstrators of ESiWACE1. Besides, while the central focus of ESiWACE2 lies on achieving scientific performance goals with HPC systems that will become available within the next four years, research and development to prepare the community for the systems of the exascale era constitutes another project goal. These developments will be complemented by the establishment of new technologies such as domain-specific languages and tools to minimise data output for ensemble runs. Co-design between model developers, HPC manufacturers and HPC centres is to be fostered, in particular through the design and employment of High-Performance Climate and Weather benchmarks, with the first version of these benchmarks feeding into ESiWACE2 through the FETHPC research project ESCAPE-2. Additionally, training and dedicated services will be set up to enable the wider community to use upcoming pre-exascale and exascale supercomputers efficiently.

CONTACT Joachim Biercamp [email protected]

CALL INFRAEDI-02-2018

PROJECT TIMESPAN 01/01/2019 - 31/12/2022

EXCELLERAT

www.excellerat.eu

: @EXCELLERAT_CoE

COORDINATING ORGANISATION Universität Stuttgart (HLRS), Germany

OTHER PARTNERS
• The University of Edinburgh (EPCC), United Kingdom
• CINECA Consorzio Interuniversitario, Italy
• SICOS BW GmbH, Germany
• Kungliga Tekniska Högskolan (KTH), Sweden
• ARCTUR DOO, Slovenia
• DLR - Deutsches Zentrum für Luft- und Raumfahrt EV, Germany
• Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), France
• Barcelona Supercomputing Centre (BSC), Spain
• SSC-Services GmbH, Germany
• Fraunhofer Gesellschaft zur Förderung der Angewandten Forschung E.V., Germany
• TERATEC, France
• Rheinisch-Westfälische Technische Hochschule Aachen, Germany

CONTACT
Dr.-Ing. Bastian Koller
[email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021

OBJECTIVE
Engineering applications will be among the first to exploit Exascale, not only in academia but also in industry. In fact, industrial engineering is *the* field with the highest Exascale potential. The EXCELLERAT activity therefore brings together the premium European expertise to establish a Centre of Excellence in Engineering Applications on HPC with a broad service portfolio, paving the way for the evolution towards Exascale, perfectly in line with the European HPC Strategy as implemented through the EuroHPC Joint Undertaking. To fulfil its mission, EXCELLERAT will focus its developments on six carefully chosen reference applications (Nek5000, Alya, AVBP, Fluidity, FEniCS, Flucs), which were analysed for their potential to support the aim of achieving Exascale performance in HPC for Engineering. They are thus promising candidates to act as showcases for the evolution of applications towards execution at high scale on Exascale Demonstrators, Pre-Exascale Systems and Exascale Machines.

EXCELLERAT addresses the setup of the Centre as an entity acting as a single hub, covering a wide range of issues, from "non-pure-technical" services, such as access to knowledge or networking, up to technical services, such as co-design, scalability enhancement or code porting to new (Exa) hardware. As the consortium contains key players in HPC, HPDA and AI, as well as experts for the reference applications, impact (e.g. code improvements and awareness raising) is guaranteed. The scientific excellence of the EXCELLERAT consortium enables evolution, optimization, scaling and porting of applications towards disruptive technologies and increases Europe's competitiveness in engineering. Within the frame of the project, EXCELLERAT will prove the applicability of the results not only for the six chosen reference applications, but beyond. Thus (but not only for that purpose), EXCELLERAT will extend the recipients of its developments beyond the consortium and interact via diverse mechanisms to integrate external stakeholders of its value network into its evolution.

Base pillars of the EXCELLERAT Centre of Excellence

68 69 HiDALGO – HPC and Big Data EUROPEAN HIGH-PERFORMANCE COMPUTING HANDBOOK 2019 HiDALGO Technologies for Global Challenges

Project website pending

COORDINATING ORGANISATION Atos Spain, Spain

OTHER PARTNERS
• Universität Stuttgart – High Performance Computing Center Stuttgart, Germany
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk – Poznan Supercomputing and Networking Center, Poland
• ICCS (Institute of Communication and Computer Systems), Greece
• Brunel University London, United Kingdom
• Know Center GmbH, Austria
• Szechenyi Istvan University, Hungary
• Paris-Lodron-Universität Salzburg, Austria
• European Centre for Medium-Range Weather Forecasts, United Kingdom
• MoonStar Communications GmbH, Germany
• DIALOGIK Gemeinnützige Gesellschaft für Kommunikations- und Kooperationsforschung MBH, Germany
• Magyar Kozut Nonprofit Zrt., Hungary
• ARH Informatikai Zrt., Hungary

CONTACT
Francisco Javier Nieto
[email protected]

OBJECTIVE
Developing evidence and understanding concerning Global Challenges and their underlying parameters is rapidly becoming a vital challenge for modern societies. Various examples, such as health care, the transition to green technologies or the evolution of the global climate, up to hazards and stress tests for the financial sector, demonstrate the complexity of the involved systems and underpin their interdisciplinarity as well as their globality. This becomes even more obvious if coupled systems are considered: problem statements and their corresponding parameters depend on each other, which results in interconnected simulations with a tremendous overall complexity. Although the process of bringing together the different communities has already started within the Centre of Excellence for Global Systems Science (CoeGSS), the importance of assisted decision making for addressing global, multi-dimensional problems is greater than ever. Global decisions with their dependencies cannot be based on incomplete problem assessments or gut feelings anymore, since impacts cannot be foreseen without an accurate problem representation and its systemic evolution. HiDALGO therefore bridges that shortcoming by enabling highly accurate simulations, data analytics and data visualisation, and also by providing technology as well as knowledge on how to integrate the various workflows and the corresponding data.

CALL INFRAEDI-02-2018

PROJECT TIMESPAN 01/12/2018 - 30/11/2021

MaX – Materials design at the Exascale


http://www.max-centre.eu : @max_center2

COORDINATING ORGANISATION CNR Nano, Modena, Italy

OTHER PARTNERS
• SISSA Trieste, Italy (Stefano Baroni)
• ICN2 Barcelona, Spain (Pablo Ordejón)
• FZJ Jülich, Germany (Stefan Blügel, Dirk Pleiter)
• EPFL Lausanne, Switzerland (Nicola Marzari)
• CINECA Bologna, Italy (Carlo Cavazzoni)
• Barcelona Supercomputing Centre (BSC), Spain (Stephan Mohr)
• CSCS ETH Zürich, Switzerland (Joost VandeVondele)
• CEA, France (Thierry Deutsch)
• UGent, Belgium (Stefaan Cottenier)
• E4 Computer Engineering, Italy (Fabrizio Magugliani)
• Arm, UK (Filippo Spiga)
• ICTP UNESCO, Trieste, Italy (Ivan Girotto)
• Trust-IT, Italy (Silvana Muscella)

CONTACT
Elisa Molinari, Director
[email protected]
+39 059 2055 629

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021

MaX - Materials design at the eXascale is a Centre of Excellence focused on driving the evolution and exascale transition of materials science codes and libraries, and on creating an integrated ecosystem of codes, data, workflows and analysis tools for high-performance (HPC) and high-throughput computing (HTC). Particular emphasis is on co-design activities to ensure that future HPC architectures are well suited for the materials science applications and their users.

PROJECT DESCRIPTION
The focus of MaX is on first-principles materials science applications, i.e. codes that allow predictive simulations of materials and their properties from the laws of quantum physics and chemistry, without resorting to empirical parameters. The exascale perspective is expected to boost the massive use of these codes in designing materials structures and functionalities for research and manufacturing. MaX works with code developers and experts from HPC centres to support this transition. It focuses on selected complementary open-source codes: BigDFT, CP2K, FLEUR, Quantum ESPRESSO, SIESTA, YAMBO. In addition, it contributes to the development of the AiiDA materials informatics infrastructure and the Materials Cloud environment.

The main actions include:
1. code and library restructuring: modularizing and adapting for heterogeneous architectures, as well as adopting new algorithms;
2. codesign: working on codes and providing feedback to architects and integrators; developing workflows and turn-key solutions for properties calculations and curated data sharing;
3. enabling convergence of HPC, HTC and high-performance data analytics;
4. widening access to codes, training, and transferring know-how to user communities.

Results and lessons learned on MaX flagship codes and projects are made available to the developer and user communities at large.

Performance Optimisation and Productivity 2

POP2 POP2

https://pop-coe.eu : @POP_HPC

COORDINATING ORGANISATION Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
• Universität Stuttgart HLRS, Germany
• Forschungszentrum Jülich GmbH, Germany
• Numerical Algorithms Group Ltd, UK
• Rheinisch-Westfälische Technische Hochschule Aachen (RWTH-AACHEN), Germany
• Teratec, France
• Université de Versailles Saint Quentin, France
• Vysoka Skola Banska - Technicka Univerzita Ostrava, Czech Republic

The growing complexity of parallel computers is leading to a situation where code owners and users are not aware of the detailed issues affecting the performance of their applications. The result is often an inefficient use of the infrastructures. Even in cases where the need for further performance and efficiency is perceived, the code developers may not have enough insight into the detailed causes to properly address the problem. This may lead to blind attempts to restructure codes in ways that might not be the most productive.

POP2 extends and expands the activities successfully carried out by the POP Centre of Excellence since October 2015. The effort in the POP project resulted in more than 120 assessment services provided to customers in academia, research and industry, helping them to better understand the behaviour and improve the performance of their applications. The external view, advice, and help provided by POP have been extremely useful for many of these customers to significantly improve the performance of their codes, by factors of 20% in some cases but up to 2x or even 10x in others. The POP experience was also extremely valuable to identify issues in methodologies and tools that, if improved, will reduce the assessment cycle time.

The objective of POP2 is to continue and improve the POP project, operating a Centre of Excellence in Computing Applications in the area of Performance Optimisation and Productivity with a special focus on very large scale towards exascale. POP2 will continue the service-oriented activity, giving code developers, users and infrastructure operators an impartial external view that will help them improve their codes. We will still target the same categories of users and focus on identifying performance issues and proposing techniques that can help applications in the direction of exascale. Our objective is to perform 180 services over a three-year period.

CONTACT
Jesus Labarta
[email protected]
Judit Gimenez
[email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021


European Processor

EPI

http://european-processor-initiative.eu
: @EUProcessor
: european-processor-initiative
: www.youtube.com/channel/UCGvQcTosJdWhd013SHnIbpA/featured

COORDINATING ORGANISATION Bull SAS (Atos group), France

OTHER PARTNERS
• BSC, Barcelona Supercomputing Center - centro nacional de supercomputacion, Spain
• IFAG, Infineon Technologies AG, Germany
• SemiDynamics, SemiDynamics technology services SL, Spain
• CEA, Commissariat à l’Energie Atomique et aux énergies alternatives, France
• CHALMERS, Chalmers tekniska högskola, Sweden
• ETH Zürich, Eidgenössische Technische Hochschule Zürich, Switzerland
• FORTH, Foundation for Research and Technology Hellas, Greece
• GENCI, Grand Equipement National de Calcul Intensif, France
• IST, Instituto Superior Técnico, Portugal
• JÜLICH, Forschungszentrum Jülich GmbH, Germany
• UNIBO, Alma mater studiorum - Universita di Bologna, Italy
• UNIZG-FER, Sveučilište u Zagrebu, Fakultet elektrotehnike i računarstva, Croatia
• Fraunhofer, Fraunhofer Gesellschaft zur Förderung der angewandten Forschung E.V., Germany
• ST-I, STMICROELECTRONICS SRL, Italy
• E4, E4 Computer Engineering SPA, Italy
• UNIPI, Universita di Pisa, Italy
• SURFsara BV, Netherlands
• Kalray SA, France
• Extoll GmbH, Germany
• CINECA, Cineca Consorzio Interuniversitario, Italy
• BMW Group, Bayerische Motoren Werke Aktiengesellschaft, Germany
• Elektrobit Automotive GmbH, Germany
• KIT, Karlsruher Institut für Technologie, Germany
• Menta SAS, France
• Prove & Run, France
• SIPEARL SAS, France

Europe has an ambitious plan to become a main player in supercomputing. One of the core components for achieving that goal is a processor. The European Processor Initiative (EPI) is part of a broader strategy to develop an independent European HPC industry based on domestic and innovative technologies, as presented in the EuroHPC Joint Undertaking proposed by the European Commission. The general objective of EPI is to design a roadmap and develop the key Intellectual Property for future European low-power processors addressing extreme scale computing (exascale), high-performance, big-data and emerging verticals (e.g. automotive computing) and other fields that require highly efficient computing infrastructure. More precisely, EPI aims at establishing the key technologies to reach three fundamental goals:
1. Developing low-power processor technology to be included in an advanced experimental platform towards a pre-exascale system for Europe in 2021 and exascale in 2022-2023;
2. Ensuring that a significant part of that technology and intellectual property is European;
3. Ensuring that the application areas of the technology are not limited only to HPC, but cover other areas, such as automotive and data centers, thus ensuring the economic viability of the initiative.

EPI gathers 27 partners from 10 European countries, with a wide range of expertise and background: HPC, supercomputing centers and automotive computing, including researchers and key industrial players. The fact that the envisioned European processor is planned to be based on already existing tools, either owned by the partners or offered as open source with a large community of users, provides three key exploitation advantages: (1) the time-to-market will be reduced, as most of these tools are already used in industry and well known; (2) it will enable EPI partners to incorporate the results in their commercial portfolio or in their scientific roadmap; (3) it fundamentally reduces the technological risk associated with the advanced technologies developed and implemented in EPI.

EPI covers a complete range of expertise, skills, and competencies needed to design and execute a sustainable roadmap for research and innovation in processor and computing technology, fostering future exascale HPC and emerging applications, including Big Data and automotive computing for autonomous vehicles. Development of a new processor to be at the core of future computing systems will be divided into several streams:
• Common Platform and global architecture [stream 1]
• HPC general purpose processor [stream 2]
• Accelerator [stream 3]
• Automotive platform [stream 4]

The results from the two streams related to the general-purpose processor and accelerator chips will generate a heterogeneous, energy-efficient CPU for use in both standard and non-traditional compute-intensive segments, e.g. automotive, where the state of the art in autonomous driving requires significant computational resources. EPI strives to maximize the synergies between the two streams and will work with existing EU initiatives on technology, infrastructure and applications, to position Europe as a world leader in HPC and in emerging markets for the exascale era, such as automotive computing for autonomous driving.

The EPI timeline

CONTACT
[email protected]

CALL
SGA-LPMT-01-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021

Mont-Blanc 2020 - European scalable, modular and power-efficient HPC processor

www.montblanc-project.eu : @MontBlanc_Eu : @mont-blanc-project

COORDINATING ORGANISATION Bull (Atos group), France

OTHER PARTNERS
• Arm (United Kingdom)
• Barcelona Supercomputing Centre (BSC), Spain
• CEA - Commissariat à l’Energie Atomique et aux énergies alternatives, France
• Jülich Forschungszentrum, Germany
• Kalray, France
• SemiDynamics, Spain

Following on from the three successive Mont-Blanc projects since 2011, the three core partners Arm, Barcelona Supercomputing Center and Bull (Atos Group) have united again to trigger the development of the next generation of industrial processor for Big Data and High Performance Computing. The Mont-Blanc 2020 consortium also includes CEA, Forschungszentrum Jülich, Kalray, and SemiDynamics.

The Mont-Blanc 2020 project intends to pave the way to the future low-power European processor for Exascale. To improve the economic sustainability of the processor generations that will result from the Mont-Blanc 2020 effort, the project includes the analysis of the requirements of other markets. The project's strategy, based on modular packaging, would make it possible to create a family of SoCs targeting different markets, such as "embedded HPC" for autonomous driving. The project's actual objectives are to:
• define a low-power System-on-Chip architecture targeting Exascale;
• implement new critical building blocks (IPs) and provide a blueprint for its first-generation implementation;
• deliver initial proof-of-concept demonstration of its critical components on real-life applications;
• explore the reuse of the building blocks to serve other markets than HPC, with methodologies enabling a better time-predictability, especially for mixed-critical applications where guaranteed execution & response times are crucial.

Mont-Blanc 2020 will provide new IPs, such as a new low-power mesh interconnect based on the Coherent Hub Interface (CHI) architecture. Mont-Blanc 2020 will demonstrate the performance with a prototype implementation in RTL for some of its key components, and a demonstration on an emulation platform.

The Mont-Blanc 2020 project is at the heart of the European exascale supercomputer effort, since most of the IP developed within the project will be reused and productized in the European Processor Initiative (EPI).

The Mont-Blanc 2020 physical architecture
A Key Output: the MB2020 demonstrator

CONTACT
Etienne Walter
[email protected]

CALL
ICT-05-2017

PROJECT TIMESPAN
01/12/2017 - 30/11/2020


HPC and Big Data testbeds

CYBELE - Fostering precision agriculture and livestock farming through secure access to large-scale HPC-enabled virtual industrial experimentation environment empowering scalable big data analytics

www.cybele-project.eu : @CYBELE_H2020 : @Cybele-H2020 : @Cybele-project

COORDINATING ORGANISATION Waterford Institute of Technology, Ireland

OTHER PARTNERS
• Barcelona Supercomputing Centre (BSC), Spain
• Bull SAS (Atos group), France
• CINECA Consorzio Interuniversitario, Italy
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, Poland
• RYAX Technologies, France
• Universität Stuttgart, Germany
• EXODUS Anonymos Etaireia Pliroforikis, Greece
• LEANXCALE SL, Spain
• ICCS (Institute of Communication and Computer Systems), Greece
• Centre for Research and Technology Hellas - Information Technologies Institute (ITI), Greece
• Tampereen Korkeakoulusaatio SR, Finland
• UBITECH, Greece
• University of Piraeus Research Center, Greece
• SUITE5 Data Intelligence Solutions Ltd, Cyprus
• INTRASOFT International SA, Luxembourg
• ENGINEERING - Ingegneria Informatica SPA, Italy
• Wageningen University, Netherlands
• Stichting Wageningen Research, Netherlands
• BIOSENSE Institute, Serbia
• Donau Soja Gemeinnutzige GmbH, Austria
• Agroknow IKE, Greece
• GMV Aerospace and Defence SA, Spain
• Federacion de Cooperativas Agroalimentares de la Comunidad Valenciana, Spain
• University of Strathclyde, United Kingdom
• ILVO - Instituut voor Landbouw- en Visserijonderzoek, Belgium
• VION Food Nederland BV, Netherlands
• Olokliromena Pliroforiaka Sistimataae, Greece
• Kobenhavns Universitet, Denmark
• EVENFLOW, Belgium
• Open Geospatial Consortium (Europe) Ltd, United Kingdom

CYBELE generates innovation and creates value in the domain of agri-food and its verticals in the sub-domains of Precision Agriculture (PA) and Precision Livestock Farming (PLF) in particular. This will be demonstrated by real-life industrial cases, empowering capacity building within the industrial and research community. CYBELE aims at demonstrating how the convergence of HPC, Big Data, Cloud Computing and the Internet of Things can revolutionise farming, reduce scarcity and increase food supply, bringing social, economic, and environmental benefits.

CYBELE intends to safeguard that stakeholders have integrated, unmediated access to a vast amount of large-scale datasets of diverse types from a variety of sources. By providing secure and unmediated access to large-scale HPC infrastructures supporting data discovery, processing, combination and visualisation services, stakeholders shall be enabled to generate more value and deeper insights in operations.

CYBELE develops large-scale HPC-enabled test beds and delivers a distributed big data management architecture and a data management strategy providing:
1. integrated, unmediated access to large-scale datasets of diverse types from a multitude of distributed data sources,
2. a data and service driven virtual HPC-enabled environment supporting the execution of multi-parametric agri-food related impact model experiments, optimising the features of processing large scale datasets, and
3. a bouquet of domain-specific and generic services on top of the virtual research environment facilitating the elicitation of knowledge from big agri-food related data, addressing the issue of increasing responsiveness and empowering automation-assisted decision making, empowering the stakeholders to use resources in a more environmentally responsible manner, improve sourcing decisions, and implement circular-economy solutions in the food chain.

CONTACT
[email protected]
[email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN
01/01/2019 - 31/12/2021

DeepHealth - Deep-Learning and HPC to Boost Biomedical Applications for Health

https://deephealth-project.eu : @DeepHealthEU

COORDINATING ORGANISATION Everis Spain SL, Spain

OTHER PARTNERS
• Azienda Ospedaliera Citta della Salute e della Scienza di Torino, Italy
• Barcelona Supercomputing Centre (BSC), Spain
• Centre Hospitalier Universitaire Vaudois, Switzerland
• Centro di Ricerca, Sviluppo e Studi Superiori in Sardegna SRL, Italy
• CEA - Commissariat à l’Energie Atomique et aux énergies alternatives, France
• EPFL - Ecole Polytechnique Fédérale de Lausanne, Switzerland
• Fundacion para el Fomento de la Investigacion Sanitaria y Biomedica de la Comunitat Valenciana, Spain
• Karolinska Institutet, Sweden
• Otto-von-Guericke-Universität Magdeburg, Germany
• Philips Medical Systems Nederland BV, Netherlands
• Pro Design Electronic GmbH, Germany
• SIVECO Romania SA, Romania
• Spitalul Clinic Prof Dr Theodor Burghele, Romania
• STELAR Security Technology Law Research UG, Germany
• Thales SIX GTS France SAS, France
• TREE Technology SA, Spain
• Università degli Studi di Modena e Reggio Emilia, Italy
• Università degli Studi di Torino, Italy
• Universitat Politecnica de Valencia, Spain
• WINGS ICT Solutions Information & Communication Technologies IKE, Greece

The main goal of the DeepHealth project is to put HPC computing power at the service of biomedical applications, and to apply Deep Learning (DL) and Computer Vision (CV) techniques on large and complex biomedical datasets to support new and more efficient ways of diagnosis, monitoring and treatment of diseases.

THE DEEPHEALTH TOOLKIT: A KEY OPEN-SOURCE ASSET FOR EHEALTH AI-BASED SOLUTIONS
At the centre of the proposed innovations is the DeepHealth toolkit, an open-source solution that will provide a unified framework to exploit heterogeneous HPC and big data architectures assembled with DL and CV capabilities to optimise the training of predictive models. The toolkit will be ready to be integrated in current and new biomedical platforms or applications. It is composed of two core libraries, the European Distributed Deep Learning Library (EDDLL) and the European Computer Vision Library (ECVL), and a dedicated front-end facilitating the use of the libraries by computer and data scientists.

THE DEEPHEALTH CONCEPT – APPLICATION SCENARIOS
The DeepHealth concept focuses on scenarios where image processing is needed for diagnosis. In the training environment, IT experts work with datasets of images for training predictive models. In the production environment, the medical personnel ingest an image coming from a scan into a platform or biomedical application that uses predictive models to get clues to support them during diagnosis. The DeepHealth toolkit will allow the IT staff to train models and run the training algorithms over hybrid HPC + big data architectures without a profound knowledge of DL, CV, HPC or big data, increasing their productivity and reducing the time required to do it.

14 PILOTS AND SEVEN PLATFORMS TO VALIDATE THE DEEPHEALTH PROPOSED INNOVATIONS
The DeepHealth innovations will be validated in 14 pilot test-beds through the use of seven different biomedical and AI software platforms provided by partners. The libraries will be integrated and validated in seven AI and biomedical software platforms: commercial platforms (everis Lumen, PHILIPS Open Innovation Platform, THALES PIAF) and research-oriented platforms (CEA's ExpressIFTM, CRS4's Digital Pathology, WINGS MigraineNet). The use cases cover three main areas: (i) Neurological diseases, (ii) Tumour detection and early cancer prediction, and (iii) Digital pathology and automated image annotation. The pilots will allow evaluating the performance of the proposed solutions in terms of the time needed for pre-processing images, the time needed to train models and the time to put models in production. In some cases, it is expected to reduce these times from days or weeks to just hours. This is one of the major expected impacts of the DeepHealth project.

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825111, DeepHealth Project.

CONTACT
Monica Caballero (Project manager)
[email protected]
Jon Ander Gómez (Technical manager)
[email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN
01/01/2019 - 31/12/2021

EVOLVE - HPC and Cloud-enhanced Testbed for Extracting Value from Diverse Data at Large Scale

www.evolve-h2020.eu : @Evolve_H2020 : @evolve-h2020

COORDINATING ORGANISATION Datadirect Networks France, France

OTHER PARTNERS
• AVL LIST GmbH, Austria
• BMW - Bayerische Motoren Werke AG, Germany
• Bull SAS (Atos group), France
• Cybeletech, France
• Globaz S.A., Portugal
• IBM Ireland Ltd, Ireland
• Idryma Technologias Kai Erevnas, Greece
• ICCS (Institute of Communication and Computer Systems), Greece
• KOOLA d.o.o., Bosnia and Herzegovina
• Kompetenzzentrum - Das Virtuelle Fahrzeug, Forschungsgesellschaft mbH, Austria
• MEMEX SRL, Italy
• MEMOSCALE AS, Norway
• NEUROCOM Luxembourg SA, Luxembourg
• ONAPP Ltd, Gibraltar
• Space Hellas SA, Greece
• Thales Alenia Space France SAS, France
• TIEMME SPA, Italy
• WEBLYZARD Technology GmbH, Austria

EVOLVE is a pan-European Innovation Action with 19 key partners from 11 European countries, introducing important elements of High-Performance Computing (HPC) and Cloud in Big Data platforms, taking advantage of recent technological advancements to enable cost-effective applications in 7 different pilots, to keep up with the unprecedented data growth we are experiencing. EVOLVE aims to build a large-scale testbed by integrating technology from:
• The HPC world: an advanced computing platform with HPC features and systems software.
• The Big Data world: a versatile big-data processing stack for end-to-end workflows.
• The Cloud world: ease of deployment, access, and use in a shared manner, while addressing data protection.

Therefore, one key differentiator of EVOLVE is the ambition to tackle heterogeneity: in the infrastructure itself, from data processing to data movement aspects, for workflow optimization. Heterogeneity is also addressed at the level of user needs, with support of cloud and big data tools on an HPC platform.

EVOLVE aims to take concrete and decisive steps in bringing together the Big Data, HPC, and Cloud worlds, and to increase the ability to extract value from massive and demanding datasets. EVOLVE aims to bring the following benefits for processing large and demanding datasets:
• Performance: reduced turn-around time for domain-experts, industry (large and SMEs), and end-users.
• Experts: increased productivity when designing new products and services, by processing large datasets.
• Businesses: reduced capital and operational costs for acquiring and maintaining computing infrastructure.
• Society: accelerated innovation via faster design and deployment of innovative services that unleash creativity.

EVOLVE intends to build and demonstrate the proposed testbed with real-life, massive datasets from demanding application areas. To realize this vision, EVOLVE brings together technologies and pilot partners from EU industry with demonstrated experience, established markets, and vested interest. Furthermore, EVOLVE will conduct a set of 10-15 Proof-of-Concepts with stakeholders from the Big Data value chain to build up digital ecosystems and achieve a broader market penetration.

CONTACT
Catarina Pereira
[email protected]
Jean-Thomas Acquaviva
[email protected]
Pierre-Aimé Agnel
[email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN 01/12/2018 - 30/11/2021

LEXIS - Large-scale EXecution for Industry & Society

http://lexis-project.eu
: @LexisProject
: @LEXIS2020
: @lexis-project

COORDINATING ORGANISATION IT4Innovations, VSB – Technical University of Ostrava, Czechia

OTHER PARTNERS
• Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, Germany
• Atos BDS (Bull), France
• Bayerische Akademie der Wissenschaften / Leibniz Rechenzentrum der BAdW, Germany
• Bayncore Labs Ltd, Ireland
• CEA, Commissariat à l’Energie Atomique et aux énergies alternatives, France
• CYCLOPS Labs GmbH, Switzerland
• Centro Internazionale in Monitoraggio Ambientale - Fondazione CIMA, Italy
• ECMWF, European Centre for Medium Range Weather Forecasts, United Kingdom
• GE AVIO SRL, Italy
• Helmholtz Zentrum Potsdam Deutsches Geoforschungs Zentrum GFZ, Germany
• Information Technology for Humanitarian Assistance Cooperation and Action, Italy
• LINKS Foundation, Italy
• NUMTECH SARL, France
• OUTPOST24, France
• Teseo Eiffage Energie Systèmes, Italy

The increasing quantities of data generated by modern industrial and business processes pose enormous challenges for organisations seeking to glean knowledge and understanding from the data. Combinations of HPC, Cloud and Big Data technologies are key to meeting the increasingly diverse needs of large and small organisations alike. Critically, access to powerful compute platforms for SMEs, which has been difficult for both technical and financial reasons, may now be possible.

The LEXIS (Large-scale EXecution for Industry & Society) project will build an advanced engineering platform at the confluence of HPC, Cloud and Big Data, which will leverage large-scale geographically-distributed resources from existing HPC infrastructure, employ Big Data analytic solutions and augment them with Cloud services. Driven by the requirements of the pilots, the LEXIS platform will build on best-of-breed data management solutions (EUDAT) and advanced, distributed orchestration solutions (TOSCA), augmenting them with new efficient hardware capabilities in the form of Data Nodes and federation, usage monitoring and accounting/billing support, to realise an innovative solution oriented to stimulate the interest of European industries and to create an ecosystem of organisations that could benefit from the implemented LEXIS platform, which sets up a connection between HPC, HPDA and Data Management.

The consortium will develop a demonstrator with a significant Open Source dimension, including validation, testing and documentation. It will be validated in large-scale pilots in the industrial and scientific sectors (Aeronautics, Earthquake and Tsunami, Weather and Climate), where significant improvements in KPIs, including job execution time and solution accuracy, are anticipated. LEXIS will organise a specific call stimulating adoption of the project framework and stakeholders' engagement on the targeted pilots.

LEXIS will promote the solution to the HPC, Cloud and Big Data sectors, maximising impact through targeted and qualified communication. LEXIS brings together a consortium with the skills and experience to deliver a complex multi-faceted project, spanning a range of complex technologies across seven European countries, including large industry, flagship HPC centres, industrial and scientific compute pilot users, technology providers and SMEs.

CONTACT
Jan Martinovic
[email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN
01/01/2019 - 30/06/2021


International Cooperation

ENERXICO - Supercomputing and Energy for Mexico

www.enerxico-project.eu : @enerxico-project

COORDINATING ORGANISATION • Barcelona Supercomputing Centre (BSC), Spain • Instituto Nacional de Investigaciones Nucleares (ININ), Mexico

OTHER PARTNERS
• Bull SAS (Atos Group), France
• Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional (CINVESTAV), Mexico
• Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT), Spain
• Instituto Mexicano del Petroleo (IMP), Mexico
• Instituto Nacional de Investigaciones Nucleares (ININ), Mexico
• Instituto Politecnico Nacional (IPN-ESUA), Mexico
• Technische Universität München (TUM), Germany
• Universidad Autonoma Metropolitana (UAM-A), Mexico
• Instituto de Ingenieria de la Universidad Nacional Autonoma de Mexico (UNAM-IINGEN), Mexico
• Universitat Politecnica de Valencia (UPV), Spain
• Université Grenoble Alpes (UGA), France
• IBERDROLA, Spain
• REPSOL SA, Spain

This project will apply exascale HPC techniques to different energy-industry simulations of critical interest for Mexico. ENERXICO will provide solutions for the oil & gas industry in upstream, midstream and downstream problems, for the wind energy industry, and for combustion efficiency in transportation. The project brings together the main stakeholders of the energy industry in Mexico and European energy companies working in the Mexican market, jointly with the main European HPC company. The main objectives of the project are:
• to develop beyond-the-state-of-the-art high performance simulation tools that can help the modernization of the Mexican energy industry and are also of interest to European companies; these simulation tools should be ready to be used on the exascale computers that are in the roadmap of European IT companies;
• to improve the cooperation between industries from the EU and Mexico;
• to improve the cooperation between the leading research groups in the EU and Mexico.

PROJECT OBSERVER Petroleos Mexicanos Exploracion y Produccion (PEMEX), Mexico

CONTACT Rose Gregorio [email protected] Madeleine Gray [email protected]

CALL FETHPC-01-2018

PROJECT TIMESPAN 01/06/2019 - 31/05/2021

HPCWE - High performance computing for wind energy

www.hpcwe-project.eu

COORDINATING ORGANISATION The University of Nottingham, United Kingdom

OTHER PARTNERS
• Danmarks Tekniske Universitet, Denmark
• Electricité de France (EDF), France
• Imperial College of Science Technology and Medicine, United Kingdom
• The University of Edinburgh (EPCC), United Kingdom
• Universidade de Sao Paulo, Brazil
• Universidade Estadual de Campinas, Brazil
• Universidade Federal de Santa Catarina, Brazil
• Universität Stuttgart, Germany
• Universiteit Twente, Netherlands
• VORTEX Factoria de Calculs SL, Spain

Wind energy has become increasingly important as a clean and renewable alternative to fossil fuels in the energy portfolios of both Europe and Brazil. At almost every stage in wind energy exploitation, ranging from wind turbine design and wind resource assessment to wind farm layout and operations, the application of HPC is a must. The goal of HPCWE is to address the following key open challenges in applying HPC to wind energy:
(i) efficient use of HPC resources in wind turbine simulations, via the development and implementation of novel algorithms. This leads to the development of methods for verification, validation, uncertainty quantification (VVUQ) and in-situ scientific data interpretation;
(ii) accurate integration of meso-scale atmosphere dynamics and micro-scale wind turbine flow simulations, as this interface is the key to accurate wind energy simulations. In HPCWE a novel scale integration approach will be applied and tested through test cases in a Brazilian wind farm; and
(iii) adjoint-based optimization, which implies large I/O consumption as well as storing data on large-scale file systems. HPCWE research aims at alleviating the bottlenecks caused by data transfer from memory to disk.

The HPCWE consortium consists of 13 partners representing top academic institutes, HPC centres and industries in Europe and Brazil. By exploiting this collaboration, the consortium will develop novel algorithms, implement them in state-of-the-art codes and test the codes in academic and industrial cases to benefit the wind energy industry and research in both Europe and Brazil.

CONTACT
Xuerui Mao, Project Coordinator
[email protected]
+44 115 748 6020

CALL
FETHPC-01-2018

PROJECT TIMESPAN
01/06/2019 - 31/05/2021

Simulation of the wake flow downstream of a wind turbine


Other projects

ARGO - WCET-Aware Parallelization of Model-Based Applications for Heterogeneous Parallel Systems

http://argo-project.eu

COORDINATING ORGANISATION Karlsruher Institut für Technologie (KIT), Germany

OTHER PARTNERS
• AbsInt Angewandte Informatik GmbH, Germany
• DLR - Deutsches Zentrum für Luft- und Raumfahrt EV, Germany
• emmtrix Technologies GmbH, Germany
• Fraunhofer Gesellschaft zur Förderung der angewandten Forschung E.V., Germany
• Scilab Enterprises, France
• Technologiko Ekpaideftiko Idryma (TEI) Dytikis Elladas, Greece
• Université de Rennes I, France

CONTACT
Prof. Dr.-Ing. Dr. h. c. Jürgen Becker, Coordinator
[email protected]

CALL
ICT-04-2015

PROJECT TIMESPAN
01/01/2016 - 31/03/2019

Increasing performance and reducing cost, while maintaining safety levels and programmability, are the key demands for embedded and cyber-physical systems in European domains, e.g. aerospace, automation, and automotive. For many applications, the necessary performance with low energy consumption can only be provided by customized computing platforms based on heterogeneous many-core architectures. However, their parallel programming with time-critical embedded applications suffers from a complex toolchain and programming process.

The ARGO (WCET-Aware Parallelization of Model-Based Applications for Heterogeneous Parallel Systems) research project addressed this challenge with a holistic tool chain for programming heterogeneous multi- and many-core architectures using automatic parallelization of model-based real-time applications. ARGO enhances WCET-aware automatic parallelization by a cross-layer programming approach combining automatic tool-based and user-guided parallelization to reduce the need for expertise in programming parallel heterogeneous architectures. An integrated multi-core WCET analysis computes safe bounds for the WCET of parallelized programs, which qualifies the approach for use in the hard real-time domain. The ARGO approach was assessed and demonstrated by prototyping comprehensive time-critical applications from both aerospace and industrial automation domains on customized heterogeneous many-core platforms.

The challenging research and innovation action was implemented by the unique ARGO consortium, which brings together industry, leading research institutes and universities. High-class SMEs such as Scilab Enterprises, AbsInt GmbH and emmtrix Technologies GmbH contributed their diverse know-how in model-based design environments, WCET calculation and automated software parallelization. The academic partners contributed their outstanding expertise in code transformations, automatic parallelization and multi-core WCET analysis.
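The safe-bound idea behind such a multi-core WCET analysis can be illustrated with a toy model (a sketch for intuition only, not the ARGO analysis; the task WCETs, task-to-core mapping and per-task interference penalty are all invented): for a statically mapped fork-join section, a safe bound is the maximum, over cores, of the summed WCETs of the tasks mapped to that core, each inflated by a pessimistic interference term for shared resources.

```python
# Toy safe-bound model for a parallelized fork-join section (illustration only).
def parallel_wcet_bound(task_wcets, mapping, interference_per_task):
    """Safe WCET bound: max over cores of the summed, interference-inflated
    WCETs of the tasks statically mapped to that core."""
    per_core = {}
    for task, core in mapping.items():
        per_core[core] = per_core.get(core, 0) + task_wcets[task] + interference_per_task
    return max(per_core.values())

wcets = {"t1": 40, "t2": 35, "t3": 25, "t4": 30}   # invented per-task WCETs (in µs)
mapping = {"t1": 0, "t2": 0, "t3": 1, "t4": 1}     # invented static task-to-core mapping
print(parallel_wcet_bound(wcets, mapping, interference_per_task=5))  # → 85
```

The real analysis is far more refined (it models caches, buses and memory contention), but the structure is the same: the bound is driven by the most heavily loaded core, which is why WCET-aware parallelization must balance the mapping rather than just the average load.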

ExaMode
EXtreme-scale Analytics via Multimodal Ontology Discovery & Enhancement

www.examode.eu
@examode
@examode.eu

COORDINATING ORGANISATION
University of Applied Sciences Western Switzerland (HES-SO), Switzerland

OTHER PARTNERS
• Azienda Ospedaliera per l'Emergenza Cannizzaro, Italy
• MicroscopeIT Sp. z o.o., Poland
• Ontotext AD, Bulgaria
• Radboud University Medical Center, Netherlands
• SURFsara BV, Netherlands
• Università degli Studi di Padova, Italy

CONTACT
Dr. Manfredo Atzori, [email protected]
Prof. Henning Müller, [email protected]

CALL
ICT-12-2018-2020

PROJECT TIMESPAN
01/01/2019 - 31/12/2022

Exascale volumes of diverse data from distributed sources are continuously produced. Healthcare data stand out in the size produced (projected production in 2020: over 2000 exabytes), heterogeneity (many media and acquisition methods), included knowledge (e.g. diagnostic reports) and commercial value. The supervised nature of deep learning models requires large labelled, annotated datasets, which prevents such models from extracting knowledge and value at scale. ExaMode solves this by enabling easy and fast, weakly supervised knowledge discovery from the heterogeneous exascale data provided by the partners, limiting human interaction. Its objectives include the development and release of extreme analytics methods and tools that are adopted in decision making by industry and hospitals. Deep learning naturally allows building semantic representations of entities and relations in multimodal data. Knowledge discovery is performed via semantic document-level networks in text and the extraction of homogeneous features from heterogeneous images. The results are fused, aligned to medical ontologies, visualized and refined. Knowledge is then applied using a semantic middleware to compress, segment and classify images, and it is exploited in decision support and semantic knowledge management prototypes.

ExaMode is relevant to ICT-12 in several aspects:
1. Challenge: it extracts knowledge and value from heterogeneous, quickly increasing data volumes.
2. Scope: the consortium develops and releases new methods and concepts for extreme scale analytics, also via data compression, to accelerate deep analysis, deliver precise predictions, support decision making and visualize multi-modal knowledge.
3. Impact: the multi-modal/media semantic middleware makes heterogeneous data management and analysis easier and faster, and it improves architectures for complex distributed systems with better tools that increase the speed of data throughput and access, as shown by tests in extreme analysis by industry and in hospitals.
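The weak-supervision idea can be made concrete with an invented miniature example (not ExaMode code; the rules and phrases are illustrative only): image-level labels are derived automatically from the free text of diagnostic reports, so that models can be trained without costly manual annotation of each image.

```python
# Invented illustration (not ExaMode code): rule-based weak labels derived
# from free-text diagnostic reports replace costly manual image annotation.
def weak_label(report: str) -> str:
    text = report.lower()
    if "no evidence of malignancy" in text or "benign" in text:
        return "benign"      # check negations and benign findings first
    if "carcinoma" in text:
        return "malignant"
    return "unknown"         # abstain when no rule fires

print(weak_label("Colon biopsy: invasive adenocarcinoma."))  # → malignant
```

Real weak-supervision pipelines replace such hand-written rules with ontology-backed concept extraction, but the principle is the same: noisy labels obtained cheaply at scale substitute for exhaustive expert annotation.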

M2DC
Modular Microserver DataCentre

http://m2dc.eu
@M2DC_Project
@M2DCproject
@m2dc

COORDINATING ORGANISATION
Instytut Chemii Bioorganicznej Polskiej Akademii Nauk – Poznań Supercomputing and Networking Center (PSNC), Poland

OTHER PARTNERS
• Alliance Services Plus, France
• Arm Limited, United Kingdom
• Beyond.PL SP ZOO, Poland
• CEWE Stiftung & CO KGaA, Germany
• Christmann Informationstechnik + Medien GmbH & CO KG, Germany
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• Huawei Technologies Düsseldorf GmbH, Germany
• Offis EV, Germany
• Politecnico di Milano, Italy
• REFLEX CES, France
• Universität Bielefeld, Germany
• Vodafone Automotive Italia S.P.A., Italy
• XLAB d.o.o., Slovenia

CONTACT
Ariel Oleksiak, Project Coordinator
[email protected]

CALL
ICT-04-2015

PROJECT TIMESPAN
01/01/2016 - 30/06/2019

The Modular Microserver DataCentre (M2DC) project developed a new class of energy-efficient and Total Cost of Ownership (TCO)-optimized appliances with built-in efficiency and dependability enhancements. The appliances are easy to integrate with a broad ecosystem of management software and fully software-defined, enabling optimization for a variety of future demanding applications in a cost-effective way. The highly flexible M2DC server platform enables customization and smooth adaptation to various types of applications, while advanced management strategies and system efficiency enhancements (SEE) can be used to improve energy efficiency, performance, security and reliability. A data-centre-capable abstraction of the underlying heterogeneity of the server is provided by an OpenStack-based middleware.

In more detail, M2DC developed a resource-efficient, highly scalable heterogeneous microserver platform able to integrate various microservers and hardware accelerators. Within the project, the consortium worked on microservers that use high-performance ARMv8 server processors, low-power ARMv8 embedded/mobile processors, and accelerators (MPSoCs, GPUs, and FPGAs). This includes new microserver designs based on ARMv8 and Intel Stratix 10. The platform includes an optimized middleware for deployment of the optimized appliances. It may contain up to 144 low-power and 27 high-performance microservers.

The main features of the platform include energy efficiency, built-in hardware-accelerated functions and lower costs of deployment and upgrades. Baseline benchmarks show the high potential of M2DC appliances for the targeted applications, including photo-finishing systems, IoT data processing, cloud computing, artificial neural networks and HPC. In many cases they reached significant efficiency and Total Cost of Ownership (TCO) gains, higher than 50%.

OPRECOMP
Open transPREcision COMPuting

http://oprecomp.eu
@oprecompProject

COORDINATING ORGANISATION
IBM Research GmbH, Switzerland

OTHER PARTNERS
• Alma Mater Studiorum - Università di Bologna, Italy
• CINECA Consorzio Interuniversitario, Italy
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• GreenWaves Technologies, France
• Technische Universität Kaiserslautern, Germany
• The Queen's University of Belfast, United Kingdom
• Università degli Studi di Perugia, Italy
• Universitat Jaume I de Castellon, Spain

CONTACT
Cristiano Malossi, PhD
[email protected]

CALL
FETPROACT-01-2016

PROJECT TIMESPAN
01/01/2017 - 31/12/2020

Guaranteed numerical precision of each elementary step in a complex computation has been the mainstay of traditional computing systems for many years. This era, fueled by Moore's law and the constant exponential improvement in computing efficiency, is at its twilight: from tiny nodes of the Internet of Things to large HPC computing centres, sub-picojoule-per-operation energy efficiency is essential for practical realizations. To overcome the "power wall", a shift from traditional computing paradigms is now mandatory.

OPRECOMP aims at demolishing the ultra-conservative "precise" computing abstraction and replacing it with a more flexible and efficient one, namely transprecision computing. OPRECOMP will investigate the theoretical and practical understanding of the energy-efficiency boost obtainable when accuracy requirements on data being processed, stored and communicated can be lifted for intermediate calculations. While approximate computing approaches have been used before, OPRECOMP will develop, for the first time ever, a complete framework for transprecision computing, covering devices, circuits, software tools and algorithms, along with the mathematical theory and physical foundations of these ideas. It will not only provide error bounds with respect to full-precision results, but will also enable major energy-efficiency improvements even when there is no freedom to relax end-to-end application quality-of-results.

The mission of OPRECOMP is to demonstrate, by using physical demonstrators, that this idea holds in a huge range of application scenarios in the domains of IoT, Big Data analytics, deep learning, and HPC simulations: from the sub-milliwatt to the megawatt range, spanning nine orders of magnitude. In view of industrial exploitation, we will prove quality and reliability and demonstrate that transprecision computing is the way to think about future systems.
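The core idea, that intermediate results may tolerate reduced precision while end-to-end accuracy is kept under control, can be illustrated with a toy NumPy sketch (an illustration only, not part of the OPRECOMP framework): accumulating a dot product in IEEE half precision instead of double precision reduces storage and, on suitable hardware, energy, and the deviation from the full-precision result can be measured and bounded.

```python
import numpy as np

def dot(x, y, dtype):
    """Dot product with all intermediate values held at the given precision."""
    acc = dtype(0.0)
    for a, b in zip(x.astype(dtype), y.astype(dtype)):
        acc = dtype(acc + a * b)   # rounding happens at every intermediate step
    return float(acc)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = rng.standard_normal(1000)

exact = float(np.dot(x, y))        # full (double) precision reference
approx = dot(x, y, np.float16)     # transprecision-style half-precision run
print(abs(approx - exact))         # measurable, boundable precision loss
```

Transprecision computing generalizes this trade-off across the whole stack, choosing the cheapest precision at each stage that still meets the application's quality-of-results target.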

PROCESS
PROviding Computing solutions for ExaScale ChallengeS

www.process-project.eu
@PROCESS_H2020

COORDINATING ORGANISATION
Ludwig-Maximilians-Universität München, Germany

OTHER PARTNERS
• Universiteit van Amsterdam, Netherlands
• Stichting Netherlands eScience Centre, Netherlands
• Haute Ecole Spécialisée de Suisse Occidentale, Switzerland
• Lufthansa Systems GmbH & CO KG, Germany
• Inmark Europa SA, Spain
• Ustav Informatiky, Slovenska Akademia Vied, Slovakia
• Akademia Gorniczo-Hutnicza IM. Stanislawa Staszica W Krakowie, Poland

CONTACT
Dieter Kranzlmüller, Coordinator
Maximilian Höb, Technical Coordinator and Contact
[email protected]

CALL
EINFRA-21-2017

PROJECT TIMESPAN
01/11/2017 - 31/10/2020

The PROCESS demonstrators will pave the way towards exascale data services that will accelerate innovation and maximise the benefits of emerging data solutions. The main tangible outputs of PROCESS are very large data service prototypes, implemented using a mature, modular, generalizable open source solution for user-friendly exascale data. The services will be thoroughly validated in real-world settings, both in scientific research and in advanced industry deployments.

To achieve these ambitious objectives, the project consortium brings together the key players in the new data-driven ecosystem: top-level HPC and big data centres; communities, such as LOFAR, with unique data challenges that the current solutions are unable to meet; and experienced e-Infrastructure solution providers with an extensive track record of rapid application development. In addition to providing service prototypes that can cope with very large data, PROCESS addresses the work programme goals by using the tools and services with heterogeneous use cases, including exascale learning on medical images, airline revenue management and open data for global disaster risk reduction. This diversity of user communities ensures that, in addition to supporting communities that push the envelope, the solutions will also ease the learning curve for the broadest possible range of user communities. Furthermore, the chosen open source strategy maximises the potential for uptake and reuse, together with mature software engineering practices that minimise the effort needed to set up and maintain services based on the PROCESS software releases.

The PROCESS project has achieved its mid-term objectives and accelerated development in the use-case communities. The implementations for LOFAR will also be valuable for the upcoming Square Kilometre Array (SKA) project, which raises the underlying challenge to exabytes in the coming years. The PROCESS solutions are deployed in three supercomputing centres across Europe, in Munich, Amsterdam and Krakow, already providing today a testbed for the exascale applications of tomorrow. The connected centres allow PROCESS to include applications based on HPC, cloud and accelerator systems. The main future tasks include the support of EOSC-based solutions and the optimisation of the already deployed services.

The figure shows the detailed PROCESS architecture, with the main components: end-user portal, orchestration, and data and compute services. The heart of the data service is LOBCDER and its micro-infrastructure containers, enabling a low-overhead solution for the data processing challenge.

[Figure: PROCESS scope, spanning large-scale computing paradigms: data analytics and mining, data pre- and post-processing, large-scale and robust mathematical modelling, machine learning and deep learning, big data processing, and high-performance, cloud, multi-core and accelerated computing.]

[Figure: PROCESS architecture. A secure access layer (identity provider with password/token, REST API, SSL authentication and authorization) fronts the model execution environment (Jupyter, Cloudify with a TOSCA template repository, Rimrock and Atmosphere plugins, an application repository of Docker/Singularity virtual images) and the data infrastructure management layer (LOBCDER with DataNet and micro-infrastructure containers, Kubernetes, WebDAV, GSI-SSH and cloud APIs), which connects through data adaptors to HPC and cloud resources and to distributed storage resources such as the Hadoop Distributed File System, HDF5, HBase, dCache, the LOFAR LTA tape archive, DICOM archives, the Copernicus Open Access Hub and CouchDB, via protocols including FTP/FTPS, NFS, TCP/IP, GridFTP, SRM, REST and HTTP.]


Ecosystem development

EUROLAB-4-HPC-2
Consolidation of European Research Excellence in Exascale HPC Systems

https://eurolab4hpc.eu
@eurolab4hpc

COORDINATING ORGANISATION
Chalmers University of Technology, Sweden

OTHER PARTNERS
• Barcelona Supercomputing Centre (BSC), Spain
• Ecole Polytechnique Fédérale de Lausanne, Switzerland
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• FORTH Institute for Computer Science, Greece
• INRIA, Institut National de Recherche en Informatique et Automatique, France
• RWTH Aachen - Rheinisch-Westfälische Technische Hochschule Aachen, Germany
• TECHNION - Israel Institute of Technology, Israel
• The University of Edinburgh, United Kingdom
• The University of Manchester, United Kingdom
• Universität Augsburg, Germany
• Universität Stuttgart, Germany
• Universiteit Gent, Belgium

CONTACT
[email protected]
Per Stenström, [email protected]

CALL
FETHPC-03-2017

PROJECT TIMESPAN
01/05/2018 - 30/04/2020

Europe has made significant progress in becoming a leader in large parts of the HPC ecosystem: from industrial and scientific application providers via system software to Exascale systems, bringing together technical and business stakeholders. Despite such gains, excellence in research in high-performance computing systems is fragmented across Europe and opportunities for synergy are missed. At this moment, there is fierce international competition to sustain long-term leadership in HPC technology, and there remains much to do. Since high-performance computing (HPC) is a vital technology for European economic competitiveness and scientific excellence, it is indispensable for innovating enterprises, scientific researchers and government agencies, while it generates new discoveries and breakthrough products and services. By uniting the HPC community, Eurolab4HPC aims to build a robust ecosystem, hence building a sustainable foundation for HPC innovation in Europe.

This will be realised by means of four short-term objectives:
• STRUCTURE THE HPC COMMUNITY: uniting the HPC community, Eurolab4HPC aims to build a robust ecosystem, hence building a sustainable foundation for HPC innovation in Europe.
• PROMOTE ENTREPRENEURSHIP: entrepreneurial training, business prototyping, business plan development and help with funding.
• DISSEMINATE COMMUNITY NEWS: investing in dissemination activities, creating a stronger Eurolab4HPC brand and HPC ecosystem.
• STIMULATE TECHNOLOGY TRANSFER: connecting with other technology transfer activities and providing seed funding for HPC technology transfer projects.

LONG-TERM VISION ON HPC
Radical changes in computing are foreseen for the near future. The next Eurolab4HPC Long-Term Vision on High-Performance Computing, planned for release in January 2020, will present an assessment of potential changes for High-Performance Computing in the next decade. You will be able to download it here: https://www.eurolab4hpc.eu/vision/.

OPEN SOURCE
Another mission of Eurolab4HPC is to bring together researchers, industry and users for the development and use of open source hardware and software, creating a community to work on long-term projects. We believe that open source projects provide a natural forum for testing ideas and discovering innovation opportunities, thereby promoting entrepreneurship, fostering industry take-up, and providing an avenue to train more experts in Exascale hardware and software.

EXDCI-2
European eXtreme Data and Computing Initiative - 2

http://exdci.eu
@exdci_eu

COORDINATING ORGANISATION
Partnership for Advanced Computing in Europe (PRACE) AISBL, Belgium

OTHER PARTNERS
European Technology Platform for High Performance Computing (ETP4HPC), Netherlands

CONTACT
Marjolein Oorsprong, [email protected]
Renata Gimenez, [email protected]

CALL
FETHPC-03-2017

PROJECT TIMESPAN
01/03/2018 - 31/08/2020

The project EXDCI-2 builds upon the success of EXDCI and will continue the coordination of the HPC ecosystem with important enhancements to better address the convergence of big data, cloud and HPC.

The strategic objectives of EXDCI-2 are:
a) Development and advocacy of a competitive European HPC Exascale Strategy
b) Coordination of the stakeholder community for European HPC at the Exascale.

EXDCI-2 mobilizes the European HPC stakeholders through the joint action of PRACE and ETP4HPC. It will promote global community structuring and synchronization in HPC, Big Data, cloud and embedded computing, for a more competitive related value chain in Europe. It will develop a HPC technology roadmap addressing the convergence with HPDA and the emergence of new HPC uses. It will deliver application and applied mathematics roadmaps that will pave the road towards exascale simulation in academic and industrial domains. It will develop a shared vision for the future of HPC that increases the synergies and prepares for targeted research collaborations. EXDCI-2 will work to increase the impact of the H2020 HPC research projects, by identifying synergies and supporting market acceptance of the results.

At the international level, EXDCI-2 will contribute to the international visibility of Europe, develop contacts with the world-leading HPC ecosystems and increase European impact on HPC standards. EXDCI-2 will improve HPC awareness by developing international events such as the EuroHPC Summit Week and by targeting specific audiences through dedicated media, disseminating the achievements of the European HPC ecosystem.

HPC-EUROPA3
Pan-European Research Infrastructure on High Performance Computing

www.hpc-europa.eu
@hpceuropa3
@hpceuropa
@hpc-europa3-eu-project
www.youtube.com/channel/UC9uOpFQGP9V0TQPXFUOgs_A

COORDINATING ORGANISATION
CINECA Consorzio Interuniversitario, Italy

OTHER PARTNERS
• Barcelona Supercomputing Centre (BSC), Spain
• CNRS - Centre National de la Recherche Scientifique, France
• CSC - IT Center for Science Ltd., Finland
• GRNET - Ethniko Diktyo Erevnas Technologias AE, Greece
• Kungliga Tekniska Hoegskolan (KTH), Sweden
• ICHEC - National University of Ireland Galway, Ireland
• SURFsara BV, Netherlands
• EPCC - The University of Edinburgh, United Kingdom
• HLRS - Universität Stuttgart, Germany

CONTACT
[email protected]

CALL
INFRAIA-01-2016-2017

PROJECT TIMESPAN
01/05/2017 - 30/04/2021

TRANSNATIONAL ACCESS
The core activity of HPC-Europa3 is the Transnational Access programme, which funds short collaborative research visits in any scientific domain using High Performance Computing (HPC). Visitors gain access to some of Europe's most powerful HPC facilities, as well as technical support from the relevant HPC centre to enable them to make the best use of these facilities. Applicants identify a "host" researcher working in their own field of research, and are integrated into the host department during their visit. Visits can be made to any research institute in Finland, Germany, Greece, Ireland, Italy, the Netherlands, Spain, Sweden or the UK. (Note that project partner CNRS does not participate in the Transnational Access activity and therefore it is not possible to visit research groups in France.) HPC-Europa3 visits can last between 3 and 13 weeks. The programme is open to researchers of any level, from academia or industry, working in any area of computational science. Priority is given to researchers working in the EU and Associated States (see http://bit.ly/AssociatedStates), but limited places are available for researchers from third countries.

HPC-Europa3 has introduced a "Regional Access Programme" to encourage applications from researchers working in the Baltic and South-East Europe regions. The Regional Access Programme specifically targets researchers with little or no HPC experience who need more powerful computing facilities than they have, but not the most powerful systems offered by HPC-Europa3. Such researchers can apply respectively to KTH-PDC in Sweden or GRNET in Athens, and will be given priority over more experienced applicants.

NETWORKING AND JOINT RESEARCH ACTIVITIES
The main HPC-Europa3 Networking Activity, External co-operation for enhancing the best use of HPC, aims to build stronger relationships with other European HPC initiatives, e.g. PRACE, ETP4HPC, and the HPC Centres of Excellence. The goal is to provide HPC services in a more integrated way across Europe. This activity also aims to increase awareness of HPC within SMEs. A series of workshops targeted at SMEs has been organised to encourage their uptake of HPC.

The Joint Research Activity, Container-as-a-service for HPC, aims to enable easy portability of end-user applications to different HPC centres, providing clear benefits for HPC-Europa3 visitors. This approach enables different applications and runtime environments to be supported simultaneously on the same hardware resources with no significant decrease in performance. It also provides the capability to fully capture and package all dependencies of an application so that the application can be migrated to other machines or preserved to enable a reproducible environment in the future. The results from this JRA can be found in various deliverables which are available at www.hpc-europa.eu/public_documents.

[Figure: Flow of a circular synthetic jet represented using the Q-criterion and coloured by the magnitude of the velocity (from research carried out by HPC-Europa3 visitor Arnau Miró, Universitat Politècnica de Catalunya)]
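The container-as-a-service idea can be sketched with a minimal Singularity/Apptainer definition file (a hypothetical illustration, not taken from the JRA deliverables; the application `solver.py` and the path `/opt/app` are invented):

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # capture every dependency of the application inside the image,
    # so the same environment is reproducible on any participating centre
    apt-get update
    apt-get install -y --no-install-recommends python3 python3-numpy
    mkdir -p /opt/app

%files
    solver.py /opt/app/solver.py

%runscript
    # entry point; host directories (e.g. scratch) can be bind-mounted at run time
    exec python3 /opt/app/solver.py "$@"
```

Built once with `singularity build solver.sif solver.def`, the resulting single-file image can be copied between HPC centres and executed with `singularity run solver.sif`, keeping the runtime environment identical wherever the container runtime is available.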


APPENDIX - COMPLETED PROJECTS

Projects completed before 1 January 2019 are not featured in this edition of the Handbook, but you can find the descriptions of recently completed projects in the 2018 Handbook, available on our website.

ALLScale: An Exascale Programming, Multi-objective Optimisation and Resilience Management Environment Based on Nested Recursive Parallelism
ANTAREX: AutoTuning and Adaptivity appRoach for Energy efficient eXascale HPC systems
BioExcel: Centre of Excellence for Biomolecular Research
COEGSS: Center of Excellence for Global Systems Science
ComPat: Computing Patterns for High Performance Multiscale Computing
CompBioMed: A Centre of Excellence in Computational Biomedicine
EoCoE: Energy oriented Centre of Excellence for computer applications
ESCAPE: Energy-efficient SCalable Algorithms for weather Prediction at Exascale
ExaFLOW: Enabling Exascale Fluid Dynamics Simulations
ExCAPE: Exascale Compound Activity Prediction Engine
EXTRA: Exploiting eXascale Technology with Reconfigurable Architectures
greenFLASH: Green Flash, energy efficient high performance computing for real-time science
INTERTWINE: Programming Model INTERoperability ToWards Exascale (INTERTWinE)
MaX: Materials design at the eXascale
Mont-Blanc 3: European scalable and power efficient HPC platform based on low-power embedded technology
NoMaD: The Novel Materials Discovery Laboratory
POP: Performance Optimisation and Productivity
READEX: Runtime Exploitation of Application Dynamism for Energy-efficient eXascale computing
SAGE: Percipient StorAGe for Exascale Data Centric Computing

Imprint: © ETP4HPC

Text: ETP4HPC

Graphic design and layout: www.maiffret.net

Paper: Genyous digital

Printed and bound in November 2019 by Burlet Graphics, France

Contact [email protected]

The EXDCI-2 project has received funding from the European Union’s Horizon 2020 research and innovation programme under the grant agreement No 800957.
