
European High-Performance Computing Projects Handbook 2021


Text: ETP4HPC

Contact:

[email protected]

www.etp4hpc.eu

@ETP4HPC

@etp4hpc

This handbook was produced with the support of the HPC-GIG project. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the grant agreement No 824151.


Dear Reader,

Welcome to the seventh edition of the ETP4HPC Handbook of European HPC projects! This 2021 issue is bigger than ever, listing 68 on-going projects (and 34 more finalised projects are referenced). The EuroHPC-funded projects launched earlier this year are now covered. But we are not limiting ourselves to EuroHPC, or to the former HPC cPPP projects: this Handbook aims to provide a comprehensive overview of all projects in our arena. And our arena tends to expand, as HPC no longer lives in isolation but interacts with many related domains such as Big Data, Artificial Intelligence, Internet of Things and other technologies.

The projects featured in this Handbook are funded by 25 different calls. For ease of use, we have grouped them by scope, rather than by call. However, we also provide in European Exascale Projects an overview of the different calls, indicating where you will find the corresponding projects in our Handbook. And if you are looking for a project by name, the alphabetical index of projects at the end of the Handbook will point you directly to the right page.

I trust this Handbook will help you navigate the complex European HPC landscape. Our dearest wish is that this document will give you ideas and contacts for many future innovative projects that will nourish and strengthen our European HPC ecosystem and technological sovereignty!

Best regards,

ETP4HPC Chairman Jean-Pierre Panziera


Table of Contents

European Exascale Projects ...... 6

Technology: HW and SW building blocks for exascale ...... 9
  ADMIRE ...... 10
  ASPIDE ...... 12
  DCoMEX ...... 14
  DEEP-EST ...... 16
  DEEP-SEA ...... 18
  EPEEC ...... 20
  EPI ...... 22
  EPiGRAM-HS ...... 26
  eProcessor ...... 28
  ESCAPE-2 ...... 30
  EuroExa ...... 32
  EXA2PRO ...... 34
  ExaQUte ...... 36
  IO-SEA ...... 38
  MAELSTROM ...... 40
  MAESTRO ...... 42
  MEEP ...... 44
  Mont-Blanc 2020 ...... 46
  NEASQC ...... 48
  RECIPE ...... 50
  RED-SEA ...... 52
  Sage2 ...... 54
  SODALITE ...... 56
  SPARCITY ...... 58
  TEXTAROSSA ...... 60
  TIME-X ...... 62
  VECMA ...... 64
  VESTEC ...... 66

Centres of Excellence in computing applications ...... 69
  FocusCoE ...... 70
  BioExcel-2 ...... 72
  ChEESE ...... 74


  CoEC ...... 76
  CompBioMed2 ...... 78
  E-CAM ...... 80
  EoCoE-II ...... 82
  ESiWACE2 ...... 84
  EXCELLERAT ...... 86
  HiDALGO ...... 88
  MaX ...... 90
  NOMAD ...... 92
  PerMedCoE ...... 94
  POP2 ...... 96
  RAISE ...... 98
  TREX ...... 100

Applications in the HPC Continuum ...... 103
  ACROSS ...... 104
  CYBELE ...... 106
  DeepHealth ...... 108
  eFlows4HPC ...... 110
  ENERXICO ...... 112
  EVOLVE ...... 114
  EXA MODE ...... 116
  exaFOAM ...... 118
  EXSCALATE4COV ...... 120
  HEROES ...... 122
  HPCWE ...... 124
  LEXIS ...... 126
  LIGATE ...... 128
  MICROCARD ...... 130
  NextSim ...... 132
  OPTIMA ...... 134
  Phidias ...... 136
  REGALE ...... 138
  SCALABLE ...... 140

Ecosystem development ...... 143
  CASTIEL ...... 144
  EuroCC ...... 146
  FF4EUROHPC ...... 148
  HiPEAC ...... 150
  HPC-EUROPA3 ...... 152

Index of on-going projects ...... 154
Index of completed projects ...... 155


European Exascale Projects

Our Handbook features a number of projects, which, though not financed by HPC cPPP or EuroHPC calls, are closely related to HPC and have their place here. For ease of use, we have grouped projects by scope, instead of by call. However, we also provide in the table below an overview of the different calls, indicating where you will find the corresponding projects in our Handbook. If you are looking for a project by name, the alphabetical index of projects at the end of the Handbook will point you directly to the right page. Finally, this Handbook only features on-going projects. Projects that ended before January 2021 are listed in the Annex, and described in detail in previous editions of the Handbook (available on our website at https://www.etp4hpc.eu/european-hpc-handbook.html).

Technology: HW and SW building blocks for exascale
• EuroHPC-01-2019 (Extreme scale computing and data driven technologies): 10 projects started in 2021, most of them with a duration of 3 years
• EuroHPC-06-2019 (Advanced Experimental Platform Towards Exascale): 1 project started in 2020
• FETFLAG-05-2020 (Complementary call on Quantum Computing): 1 project started in 2020
• FETHPC-01-2016: 2 projects started in 2017 and ending at the beginning of 2021
• FETHPC-02-2017 (Transition to Exascale): 11 projects started in 2018 with a duration of around 3 years
• ICT-05-2017 (Customised and low energy computing, including Low power processor technologies): Prequel to EPI
• ICT-16-2018 (Software Technologies): 1 project related to HPC
• ICT-42-2017 (Framework Partnership Agreement in European low-power microprocessor technologies): The European Processor Initiative (EPI)
• SGA-LPMT-01-2018: Specific Grant Agreement under the Framework Partnership Agreement “EPI FPA”


Centres of Excellence in computing applications
• E-INFRA-5-2015 (Centres of Excellence for computing applications): 1 remaining project ending at the beginning of 2021
• INFRAEDI-02-2018 (Centres of Excellence for computing applications): 9 projects and 1 Coordination and Support Action started in 2018
• INFRAEDI-05-2020 (Centres of Excellence in exascale computing): 5 projects started in 2020 and 2021 with a duration of around 3 years

Applications in the HPC Continuum
• CEF-TC-2018-5: 1 project related to HPC
• EuroHPC-02-2019 (HPC and data centric environments and application platforms): 5 projects started in 2021
• EuroHPC-03-2019 (Industrial software codes for extreme scale computing environments and applications): 5 projects started in 2021
• FETHPC-01-2018 (International Cooperation on HPC): 2 projects started in June 2019
• ICT-11-2018-2019 Subtopic (a) (HPC and Big Data enabled Large-scale Test-beds and Applications): 4 Innovative Actions started in 2018
• ICT-12-2018-2020 (Big Data technologies and extreme-scale analytics): 1 project related to HPC
• SC1-PHE-CORONAVIRUS-2020 (Advancing knowledge for the clinical and public health response to the 2019-nCoV epidemic): Special call

Ecosystem development
• FETHPC-2-2014 (HPC Ecosystem Development): 2 Coordination and Support Actions
• FETHPC-03-2017 (Exascale HPC ecosystem development): 2 Coordination and Support Actions
• INFRAIA-01-2016-2017 (Integrating Activities for Advanced Communities): HPC-EUROPA3
• EuroHPC-04-2019 (Innovating and Widening the HPC use and skills base): 2 projects starting in 2020
• EuroHPC-05-2019 (Innovating and Widening the HPC use and skills base): 1 project starting in 2020
• ICT-01-2019 (Computing technologies and engineering methods for cyber-physical systems of systems): 1 project related to HPC



Technology: HW and SW building blocks for exascale


ADMIRE - Adaptive multi-tier intelligent data manager for Exascale

The growing need to process extremely large data sets is one of the main drivers for building exascale HPC systems today. However, the flat storage hierarchies found in classic HPC architectures no longer satisfy the performance requirements of data-processing applications. Uncoordinated file access in combination with limited bandwidth make the centralised back-end parallel file system a serious bottleneck. At the same time, emerging multi-tier storage hierarchies come with the potential to remove this barrier. But maximising performance still requires careful control to avoid congestion and balance computational with storage performance. Unfortunately, appropriate interfaces and policies for managing such an enhanced I/O stack are still lacking.


The main objective of the ADMIRE project is to establish this control by creating an active I/O stack that dynamically adjusts computation and storage requirements through intelligent global coordination, malleability of computation and I/O, and the scheduling of storage resources along all levels of the storage hierarchy. To achieve this, we will develop a software-defined framework based on the principles of scalable monitoring and control, separated control and data paths, and the orchestration of key system components and applications through embedded control points.

Our software-only solution will allow the throughput of HPC systems and the performance of individual applications to be substantially increased – and consequently energy consumption to be decreased – by taking advantage of fast and power-efficient node-local storage tiers using novel, European ad-hoc storage systems and in-transit/in-situ processing facilities. Furthermore, our enhanced I/O stack will offer quality-of-service (QoS) and resilience. An integrated and operational prototype will be validated with several use cases from various domains, including climate/weather, life sciences, physics, remote sensing, and deep learning.

Extreme scale computing and data driven technologies

www.admire-eurohpc.eu
Twitter: @admire_eurohpc

COORDINATING ORGANISATION
Universidad Carlos III de Madrid, Spain

OTHER PARTNERS
 Johannes Gutenberg-Universität Mainz, Germany
 Barcelona Supercomputing Centre (BSC), Spain
 Technische Universität Darmstadt, Germany
 DataDirect Networks (DDN), France
 Institut National de Recherche en Informatique et Automatique (INRIA), France
 ParaTools, SAS, France
 Forschungszentrum Jülich, Germany
 Consorzio Interuniversitario Nazionale per l’Informatica (CINI), Italy
 CINECA, Consorzio Interuniversitario, Italy
 E4 Computer Engineering SpA, Italy
 Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, Poland
 Kungliga Tekniska Högskolan (KTH), Sweden

CONTACT
Jesús Carretero, [email protected]

CALL EuroHPC-01-2019

PROJECT TIMESPAN 01/04/2021 - 31/03/2024


ASPIDE - exAScale ProgrammIng models for extreme Data processing

OBJECTIVES

The ASPIDE project will contribute the definition of new programming paradigms, APIs, runtime tools and methodologies for expressing data-intensive tasks on Exascale systems, which can pave the way for the exploitation of massive parallelism over a simplified model of the system architecture, promoting high performance and efficiency, and offering powerful operations and mechanisms for processing extreme data sources at high speed and/or in real-time.

DCEX PROGRAMMING MODEL

The design of parallel patterns for programming Exascale applications has been one of the main goals of the ASPIDE project. DCEx addresses various features to enhance the performance of data-intensive computations in Exascale systems. Indeed, the cost of accessing, moving, and processing data across a parallel system can be enormous. The workflow modelling in DCEx enables setting up data life-cycle management, allowing for data locality and data affinity. The elementary workflow units permit either placing the data close to the computational node where the data is processed (data locality) or distributing the computation to where the data was previously generated (data affinity avoids data movements). This way, the proposed solution assists application developers in accessing and using resources without the need to manage low-level architectural entities. In the same way, we want to provide a way to easily switch among different execution modes or policies without requiring modification of the application source code.

AD-HOC IN-MEMORY STORAGE SYSTEM (IMSS)

IMSS is a proposal to enhance I/O in both traditional HPC and High-Performance Data Analytics (HPDA) systems. The architectural design follows a client-server model where the client itself is responsible for deploying the server entities. We propose an application-attached deployment constrained to the application's nodes and an application-detached deployment considering offshore nodes. The client layer is in charge of exploiting data locality alongside the implementation of multiple I/O patterns providing diverse data distribution policies.
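To make the notion of client-side data distribution policies concrete, here is a small, self-contained C sketch. The policy names and the map_block helper are invented for this illustration (they are not IMSS's actual code); the point is that a client can compute a block's home server locally, without a metadata lookup.

```c
/* Toy illustration of client-side data distribution policies, in the
 * spirit of an ad-hoc storage system such as IMSS: the client maps
 * each data block to a server deterministically, so every client
 * knows where a block lives. Names and policies are invented here. */
#include <stdio.h>

typedef enum { ROUND_ROBIN, LOCAL_FIRST } policy_t;

/* Decide which of n_servers stores 'block', written by 'writer_node'. */
static int map_block(policy_t p, long block, int n_servers, int writer_node) {
    switch (p) {
    case ROUND_ROBIN: return (int)(block % n_servers);  /* spread load   */
    case LOCAL_FIRST: return writer_node % n_servers;   /* data locality */
    }
    return 0;
}

int main(void) {
    int n_servers = 4;
    for (long block = 0; block < 8; block++)
        printf("block %ld -> server %d (round-robin), %d (local-first)\n",
               block,
               map_block(ROUND_ROBIN, block, n_servers, 2),
               map_block(LOCAL_FIRST, block, n_servers, 2));
    return 0;
}
```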

INTELLIGENT ANOMALIES DETECTION SYSTEM

ASPIDE proposes an engine for data analysis and event detection in monitoring data for the purposes of application autotuning. Overall, we developed an anomalies detection approach based on machine learning, capable of detecting anomalies during Exascale application execution, such as hardware failures and communication bottlenecks. We utilise the events and anomalies detection engine to constrain the search space of the optimization problem, thus further improving the execution efficiency of the Exascale applications.

AUTO-TUNING

We introduce the ASPIDE auto-tuning approach based on a multi-objective optimization algorithm that considers a multi-dimensional search space with pluggable objectives, including execution time and energy. Moreover, to further improve the application execution, the ASPIDE approach utilizes a machine learning (ML) based events detection approach, capable of identifying point and contextual anomalies. In general, the ASPIDE auto-tuner assists developers in understanding the non-functional properties of their applications by making it easy to analyse and experiment with the input parameters. The auto-tuner further supports them in exposing their obtained insights using tunable parameters.

LARGE SCALE MONITORING

One important challenge in Exascale computing consists of developing scalable components that are able to monitor, in a coordinated and efficient manner, the use of hardware resources and the behaviour of applications. ASPIDE provides a hierarchical monitoring system built on well-known components, data aggregators, machine learning techniques, and time-series analysis, that aims to reduce the overhead of sensing large-scale infrastructures.

Transition to Exascale Computing

www.aspide-project.eu
Twitter: @ASPIDE_PROJECT

COORDINATING ORGANISATION
Universidad Carlos III de Madrid, Spain

OTHER PARTNERS
 Institute e-Austria Timisoara, Romania
 University of Calabria, Italy
 Universität Klagenfurt, Austria
 Institute of Bioorganic Chemistry of Polish Academy of Sciences, Poland
 Servicio Madrileño de Salud, Spain
 INTEGRIS SA, Italy
 Atos (Bull SAS), France

CONTACT
Javier GARCIA-BLAS, [email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
15/06/2018 - 14/06/2021


DCoMEX - Data Driven Computational Mechanics at Exascale

DCoMEX aims to provide unprecedented advances to the field of Computational Mechanics by developing novel numerical methods enhanced by Artificial Intelligence, along with a scalable software framework that enables exascale computing.

A key innovation of our project is the development of AI-Solve, a novel scalable library of AI-enhanced algorithms for the solution of the large scale sparse linear systems that are the core of computational mechanics.

Our methods fuse physics-constrained machine learning with efficient block-iterative methods and incorporate experimental data at multiple levels of fidelity to quantify model uncertainties. Efficient deployment of these methods in exascale supercomputers will provide scientists and engineers with unprecedented capabilities for predictive simulations of mechanical systems in applications ranging from bioengineering to manufacturing.

DCoMEX exploits the computational power of modern exascale architectures to provide a robust and user-friendly framework that can be adopted in many applications. This framework comprises the AI-Solve library integrated in two complementary computational mechanics HPC libraries. The first is a general-purpose multiphysics engine and the second a Bayesian uncertainty quantification and optimisation platform.

We will demonstrate DCoMEX potential by detailed simulations in two case studies:
• patient-specific optimization of cancer immunotherapy treatment, and
• design of advanced composite materials and structures at multiple scales.

We envision that software and methods developed in this project can be further customized and also facilitate developments in critical European industrial sectors like medicine, infrastructure, materials, automotive and aeronautics design.
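The computational kernel at the heart of such a library is the iterative solution of large sparse linear systems. As a baseline illustration only, the following self-contained C program runs a textbook conjugate gradient iteration on a tiny symmetric positive-definite system; the ML-enhanced preconditioners and block-iterative variants that AI-Solve targets are beyond this sketch.

```c
/* Plain conjugate gradient solve of a small symmetric positive-
 * definite system Ax = b. Compile with -lm. The matrix and data are
 * illustrative; production solvers work on distributed sparse data. */
#include <stdio.h>
#include <math.h>

#define N 3

static void matvec(const double A[N][N], const double x[N], double y[N]) {
    for (int i = 0; i < N; i++) {
        y[i] = 0.0;
        for (int j = 0; j < N; j++) y[i] += A[i][j] * x[j];
    }
}

static double dot(const double a[N], const double b[N]) {
    double s = 0.0;
    for (int i = 0; i < N; i++) s += a[i] * b[i];
    return s;
}

int main(void) {
    const double A[N][N] = {{4, 1, 0}, {1, 3, 1}, {0, 1, 2}};
    const double b[N] = {1, 2, 3};
    double x[N] = {0}, r[N], p[N], Ap[N];

    matvec(A, x, Ap);                       /* initial residual r = b - Ax */
    for (int i = 0; i < N; i++) { r[i] = b[i] - Ap[i]; p[i] = r[i]; }
    double rr = dot(r, r);

    for (int k = 0; k < 100 && sqrt(rr) > 1e-10; k++) {
        matvec(A, p, Ap);
        double alpha = rr / dot(p, Ap);     /* step length along p */
        for (int i = 0; i < N; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        double rr_new = dot(r, r);
        double beta = rr_new / rr;          /* conjugate the next direction */
        for (int i = 0; i < N; i++) p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }
    printf("x = (%f, %f, %f)\n", x[0], x[1], x[2]);
    return 0;
}
```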


Extreme scale computing and data driven technologies

mgroup.ntua.gr/dcomex

COORDINATING ORGANISATION National Technical University of Athens (NTUA), Greece

OTHER PARTNERS  Eidgenössische Technische Hochschule Zürich, Switzerland  University of Cyprus, Cyprus  Technische Universität München, Germany  National Infrastructures for Research and Technology, Greece

CONTACT mgroup.ntua.gr/dcomex/

CALL

EuroHPC-01-2019

PROJECT TIMESPAN

01/04/2021 - 31/03/2024


DEEP PROJECTS
DEEP-EST - Dynamical Exascale Entry Platform

There is more than one way to build a supercomputer, and meeting the diverse demands of modern applications, which increasingly combine data analytics and artificial intelligence (AI) with simulation, requires a flexible system architecture. Since 2011, the DEEP series of projects (DEEP, DEEP-ER, DEEP-EST) has pioneered an innovative concept known as the Modular Supercomputer Architecture (MSA), whereby multiple modules are coupled like building blocks. Each module is tailored to the needs of a specific class of applications, and all modules together behave as a single machine.

Connected by a high-speed, federated network and programmed in a uniform system software and programming environment, the supercomputer allows an application to be distributed over several hardware modules, running each code component on the one which best suits its particular needs. Specifically, DEEP-EST, finished in March 2021, has built a prototype with three modules: a general-purpose Cluster Module (CM) for low or medium scalable codes, the highly scalable Extreme Booster Module (ESB) comprising a cluster of accelerators, and a Data Analytics Module (DAM), which were tested with six applications combining high-performance computing (HPC) with high-performance data analytics (HPDA) and machine learning (ML).

The DEEP approach is part of the trend towards using accelerators to improve performance and overall energy efficiency – but with a twist. Traditionally, heterogeneity is handled within the node, combining a central processing unit (CPU) with one or more accelerators. In DEEP-EST the resources were segregated and pooled into compute modules, as this enables the system to flexibly adapt to very diverse application requirements. In addition to usability and flexibility, the sustained performance made possible by following this approach aims to reach exascale levels.

One important aspect that makes the DEEP architecture stand out is the co-design approach, which is a key component of the project. In DEEP-EST, six ambitious HPC/HPDA applications were used to define and evaluate the hardware and software technologies developed. Careful analysis of the application codes allowed a fuller understanding of their requirements, which informed the prototype's design and configuration.

In addition to traditional compute-intensive HPC applications, the DEEP-EST DAM includes leading-edge memory and storage technology tailored to the needs of data-intensive workloads, which occur in data analytics and ML.

Through the DEEP projects, researchers have shown that combining resources in compute modules efficiently serves applications from multi-physics simulations to simulations integrating HPC with HPDA, to complex heterogeneous workflows such as those in artificial intelligence applications.
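The following minimal MPI program is a generic illustration, not DEEP-EST's actual system software, of the modular pattern described above: a single job whose ranks split into a simulation group and an analytics group that exchange data over the shared network. The rank-to-module mapping shown is a stand-in for what the batch system would decide; run it with at least two ranks.

```c
/* One MPI application spanning two "modules": most ranks act as the
 * simulation component (e.g. on a Cluster/Booster module), the last
 * rank as the analytics component (e.g. on a Data Analytics Module).
 * The colouring below only mimics the launch-time mapping. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int is_analytics = (rank == size - 1);

    /* Module-internal communicator, e.g. for the solver's collectives. */
    MPI_Comm module_comm;
    MPI_Comm_split(MPI_COMM_WORLD, is_analytics, rank, &module_comm);

    /* Simulation ranks produce data; the analytics rank aggregates it. */
    double local = is_analytics ? 0.0 : 1.0 * rank;
    double aggregate = 0.0;
    MPI_Reduce(&local, &aggregate, 1, MPI_DOUBLE, MPI_SUM,
               size - 1, MPI_COMM_WORLD);
    if (is_analytics)
        printf("analytics module received aggregate %f\n", aggregate);

    MPI_Comm_free(&module_comm);
    MPI_Finalize();
    return 0;
}
```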


The next step in the series of DEEP projects is made with DEEP-SEA – Software for Exascale Architectures.

Co-design of HPC systems and applications

www.deep-projects.eu
Twitter: @DEEPprojects

COORDINATING ORGANISATION
Forschungszentrum Jülich (Jülich Supercomputing Centre), Germany

OTHER PARTNERS
Current partners in DEEP-EST:
 Intel, Germany
 Leibniz Supercomputing Centre, Germany
 Barcelona Supercomputing Centre (BSC), Spain
 Megware Computer Vertrieb und Service GmbH, Germany
 Heidelberg University, Germany
 EXTOLL, Germany
 The University of Edinburgh, United Kingdom
 Fraunhofer ITWM, Germany
 Astron, Netherlands
 KU Leuven, Belgium
 National Center For Supercomputing Applications, Bulgaria
 Norges Miljo-Og Biovitenskaplige Universitet, Norway
 Haskoli Islands, Iceland
 European Organisation for Nuclear Research (CERN), Switzerland
 ParTec, Germany

Partners in DEEP and DEEP-ER:
 CERFACS, France
 CGG, France
 CINECA, Italy
 Eurotech, Italy
 EPFL, Switzerland
 Seagate, UK
 INRIA, Institut National de Recherche en Informatique et Automatique, France
 The Cyprus Institute, Cyprus
 Universität Regensburg, Germany

CONTACT
Dr. Estela Suarez, [email protected]

CALL
FETHPC-01-2016

PROJECT TIMESPAN (DEEP-EST)
01/07/2017 - 31/03/2021


DEEP PROJECTS
DEEP-SEA - Software for Exascale Architectures

DEEP projects go into the next round: after DEEP-EST (Extreme Scale Technologies) finished successfully in March 2021, DEEP-SEA (Software for Exascale Architectures) started on April 1, 2021. The 4th member of the DEEP project series focuses on the question of how future exascale systems can be used in an easy – or partly automated – manner, while at the same time being as energy efficient as possible.

With DEEP-EST, a prototype was established that leverages the benefits of a Modular Supercomputing Architecture (MSA). Here, different components like standard CPUs and accelerators like GPUs, among others, build a complex mesh of heterogeneous technologies. Each technology is best suited for specific workloads, and applications can leverage the parts of the MSA that allow for an optimal energy/performance ratio. Large-scale simulations, data analytics, machine and deep learning – different jobs have different requirements to run at the equilibrium of energy usage and compute performance.

The downside: the complexity at system and node level makes it hard to allocate all resources in the best way. Parallelization is not trivial in a traditional HPC environment – with the rising complexity of heterogeneous resources it will be hardly manageable by users and application developers. They need to understand their codes and the underlying hardware very well in order to decide on which components which parts of the code should run. Also, the need grows to port codes to different technologies such as accelerators or even to completely new platforms. Not to mention the burden of optimization for a variety of HPC systems.

DEEP-SEA addresses this challenge. The main goals of this project are:
• Better management and programming of compute and memory heterogeneity.
• Easier programming for Modular Supercomputers.

To achieve these goals, DEEP-SEA looks at all relevant parts of the system: nodes, clusters, applications, system software and tools.

To provide solutions that are usable on as many HPC systems as possible and that scale to exascale and beyond, DEEP-SEA uses the co-design approach that has already proven effective in earlier DEEP projects. Real-world applications from seven areas form an integral part of DEEP-SEA to prove the practical use of the chosen approach:
• Space Weather (xPic, AIDApy)
• Weather Forecast (IFS)
• Seismic imaging (RTM, BSIT)
• Molecular dynamics (GROMACS)
• Computational fluid dynamics (Nek5000)
• Neutron Monte Carlo transport for nuclear energy (PATMOS)
• Earth System Modelling (TSMP)


To support the co-design approach, an early-access programme will be established as soon as the first relevant milestones are reached.

DEEP-SEA builds upon exascale concepts developed over nearly 10 years. It will build a software stack for heterogeneous compute and memory systems that allows scientists and developers to make best use of all available resources.

Extreme scale computing and data driven technologies

www.deep-projects.eu
Twitter: @DEEPprojects

COORDINATING ORGANISATION
Forschungszentrum Jülich (Jülich Supercomputing Centre), Germany

OTHER PARTNERS
 Atos (Bull SAS), France
 Leibniz Supercomputing Centre, Germany
 Barcelona Supercomputing Centre (BSC), Spain
 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V., Germany
 Katholieke Universiteit (KU) Leuven, Belgium
 Eidgenössische Technische Hochschule (ETH) Zürich, Switzerland
 ParTec, Germany
 Idryma Technologias Kai Erevnas (FORTH), Greece
 Commissariat à l'énergie atomique et aux énergies alternatives (CEA), France
 Technische Universität München, Germany
 Technische Universität Darmstadt, Germany

 Kungliga Tekniska Högskolan (KTH), Sweden
 European Centre for Medium-Range Weather Forecasts (ECMWF), United Kingdom

CONTACT
Dr. Estela Suarez, [email protected]

CALL
EuroHPC-01-2019

PROJECT TIMESPAN
01/04/2021 - 31/03/2024


EPEEC - European joint Effort toward a Highly Productive Programming Environment for Heterogeneous Exascale Computing

EPEEC’s main goal is to develop and deploy a production-ready parallel programming environment that turns upcoming overwhelmingly-heterogeneous exascale supercomputers into manageable platforms for domain application developers. The consortium has significantly advanced and integrated existing state-of-the-art components based on European technology (programming models, runtime systems, and tools) with key features enabling three overarching objectives: high coding productivity, high performance, and energy awareness.

An automatic generator of compiler directives provides outstanding coding productivity from the very beginning of the application developing/porting process. Developers are able to leverage either shared memory or distributed-shared memory programming flavours, and code in their preferred language: C, Fortran, or C++. EPEEC ensures the composability and interoperability of its programming models and runtimes, which incorporate specific features to handle data-intensive and extreme-data applications. Enhanced leading-edge performance tools offer integral profiling, performance prediction, and visualisation of traces.


EPEEC exploits results from past FET projects that led to the cutting-edge software components it builds upon, and pursues influencing the most relevant parallel programming standardisation bodies. The consortium is composed of European institutions and individuals with the highest expertise in their field, including not only leading research centres and universities but also SME/start-up companies, all of them recognised as high-tech innovators worldwide. Adhering to the Work Programme's guidelines, EPEEC features the participation of young and high-potential researchers, and includes careful dissemination, exploitation, and public engagement plans.

APPLICATIONS

Five applications representative of different relevant scientific domains serve as part of a strong inter-disciplinary co-design approach and as technology demonstrators:
• AVBP
• DIOGENeS
• OSIRIS
• Quantum ESPRESSO
• SMURFF

PROGRAMMING ENVIRONMENT

The EPEEC programming environment for exascale is developed to achieve high programming productivity, high execution efficiency and scalability, energy awareness, and smooth composability/interoperability. It is formed by five main components:
• Parallelware
• OmpSs
• GASPI
• ArgoDSM
• BSC performance tools (Extrae, Paraver, Dimemas)
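As a flavour of the task-based style that OmpSs, one of the components listed above, brings to application code, here is a minimal sketch. The pragma syntax follows the publicly documented OmpSs-2 model and requires the OmpSs toolchain to build, so treat it as an illustrative sketch rather than verified project code.

```c
/* Minimal OmpSs-style tasking sketch: each task declares its data
 * footprint with in/out clauses, and the runtime derives dependencies
 * and runs independent tasks in parallel. The [0;N] notation is an
 * OmpSs-2 array section (start; length). Illustrative only. */
#include <stdio.h>

#define N 8

int main(void) {
    static double a[N], b[N];

    #pragma oss task out(a[0;N])          /* producer task */
    for (int i = 0; i < N; i++) a[i] = 1.0 * i;

    #pragma oss task in(a[0;N]) out(b[0;N])  /* depends on the producer */
    for (int i = 0; i < N; i++) b[i] = 2.0 * a[i];

    /* Wait for all tasks before using the results. */
    #pragma oss taskwait
    printf("b[N-1] = %f\n", b[N - 1]);
    return 0;
}
```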

Software prototypes and releases are available for download on the EPEEC website.

Transition to Exascale Computing

www.epeec-project.eu
Twitter: #EPEECproject

COORDINATING ORGANISATION
Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
 Fraunhofer Gesellschaft zur Förderung der angewandten Forschung E.V., Germany
 INESC ID - Instituto de Engenharia de Sistemas e Computadores, Investigacao e Desenvolvimento em Lisboa, Portugal
 INRIA, Institut National de Recherche en Informatique et Automatique, France
 Appentra Solutions SL, Spain
 CINECA Consorzio Interuniversitario, Italy
 ETA SCALE AB, Sweden
 CERFACS, France
 Interuniversitair Micro-Electronica Centrum, Belgium
 Uppsala Universitet, Sweden

CONTACT
Antonio J. Peña, [email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/10/2018 - 31/03/2022

EPI - European Processor Initiative

Europe has an ambitious plan to become a main player in supercomputing. One of the core components for achieving that goal is a processor. The European Processor Initiative (EPI) is a part of a broader strategy to develop an independent European HPC industry based on domestic and innovative technologies, as presented in the EuroHPC Joint Undertaking proposed by the European Commission.

The EPI timeline


The general objective of EPI is to design a roadmap and develop the key Intellectual Property for future European low-power processors addressing extreme scale computing (exascale), high-performance, big-data and emerging verticals (e.g. automotive computing) and other fields that require highly efficient computing infrastructure. More precisely, EPI aims at establishing the key technologies to reach three fundamental goals:

1. Developing low-power processor technology to be included in advanced experimental pilot platforms towards exascale systems for Europe;
2. Ensuring that a significant part of that technology and intellectual property is European;
3. Ensuring that the application areas of the technology are not limited only to HPC, but cover other areas, such as automotive and data centres, thus ensuring the economic viability of the initiative.

EPI gathers 28 partners from 10 European countries, with a wide range of expertise and background: HPC, supercomputing centres, automotive computing, including researchers and key industrial players. The fact that the envisioned European processor is planned to be based on already existing tools, either owned by the partners or offered as open-source with a large community of users, provides three key exploitation advantages: (1) the time-to-market will be reduced, as most of these tools are already used in industry and well known.


(2) It will enable EPI partners to incorporate the results in their commercial portfolio or in their scientific roadmap. (3) It fundamentally reduces the technological risk associated with the advanced technologies developed and implemented in EPI.

EPI covers a complete range of expertise, skills, and competencies needed to design and execute a sustainable roadmap for research and innovation in processor and computing technology, fostering future exascale HPC and emerging applications, including Big Data and automotive computing for autonomous vehicles. Development of a new processor to be at the core of future computing systems will be divided into several streams:

• Common Platform and global architecture [stream 1]
• HPC general purpose processor [stream 2]
• Accelerator [stream 3]
• Automotive platform [stream 4]

The results from the two streams related to the general-purpose processor and accelerator chips will generate a heterogeneous, energy-efficient CPU for use in both standard and non-traditional, compute-intensive segments, e.g. automotive, where the state of the art in autonomous driving requires significant computational resources.

EPI strives to maximize the synergies between the two streams and will work with existing EU initiatives on technology, infrastructure and applications, to position Europe as a world leader in HPC and in emerging markets for the exascale era, such as automotive computing for autonomous driving.


European Processor

european-processor-initiative.eu
Twitter: @EUProcessor
LinkedIn: @european-processor-initiative
YouTube: www.youtube.com

COORDINATING ORGANISATION
Atos (Bull SAS), France

OTHER PARTNERS
 BSC, Barcelona supercomputing center – centro nacional de supercomputación, Spain
 IFAG, Infineon Technologies AG, Germany
 SemiDynamics, SemiDynamics technology services SL, Spain
 CEA, Commissariat à l’Energie Atomique et aux énergies alternatives, France
 CHALMERS, Chalmers tekniska högskola, Sweden
 ETH Zürich, Eidgenössische Technische Hochschule Zürich, Switzerland
 FORTH, Foundation for Research and Technology Hellas, Greece
 GENCI, Grand Equipement National de Calcul Intensif, France
 IST, Instituto Superior Technico, Portugal
 JÜLICH, Forschungszentrum Jülich GmbH, Germany
 UNIBO, Alma mater studiorum - Universita di Bologna, Italy
 UNIZG-FER, Sveučilište u Zagrebu, Fakultet elektrotehnike i računarstva, Croatia
 Fraunhofer, Fraunhofer Gesellschaft zur Förderung der angewandten Forschung E.V., Germany
 ST-I, STMICROELECTRONICS SRL, Italy
 E4, E4 Computer Engineering SPA, Italy
 UNIPI, Universita di Pisa, Italy
 SURFsara BV, Netherlands
 Kalray SA, France
 Extoll GmbH, Germany
 CINECA, Cineca Consorzio Interuniversitario, Italy
 BMW Group, Bayerische Motoren Werke Aktiengesellschaft, Germany
 Elektrobit, Automotive GmbH, Germany
 KIT, Karlsruher Institut für Technologie, Germany
 Menta SAS, France
 Prove & Run, France
 SIPEARL SAS, France
 Kernkonzept, Germany

CONTACT
[email protected]

CALL
SGA-LPMT-01-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021


EPiGRAM-HS - Exascale Programming Models for Heterogeneous Systems

MOTIVATION
• Increasing presence of heterogeneous technologies on pre-exascale supercomputers
• Need to port key HPC and emerging applications to these systems in time for exascale

OBJECTIVES
• Extend the programmability of large-scale heterogeneous systems with GPUs, FPGAs, HBM and NVM
• Introduce new concepts and functionalities, and implement them in two widely-used HPC programming systems for large-scale supercomputers: MPI and GASPI (see the sketch below)
• Maximize the productivity of application development on heterogeneous supercomputers by providing:
  - auto-tuned collective communication
  - a framework for automatic code generation for FPGAs
  - a memory abstraction device comprised of APIs
  - a runtime for automatic data placement on diverse memories
  - a DSL for large-scale deep-learning frameworks
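For context, the sketch referenced in the objectives shows standard MPI-3 one-sided communication, the baseline interface that EPiGRAM-HS sets out to extend towards heterogeneous memories such as GPU memory, HBM and NVM. It is generic MPI rather than project code, and it needs at least two ranks to run.

```c
/* Standard MPI-3 one-sided communication: each rank exposes a window
 * over its memory, and rank 0 puts a value directly into rank 1's
 * window, with no matching receive on the target side. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = -1.0;                 /* window memory on every rank */
    MPI_Win win;
    MPI_Win_create(&local, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);               /* open an access epoch */
    if (rank == 0) {
        double value = 42.0;
        MPI_Put(&value, 1, MPI_DOUBLE, 1 /* target rank */,
                0 /* displacement */, 1, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);               /* close epoch: data is visible */

    if (rank == 1) printf("rank 1 received %f\n", local);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```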


Transition to Exascale Computing

epigram-hs.eu Twitter: @epigramhs

COORDINATING ORGANISATION KTH, Sweden

OTHER PARTNERS
 EPCC, United Kingdom
 ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
 Fraunhofer, Germany
 Cray, Switzerland
 ECMWF, United Kingdom

CONTACT Stefano Markidis, [email protected]

CALL FETHPC-02-2017

PROJECT TIMESPAN 01/09/2018 - 31/08/2021


eProcessor - European, extendable, energy-efficient, energetic, embedded, extensible Processor Ecosystem

The eProcessor project is an ambitious combination of processor design, based on the RISC-V open source hardware ISA, applications and system software, bringing together multiple partners to leverage and extend pre-existing Intellectual Property (IP), combined with new IP that can be used as building blocks for future HPC systems, both for traditional and emerging application domains. As such, the eProcessor project's overall goal is to create an open source full stack ecosystem (both software and hardware) by achieving the following objectives:
1. Extend open source to include open source hardware for HPC: The eProcessor technology will be based on the RISC-V open source instruction set architecture (ISA) and will feature high performance computing and data analytics accelerators coupled to a high performance, low energy out-of-order processor (Europe's first high performance out-of-order 64-bit RISC-V platform). This is a major first step in the direction of an open European software/hardware ecosystem, which will guarantee technology independence.
2. Software/hardware co-design for improved application performance and system energy efficiency: eProcessor will meet the performance and energy requirements of new and existing HPC applications. eProcessor will co-design solutions to provide high performance, low-power, and fault tolerance.


Uniquely, the project will specialize all components of the system in the context of a broad application domain: a combination of energy efficient accelerators, adaptive on-chip memory structures, and a flexible and high performance energy-efficient CPU, with the corresponding open source software stack.
3. HPC and HPDA applications: eProcessor will use a diverse set of applications in HPC and high performance data analytics (HPDA), including Artificial Intelligence (AI), Deep Learning (DL), Machine Learning (ML) and Bioinformatics applications, to drive the design of the overall system. eProcessor will extend these applications and their frameworks to support the RISC-V ISA.
4. Focus on sustained application performance: Many HPC and HPDA applications use sparse data sets and/or low/mixed-precision. Instead of focusing on the peak performance of dense computations, eProcessor targets a broader collection of applications by developing a system targeting sustained application performance.
5. Stimulate European collaboration: eProcessor partners will leverage their existing IP from multiple European projects such as the European Processor Initiative (EPI), Low-Energy Toolset for Heterogeneous Computing (LEGaTO), MareNostrum Experimental Exascale Platform (MEEP), POP2 CoE, Tulipp, EuroEXA, ExaNeSt and DeepHealth, and extend their capabilities and improve the Technology Readiness Level (TRL).

By employing software/hardware co-design, as shown in the figure on the previous page, eProcessor covers the whole system stack from HPC and HPDA applications at the top, down to the processor core with accelerators at the bottom, while all levels are mutable so as to enable true, unrestricted co-design.

Extreme scale computing and data driven technologies

eprocessor.eu
Twitter: @eprocessor_eu
LinkedIn: @eprocessor

COORDINATING ORGANISATION
Barcelona Supercomputing Center (BSC)

OTHER PARTNERS
 Chalmers tekniska högskola, Sweden
 Idryma Technologias Kai Erevnas, Foundation for Research and Technology, Greece
 Università degli Studi di Roma "La Sapienza", Italy
 Cortus, France
 christmann informationstechnik + medien GmbH & Co. KG, Germany
 Universität Bielefeld, Germany
 EXTOLL GmbH, Germany
 Thales SA, France
 Exascale Performance Systems – EXAPSYS P.C., Greece

CONTACT
Nehir Sonmez, [email protected]

CALL
EuroHPC-01-2019

PROJECT TIMESPAN
01/04/2021 - 31/03/2024


ESCAPE-2 - Energy-efficient SCalable Algorithms for weather and climate Prediction at Exascale

OBJECTIVE

ESCAPE-2 will develop world-class, extreme-scale computing capabilities for European operational numerical weather and climate prediction, and provide the key components for weather and climate domain benchmarks to be deployed on extreme-scale demonstrators and beyond. This will be achieved by developing bespoke and novel mathematical and algorithmic concepts, combining them with proven methods, and thereby reassessing the mathematical foundations forming the basis of Earth system models. ESCAPE-2 also invests in significantly more productive programming models for the weather-climate community, through which novel algorithm development will be accelerated and future-proofed. Eventually, the project aims at providing exascale-ready production benchmarks to be operated on extreme-scale demonstrators (EsD) and beyond. ESCAPE-2 combines cross-disciplinary uncertainty quantification tools (URANIE) for high-performance computing, originating from the energy sector, with ensemble-based weather and climate models to quantify the effect of model and data related uncertainties on forecasting – a capability which weather and climate prediction has pioneered since the 1960s.

The mathematics and algorithmic research in ESCAPE-2 will focus on implementing data structures and tools supporting parallel computation of dynamics and physics on multiple scales and multiple levels. Highly-scalable spatial discretization will be combined with proven large time-stepping techniques to optimize both time-to-solution and energy-to-solution. Connecting multi-grid tools, iterative solvers, and overlapping computations with flexible-order spatial discretization will strengthen algorithm resilience against soft or hard failure. In addition, machine learning techniques will be applied for accelerating complex sub-components. The sum of these efforts will aim at achieving, at the same time: performance, resilience, accuracy and portability.


Transition to Exascale Computing

www.hpc-escape2.eu/

COORDINATING ORGANISATION ECMWF - European Centre for Medium-range Weather Forecasts, United Kingdom

OTHER PARTNERS  DKRZ - Deutsches Klimarechenzentrum GmbH, Germany  Max-Planck-Gesellschaft zur Förderung der Wissenschaften EV, Germany  Eidgenössisches Depertment des Innern, Switzerland  Barcelona Supercomputing Centre (BSC), Spain  CEA - Commissariat à l’Energie Atomique et aux énergies alternatives, France  Loughborough University, United Kingdom  Institut Royal Météorologique de Belgique, Belgium  Politecnico di Milano, Italy  Danmarks Meteorologiske Institut, Den- mark  Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici, Italy  Atos (Bull SAS), France

CONTACT Peter Bauer, [email protected]

CALL FETHPC-02-2017

PROJECT TIMESPAN 01/10/2018 - 30/09/2021


EuroEXA - The Largest Group of ExaScale Projects and Biggest Co-Design Effort

EuroEXA is a co-designed project funded by EU Horizon 2020. We are working to develop technologies that meet the demands of ExaScale computing – delivering a ground-breaking architecture capable of scaling to a peak performance of 400 PetaFLOPS, with a peak power system envelope of 30MW that approaches PUE parity through renewables and immersion-based cooling.

The project has a €20m investment over 52 months and is part of a total €50m investment made by the European Commission across the EuroEXA group of projects supporting research, innovation and action across applications, system software, hardware, networking, storage, liquid cooling, and data centre technologies.

Our project group was assembled to drive the development of our members' technologies, working together to co-design the future of EU-based ExaScale supercomputers. To do that, we are building three testbeds, showing new architectures and technologies which can be extrapolated to demonstrate:

• Compute Performance – up to 400 PetaFLOPS, with an estimated system power of around 30MW peak consumption.
• Energy Efficiency – 250 PetaFLOPS per MW for future iterations of semiconductor technologies within the architecture.
• Compactness – showing that an ExaScale machine could be built within 30 shipping containers, with an edge-to-edge distance of less than 40m for the computers.

Partners have ported a rich mix of key applications to the EuroEXA FPGA platform – from forecasting the weather and modelling particle physics to simulating the human brain and searching for new drugs to treat disease.

We are working to build very dense PCBs, bringing the electronics as close together as possible to drive performance and reduce energy demands. Similarly, we have developed a three-level interconnect that gives data the shortest possible routes to travel, while allowing for the expansion of the system with commodity OpenFlow switches – offering scalability with fewer bottlenecks.

The project objectives include developing and deploying Arm Cortex and AMD EPYC technology processing systems with FPGA acceleration at PetaFLOP level, towards an ExaScale procurement for deployment in 2022/23.


Co-design of HPC systems and applications

www.euroexa.eu Twitter: @EuroEXA LinkedIn: @euroEXA

The EuroEXA Blade

COORDINATING ORGANISATION
ICCS (Institute of Communication and Computer Systems), Greece

OTHER PARTNERS
 Arm, UK
 The University of Manchester, UK
 Barcelona Supercomputing Centre (BSC), Spain
 FORTH (Foundation for Research and Technology Hellas), Greece
 The Hartree Centre of STFC, UK
 IMEC, Belgium
 ZeroPoint Technologies, Sweden
 Iceotope, UK
 Synelixis Solutions Ltd., Greece
 Maxeler Technologies, UK
 Neurasmus, Netherlands
 INFN (Istituto Nazionale Di Fisica Nucleare), Italy
 INAF (Istituto Nazionale Di Astrofisica), Italy
 ECMWF (European Centre for Medium-Range Weather Forecasts), International
 Fraunhofer, Germany

CONTACT
Peter Hopton (Dissemination & Exploitation Lead), [email protected]

CALL
FETHPC-01-2016

PROJECT TIMESPAN
01/09/2017 - 31/12/2021


EXA2PRO - Enhancing Programmability and boosting Performance Portability for Exascale Computing Systems

Advancing scientific progress in domains such as physics, energy and material science by developing and deploying applications in computing clusters of hundreds of thousands of CPU cores is exciting, yet very challenging. The European Commission-funded project EXA2PRO - Enhancing Programmability and boosting Performance Portability for Exascale Computing Systems - designs and develops a framework for enabling the efficient deployment of applications in supercomputing systems.

The EXA2PRO project is developing programming models and tools in a co-design activity together with relevant applications from the high energy physics, material science, chemical processes for cleaner environment and energy storage domains. The EXA2PRO goal is to increase the productivity of developing and deploying applications on heterogeneous computing systems and to promote and lower the barrier of access to exascale computing systems for the scientific community and industry.

EXA2PRO framework and indicative results


As the project ends within 2021, tools of the EXA2PRO framework have been evaluated in a wide variety of applications, generating impressive results. Some recent success stories are the following:
• The EXA2PRO tools have been used to advance CO2 capture technologies, by enabling the generation of CO2 capture solutions 41% faster.
• The performance of a supercapacitor simulation code (MetalWalls) improved by 33% by applying the EXA2PRO runtime system.
• Initial results of porting the supercapacitor simulation code to a dataflow engine accelerator showed significant performance and energy gains.
• The EXA2PRO programming model has been applied to a multi-node neural simulation code, greatly improving its scalability.

EXA2PRO outcomes have major impact on:
1. The scientific and industrial community that focuses on application deployment in supercomputing centres: the EXA2PRO framework improves productivity. After applying the EXA2PRO API in applications, evaluation across a variety of architectures is performed automatically.
2. Application developers that target exascale computing systems: EXA2PRO provides tools for improving source code maintainability/reusability, which will allow application evolution with reduced developers' effort.
3. The scientific community and the industry relevant to the EXA2PRO applications, with significant impact on the materials and processes design for CO2 capture and on the supercapacitors industry.

Transition to Exascale Computing

exa2pro.eu
Twitter: @exa2pro_h2020

COORDINATING ORGANISATION
ICCS (Institute of Communication and Computer Systems), Greece

OTHER PARTNERS
 Linköpings Universitet, Sweden
 CERTH, Ethniko Kentro Erevnas kai Technologikis Anaptyxis, Greece
 INRIA, Institut National de Recherche en Informatique et Automatique, France
 Forschungszentrum Jülich GmbH, Germany
 Maxeler Technologies Ltd, United Kingdom
 CNRS - Centre National de la Recherche Scientifique, France
 UoM, University of Macedonia, Greece

CONTACT
Prof. Dimitrios Soudris (project coordinator), [email protected]
Dr. Lazaros Papadopoulos (technical coordinator), [email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/05/2018 - 31/07/2021


ExaQUte - EXAscale Quantification of Uncertainties for Technology and Science Simulation

OBJECTIVE The efficient exploitation of Exascale system will be addressed by combining State-of-the- The ExaQUte project aims at constructing a Art dynamic task-scheduling technologies framework to enable Uncertainty Quantifica- with space-time accelerated solution meth- tion (UQ) and Optimization Under Uncertain- ods, where parallelism is harvested both in ties (OUU) in complex engineering problems space and time. using computational simulations on Exascale systems. The methods and tools developed in ExaQUte will be applicable to many fields of science The stochastic problem of quantifying uncer- and technology. The chosen application fo- tainties will be tackled by using a Multi Level cuses on wind engineering, a field of notable MonteCarlo (MLMC) approach that allows a industrial interest for which currently no high number of stochastic variables. New reliable solution exists. This will include the theoretical developments will be carried out quantification of uncertainties in the re- to enable its combination with adaptive mesh sponse of civil engineering structures to the refinement, considering both, octree-based wind action, and the shape optimization and anisotropic mesh adaptation. taking into account uncertainties related to Gradient-based optimization techniques will wind loading, structural shape and material be extended to consider uncertainties by behaviour. developing methods to compute stochastic sensitivities. This requires new theoretical All developments in ExaQUte will be open- and computational developments. With a source and will follow a modular approach, proper definition of risk measures and con- thus maximizing future impact. straints, these methods allow high- performance robust designs, also maximizing the solution reliability.

The description of complex geometries will be possible by employing embedded methods, which guarantee high robustness in the mesh generation and adaptation steps, while preserving the exact geometry representation.


Transition to Exascale Computing

www.exaqute.eu Twitter: @ExaQUteEU

COORDINATING ORGANISATION CIMNE- Centre Internacional de Metodes Numerics en Enginyeria, Spain

OTHER PARTNERS  Barcelona Supercomputing Centre (BSC), Spain  Technische Universität München, Germany  INRIA, Institut National de Recherche en Informatique et Automatique, France  IT4Innovations, VSB – Technical University of Ostrava, Czech Republic  Ecole Polytechnique Fédérale de Lausanne, Switzerland  Universitat Politecnica de Catalunya, Spain  str.ucture GmbH, Germany

CONTACT Cecilia Soriano, [email protected] Riccardo Rossi, [email protected]

CALL FETHPC-02-2017

PROJECT TIMESPAN 01/06/2018 - 31/05/2021


IO-SEA - IO Software for Exascale Architecture

IO-SEA aims to provide a novel data management and storage platform for exascale computing based on hierarchical storage management (HSM) and on-demand provisioning of storage services. The platform will efficiently make use of storage tiers spanning NVMe and NVRAM at the top all the way down to tape-based technologies. System requirements are driven by data-intensive use-cases, in a very strict co-design approach. The concept of ephemeral data nodes and data accessors is introduced, allowing users to flexibly operate the system using various well-known data access paradigms, such as POSIX namespaces, S3/Swift interfaces, MPI-IO and other data formats and protocols. These ephemeral resources eliminate the problem of treating storage resources as static and unchanging system components – which is not a tenable proposition for data-intensive exascale environments. The methods and techniques are applicable to exascale-class data-intensive applications and workflows that need to be deployed in highly heterogeneous computing environments.

Critical aspects of intelligent data placement are considered for extreme volumes of data. This ensures that the right resources among the storage tiers are used and accessed by data nodes as close as possible to compute nodes – optimising performance, cost, and energy at extreme scale. Advanced IO instrumentation and monitoring features will be developed to that effect, leveraging the latest advancements in AI and machine learning to systematically analyse the telemetry records and make smart decisions on data placement. These ideas, coupled with in-storage computation, remove unnecessary data movements within the system.

The IO-SEA project (EuroHPC-2019-1 topic b) has connections to the DEEP-SEA (topic d) and RED-SEA (topic c) projects. It leverages technologies developed by the SAGE, SAGE2 and NextGEN-IO projects, and strengthens the TRL of the developed products and technologies.


Extreme scale computing and data driven technologies

iosea-project.eu Twitter: @iosea_eu

COORDINATING ORGANISATION
Commissariat à l'énergie atomique et aux énergies alternatives (CEA), France

OTHER PARTNERS  Atos (Bull SAS), France  Forschungszentrum Jülich, Germany  European Centre for Medium-Range Weat- her Forecasts, United Kingdom  Seagate Systems (UK) Limited, United Kingdom  National University of Ireland Galway, Ireland  IT4Innovations, VSB – Technical University of Ostrava, Czech Republic  Kungliga Tekniska Högskolan (KTH), Sweden  Masaryk University, Czech Republic  Johannes Gutenberg-Universitat Mainz, Germany

CONTACT Philippe DENIEL, [email protected]

CALL EuroHPC-01-2019

PROJECT TIMESPAN 01/04/2021 - 31/03/2024


MAELSTROM - MAchinE Learning for Scalable meTeoROlogy and cliMate

To develop Europe’s computer architecture of the future, MAELSTROM will co-design bespoke compute system designs for optimal application performance and energy efficiency, a software framework to optimise usability and training efficiency for machine learning at scale, and large-scale machine learning applications for the domain of weather and climate science.


The MAELSTROM compute system designs will benchmark the applications across a range of computing systems regarding energy consumption, time-to-solution, numerical precision and solution accuracy.

Customised compute systems will be designed that are optimised for application needs, to strengthen Europe's high-performance computing portfolio and to pull recent hardware developments, driven by general machine learning applications, toward the needs of weather and climate applications.

The MAELSTROM software framework will enable scientists to apply and compare machine learning tools and libraries efficiently across a wide range of computer systems. A user interface will link application developers with compute system designers, and automated benchmarking and error detection of machine learning solutions will be performed during the development phase. Tools will be published as open source.

The MAELSTROM machine learning applications will cover all important components of the workflow of weather and climate predictions, including the processing of observations, the assimilation of observations to generate initial and reference conditions, model simulations, as well as post-processing of model data and the development of forecast products. For each application, benchmark datasets with up to 10 terabytes of data will be published online for training and machine learning tool development at the scale of the fastest supercomputers in the world. MAELSTROM machine learning solutions will serve as blueprints for a wide range of machine learning applications on supercomputers in the future.

Extreme scale computing and data driven technologies

www.maelstrom-eurohpc.eu
Twitter: @MAELSTROM_EU
LinkedIn: @maelstrom-eu

COORDINATING ORGANISATION
European Centre for Medium-Range Weather Forecasts, United Kingdom

OTHER PARTNERS
 4Cast GmbH & Co. KG, Germany
 E4 Computer Engineering SpA, Italy
 Eidgenössische Technische Hochschule Zürich, Switzerland
 Forschungszentrum Jülich, Germany
 Meteorologisk institutt, Norway
 Université du Luxembourg, Luxembourg

CONTACT
Peter Dueben, [email protected]

CALL
EuroHPC-01-2019

PROJECT TIMESPAN
01/04/2021 - 31/03/2024


MAESTRO: Middleware for memory and data-awareness in workflows

Maestro will build a data-aware and memory-aware middleware framework that addresses ubiquitous problems of data movement in complex memory hierarchies and at many levels of the HPC software stack.

Though HPC and HPDA applications pose a broad variety of efficiency challenges, it would be fair to say that the performance of both has become dominated by data movement through the memory and storage systems, as opposed to floating point computational capability. Despite this shift, current software technologies remain severely limited in their ability to optimise data movement. The Maestro project addresses what it sees as the two major impediments of modern HPC software:

1. Moving data through memory was not always the bottleneck. The software stack that HPC relies upon was built through decades of a different situation, when the cost of performing floating point operations (FLOPS) was paramount. Several decades of technical evolution built a software stack and programming models highly fit for optimising floating-point operations but lacking in basic data handling functionality. We characterise this set of technical issues as missing data-awareness.


2. Software rightfully insulates users from hardware details, especially as we move higher up the software stack. But HPC applications, programming environments and systems software cannot make key data movement decisions without some understanding of the hardware, especially the increasingly complex memory hierarchy. With the exception of runtimes, which treat memory in a domain-specific manner, software typically must make hardware-neutral decisions which can often leave performance on the table. We characterise this issue as missing memory-awareness.

Maestro proposes a middleware framework that enables memory- and data-awareness. Maestro has developed new data-aware abstractions that can be used at any level of software, e.g. compiler, runtime or application. Core elements are the Core Data Objects (CDOs), which are handed to and retrieved from the middleware through give/take semantics. The middleware is being designed such that it enables modelling of memory and storage hierarchies to allow for reasoning about data movement and placement based on the costs of moving data objects. The middleware will support automatic movement and promotion of data in memories and storage and allow for data transformations and optimisation.

Maestro follows a co-design methodology using a set of applications and workflows from diverse areas including numerical weather forecasting, earth-system modelling, materials sciences and in-situ data analysis pipelines for computational fluid dynamics simulations.

Transition to Exascale Computing

www.maestro-data.eu
Twitter: @maestrodata
LinkedIn: @maestrodata

COORDINATING ORGANISATION
Forschungszentrum Jülich GmbH, Germany

OTHER PARTNERS
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• Appentra Solutions SL, Spain
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• ECMWF - European Centre for Medium-range Weather Forecasts, United Kingdom
• Seagate Systems UK Ltd, United Kingdom
• Hewlett Packard Enterprise, Switzerland

CONTACT
Dirk Pleiter, [email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/09/2018 - 30/11/2021
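The give/take semantics of the CDOs can be illustrated with a minimal sketch, assuming a toy in-memory middleware; all class and method names here are invented for the example and are not the actual Maestro API.

    # Minimal sketch of the give/take idea behind Maestro's Core Data
    # Objects (CDOs). Illustrative only; not the real Maestro interface.
    class Middleware:
        def __init__(self):
            self._pool = {}

        def give(self, name, data):
            # A producer hands a CDO over to the middleware, which is
            # then free to move, promote or transform the data.
            self._pool[name] = data

        def take(self, name):
            # A consumer takes ownership back; a real middleware could
            # first place the data in the most suitable memory tier.
            return self._pool.pop(name)

    mw = Middleware()
    mw.give("temperature_field", [21.3, 21.7, 22.1])   # producer side
    field = mw.take("temperature_field")               # consumer side
    print(field)

The point of the ownership transfer is that, between give and take, the middleware alone decides where the data lives, which is what enables cost-based reasoning about placement.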


MEEP: The MareNostrum Experimental Exascale Platform

MEEP is an exploratory platform, based on flexible FPGA (field-programmable gate array) emulation, to develop, integrate, test and co-design hardware and software for exascale supercomputers and other hardware targets, based on European-developed intellectual property (IP).

MEEP provides two important functions:
• An evaluation platform for pre-silicon IP and ideas, at speed and scale;
• A software development and experimentation platform to enable software readiness for new hardware.

MEEP enables software development, accelerating software maturity, compared to the limitations of software simulation. IP can be tested and validated before moving to silicon, saving time and money.

The objectives of MEEP are to leverage and extend projects like the European Processor Initiative (EPI) and the Performance Optimisation and Productivity Centre of Excellence (POP CoE).

The ultimate goal of the project is to create an open full-stack ecosystem that can be used for academic purposes and integrated into a functional accelerator or cores for traditional and emerging HPC applications.


European Processor

meep-project.eu

COORDINATING ORGANISATION Barcelona Supercomputing Center (BSC), Spain

OTHER PARTNERS
• University of Zagreb, Faculty of Electrical Engineering and Computing, Croatia
• The Scientific and Technological Research Council of Turkey, Turkey

CONTACT [email protected]

CALL EuroHPC-06-2019

PROJECT TIMESPAN
01/01/2020 - 31/12/2022


Mont-Blanc 2020: European scalable, modular and power-efficient HPC processor

Now that the Mont-Blanc 2020 project has come to an end, it is time to look back on its accomplishments. Mont-Blanc 2020 was the last of a long series of projects. The initial concept when we started in 2011 was very disruptive: leveraging mobile (Arm) chips and their power-efficiency to run high-performance computing (HPC) applications. Successive Mont-Blanc projects have witnessed and accompanied the rise of Arm processors in servers: the current Fugaku system ranked first in the TOP500 has reached the apex of high-end Arm CPUs, vindicating the initial rationale behind the project.

One of the strong points of the Mont-Blanc projects was their industry / academia collaboration. Mont-Blanc 2020 was no exception, with a team of three core partners with complementary profiles (Arm, Atos, BSC), three active SMEs (Kalray, Semidynamics, Sipearl) and prominent research partners (BSC, CEA, Jülich Supercomputing Centre).

The focus of Mont-Blanc 2020 was processor design. It essentially addressed system-on-chip (SoC) design and processor intellectual property (IP) to enable the work of the European Processor Initiative (EPI), focusing on the challenge of achieving extremely high performance per Watt. For the compute unit, we selected the Arm instruction set, with its Scalable Vector Extension (SVE) optimized for HPC and artificial intelligence (AI). It has particular technological relevance for high-end cores; more importantly, the availability of a dynamic software ecosystem was necessary to run real applications as required by our co-design methodology.

An important achievement of Mont-Blanc 2020 is the tools and methodology we selected and developed for processor simulation and virtual prototyping, i.e. the tools that allowed our researchers to test applications and evaluate future performance prior to silicon availability. We developed a unique co-design methodology for SoC infrastructure verification and optimization. Co-design is always a challenge, but we faced an additional test, which was to get hardware and software teams from different organizations to work together. We had to build a bridge between the computer-aided design (CAD) tools used by our industrial partners and the open source tools used by our academic partners. Our approach has increased the speed of simulation by a factor of 1,000 and even by 10,000 for some applications.

Many of the features we developed for our Simulation Framework are already used outside of the project. For example, Mont-Blanc 2020 was instrumental in the implementation of SVE instructions in gem5, which is part of the official open source release 20.0 of gem5. Another example is the SVE-related improvements to the MUlti-scale Simulation Approach (MUSA) developed within Mont-Blanc 2020, which are used within EPI.

However, the top Mont-Blanc 2020 achievement is without doubt the IP developed for a low power network on chip (NoC). The NoC is critical in a SoC when targeting highly demanding applications. It is also very challenging: in manycore architectures you need to maintain low latency while increasing the number of cores as well as the throughput of each core. The NoC IP developed by Mont-Blanc 2020 will be included in the next-generation EPI processor. Our NoC and related NoC IPs are also integrated in Atos's IP portfolio that will serve future commercial and research projects.

To conclude, European sovereignty in the provision of HPC technology was part of Mont-Blanc's vision from the start. We aimed to contribute to the revival of the European SoC design ecosystem by creating an IP portfolio as well as boosting the skills necessary for chip design. Today we can say: mission accomplished!

European Processor

montblanc-project.eu
Twitter: @MontBlanc_Eu
LinkedIn: @mont-blanc-project

COORDINATING ORGANISATION
Atos (Bull SAS), France

OTHER PARTNERS
• Arm, United Kingdom
• Barcelona Supercomputing Centre (BSC), Spain
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• Forschungszentrum Jülich, Germany
• Kalray, France
• SemiDynamics, Spain

CONTACT
Etienne Walter, [email protected]

CALL
ICT-05-2017

PROJECT TIMESPAN
01/12/2017 - 31/03/2021

The Mont-Blanc 2020 physical architecture


NEASQC: NExt ApplicationS of Quantum Computing

The NEASQC project brings together academic experts and industrial end-users to investigate and develop a new breed of Quantum-enabled applications that can take advantage of NISQ (Noisy Intermediate-Scale Quantum) systems in the near future. NEASQC is use-case driven, addressing practical problems such as drug discovery, CO2 capture, smart energy management, natural language processing, breast cancer detection, probabilistic risk assessment for energy infrastructures or hydrocarbon well optimisation. NEASQC aims to initiate an active European community around NISQ Quantum Computing by providing a common toolset that will attract new industrial users.

The NEASQC consortium has chosen a wide selection of NISQ-compatible industrial and financial use-cases, and will develop new quantum software techniques to solve those use-cases with a practical quantum advantage. To achieve this, the project brings together a multidisciplinary consortium of academic and industry experts in Quantum Computing, High Performance Computing, Artificial Intelligence, chemistry...

NEASQC aims at demonstrating that, though the millions of qubits that will guarantee fully fault-tolerant quantum computing are still far away, there are practical use cases for the NISQ devices that will be available in the near future. NISQ computing can deliver significant advantages when running certain applications, thus bringing game-changing benefits to users, and particularly industrial users.

The ultimate ambition of NEASQC is to encourage European user communities to investigate NISQ quantum computing. For this purpose, the project consortium will define and make available a complete and common toolset that new industrial actors can use to start their own practical investigation and share their results.

NEASQC also aims to build a much-needed bridge between Quantum Computing hardware activities, particularly those of the Quantum Technologies flagship, and the end-user community. Even more than in classical IT, NISQ computing demands a strong cooperation between hardware teams and software users. We expect our work on use cases will provide strong directions for the development of NISQ machines, which will be very valuable to the nascent quantum hardware industry.


NEASQC OBJECTIVES
1. Develop 9 industrial and financial use cases with a practical quantum advantage for NISQ machines.
2. Develop open source NISQ programming libraries for industrial use cases, with a view to facilitate quantum computing experimentation for new users (a minimal illustration follows at the end of this entry).
3. Build a strong user community dedicated to industrial NISQ applications.
4. Develop software stacks and benchmarks for the Quantum Technology Flagship hardware platforms.

NEASQC is one of the projects selected within the second wave of Quantum Flagship projects.

Quantum Computing

neasqc.eu
Twitter: @NEASQC
LinkedIn: @neasqc-project

COORDINATING ORGANISATION
Atos (Bull SAS), France

OTHER PARTNERS
• AstraZeneca AB, Sweden
• CESGA (Fundación Publica Gallega Centro Tecnológico de Supercomputación de Galicia), Spain
• Electricité de France (EDF), France
• HQS Quantum Simulations GmbH, Germany
• HSBC Bank Plc, United Kingdom
• Irish Centre for High-End Computing (ICHEC), Ireland
• Leiden University, Netherlands
• TILDE SIA, Latvia
• TOTAL S.A., France
• Universidade da Coruña – CITIC, Spain
• Université de Lorraine (LORIA), France

CONTACT Cyril Allouche, [email protected]

CALL FETFLAG-05-2020

PROJECT TIMESPAN 01/09/2020 – 31/08/2024
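To give a flavour of the quantum computing experimentation that objective 2 aims to open up, here is a two-qubit Bell-state circuit, the customary first NISQ example. It is written with the open-source Qiskit library purely as a stand-in; NEASQC's own libraries are not shown here and may differ substantially.

    # A two-qubit Bell-state circuit: the "hello world" of NISQ
    # experimentation. Uses the open-source Qiskit library as a
    # stand-in; this is not the NEASQC toolset.
    from qiskit import QuantumCircuit

    qc = QuantumCircuit(2, 2)
    qc.h(0)             # put qubit 0 into superposition
    qc.cx(0, 1)         # entangle qubit 1 with qubit 0
    qc.measure([0, 1], [0, 1])
    print(qc.draw())    # measuring yields 00 or 11 with equal odds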


RECIPE: REliable power and time-ConstraInts-aware Predictive management of heterogeneous Exascale systems

OBJECTIVE

The current HPC facilities will need to grow by an order of magnitude in the next few years to reach the Exascale range. The dedicated middleware needed to manage the enormous complexity of future HPC centres, where deep heterogeneity is needed to handle the wide variety of applications within reasonable power budgets, will be one of the most critical aspects in the evolution of HPC infrastructure towards Exascale. This middleware will need to address the critical issue of reliability in the face of the increasing number of resources, and therefore decreasing mean time between failures.

The RECIPE Hardware-Software Stack


To close this gap, RECIPE provides: a hierarchical runtime resource management infrastructure optimizing energy efficiency and ensuring reliability for both time-critical and throughput-oriented computation; a predictive reliability methodology to support the enforcement of QoS guarantees in the face of both transient and long-term hardware failures, including thermal, timing and reliability models; and a set of integration layers allowing the resource manager to interact with both the application and the underlying deeply heterogeneous architecture, addressing them in a disaggregated way.

Quantitative goals for RECIPE include: a 25% increase in energy efficiency (performance/watt) with a 15% MTTF improvement due to proactive thermal management; an energy-delay product improved by up to 25%; and a 20% reduction of faulty executions.

The project will assess its results against a set of real-world use cases addressing key application domains, ranging from well-established HPC applications such as geophysical exploration and meteorology to emerging application domains such as biomedical machine learning and data analytics.

To this end, RECIPE relies on a consortium composed of four leading academic partners (POLIMI, UPV, EPFL, CeRICT); two supercomputing centres, BSC and PSNC; a research hospital, CHUV; and an SME, IBTS, which provide effective exploitation avenues through industry-based use cases.

Transition to Exascale Computing

www.recipe-project.eu
Twitter: @EUrecipe

COORDINATING ORGANISATION
Politecnico di Milano, Italy

OTHER PARTNERS
• UPV - Universitat Politecnica de Valencia, Spain
• CeRICT - Centro Regionale Information Communication Technology scrl, Italy
• BSC - Barcelona Supercomputing Centre, Spain
• PSNC - Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, Poland
• EPFL - Ecole Polytechnique Fédérale de Lausanne, Switzerland
• IBTS - Intelligence behind Things Solutions SRL, Italy
• CHUV - Centre Hospitalier Universitaire Vaudois, Switzerland

CONTACT
Prof. William Fornaciari, [email protected]
Prof. Giovanni Agosta, [email protected]

CALL
FETHPC-02-2017

PROJECT TIMESPAN
01/05/2018 - 31/10/2021
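The energy-delay product named among the quantitative goals is simply the energy consumed multiplied by the runtime, so a claimed improvement can be checked in a few lines; all numbers in this sketch are hypothetical.

    # Energy-delay product (EDP): a standard metric combining energy
    # and runtime. RECIPE targets up to 25% EDP improvement.
    # Illustrative sketch with made-up numbers; not project code.
    def energy_delay_product(energy_joules: float, runtime_seconds: float) -> float:
        return energy_joules * runtime_seconds

    baseline = energy_delay_product(energy_joules=5.0e6, runtime_seconds=3600)
    managed  = energy_delay_product(energy_joules=4.2e6, runtime_seconds=3250)
    improvement = 1 - managed / baseline
    print(f"EDP improvement: {improvement:.1%}")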


RED-SEA: Network Solution for Exascale Architectures

Network interconnects play an enabling role in HPC systems, and this will be even truer for the coming Exascale systems that will rely on higher node counts and increased use of parallelism and communication. Moreover, next-generation HPC and data-driven systems will be powered by heterogeneous computing devices, including low-power Arm and RISC-V processors, high-end CPUs, vector acceleration units and GPUs suitable for massive single-instruction multiple-data (SIMD) workloads, as well as FPGA and ASIC designs tailored for extremely power-efficient custom codes.

These compute units will be surrounded by distributed, heterogeneous (often deep) memory hierarchies, including high-bandwidth memories and fast devices offering microsecond-level access time. At the same time, modern data-parallel processing units such as GPUs and vector accelerators can crunch data at amazing rates (tens of TFLOPS). In this landscape, the network may well become the next big bottleneck, similar to memory in single node systems.

RED-SEA will build upon the European interconnect BXI (BullSequana eXascale Interconnect), together with standard and mature technology (Ethernet) and previous EU-funded initiatives, to provide a competitive and efficient network solution for the exascale era and beyond. This involves developing the key IPs and the software environment that will deliver:

• scalability, while maintaining an acceptable total cost of ownership and power efficiency;
• virtualization and security, to allow various applications to efficiently and safely share an HPC system;
• quality-of-service and congestion management, to make it possible to share the platform among users and applications with different demands;
• reliability at scale, because fault tolerance is a key concern in a system with a very large number of components;
• support of high-bandwidth low-latency HPC Ethernet, as HPC systems increasingly need to interact securely with the outside world, including public clouds, edge servers or third-party HPC systems;
• support of heterogeneous programming models and runtimes to facilitate the convergence of HPC and HPDA;
• support for low-power processors and accelerators.


RED-SEA IN THE MODULAR SUPERCOMPUTING ARCHITECTURE

RED-SEA supports the Modular Supercomputing Architecture (MSA) that underpins all of the SEA projects (DEEP-SEA and IO-SEA, two other projects resulting from the EuroHPC-01-2019 call).

In the MSA, BXI is the HPC fabric within each compute module, delivering low latency, high bandwidth and all required HPC features, whereas Ethernet is the high-performance federative network that interfaces with storage and with other compute modules. RED-SEA will design a seamless interface between BXI and Ethernet via a new Gateway solution.

Extreme scale computing and data driven technologies

redsea-project.eu
Twitter: @redsea_EU
LinkedIn: @redsea-project

COORDINATING ORGANISATION
Atos (Bull SAS), France

OTHER PARTNERS
• Idryma Technologias Kai Erevnas, Greece
• Commissariat à l'énergie atomique et aux énergies alternatives (CEA), France
• Forschungszentrum Jülich, Germany
• EXTOLL GmbH, Germany
• Eidgenössische Technische Hochschule Zürich, Switzerland
• Universitat Politècnica de València, Spain
• Universidad de Castilla - La Mancha, Spain
• Istituto Nazionale di Fisica Nucleare – INFN, Italy
• Exapsys - Exascale Performance Systems, Greece
• eXact lab SRL, Italy

CONTACT
redsea-project.eu/contact/

CALL
EuroHPC-01-2019

PROJECT TIMESPAN 01/04/2021 - 31/03/2024


Sage2: Percipient Storage for Exascale Data Centric Computing 2

OBJECTIVE

The landscape for High Performance Computing is changing with the proliferation of enormous volumes of data created by scientific instruments and sensors, in addition to data from simulations. This data needs to be stored, processed and analysed, and existing storage system technologies in the realm of extreme computing need to be adapted to achieve reasonable efficiency in achieving higher scientific throughput.

We started on the journey to address this problem with the SAGE project. The HPC use cases and the technology ecosystem are now evolving further, and new requirements and innovations are brought to the forefront. It is extremely critical to address them today without “reinventing the wheel”, leveraging existing initiatives and know-how to build the pieces of the Exascale puzzle as quickly and efficiently as we can.

The SAGE paradigm already provides a basic framework to address the extreme scale data aspects of High Performance Computing on the path to Exascale. Sage2 (Percipient StorAGe for Exascale Data Centric Computing 2) intends to validate a next generation storage system building on top of the already existing SAGE platform to address new use case requirements in the areas of extreme scale computing scientific workflows and AI/deep learning, leveraging the latest developments in storage infrastructure software and the storage technology ecosystem.

Sage2 aims to provide significantly enhanced scientific throughput, improved scalability, and time and energy to solution for the use cases at scale. Sage2 will also dramatically increase the productivity of developers and users of these systems.

Sage2 will provide a highly performant and resilient, QoS-capable multi-tiered storage system, with data layouts across the tiers managed by the Mero Object Store, which is capable of handling in-transit/in-situ processing of data within the storage system, accessible through the Clovis API.

The 3-rack prototype system is now available for use at Jülich Supercomputing Centre.


Transition to Exascale Computing

sagestorage.eu
Twitter: @SageStorage

COORDINATING ORGANISATION Seagate Systems UK Ltd, United Kingdom

OTHER PARTNERS
• Atos (Bull SAS), France
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• United Kingdom Atomic Energy Authority, United Kingdom
• Kungliga Tekniska Hoegskolan (KTH), Sweden
• Forschungszentrum Jülich GmbH, Germany
• The University of Edinburgh, United Kingdom
• Kitware SAS, France
• Arm Ltd, United Kingdom

CONTACT Sai Narasimhamurthy, [email protected]

CALL FETHPC-02-2017

PROJECT TIMESPAN 01/09/2018 - 30/11/2021


SODALITE: SOftware Defined AppLication Infrastructures managemenT and Engineering

Empowering the DevOps World

In recent years the global ICT market has seen a tremendous rise in utility computing, which serves as the backend for practically any new technology, methodology or advancement, from healthcare to aerospace. We are entering a new era of heterogeneous, software-defined, high performance compute environments. In this context, SODALITE brings order to this chaos, as the way for all these technologies to be harmonized in a single ecosystem, extending their focus to other types of hardware and architectures.

SODALITE brings the vast knowledge of performance optimisation accrued by the HPC industry into the cloud computing arena. It exploits a model-driven approach, empowered by ontological reasoning, to support experts operating applications or resources over heterogeneous infrastructures. Broadly, the actors who benefit most are:

Application Ops Experts. Those in charge of operating (deploying, executing, optimizing and monitoring) applications, as they will be able to better understand the application requirements in terms of deployment/execution environment and QoS.

Resource Experts. From the infrastructure management point of view, they can operate resources on different environments, such as cloud, HPC or edge, in a simplified manner.

Quality Experts. They can make use of SODALITE libraries of patterns for addressing specific performance and quality problems, both from infrastructure and application perspectives.

SODALITE PILLARS AND LAYERS

SODALITE offers a layered solution (5 layers) based on four pillars: (i) semantic intelligence for describing, deploying and optimising applications; (ii) quality of service based on guaranteed SLAs; (iii) heterogeneity support of underlying infrastructures; and (iv) security, privacy and policy. Each of these layers represents a product in itself but can be combined with the others for an enhanced experience.

Layer 1: Smart IDE
Provides the user with an IDE and a semantic reasoner API based on sophisticated knowledge graphs and semantic reasoning.

Layer 2: deployMent OrchestratiOn pRovisioniNG (MOORING)
Consists of a framework for the usage of IaC and concepts of modelling application deployment, ensuring smooth resource operation.

Layer 3: Verification & Bug Prediction (FindIaCBug)
Responsible for the verification of the IaC code and the detection of various quality issues such as errors, smells, antipatterns and bugs in IaC artifacts.

Layer 4: monitoRing & rEconFIguraTion (REFIT)
Ensures the monitoring of system and network, collecting application-level and infrastructure-level metrics and events, used for reconfiguration and refactoring.

Layer 5: PerfOrmancE optimization
Composed of static and dynamic application optimisation, during design and runtime.

Software Technologies

www.sodalite.eu
Twitter: @SODALITESW
LinkedIn: @sodalite-eu
GitHub: github.com/SODALITE-EU

COORDINATING ORGANISATION
XLAB, Slovenia

OTHER PARTNERS
• Atos Spain, Spain
• IBM, Israel
• Politecnico di Milano (POLIMI), Italy
• HPE, Switzerland
• ITI-CERTH, Greece
• Adaptant, Germany
• Jheronimus Academy of Data Science (JADS), Netherlands
• High Performance Computing Center - Universität Stuttgart (HLRS), Germany

CONTACT
Project Coordinator: Daniel Vladušič, [email protected], +38640352527

CALL
H2020-ICT-2018-2

PROJECT TIMESPAN
01/02/2019 - 31/01/2022


SPARCITY: An Optimization and Co-design Framework for Sparse Computation

Perfectly aligned with the vision of the EuroHPC Joint Undertaking, the SparCity project aims at creating a supercomputing framework that will provide efficient algorithms and coherent tools specifically designed for maximising the performance and energy efficiency of sparse computations on emerging HPC systems, while also opening up new usage areas for sparse computations in data analytics and deep learning.

The framework enables comprehensive application characterization and modeling, performing synergistic node-level and system-level software optimizations. By creating a digital SuperTwin, the framework is also capable of evaluating existing hardware components and addressing what-if scenarios on emerging architectures and systems in a co-design perspective.
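Sparse matrix-vector multiplication (SpMV) is the archetype of the sparse computations the framework targets: its performance is bound by data movement and depends strongly on the sparsity pattern. The sketch below, using the standard SciPy library, is illustrative only and is not SparCity code.

    # Sparse matrix-vector multiplication (SpMV), the classic
    # memory-bound sparse kernel. Illustrative sketch, not project code.
    import numpy as np
    from scipy.sparse import random as sparse_random

    # A 10,000 x 10,000 matrix with ~0.1% nonzeros, in CSR format.
    A = sparse_random(10_000, 10_000, density=1e-3, format="csr",
                      random_state=0)
    x = np.ones(A.shape[1])
    y = A @ x   # performance is dominated by the sparsity pattern
    print(f"nnz = {A.nnz}, ||y||_1 = {np.abs(y).sum():.2f}")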


To demonstrate the effectiveness, societal impact, and usability of the framework, the SparCity project will enhance the computing scale and energy efficiency of four challenging real-life applications that come from drastically different domains, namely, computational cardiology, social networks, bioinformatics and autonomous driving. By targeting this collection of challenging applications, SparCity will develop world-class, extreme scale and energy-efficient HPC technologies, and contribute to building a sustainable exascale ecosystem and increasing Europe's competitiveness.

Extreme scale computing and data driven technologies

sparcity.eu
Twitter: @sparcity_eu
LinkedIn: sparcity-project-944b4320a

COORDINATING ORGANISATION
Koç Üniversitesi, Turkey

OTHER PARTNERS
• Sabancı University, Turkey
• Simula Research Laboratory AS, Norway
• INESC ID - Instituto de Engenharia de Sistemas e Computadores, Investigação e Desenvolvimento em Lisboa, Portugal
• Ludwig-Maximilians-Universität München, Germany
• Graphcore AS, Norway

CONTACT Prof. Didem Unat, [email protected]

CALL EuroHPC-01-2019

PROJECT TIMESPAN
01/04/2021 - 31/03/2024


TEXTAROSSA: Towards EXtreme scale Technologies and Accelerators for euROhpc hw/Sw Supercomputing Applications for exascale

To achieve high performance and high energy efficiency on near-future exascale computing systems, a technology gap needs to be bridged: increasing the efficiency of computation with extreme efficiency in HW and new arithmetics, as well as providing methods and tools for the seamless integration of reconfigurable accelerators in heterogeneous HPC multi-node platforms. TEXTAROSSA aims at tackling this gap through applying a co-design approach to heterogeneous HPC solutions, supported by the integration and extension of IPs, programming models and tools derived from European research projects, led by TEXTAROSSA partners.

The TEXTAROSSA Co-Design flow


The main directions for innovation are towards:
1. enabling mixed-precision computing, through the definition of IPs, libraries, and compilers supporting novel data types (including Posits), used also to boost the performance of AI accelerators (a toy illustration follows this entry);
2. implementing new multilevel thermal management and two-phase liquid cooling;
3. developing improved data movement and storage tools through compression;
4. ensuring secure HPC operation through HW-accelerated cryptography;
5. providing RISC-V based IP for fast task scheduling and IPs for low-latency intra-/inter-node communication.

These technologies will be tested on the Integrated Development Vehicles mirroring and extending the European Processor Initiative ARM64-based architecture, and on an OpenSequana testbed. To drive the technology development and assess the impact of the proposed innovations, TEXTAROSSA will use a selected but representative number of HPC, HPDA and AI demonstrators covering challenging HPC domains such as general-purpose numerical kernels, High Energy Physics (HEP), Oil & Gas, climate modelling, and emerging domains such as High Performance Data Analytics (HPDA) and High Performance Artificial Intelligence (HPC-AI).

The TEXTAROSSA consortium also includes: three leading Italian universities (Politecnico di Milano, Università degli studi di Torino and Università di Pisa, Linked Third Parties of CINI), CINECA (Italy, Inkind Third Party of ENEA) and Universitat Politècnica de Catalunya (UPC, Spain, Inkind Third Party of BSC).

Extreme scale computing and data driven technologies

textarossa.eu
Twitter: @TEXTAROSSA1
Facebook: @textarossa

COORDINATING ORGANISATION
Agenzia nazionale per le nuove tecnologie, l'energia e lo sviluppo economico sostenibile (ENEA), Italy

OTHER PARTNERS
• Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V., Germany
• Consorzio Interuniversitario Nazionale per l'Informatica (CINI), Italy
• Institut National de Recherche en Informatique et en Automatique (INRIA), France
• Atos (Bull SAS), France
• E4 Computer Engineering SpA, Italy
• Barcelona Supercomputing Centre (BSC), Spain
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, Poland
• Istituto Nazionale di Fisica Nucleare – INFN, Italy
• Consiglio Nazionale delle Ricerche (CNR), Italy
• In Quattro Srl, Italy

CONTACT
Project Coordinator: Massimo Celino, ENEA, [email protected]
Project Technical Manager: William Fornaciari, Politecnico di Milano – DEIB, [email protected]

CALL
EuroHPC-01-2019

PROJECT TIMESPAN
01/04/2021 - 31/03/2024
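As promised in innovation direction 1, here is a toy illustration of mixed-precision computing: iterative refinement performs a cheap low-precision solve and accumulates corrections in high precision. It uses standard IEEE float32/float64 rather than Posits, and is not TEXTAROSSA code.

    # Mixed-precision iterative refinement: solve Ax = b with cheap
    # float32 solves, correcting the residual in float64.
    # Illustrative sketch only.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 100)) + 100 * np.eye(100)  # well conditioned
    b = rng.standard_normal(100)

    A32 = A.astype(np.float32)
    x = np.zeros(100)
    for _ in range(5):
        r = b - A @ x                                        # float64 residual
        d = np.linalg.solve(A32, r.astype(np.float32))       # float32 solve
        x += d.astype(np.float64)                            # float64 update
    print("residual norm:", np.linalg.norm(b - A @ x))

(A real code would factorise the low-precision matrix once and reuse the factorisation; the point here is only that most of the arithmetic happens in the cheaper format.)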


TIME-X: TIME parallelisation for eXascale computing and beyond

Recent successes have established the potential of parallel-in-time integration as a powerful algorithmic paradigm to unlock the performance of Exascale systems. However, these successes have mainly been achieved in a rather academic setting, without an overarching understanding. TIME-X will take the next leap in the development and deployment of this promising new approach for massively parallel HPC simulation, enabling efficient parallel-in-time integration for real-life applications.

We will:
• Provide software for parallel-in-time integration on current and future Exascale HPC architectures, delivering substantial improvements in parallel scaling;
• Develop novel algorithmic concepts for parallel-in-time integration, deepening our mathematical understanding of their convergence behaviour and including advances in multi-scale methodology;
• Demonstrate the impact of parallel-in-time integration, showcasing the potential on problems that, to date, cannot be tackled with full parallel efficiency in three diverse and challenging application fields with high societal impact: weather and climate, medicine, drug design and electromagnetics.

To realise these ambitious, yet achievable goals, the inherently inter-disciplinary TIME-X Consortium unites top researchers from numerical analysis and applied mathematics, computer science and the selected application domains. Europe is leading research in parallel-in-time integration.

TIME-X unites all relevant actors at the European level for the first time in a joint strategic research effort. The project will enable taking the necessary next step: advancing parallel-in-time integration from an academic/mathematical methodology into a widely available technology with a convincing proof of concept, maintaining European leadership in this rapidly advancing field and paving the way for industrial adoption.
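The flavour of parallel-in-time integration can be conveyed with the classic Parareal algorithm, in which the expensive fine solves over different time slices are independent and could run concurrently. The sketch below, for the scalar decay equation y' = λy, runs serially for clarity; it is an illustration of the general technique, not TIME-X software.

    # Minimal Parareal sketch for y' = lam * y. A cheap coarse solver G
    # sweeps serially; the fine solves F over each time slice are
    # independent and parallelisable. Illustrative only.
    import numpy as np

    lam, T, N = -1.0, 5.0, 10          # decay rate, horizon, time slices
    dT = T / N

    def G(y, dt):                       # coarse: one backward-Euler step
        return y / (1 - lam * dt)

    def F(y, dt, m=100):                # fine: m backward-Euler substeps
        for _ in range(m):
            y = y / (1 - lam * dt / m)
        return y

    U = np.empty(N + 1); U[0] = 1.0
    for n in range(N):                  # initial serial coarse sweep
        U[n + 1] = G(U[n], dT)

    for k in range(5):                  # Parareal iterations
        Fu = [F(U[n], dT) for n in range(N)]     # parallelisable part
        Gu = [G(U[n], dT) for n in range(N)]
        for n in range(N):              # serial coarse correction
            U[n + 1] = G(U[n], dT) + Fu[n] - Gu[n]

    print("error vs exact:", abs(U[-1] - np.exp(lam * T)))

Speed-up comes from the fact that only the cheap coarse sweep is serial; the fine solves, which dominate the cost, can be distributed over many processors.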


Extreme scale computing and data driven technologies

time-x-eurohpc.eu

COORDINATING ORGANISATION KU Leuven, Belgium

OTHER PARTNERS  Sorbonne Université, France  Forschungszentrum Jülich, Germany  Technische Universität Hamburg, Germany  Technische Universität Darmstadt, Germany  Università della Svizzera italiana, Switzerland  Bergische Universität Wuppertal, Germany  Technische Universität München, Germany  Ecole nationale des ponts et chaussées, France  Université de Genève, Switzerland

CONTACT Giovanni Samaey, [email protected]

CALL EuroHPC-01-2019

PROJECT TIMESPAN 01/04/2021 - 31/03/2024


VECMA: Verified Exascale Computing for Multiscale Applications

The purpose of this project is to enable a diverse set of multiscale, multiphysics applications, from fusion and advanced materials through climate and migration, to drug discovery and the sharp end of clinical decision making in personalised medicine, to run on current multi-petascale computers and emerging exascale environments with high fidelity such that their output is “actionable”. That is, the calculations and simulations are certifiable as validated (V), verified (V) and equipped with uncertainty quantification (UQ) by tight error bars such that they may be relied upon for making important decisions in all the domains of concern.

The central deliverable is an open source toolkit for multiscale VVUQ based on generic multiscale VV and UQ primitives, to be released in stages over the lifetime of this project, fully tested and evaluated in emerging exascale environments, actively promoted over the lifetime of this project, and made widely available in European HPC centres.

The project includes a fast track that will ensure applications are able to apply available multiscale VVUQ tools as soon as possible, while guiding the deep track development of new capabilities and their integration into a wider set of production applications by the end of the project. The deep track includes the development of more disruptive and automated algorithms, and their exascale-aware implementation in a more intrusive way with respect to the underlying and pre-existing multiscale modelling and simulation schemes.

The potential impact of these certified multiscale simulations is enormous, and we have already been promoting the VVUQ toolkit (VECMAtk) across a wide range of scientific and social scientific domains, as well as within computational science more broadly. We have made ten releases of the toolkit, with the dedicated website at https://www.vecma-toolkit.eu/ available for public users to access, download and use.
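In the VVUQ spirit, the simplest way to equip a simulation output with error bars is Monte Carlo propagation of input uncertainty. The sketch below uses an invented toy model and is illustrative only; it does not show the VECMAtk API.

    # Toy uncertainty quantification: propagate input uncertainty
    # through a model by Monte Carlo sampling and report a mean with
    # error bars. Illustrative only; not the VECMAtk API.
    import numpy as np

    def model(x):                       # stand-in for an expensive simulation
        return np.sin(x) + 0.1 * x**2

    rng = np.random.default_rng(42)
    samples = rng.normal(loc=1.0, scale=0.2, size=10_000)  # uncertain input
    outputs = model(samples)
    mean = outputs.mean()
    sem = outputs.std(ddof=1) / np.sqrt(outputs.size)      # standard error
    print(f"quantity of interest = {mean:.4f} ± {1.96 * sem:.4f} (95% CI)")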


To further develop and disseminate VECMAtk, we have been working with the Alan Turing Institute to jointly run the planned event Reliability and Reproducibility in Computational Science: Implementing Verification, Validation and Uncertainty Quantification in silico, which included the first VECMA training workshop in January 2020.

Ahead of this event, we had run a VECMA hackathon in association with VECMAtk in September 2019.

We held an online conference, Multiscale Modelling, Uncertainty Quantification and the Reliability of Computer Simulations, in June 2020. The conference was a combination of three events that were due to take place at the SIAM Conference on Uncertainty Quantification (UQ20) and the International Conference on Computational Science (ICCS) 2020. These events associated with VECMAtk have participation from some of our external users from industry and government, who use the tools we are developing on HPC at supercomputer centres in Europe.

Transition to Exascale Computing

www.vecma.eu
Twitter: @VECMA4

COORDINATING ORGANISATION
University College London, United Kingdom

OTHER PARTNERS
• Max-Planck-Gesellschaft zur Förderung der Wissenschaften EV, Germany
• Brunel University London, United Kingdom
• Bayerische Akademie der Wissenschaften, Germany
• Atos (Bull SAS), France
• Stichting Nederlandse Wetenschappelijk Onderzoek Instituten, Netherlands
• CBK Sci Con Ltd, United Kingdom
• Universiteit van Amsterdam, Netherlands
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, Poland

INTERNATIONAL PARTNER
Rutgers University, USA

CONTACT
Hugh Martin, [email protected]

CALL FETHPC-02-2017

PROJECT TIMESPAN 15/06/2018 - 14/12/2021


VESTEC: Visual Exploration and Sampling Toolkit for Extreme Computing

OBJECTIVE

Technological advances in high performance computing are creating exciting new opportunities that move well beyond improving the precision of simulation models. The use of extreme computing in real-time applications with high velocity data and live analytics is within reach. The availability of fast growing social and sensor networks raises new possibilities in monitoring, assessing and predicting environmental, social and economic incidents as they happen. Add in grand challenges in data fusion, analysis and visualization, and extreme computing hardware has an increasingly essential role in enabling efficient processing workflows for huge heterogeneous data streams.

VESTEC will create the software solutions needed to realise this vision for urgent decision making in various fields with high impact for the European community. VESTEC will build a flexible toolchain to combine multiple data sources, efficiently extract essential features, enable flexible scheduling and interactive supercomputing, and realise 3D visualization environments for interactive explorations by stakeholders and decision makers. VESTEC will develop and evaluate methods and interfaces to integrate high-performance data analytics processes into running simulations and real-time data environments. Interactive ensemble management will launch new simulations for new data, building up statistically more and more accurate pictures of emerging, time-critical phenomena. Innovative data compression approaches, based on topological feature extraction and data sampling, will result in considerable reductions in storage and processing demands by discarding domain-irrelevant data.

Three emerging use cases will demonstrate the immense benefit for urgent decision making: wildfire monitoring and forecasting; analysis of risk associated with mosquito-borne diseases; and the effects of space weather on technical supply chains. VESTEC brings together experts in each domain to address the challenges holistically.


Transition to Exascale Computing

www.vestec-project.eu
Twitter: @VESTECproject

COORDINATING ORGANISATION DLR - Deutsches Zentrum für Luft- und Raumfahrt EV, Germany

OTHER PARTNERS
• The University of Edinburgh, United Kingdom
• Kungliga Tekniska Hoegskolan (KTH), Sweden
• Sorbonne Université, France
• Kitware SAS, France
• Intel Deutschland GmbH, Germany
• Fondazione Bruno Kessler, Italy
• Université Paul Sabatier Toulouse III, France
• Tecnosylva SL, Spain

CONTACT Andreas Gerndt, [email protected]

CALL FETHPC-02-2017

PROJECT TIMESPAN 01/09/2018 - 31/08/2021



Centres of Excellence in computing applications


FocusCoE: Concerted action for the European HPC CoEs

FocusCoE contributes to the success of the EU HPC Ecosystem and the EuroHPC Initiative by supporting the EU HPC Centres of Excellence (CoEs) to more effectively fulfil their role within the ecosystem and initiative: ensuring that extreme scale applications result in tangible benefits for addressing scientific, industrial or societal challenges. It achieves this by creating an effective platform for the CoEs to coordinate strategic directions and collaboration (addressing possible fragmentation of activities across the CoEs and coordinating interactions with the overall HPC ecosystem) and provides support services for the CoEs in relation to both industrial outreach and promotion of their services and competences, by acting as a focal point for users to discover those services.


The specific objectives and achievements of FocusCoE are:

• Instigating the creation of the HPC CoE Council (HPC3) and supporting its operation. HPC3 is a vehicle that allows all European HPC CoEs to collectively define an overriding strategy and realise an effective interaction with the European HPC Ecosystem.
• To support the HPC CoEs to achieve enhanced interaction with industry, and SMEs in particular, through concerted outreach and business development actions. FocusCoE initiated contacts between CoEs and industry by several actions, including presenting CoEs at sectorial events and organizing thematic webinars. A first tranche of industrial success stories can be found at www.hpccoe.eu/success-stories/
• To instigate concerted action on training by and for the complete set of HPC CoEs. FocusCoE organised workshops on HPC training development in Europe which also served to promote CoE training, supplemented by CoE training events ranging from pedagogy through benchmarking to innovation development targeted at CoE personnel. The CoE Training registry (www.hpccoe.eu/coe-training-calendar/) provides a unique view of all CoE trainings.
• To promote and concert the capabilities of and services offered by the HPC CoEs and the development of the EU HPC CoE “brand”, raising awareness with stakeholders and both academic and industrial users. FocusCoE created regular newsletters (www.focus-coe.eu/index.php/newsletter/), organised CoE workshops at key HPC events and highlighted CoE achievements in social media. The set of service offerings by CoEs to industrial and academic users is presented at www.hpccoe.eu/technological-offerings-of-the-eu-hpc-coes-3/

H2020 Centre of Excellence

www.hpccoe.eu
Twitter: @FocusCoE
LinkedIn: @focus-coe

COORDINATING ORGANISATION
scapos AG, Germany

OTHER PARTNERS
• CEA - Commissariat à l'Energie Atomique et aux Energies Alternatives, France
• Kungliga Tekniska högskolan (KTH), Sweden
• Höchstleistungsrechenzentrum der Universität Stuttgart (HLRS), Germany
• Barcelona Supercomputing Center (BSC), Spain
• University College London, UK
• Agenzia nazionale per le nuove tecnologie, l'energia e lo sviluppo economico sostenibile (ENEA), Italy
• National University of Ireland, Galway, Ireland
• Teratec, France
• Forschungszentrum Jülich GmbH, Germany
• Partnership for advanced computing in Europe (PRACE), Belgium
• Max Planck Gesellschaft zur Forderung der Wissenschaft eV, Germany

CONTACT
Guy Lonsdale, [email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021


BioExcel-2: Centre of Excellence for Computational Biomolecular Research

BioExcel is the leading European Centre of Excellence for Computational Biomolecular Research. We develop advanced software, tools and full solutions for high-end and extreme scale computing for biomolecular research. The centre supports academic and industrial research by providing expertise and advanced training to end-users and promoting best practices in the field. The centre aspires to operate in the long term as the core facility for advanced biomolecular modelling and simulations in Europe.

OVERVIEW
Much of the current Life Sciences research relies on intensive biomolecular modelling and simulation. As a result, both academia and industry are facing significant challenges when applying best practices for optimal usage of compute infrastructures. At the same time, increasing the productivity of researchers will be of high importance for achieving reliable scientific results at a faster pace.

High-performance computing (HPC) and high-throughput computing (HTC) techniques have now reached a level of maturity for many of the widely-used codes and platforms, but taking full advantage of them requires that researchers have the necessary training and access to guidance by experts. The necessary ecosystem of services in the field is presently inadequate. A suitable infrastructure needs to be set up in a sustainable, long-term operational fashion.


BioExcel CoE was thus established as the go-to provider of a full range of software applications and training services. These cover fast and scalable software, user-friendly automation workflows and a support base of experts. The main services offered include hands-on training, tailored customization of code and personalized consultancy support. BioExcel actively works with:
1. academic and non-profit researchers,
2. industrial researchers,
3. software vendors and academic code providers,
4. non-profit and commercial resource providers, and
5. related international projects and initiatives.

The centre was established and operates through funding by the EC Horizon 2020 program (Grant agreements 675728 and 823830).

H2020 Centre of Excellence

bioexcel.eu
Twitter: @BioExcelCoE
LinkedIn: @bioexcel
YouTube: https://goo.gl/5dBzmw

COORDINATING ORGANISATION
KTH Royal Institute of Technology, Sweden

OTHER PARTNERS
• The University of Manchester, UK
• University of Utrecht, the Netherlands
• Institute of Research in Biomedicine (IRB), Spain
• European Molecular Biology Laboratory (EMBL-EBI), UK
• Forschungszentrum Jülich GmbH, Germany
• University of Jyväskylä, Finland
• The University of Edinburgh (EPCC), United Kingdom
• Max Planck Gesellschaft, Germany
• Across Limits, Malta
• Barcelona Supercomputing Centre (BSC), Spain
• Ian Harrow Consulting, UK
• Norman Consulting, Norway
• Nostrum Biodiscovery, Spain

CONTACT Rossen Apostolov, [email protected]

CALL INFRAEDI-02-2018

PROJECT TIMESPAN 01/01/2019 - 30/06/2022


ChEESE: Centre of Excellence for Exascale in Solid Earth

ChEESE addresses extreme computing scientific and societal challenges in Solid Earth by harnessing European institutions in charge of operational monitoring networks, tier-0 supercomputing centres, academia, hardware developers and third parties from SMEs, industry and public governance. The scientific challenge is to prepare 10 open-source flagship codes to solve Exascale capacity and capability problems in computational seismology, magnetohydrodynamics, physical volcanology, tsunamis, and data analysis and predictive techniques for monitoring earthquake and volcanic activity. The selected codes are periodically audited and optimized at both intranode level (including heterogeneous computing nodes) and internode level on heterogeneous hardware prototypes for the upcoming Exascale architectures, thereby ensuring commitment to a co-design approach. Preparation for Exascale also considers aspects such as workflows, data management and sharing, I/O, post-processing and visualization. Additionally, ChEESE has developed WMS-light, an open source workflow management system for the geoscience community that supports a wide range of typical HPC application scenarios and is easy to use and integrate with existing workflow setups.

Simulations of (1) volcanic ash dispersal, (2) tsunami, (3) earthquake and (4) magnetic fields using ChEESE flagship codes.

In parallel with these transversal activities, ChEESE supports three vertical pillars.

First, it develops pilot demonstrators for challenging scientific problems requiring Exascale computing, in alignment with the vision of European Exascale roadmaps. This includes near real-time seismic simulations and full-wave inversion, ensemble-based volcanic ash dispersal forecasts, faster-than-real-time tsunami simulations and physics-based hazard assessments for earthquakes, volcanoes and tsunamis.

Second, some pilots are also intended for enabling operational services requiring extreme HPC, on urgent computing, early warning forecast of geohazards, hazard assessment and data analytics. Pilots with a higher Technology Readiness Level (TRL) will be tested in an operational environment in collaboration with a broader user community. Additionally, and in collaboration with the European Plate Observing System (EPOS) and the Collaborative Data Infrastructure (EUDAT), ChEESE will promote and facilitate the integration of HPC services to widen access to codes and foster transfer of know-how to the Solid Earth community.

Finally, the third pillar of ChEESE acts as a hub to foster HPC across the Solid Earth community and related stakeholders and to provide specialized training on services and capacity building measures. Training courses ChEESE has organized include: Modelling tsunamis and volcanic plumes using European flagship codes, Advanced training on HPC for computational seismology, and Tools and techniques to quickly improve performances of HPC applications in Solid Earth.

H2020 Centre of Excellence

www.cheese-coe.eu
Twitter: @Cheese_CoE
LinkedIn: @ChEESE-CoE

COORDINATING ORGANISATION
Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
• Istituto Nazionale di Geofisica e Vulcanologia (INGV), Italy
• Icelandic Meteorological Office (IMO), Iceland
• Swiss Federal Institute of Technology (ETH), Switzerland
• Universität Stuttgart - High Performance Computing Center (HLRS), Germany
• CINECA Consorzio Interuniversitario, Italy
• Technical University of Munich (TUM), Germany
• Ludwig-Maximilians Universität München (LMU), Germany
• University of Málaga (UMA), Spain
• Norwegian Geotechnical Institute (NGI), Norway
• Institut de Physique du Globe de Paris (IPGP), France
• CNRS - Centre National de la Recherche Scientifique, France
• Atos (Bull SAS), France

CONTACT
Arnau Folch, [email protected]
Rose Gregorio, [email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/11/2018 - 31/03/2022


CoEC: Centre of Excellence in Combustion

The Centre of Excellence in Combustion (CoEC) is a collaborative effort to exploit Exascale computing technologies to address fundamental challenges encountered in combustion systems. The CoEC vision is aligned with the goals of decarbonization of the European power and transportation sectors and Europe's vision to achieve net-zero greenhouse gas emissions (GHG) by 2050. The CoEC initiative is a contribution of the European HPC combustion community to a long-term GHG emissions reduction in accordance with the Paris Agreement. The consortium is composed of leading European institutions in computational combustion and HPC and promotes a core of scientific and technological activities aiming to extend the state-of-the-art in combustion simulation capabilities through advanced methodologies enabled by Exascale computing. These advances will help increase the TRL of representative codes from the EU combustion community and increase the EU competitiveness and leadership in power and propulsion technologies.

Figure 1. Large eddy simulation of hydrogen.
Figure 2. Simulation of turbulent flow using the NekRS code.


The outcomes of the project will be the basis of a service portfolio to support the academic and industrial activities in the power and transportation sectors. The Centre will also stimulate the consolidation of the European combustion community, by combining the knowledge of representative actors from the scientific and industrial sectors with the capacity of the HPC community, including research institutions and supercomputer centres. CoEC will promote the usage of combustion simulations through dissemination, training and user support services.

CONCEPT AND APPROACH
The technical activities integrated into CoEC are defined from the fundamental challenges of relevant application areas where combustion is involved. These areas include transportation (aviation and automotive), energy and power generation, aerospace, and fire and safety. All these sectors share common challenges that are associated with scientific and technological problems related to combustion simulation, e.g. instabilities, emissions, noise, plasma, nanoparticles, multiphase flow, autoignition or the use of alternative fuels in practical applications. These research topics have a long tradition in combustion science, and fundamental progress has been made in the last decade to obtain further understanding of these phenomena using numerical simulations. However, the degree of integration in industry still remains low, either because of the incomplete fidelity/accuracy of the models or due to insufficient computing power that hinders the maturity of the technology to be deployed at the level of industrial standards. Exascale computing technologies will enable a new paradigm in physical and numerical modelling, which requires a profound revision of models. For this purpose, two important building blocks regarding Simulation Technologies and Exascale Methodologies will also be incorporated in the project.

H2020 Centre of Excellence

www.coec-project.eu
Twitter: @CoEC_EU
LinkedIn: @coec-eu

COORDINATING ORGANISATION
Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
• CERFACS - Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique, France
• Rheinisch-Westfälische Technische Hochschule Aachen (RWTH-AACHEN), Germany
• Technische Universiteit Eindhoven, Netherlands
• University of Cambridge, United Kingdom
• CNRS - Centre National de la Recherche Scientifique, France
• Technische Universität Darmstadt, Germany
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• Aristotle University of Thessaloniki, Greece
• Forschungszentrum Jülich GmbH, Germany
• National Centre for Supercomputing Applications, Bulgaria

CONTACT
Daniel Mira, [email protected]

CALL
INFRAEDI-05-2020

PROJECT TIMESPAN
01/10/2020 – 30/09/2023


CompBioMed2: Centre of Excellence in Computational Biomedicine

CompBioMed is a user-driven Centre of Excellence (CoE) in Computational Biomedicine, designed to nurture and promote the uptake and exploitation of high performance computing within the biomedical modelling community. Our user communities come from academia, industry and clinical practice.

The first phase of the CompBioMed CoE has already achieved notable successes in the development of applications, training and efficient access mechanisms for using HPC machines and clouds in computational biomedicine. We have brought together a growing list of HPC codes relevant for biomedicine which have been enhanced and scaled up to larger machines. Our codes (such as Alya, HemeLB, BAC, Palabos and HemoCell) are now running on several of the world's fastest supercomputers and investigating challenging applications ranging from defining clinical biomarkers of arrhythmic risk to the impact of mutations on cancer treatment.

Our work has provided the ability to integrate clinical datasets with HPC simulations through fully working computational pipelines designed to provide clinically relevant patient-specific models. The reach of the project beyond the funded partners is manifested by our highly effective Associate Partner Programme (all of whom have played an active role in our activities) with a cost-free, lightweight joining mechanism, and an Innovation Exchange Programme that has brought upward of thirty scientists, industrialists and clinicians into the project from the wider community.

Furthermore, we have developed and implemented a highly successful training programme, targeting the full range of medical students, biomedical engineers, biophysicists and computational scientists. This programme contains a mix of tailored courses for specific groups, webinars and winter schools, which is now being packaged into an easy-to-use training package.


In CompBioMed2 we are extending the CoE to serve the community for a total of 7 years. CompBioMed has established itself as a hub for practitioners in the field, successfully nucleating a substantial body of research, education, training, innovation and outreach within the nascent field of Computational Biomedicine. This emergent technology will enable clinicians to develop and refine personalised medicine strategies ahead of their clinical delivery to the patient. Medical regulatory authorities are currently embracing the prospect of using in silico methods in the area of clinical trials, and we intend to be in the vanguard of this activity, laying the groundwork for the application of HPC-based Computational Biomedicine approaches to a greater number of therapeutic areas. The HPC requirements of our users are as diverse as the communities we represent. We support both monolithic codes, potentially scaling to the exascale, and complex workflows requiring support for advanced execution patterns. Understanding the complex outputs of such simulations requires both rigorous uncertainty quantification and the embrace of the convergence of HPC and high-performance data analytics (HPDA). CompBioMed2 seeks to combine these approaches with the large, heterogeneous datasets from medical records and from the experimental laboratory to underpin clinical decision support systems. CompBioMed2 will continue to support, nurture and grow our community of practitioners, delivering incubator activities to prepare our most mature applications for wider usage, providing avenues that will sustain CompBioMed2 well beyond the proposed funding period.

H2020 Centre of Excellence

www.compbiomed.eu
Twitter: @bio_comp
LinkedIn: @compbiomed

COORDINATING ORGANISATION
University College London, UK

OTHER PARTNERS
 University of Amsterdam, Netherlands
 The University of Edinburgh, United Kingdom
 Barcelona Supercomputing Centre (BSC), Spain
 SURFsara, Netherlands
 University of Oxford, UK
 University of Geneva, Switzerland
 The University of Sheffield, UK
 CBK Sci Con Ltd, UK
 Universitat Pompeu Fabra, Spain
 Bayerische Akademie der Wissenschaften, Germany
 Acellera, Spain
 Evotec Ltd, UK
 Atos (Bull SAS), France
 Janssen Pharmaceutica, Belgium
 University of Bologna, Italy

CONTACT
Emily Lumley, [email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/10/2019 - 30/09/2023

E-CAM

The E-CAM Centre of Excellence is an e-infrastructure for software development, training and industrial discussion in simulation and modelling. E-CAM is based around the Centre Européen de Calcul Atomique et Moléculaire (CECAM) distributed network of simulation science nodes, and the physical computational e-infrastructure of PRACE. We are a partnership of 13 CECAM Nodes, 4 PRACE centres, 12 industrial partners and one Centre for Industrial Computing (Hartree Centre).

E-CAM aims at
• Developing software to solve important simulation and modelling problems in industry and academia, with applications from drug development to the design of new materials;
• Tuning those codes to run on HPC, through application co-design and the provision of HPC oriented libraries and services;
• Training scientists from industry and academia;
• Supporting industrial end-users in their use of simulation and modelling, via workshops and direct discussions with experts in the CECAM community.

Our approach is focused on four scientific areas, critical for HPC simulations relevant to key societal and industrial challenges. These areas are classical molecular dynamics, electronic structure, quantum dynamics and meso- and multi-scale modelling. E-CAM develops new scientific ideas and transfers them to algorithm development, optimisation, and parallelization in these four respective areas, and delivers the related training. Postdoctoral researchers are employed under each scientific area, working closely with the scientific programmers to create, oversee and implement the different software codes, in collaboration with our industrial partners.

So far E-CAM has
• Certified more than 250 software modules that are open access and easily available for the industrial and academic communities through our software repository;
• Trained about 500 people at our Extended Software Development Workshops in advanced computational methods, good practices for code development, documentation and maintenance;
• Deployed an online training infrastructure to support the development of software for extreme-scale hardware, and contributed to the LearnHPC platform for scalable HPC training;


• Worked on 10 pilot projects in direct collaboration with our industrial partners;
• Organized 10 State of the Art workshops and 6 Scoping workshops that brought together industrialists, software developers and academic researchers to discuss the challenges they face;
• Worked on software development projects that enable an HPC practice with potential for transfer into industry. Examples include: a GPU re-write of the DL_MESO code for mesoscale simulations; the development of a load-balancing library specifically targeting MD applications, and of an HTC library capable of handling thousands of tasks each requiring peta-scale computing; among other efforts;
• Developed original dissemination material for the general public, including a Comics issue presented at Festivals and Schools;
• Ensured sustainability of its main outputs via community effort, memberships, and the CECAM network.

Positioning of the E-CAM infrastructure

H2020 Centre of Excellence

www.e-cam2020.eu
Twitter: @ECAM2020
LinkedIn: @e-cam

COORDINATING ORGANISATION
Ecole Polytechnique Fédérale de Lausanne, Switzerland

OTHER PARTNERS
 University College Dublin, Ireland
 Freie Universität Berlin, Germany
 Universita degli Studi di Roma La Sapienza, Italy
 CNRS - Centre National de la Recherche Scientifique, France
 Technische Universität Wien, Austria
 University of Cambridge, UK
 Max-Planck-Institut für Polymerforschung, Germany
 ENS - École Normale Supérieure de Lyon, France
 Forschungszentrum Jülich, Germany
 Universitat de Barcelona, Spain
 Daresbury Laboratory, Scientific and Technology Facilities Council, UK
 Scuola Internazionale Superiore Di Trieste, Italy
 Universiteit van Amsterdam, Netherlands
 Scuola Normale Superiore Pisa, Italy
 Aalto University, Finland
 CSC-IT Centre for Science Ltd, Finland
 Irish Centre for High-End Computing, Ireland

CONTACT
Luke Drury (Project Chair), [email protected]
Ignacio Pagonabarraga (Project Technical Manager), [email protected]

CALL
EINFRA-5-2015

PROJECT TIMESPAN
01/10/2015 - 31/03/2021


EoCoE-II – Energy Oriented Centre Of Excellence, Phase II

The Energy-oriented Centre of Excellence (EoCoE) applies cutting-edge computational methods in its mission to accelerate the transition to the production, storage and management of clean, decarbonized energy. EoCoE is anchored in the High Performance Computing (HPC) community and targets research institutes, key commercial players and SMEs who develop and enable energy-relevant numerical models to be run on exascale supercomputers, demonstrating their benefits for low-carbon energy technology. The present project draws on a successful proof-of-principle phase, EoCoE-I, where a large set of diverse computer applications from four such energy domains achieved significant efficiency gains thanks to multi-disciplinary expertise in applied mathematics and supercomputing. During this 2nd round, EoCoE-II channels its efforts into 5 scientific Exascale challenges in the low-carbon sectors of Energy Meteorology, Materials, Water, Wind and Fusion. This multidisciplinary effort harnesses innovations in computer science and mathematical algorithms within a tightly integrated co-design approach to overcome performance bottlenecks and to anticipate future HPC hardware developments. A world-class consortium of 18 complementary partners from 7 countries forms a unique network of expertise in energy science, scientific computing and HPC, including 3 leading European supercomputing centres. New modelling capabilities in selected energy sectors will be created at unprecedented scale, demonstrating the potential benefits to the energy industry, such as accelerated design of storage devices, high-resolution probabilistic wind and solar forecasting for the power grid and quantitative understanding of plasma core-edge interactions in ITER-scale tokamaks. These flagship applications will provide a high-visibility platform for high-performance computational energy science, cross-fertilized through close working connections to the EERA and EUROfusion consortia.

EoCoE is structured around a central Franco-German hub coordinating a pan-European network, gathering a total of 7 countries and 21 teams. Its partners are strongly engaged in both the HPC and energy fields. The primary goal of EoCoE is to create a new, long-lasting and sustainable community around computational energy science. EoCoE resolves current bottlenecks in application codes; it develops cutting-edge mathematical and numerical methods, and tools to foster the usage of Exascale computing. Dedicated services for laboratories and industries are established to leverage this expertise and to develop an ecosystem around HPC for energy.

We are interested in collaborations in the area of HPC (e.g., programming models, exascale architectures, linear solvers, I/O) and also with people working in the energy domain who need expertise for carrying out ambitious simulations. See our service page for more details: http://www.eocoe.eu/services

H2020 Centre of Excellence

www.eocoe.eu
LinkedIn: @hpc-energy

COORDINATING ORGANISATION
Maison de la Simulation (MdlS) and Institut de Recherche sur la Fusion par confinement Magnétique (IRFM) at CEA, France

OTHER PARTNERS
 Forschungszentrum Jülich (FZJ), with RWTH Aachen University, Germany
 Agenzia nazionale per le nuove tecnologie, l'energia e lo sviluppo economico sostenibile (ENEA), Italy
 Barcelona Supercomputing Centre (BSC), Spain
 Centre National de la Recherche Scientifique (CNRS) with Inst. Nat. Polytechnique Toulouse (INPT), France
 Institut National de Recherche en Informatique et Automatique (INRIA), France
 Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), France
 Max-Planck Gesellschaft (MPG), Germany
 Fraunhofer Gesellschaft, Germany
 Friedrich-Alexander Univ. Erlangen-Nürnberg (FAU), Germany
 Consiglio Nazionale delle Ricerche (CNR), with Univ. Rome, Tor Vergata (UNITOV), Italy
 Università degli Studi di Trento (UNITN), Italy
 Instytut Chemii Bioorganicznej Polskiej Akademii Nauk (PSNC), Poland
 Université Libre de Bruxelles (ULB), Belgium
 University of Bath (UBAH), United Kingdom
 Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Spain
 IFP Energies Nouvelles (IFPEN), France
 DDN Storage, France

CONTACT
Edouard Audit, [email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/01/2019 - 31/12/2021


ESiWACE2 – Centre of Excellence in Simulation of Weather and Climate in Europe, Phase 2

ESiWACE2 will leverage the potential of the envisaged EuroHPC pre-exascale systems to push European climate and weather codes to world-leading spatial resolution for production-ready simulations, including the associated data management and data analytics workflows. For this goal, the portfolio of climate models supported by the project has been extended with respect to the prototype-like demonstrators of ESiWACE1. Besides, while the central focus of ESiWACE2 lies on achieving scientific performance goals with HPC systems that will become available within the next four years, research and development to prepare the community for the systems of the exascale era constitutes another project goal.

The illustration above shows clouds simulated by the ICON model at 2.5 km resolution in a DYAMOND Winter simulation. The inset shows the Barbados region, where DYAMOND Winter connects to the EUREC4A measurement campaign. The background image is created by NASA.


These developments will be complemented by the establishment of new technologies such as domain-specific languages and tools to minimise data output for ensemble runs. Co-design between model developers, HPC manufacturers and HPC centres is to be fostered, in particular through the design and employment of High-Performance Climate and Weather benchmarks, with the first version of these benchmarks feeding into ESiWACE2 through the FET-HPC research project ESCAPE-2. Additionally, training and dedicated services will be set up to enable the wider community to efficiently use upcoming pre-exascale and exascale supercomputers.

H2020 Centre of Excellence

www.esiwace.eu
Twitter: @ESIWACE

COORDINATING ORGANISATION
Deutsches Klimarechenzentrum (DKRZ), Germany

OTHER PARTNERS
 CNRS - Centre National de la Recherche Scientifique, France
 ECMWF - European Centre for Medium-Range Weather Forecasts, United Kingdom
 Barcelona Supercomputing Center (BSC), Spain
 Max-Planck-Institut für Meteorologie, Germany
 Sveriges meteorologiska och hydrologiska institut, Sweden
 CERFACS - Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique, France
 National University of Ireland Galway (Irish Centre for High End Computing), Ireland
 Met Office, United Kingdom
 Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici, Italy
 The University of Reading, United Kingdom
 STFC - Science and Technology Facilities Council, United Kingdom
 Atos (Bull SAS), France
 Seagate Systems UK Limited, United Kingdom
 ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
 The University of Manchester, United Kingdom
 Netherlands eScience Centre, Netherlands
 Federal Office of Meteorology and Climatology, Switzerland
 DataDirect Networks, France
 Mercator Océan, France

CONTACT
[email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/01/2019 - 31/12/2022

EXCELLERAT

Engineering applications will be among the first to exploit Exascale, not only in academia but also in industry. In fact, the industrial engineering field is *the* field with the highest Exascale potential. Thus the goal of EXCELLERAT is to enable the European engineering industry to advance towards Exascale technologies and to create a single entry point to services and knowledge for all stakeholders (industrial end users, ISVs, technology providers, HPC providers, academics, code developers, engineering experts) of HPC for engineering. In order to achieve this goal, EXCELLERAT brings together key players from industry, research and HPC to provide all necessary services. This is in line with the European HPC Strategy as implemented through the EuroHPC Joint Undertaking.

Core competences of the EXCELLERAT Centre of Excellence


To fulfil its mission, EXCELLERAT focuses its developments on six carefully chosen reference applications (Nek5000, Alya, AVBP, TPLS, FEniCS, Coda), which were analysed for their potential to support the aim of achieving Exascale performance in HPC for Engineering. Thus, they are promising candidates to act as showcases for the evolution of applications towards execution on Exascale Demonstrators, Pre-Exascale Systems and Exascale Machines.

EXCELLERAT addresses the setup of a Centre as an entity, acting as a single hub, covering a wide range of issues, from «non-pure-technical» services such as access to knowledge or networking up to technical services such as co-design, scalability enhancement or code porting to new (Exa) hardware.

As the consortium contains key players in HPC, HPDA and AI as well as experts for the reference applications, impact (e.g. code improvements and awareness raising) is guaranteed. The scientific excellence of the EXCELLERAT consortium enables evolution, optimisation, scaling and porting of applications towards disruptive technologies and increases Europe's competitiveness in engineering. Within the frame of the project, EXCELLERAT will prove the applicability of the results not only for the six chosen reference applications, but even beyond. EXCELLERAT extends the recipients of its developments beyond the consortium and interacts via diverse mechanisms to integrate external stakeholders of its value network into its evolution.

The EXCELLERAT team is developing solutions that support the advances towards Exascale technologies. These achievements are reflected in a collection of success stories that are available online.

H2020 Centre of Excellence

www.excellerat.eu
EXCELLERAT services: services.excellerat.eu
Twitter: @EXCELLERAT_CoE
LinkedIn: @excellerat
SoundCloud: soundcloud.com/user-61965686

COORDINATING ORGANISATION
Universität Stuttgart (HLRS), Germany

OTHER PARTNERS
 The University of Edinburgh (EPCC), United Kingdom
 CINECA Consorzio Interuniversitario, Italy
 SICOS BW GmbH, Germany
 Kungliga Tekniska Hoegskolan (KTH), Sweden
 ARCTUR DOO, Slovenia
 DLR - Deutsches Zentrum für Luft- und Raumfahrt EV, Germany
 Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), France
 Barcelona Supercomputing Centre (BSC), Spain
 SSC-Services GmbH, Germany
 Fraunhofer Gesellschaft zur Förderung der Angewandten Forschung E.V., Germany
 TERATEC, France
 Rheinisch-Westfälische Technische Hochschule Aachen, Germany

CONTACT
Dr.-Ing. Bastian Koller, [email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/12/2018 - 28/02/2022


HiDALGO – HPC and Big Data Technologies for Global Challenges

OBJECTIVE

Developing evidence and understanding concerning Global Challenges and their underlying parameters is rapidly becoming a vital challenge for modern societies. Various examples, such as health care, the transition to green technologies or the evolution of the global climate up to hazards and stress tests for the financial sector, demonstrate the complexity of the involved systems and underpin their interdisciplinarity as well as their globality. This becomes even more obvious if coupled systems are considered: problem statements and their corresponding parameters depend on each other, which results in interconnected simulations with a tremendous overall complexity. Although the process of bringing together the different communities has already started within the Centre of Excellence for Global Systems Science (CoeGSS), assisted decision making that addresses global, multi-dimensional problems is more important than ever. Global decisions with their dependencies cannot be based on incomplete problem assessments or gut feelings anymore, since impacts cannot be foreseen without an accurate problem representation and its systemic evolution. HiDALGO bridges that shortcoming by enabling highly accurate simulations, data analytics and data visualisation, and also by providing technology as well as knowledge on how to integrate the various workflows and the corresponding data.


H2020 Centre of Excellence

hidalgo-project.eu
Twitter: @EU_HiDALGO

COORDINATING ORGANISATION Atos Spain, Spain

OTHER PARTNERS
 Universität Stuttgart – High Performance Computing Center Stuttgart, Germany
 Instytut Chemii Bioorganicznej Polskiej Akademii Nauk – Poznan Supercomputing and Networking Center, Poland
 ICCS (Institute of Communication and Computer Systems), Greece
 Brunel University London, United Kingdom
 Know Center GmbH, Austria
 Szechenyi Istvan University, Hungary
 Paris-Lodron-Universität Salzburg, Austria
 European Centre for Medium-Range Weather Forecasts, United Kingdom
 MoonStar Communications GmbH, Germany
 DIALOGIK Gemeinnützige Gesellschaft für Kommunikations- und Kooperationsforschung MBH, Germany
 Magyar Kozut Nonprofit Zrt., Hungary
 ARH Informatikai Zrt., Hungary

CONTACT Francisco Javier Nieto ([email protected])

CALL INFRAEDI-02-2018

PROJECT TIMESPAN 01/12/2018 - 30/11/2021

MaX – MAterials design at the eXascale

MaX - Materials design at the eXascale is a Centre of Excellence with a focus on driving the evolution and exascale transition of materials science codes and libraries, and creating an integrated ecosystem of codes, data, workflows, and analysis tools for high-performance (HPC) and high-throughput computing (HTC). Particular emphasis is on co-design activities to ensure that future HPC architectures are well suited for the materials science applications and their users.

The focus of MaX is on first principles materials science applications, i.e., codes that allow predictive simulations of materials and their properties from the laws of quantum physics and chemistry, without resorting to empirical parameters.

The main actions of the project include:
1. code and library restructuring: including modularizing and adapting for heterogeneous architectures, as well as adoption of new algorithms;
2. co-design: working on codes and providing feedback to architects and integrators; developing workflows and turn-key solutions for properties calculations and curated data sharing;
3. enabling convergence of HPC, HTC, and high-performance data analytics;
4. addressing the skills gap: widening access to codes, training, and transferring know-how to user communities;
5. serving end-users in industry and research: providing dedicated services, support, and high-level consulting.

The exascale perspective is expected to boost the massive use of these codes in designing materials structures and functionalities for research and manufacturing. Results and lessons learned on MaX flagship codes and projects are made available to the developers and user communities at large.

MaX works with code developers and experts from HPC centres to support such transition. It focuses on selected complementary open- source codes: QUANTUM ESPRESSO, SIESTA, Yambo, FLEUR, CP2K, BigDFT. In addition, it contributes to the development of the AiiDA materials informatics infrastructure and the Materials Cloud environment.


H2020 Centre of Excellence

www.max-centre.eu
Twitter: @max_center2
LinkedIn: @max-centre

COORDINATING ORGANISATION CNR Nano, Modena, Italy (Elisa Molinari)

OTHER PARTNERS
 SISSA Trieste, Italy (Stefano Baroni)
 ICN2 Barcelona, Spain (Pablo Ordejόn)
 FZJ Jülich, Germany (Stefan Blügel, Dirk Pleiter)
 EPFL Lausanne, Switzerland (Nicola Marzari)
 CINECA Bologna, Italy (Fabio Affinito)
 Barcelona Supercomputing Centre (BSC), Spain (Stephan Mohr)
 CSCS ETH Zürich, Switzerland (Joost VandeVondele)
 CEA Grenoble, France (Thierry Deutsch)
 UGent, Belgium (Stefaan Cottenier)
 E4 Computer Engineering, Italy (Fabrizio Magugliani)
 Arm, UK (Conrad Hillairet)
 ICTP UNESCO, Trieste, Italy (Ivan Girotto)
 Trust-IT Pisa, Italy (Silvana Muscella)

CONTACT Elisa Molinari, Director [email protected]

CALL INFRAEDI-02-2018

PROJECT TIMESPAN 01/12/2018 - 31/05/2022

NOMAD – Novel Materials Discovery

Predicting novel materials with specific desirable properties is a major aim of ab initio computational materials science (aiCMS) and an urgent requirement of basic and applied materials science, engineering and industry. Such materials can have immense impact on the environment and on society, e.g., on the energy, transport, IT and medical-device sectors and much more. Currently, however, precisely predicting complex materials is computationally infeasible.

NOMAD CoE will develop a new level of materials modelling, enabled by upcoming HPC exascale computing and extreme-scale data hardware. In close contact with the R&D community, including industry, we will
• develop exascale algorithms to create accurate predictive models of real-world, industrially-relevant, complex materials;
• provide exascale software libraries for all code families (not just selected codes), enhancing today's aiCMS to take advantage of tomorrow's HPC computing platforms;
• develop energy-to-solution as a fundamental part of new models. This will be achieved by developing novel artificial-intelligence (AI) guided work-flow engines that optimise the modelling calculations and provide significantly more information per calculation performed;
• offer extreme-scale data services (data infrastructure, storage, retrieval and analytics/AI);
• test and demonstrate our results in two exciting use cases, solving urgent challenges for the energy and environment that cannot be computed properly with today's hard- and software (water splitting and novel thermoelectric materials);
• train the next generation of students, also in countries where HPC studies are not yet well developed.

NOMAD CoE is working closely together with POP, and it is synergistically complementary to and closely coordinated with the EoCoE, ECAM, BioExcel and MaX CoEs. NOMAD CoE will push the limits of aiCMS to unprecedented capabilities, speed and accuracy, serving basic science, industry and thus society.


H2020 Centre of Excellence

www.nomad-coe.eu
Twitter: @NoMaDCoE

COORDINATING ORGANISATION Max-Planck-Gesellschaft zur Förderung der Wissenschaften EV, Germany

OTHER PARTNERS
 Humboldt-Universität zu Berlin, Germany
 Danmarks Tekniske Universitet, Denmark
 Technische Universität Wien, Austria
 The University of Warwick, United Kingdom
 Université Catholique de Louvain, Belgium
 Barcelona Supercomputing Centre (BSC), Spain
 CSC – IT Center for Science Ltd., Finland
 University of Cambridge, United Kingdom
 CEA - Commissariat à l'Energie Atomique et aux Energies Alternatives, France
 Aalto University Helsinki, Finland
 University of Latvia, Latvia

CONTACT [email protected]

CALL INFRAEDI-05-2020

PROJECT TIMESPAN 01/10/2020 – 30/09/2023


PerMedCoE – HPC/Exascale Centre of Excellence in Personalised Medicine

Personalised Medicine (PerMed) opens unexplored frontiers to treat diseases at the individual level, combining clinical and omics information. However, the performance of current simulation software is still insufficient to tackle medical problems such as tumour evolution or patient-specific treatments. The challenge is to develop a sustainable roadmap to scale up the essential software for cell-level simulation to the new European HPC/Exascale systems. Simulations of cellular mechanistic models are essential for the translation of omics data to medically relevant actions, and these should be accessible to the end-users in the appropriate environment of the PerMed-specific big confidential data.

The goal of the HPC/Exascale Centre of Excellence in Personalised Medicine (PerMedCoE) is to provide an efficient and sustainable entry point to the HPC/Exascale-upgraded methodology to translate omics analyses into actionable models of cellular functions of medical relevance. It will accomplish this by
1. optimising four core applications for cell-level simulations to the new pre-exascale platforms;
2. integrating PerMed into the new European HPC/Exascale ecosystem, by offering access to HPC/Exascale-adapted and optimised software;
3. running a comprehensive set of PerMed use cases;
4. building the basis for the sustainability of the PerMedCoE by coordinating PerMed and HPC communities, and reaching out to industrial and academic end-users, with use cases, training, expertise, and best practices.

The PerMedCoE cell-level simulations will fill the gap between the molecular- and organ-level simulations from the CompBioMed and BioExcel CoEs, with which this proposal is aligned at different levels. It will connect methods' developers with HPC, HTC and HPDA experts (at the POP and HiDALGO CoEs). Finally, the PerMedCoE will work with biomedical consortia (i.e. ELIXIR, the LifeTime initiative) and pre-exascale infrastructures (BSC and CSC), including a substantial co-design effort.


H2020 Centre of Excellence

permedcoe.eu
Twitter: @PerMedCoE
LinkedIn: @permedcoe

COORDINATING ORGANISATION Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
 CSC-IT Centre for Science Ltd, Finland
 Université du Luxembourg, Luxembourg
 Institut Curie, France
 Universitätsklinikum Heidelberg, Germany
 IBM Research GmbH, Switzerland
 Atos Spain, Spain
 Kungliga Tekniska Hoegskolan (KTH), Sweden
 European Molecular Biology Laboratory, Germany
 Fundacio Centre de Regulacio Genomica, Spain
 Max Delbrück Centre for Molecular Medicine, Germany
 University of Ljubljana, Slovenia
 ELEM BIOTECH SL, Spain

CONTACT Alfonso Valencia [email protected]

CALL INFRAEDI-05-2020

PROJECT TIMESPAN 01/10/2020 – 30/09/2023


POP2 – Performance Optimisation and Productivity 2

The growing complexity of parallel computers is leading to a situation where code owners and users are not aware of the detailed issues affecting the performance of their applications. The result is often an inefficient use of the infrastructures. Even where the need for further performance and efficiency is perceived, the code developers may not have enough insight into its detailed causes to properly address the problem. This may lead to blind attempts to restructure codes in ways that might not be the most productive.

POP2 extends and expands the activities successfully carried out by the POP Centre of Excellence since October 2015. The effort in the POP and POP2 projects has resulted up to now in close to 400 assessment services provided to customers in academia, research and industry, helping them to better understand the behaviour and improve the performance of their applications. The external view, advice, and help provided by POP have been extremely useful for many of these customers to significantly improve the performance of their codes, by factors of 20% in some cases but up to 2x or even 10x in others. The POP experience was also extremely valuable to identify issues in methodologies and tools that, if improved, will reduce the assessment cycle time.

POP2 continues to focus on a transversal (horizontal) service to offer highly specialized expertise and know-how to all other CoEs, facilitating cross-fertilisation between them and other sectors. This could lead to wider access to codes, including specific and targeted measures for industry and SMEs. Collaborations with other CoEs and projects help to reach potential users for services and promote support to the governance of HPC infrastructures. Centres could adopt the POP methodology, helping it embed in the wider HPC community, leveraging the use of existing infrastructures.
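The assessments rest on a small set of multiplicative efficiency metrics. As a rough illustration of the idea, the sketch below combines load balance and communication efficiency into parallel efficiency, following the metric definitions the POP methodology documents publicly; the helper function itself is an invented example, not project tooling.

```python
# Illustrative sketch of POP-style multiplicative efficiency metrics,
# computed from per-rank timings of one parallel run. The definitions
# follow the publicly documented POP methodology; the helper itself is
# hypothetical, not project code.

def pop_efficiencies(useful_times, runtime):
    """useful_times: seconds of useful computation per MPI rank;
    runtime: total wall-clock time of the run (seconds)."""
    avg_useful = sum(useful_times) / len(useful_times)
    max_useful = max(useful_times)

    load_balance = avg_useful / max_useful       # how evenly work is spread
    comm_efficiency = max_useful / runtime       # time lost to communication/wait
    parallel_efficiency = load_balance * comm_efficiency
    return {
        "load_balance": load_balance,
        "communication_efficiency": comm_efficiency,
        "parallel_efficiency": parallel_efficiency,
    }

# Example: 4 ranks, one straggler -> poor load balance dominates.
print(pop_efficiencies([8.0, 8.2, 8.1, 11.5], runtime=12.0))
```

In the example run, a single straggling rank drags load balance down, which is exactly the kind of diagnosis a POP assessment makes precise.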


H2020 Centre of Excellence

pop-coe.eu
Twitter: @POP_HPC

COORDINATING ORGANISATION Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
 Universität Stuttgart HLRS, Germany
 Forschungszentrum Jülich GmbH, Germany
 Numerical Algorithms Group Ltd, UK
 Rheinisch-Westfälische Technische Hochschule Aachen (RWTH-AACHEN), Germany
 Teratec, France
 Université de Versailles Saint Quentin, France
 IT4Innovations, VSB – Technical University of Ostrava, Czech Republic

CONTACT
Jesus Labarta, [email protected]
Judit Gimenez, [email protected]

CALL INFRAEDI-02-2018

PROJECT TIMESPAN 01/12/2018 - 31/05/2022


RAISE – Research on AI- and Simulation-Based Engineering at Exascale

Compute- and data-driven research encompasses a broad spectrum of disciplines and is key to Europe's global success in various scientific and economic fields. The massive amount of data produced by such technologies demands novel methods to post-process and analyse the data and to reveal valuable mechanisms. The development of artificial intelligence (AI) methods is rapidly proceeding, and they are progressively applied to many stages of workflows to solve complex problems. Analysing and processing big data require high computational power and scalable AI solutions. Therefore, it becomes mandatory to develop entirely new workflows from current applications that run efficiently on future high-performance computing architectures at Exascale.

Compute- and data-driven use cases of CoE RAISE
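One recurring pattern behind these use cases is the coupling of an expensive forward simulation with a learned inverse model. The toy sketch below illustrates that loop under loudly stated assumptions: the quadratic "simulator", the linear surrogate and every name are invented for illustration, whereas RAISE's actual use cases employ neural networks and far richer data.

```python
# Toy sketch of a forward-simulation / AI-based inverse-inference loop.
# Everything here (the "simulator", the linear surrogate, all names) is
# invented for illustration only.
import numpy as np

def forward_sim(theta):
    """Stand-in for an expensive forward simulation: parameters -> observables."""
    return np.array([theta[0] ** 2 + theta[1], np.sin(theta[0]) * theta[1]])

# 1) Generate training data by sampling the forward model.
rng = np.random.default_rng(0)
thetas = rng.uniform(-1.0, 1.0, size=(500, 2))
ys = np.array([forward_sim(t) for t in thetas])

# 2) Fit a cheap surrogate of the inverse map y -> theta (linear least
#    squares here; a real setup would train a neural network).
A, *_ = np.linalg.lstsq(np.c_[ys, np.ones(len(ys))], thetas, rcond=None)

# 3) Infer parameters for a new observation and check against the simulator.
y_obs = forward_sim(np.array([0.5, -0.3]))
theta_hat = np.r_[y_obs, 1.0] @ A
print("inferred:", theta_hat, "-> re-simulated:", forward_sim(theta_hat))
```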


The Centre of Excellence for AI- and Simulation-based Engineering at Exascale (RAISE) will be an enabler for the advancement of such technologies in Europe at industrial and academic levels, and a driver for novel intertwined AI and HPC methods. These technologies will be advanced along representative use cases covering a wide spectrum of academic and industrial applications, e.g. from wind energy harvesting, wetting hydrodynamics, manufacturing, physics, turbomachinery, and aerospace. RAISE aims at closing the gap in full loops using forward simulation models and AI-based inverse inference models, in conjunction with statistical methods to learn from current and historical data. In this context, novel hardware technologies, i.e. Modular Supercomputing Architectures, Quantum Annealing, and prototypes from the DEEP project series, will be used for exploring unseen performance in data processing. Best practices, support, and education for industry, SMEs, academia, and HPC centres at Tier-2 level and below will be developed and provided in RAISE's European network, attracting new user communities. This goes along with the development of a business providing new services to various user communities.

H2020 Centre of Excellence

www.coe-raise.eu
Twitter: @CoeRaise
LinkedIn: @coe-raise
Facebook: @CoERAISE2021
YouTube: www.youtube.com

COORDINATING ORGANISATION
Forschungszentrum Jülich GmbH, Germany

OTHER PARTNERS
 Universitet Haskoli Islands, Iceland
 The Cyprus Institute, Cyprus
 RWTH Aachen University, Germany
 Barcelona Supercomputing Centre (BSC), Spain
 European Organisation for Nuclear Research (CERN), Switzerland
 Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), France
 Atos (BULL SAS), France
 Riga Technical University, Latvia
 Flanders Make, Belgium
 SAFRAN Helicopter Engines, France
 ParTec AG, Germany
 Delft University of Technology (TU Delft), Netherlands

CONTACT Dr.-Ing. Andreas Lintermann [email protected]

CALL INFRAEDI-05-2020

PROJECT TIMESPAN 01/01/2021 - 31/12/2023


TREX – Targeting Real chemical accuracy at the Exascale

TREX federates European scientists, HPC stakeholders, and SMEs to develop and apply quantum mechanical simulations in the framework of stochastic quantum Monte Carlo methods. This methodology encompasses various techniques at the high end of the accuracy ladder of electronic structure approaches and is uniquely positioned to fully exploit the massive parallelism of the upcoming exascale architectures. The marriage of these advanced methods with exascale will enable simulations at the nanoscale of unprecedented accuracy, targeting a fully consistent description of the quantum mechanical electron problem.

TREX's main focus will be the development of a user-friendly and open-source software suite in the domain of stochastic quantum chemistry simulations, which integrates TREX community codes within an interoperable, high-performance platform. In parallel, TREX will work on showcases to leverage this methodology for commercial applications, as well as develop and implement software components and services that make it easier for commercial operators and user communities to use HPC resources for these applications.
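The claim that stochastic methods are uniquely positioned for massive parallelism has a simple core: Monte Carlo samples are independent, so they can be spread over arbitrarily many workers, and the statistical error decays like 1/√N regardless of the partitioning. The toy sketch below (a one-dimensional integral standing in for a real QMC energy estimator; all names invented) makes this concrete.

```python
# Minimal sketch of why stochastic Monte Carlo methods parallelise so well:
# samples ("walkers") are independent, so work can be spread over any number
# of processes, and the statistical error decays like 1/sqrt(N) regardless
# of how the samples are partitioned. A toy 1D expectation value stands in
# for a real QMC energy estimator.
import numpy as np

def local_estimate(n_samples, seed):
    """One worker's contribution: mean of f(x) = x**2 over x ~ N(0, 1).
    The exact expectation is 1, so the error is easy to check."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    return np.mean(x ** 2)

# 1000 independent "walkers" of 10_000 samples each; on a real machine each
# would run on its own core or node, with no communication until the final
# average is taken.
estimates = [local_estimate(10_000, seed) for seed in range(1000)]
mean = np.mean(estimates)
stderr = np.std(estimates) / np.sqrt(len(estimates))
print(f"estimate = {mean:.5f} +/- {stderr:.5f}  (exact value: 1)")
```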


H2020 Centre of Excellence

trex-coe.eu
Twitter: @trex_eu
LinkedIn: @trex-eu

COORDINATING ORGANISATION University of Twente, Netherlands

OTHER PARTNERS
 CNRS - Centre National de la Recherche Scientifique, France
 Scuola Internazionale Superiore Di Studi Avanzati Di Trieste, Italy
 CINECA Consorzio Interuniversitario, Italy
 Forschungszentrum Jülich GmbH, Germany
 Max-Planck-Gesellschaft zur Förderung der Wissenschaften EV, Germany
 Université de Versailles Saint Quentin-en-Yvelines, France
 Megware Computer Vertrieb und Service GmbH, Germany
 Slovak Academy of Sciences, Bratislava, Slovakia
 Universität Wien, Austria
 Politechnika Lodzka, Poland
 TRUST-IT SRL, Italy

CONTACT
[email protected]
Claudia Filippi, Project Coordinator, [email protected]

CALL
INFRAEDI-05-2020

PROJECT TIMESPAN
01/10/2020 – 30/09/2023



Applications in the HPC Continuum


ACROSS – HPC Big DAta ArtifiCial Intelligence cross-stack platfoRm tOwardS ExaScale


Artificial intelligence and Big Data are going to revolutionize the HPC domain by offering new tools for analyzing the immense amount of data generated by scientific experiments and modern simulation tools. To this end, ACROSS aims at developing an innovative execution platform for serving emerging workflows that mix large and complex numerical simulations with Big Data analytics, machine learning and deep learning techniques. ACROSS will co-design this innovative platform around a large set of advanced hardware and software technologies towards the Exascale level of performance. Hardware acceleration technologies will be of primary interest for ACROSS to get the maximum performance while consuming the least amount of energy. In detail, ACROSS will move beyond traditional GPUs, integrating and exploiting FPGAs and neuromorphic hardware. Looking forward, ACROSS envisions the adoption of the future European Processor Initiative (EPI) outcomes, also to promote European first-class technology. Energy efficiency, efficient usage of computing resources and application performance will be achieved through the design and implementation of an innovative workflow and resource management system. A multi-level approach will be used to define the workflows, to manage the resource allocation beyond traditional queuing systems by using a machine-learning-based approach, and to efficiently execute workload tasks on the allocated heterogeneous resources. Three pilots have been selected to demonstrate the capabilities of the ACROSS platform in three relevant industrial and scientific domains: (1) an aeronautics pilot; (2) a weather, climate, hydrological and farming pilot; and (3) an energy and carbon sequestration pilot. The ACROSS platform will be backed by supercomputing and Cloud resources provided by European HPC centres (IT4Innovations and CINECA), including a new generation of supercomputers.

HPC and data centric environments and application platforms

www.acrossproject.eu
Twitter: @across_project
LinkedIn: @acrossproject
Facebook: @acrossprojecteu

COORDINATING ORGANISATION
Fondazione LINKS, Italy

OTHER PARTNERS
 Atos (Bull SAS), France
 IT4Innovations, VSB – Technical University of Ostrava, Czech Republic
 CINECA, Consorzio Interuniversitario, Italy
 GE Avio Aero, Italy
 European Centre For Medium-Range Weather Forecasts (ECMWF), United Kingdom
 Consorzio Interuniversitario Nazionale per l'Informatica (CINI), Italy
 Institut National de Recherche en Informatique et en Automatique (INRIA), France
 SINTEF AS, Norway
 Neuropublic AE Pliroforikis & Epikoinonion, Greece
 Stichting DELTARES, Netherlands
 Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften EV - MPG, Germany
 Morfo Design SRL, Italy

CONTACT
Olivier Terzo (project coordinator), [email protected]

CALL
EuroHPC-02-2019

PROJECT TIMESPAN
01/03/2021 - 29/02/2024


CYBELE – Fostering precision agriculture and livestock farming through secure access to large-scale HPC-enabled virtual industrial experimentation environment empowering scalable big data analytics

CYBELE generates innovation and creates value in the domain of agri-food, and in its verticals in the sub-domains of Precision Agriculture (PA) and Precision Livestock Farming (PLF) specifically. This will be demonstrated by real-life industrial cases, empowering capacity building within the industrial and research community. CYBELE aims at demonstrating how the convergence of HPC, Big Data, Cloud Computing and the Internet of Things can revolutionise farming, reduce scarcity and increase food supply, bringing social, economic and environmental benefits.

CYBELE intends to safeguard that stakeholders have integrated, unmediated access to a vast amount of large-scale datasets of diverse types from a variety of sources. By providing secure and unmediated access to large-scale HPC infrastructures supporting data discovery, processing, combination and visualisation services, stakeholders shall be enabled to generate more value and deeper insights in operations.

CYBELE develops large scale HPC-enabled test beds and delivers a distributed big data management architecture and a data management strategy providing:
1) integrated, unmediated access to large scale datasets of diverse types from a multitude of distributed data sources,
2) a data and service driven virtual HPC-enabled environment supporting the execution of multi-parametric agri-food related impact model experiments, optimising the features of processing large scale datasets, and
3) a bouquet of domain specific and generic services on top of the virtual research environment facilitating the elicitation of knowledge from big agri-food related data, addressing the issue of increasing responsiveness and empowering automation-assisted decision making, empowering the stakeholders to use resources in a more environmentally responsible manner, improve sourcing decisions, and implement circular-economy solutions in the food chain.


HPC and Big Data enabled Large-scale Test-beds and Applications

www.cybele-project.eu
Twitter: @CYBELE_H2020
Facebook: @Cybele-H2020
LinkedIn: @Cybele-project

COORDINATING ORGANISATION
Waterford Institute of Technology, Ireland

OTHER PARTNERS
 Barcelona Supercomputing Centre (BSC), Spain
 Atos (Bull SAS), France
 CINECA Consorzio Interuniversitario, Italy
 Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, Poland
 RYAX Technologies, France
 Universität Stuttgart, Germany
 EXODUS Anonymos Etaireia Pliroforikis, Greece
 LEANXCALE SL, Spain
 ICCS (Institute of Communication and Computer Systems), Greece
 Centre for Research and Technology Hellas - Information Technologies Institute (ITI), Greece
 Tampereen Korkeakoulusaatio SR, Finland
 UBITECH, Greece
 University of Piraeus Research Center, Greece
 SUITE5 Data Intelligence Solutions Ltd, Cyprus
 INTRASOFT International SA, Luxembourg
 ENGINEERING - Ingegneria Informatica SPA, Italy
 Wageningen University, Netherlands
 Stichting Wageningen Research, Netherlands
 BIOSENSE Institute, Serbia
 Donau Soja Gemeinnutzige GmbH, Austria
 Agroknow IKE, Greece
 GMV Aerospace and Defence SA, Spain
 Federacion de Cooperativas Agroalimentares de la Comunidad Valenciana, Spain
 University of Strathclyde, United Kingdom
 ILVO - Instituut voor Landbouw- en Visserijonderzoek, Belgium
 VION Food Nederland BV, Netherlands
 Olokliromena Pliroforiaka Sistimata AE, Greece
 Kobenhavns Universitet, Denmark
 EVENFLOW, Belgium
 Open Geospatial Consortium (Europe) Ltd, United Kingdom

CONTACT
[email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN 01/01/2019 - 31/12/2021


DeepHealth – Deep-Learning and HPC to Boost Biomedical Applications for Health

The main goal of the DeepHealth project is to put HPC computing power at the service of biomedical applications, and to apply Deep Learning (DL) and Computer Vision (CV) techniques on large and complex biomedical datasets to support new and more efficient ways of diagnosis, monitoring and treatment of diseases.

THE DEEPHEALTH TOOLKIT: A KEY OPEN-SOURCE ASSET FOR EHEALTH AI-BASED SOLUTIONS

At the centre of the proposed innovations is the DeepHealth toolkit, open-source free software that provides a unified framework to exploit heterogeneous HPC and big data architectures assembled with DL and CV capabilities to optimise the training of predictive models. The toolkit is composed of two core libraries, plus a back end offering a RESTful API to other applications to facilitate the use of the libraries, and a dedicated front end that interacts with the back end to make the libraries easier to use for data scientists. The libraries are the European Distributed Deep Learning Library (EDDL) and the European Computer Vision Library (ECVL). Version 1.0 of both libraries was released during the second semester of 2021, and both are ready to be integrated in current and new biomedical platforms or applications.

THE DEEPHEALTH CONCEPT – APPLICATION SCENARIOS

The DeepHealth concept focuses on scenarios where image processing is needed for diagnosis. In the training environment, IT experts work with datasets of images for training predictive models. In the production environment, the medical personnel ingest an image coming from a scan into a platform or biomedical application that uses predictive models to get clues that support them during diagnosis. The DeepHealth toolkit will allow IT staff to train models and run the training algorithms over hybrid HPC + big data architectures without a profound knowledge of DL, CV, HPC or big data, increasing their productivity and reducing the time required to do it.

14 PILOTS AND SEVEN PLATFORMS TO VALIDATE THE DEEPHEALTH PROPOSED INNOVATIONS

The DeepHealth innovations are being validated in 14 pilots through the use of seven different biomedical and AI software platforms provided by partners. The libraries are being integrated and validated in seven AI and biomedical software platforms: commercial platforms (everis Lumen, PHILIPS Open Innovation Platform, THALES PIAF) and research-oriented platforms (CEA's ExpressIFTM, CRS4's Digital Pathology, WINGS MigraineNet).
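As a flavour of what integrating the libraries looks like, the sketch below builds and trains a small classifier in the style of the EDDL Python bindings (pyeddl). Treat it as an assumption-laden illustration: the calls follow published pyeddl examples and may differ between versions, and the data files are placeholders rather than DeepHealth assets.

```python
# Illustrative sketch in the style of the EDDL Python bindings (pyeddl).
# Call names follow published pyeddl examples and may differ across
# versions; the .bin data files are placeholders, not DeepHealth assets.
import pyeddl.eddl as eddl
from pyeddl.tensor import Tensor

# Define a small dense classifier (e.g. 784 inputs -> 10 classes).
in_ = eddl.Input([784])
hidden = eddl.ReLu(eddl.Dense(in_, 64))
out = eddl.Softmax(eddl.Dense(hidden, 10))
net = eddl.Model([in_], [out])

# Building the model selects the computing service: CS_CPU here; GPU or
# distributed back ends are precisely the point of the toolkit on HPC.
eddl.build(
    net,
    eddl.sgd(0.01, 0.9),
    ["soft_cross_entropy"],
    ["categorical_accuracy"],
    eddl.CS_CPU(),
)

x_train = Tensor.load("train_images.bin")  # placeholder dataset files
y_train = Tensor.load("train_labels.bin")
eddl.fit(net, [x_train], [y_train], 128, 10)  # batch size 128, 10 epochs
```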


The use cases cover three main areas: (i) Neurological diseases, (ii) Tumour detection and early cancer prediction, and (iii) Digital pathology and automated image annotation. The pilots will allow evaluating the performance of the proposed solutions in terms of the time needed for pre-processing images, the time needed to train models and the time to put models in production. In some cases, it is expected to reduce these times from days or weeks to just hours. This is one of the major expected impacts of the DeepHealth project.

HPC and Big Data enabled Large-scale Test-beds and Applications

deephealth-project.eu
Twitter: @DeepHealthEU
Facebook: @DeepHealthEU
LinkedIn: @deephealtheu

COORDINATING ORGANISATION
Everis Spain SL, Spain

OTHER PARTNERS
 Azienda Ospedaliera Citta della Salute e della Scienza di Torino, Italy
 Barcelona Supercomputing Centre (BSC), Spain
 Centre Hospitalier Universitaire Vaudois, Switzerland
 Centro di Ricerca, Sviluppo e Studi Superiori in Sardegna SRL, Italy
 CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
 EPFL - Ecole Polytechnique Fédérale de Lausanne, Switzerland
 Fundacion para el Fomento de la Investigacion Sanitaria y Biomedica de la Comunitat Valenciana, Spain
 Karolinska Institutet, Sweden
 Otto-von-Guericke-Universität Magdeburg, Germany
 Philips Medical Systems Nederland BV, Netherlands
 Pro Design Electronic GmbH, Germany
 SIVECO Romania SA, Romania
 Spitalul Clinic Prof Dr Theodor Burghele, Romania
 STELAR Security Technology Law Research UG, Germany
 Thales SIX GTS France SAS, France
 TREE Technology SA, Spain
 Università degli Studi di Modena e Reggio Emilia, Italy
 Università degli Studi di Torino, Italy
 Universitat Politècnica de València, Spain
 WINGS ICT Solutions Information & Communication Technologies IKE, Greece

CONTACT
PROJECT MANAGER: Monica Caballero, [email protected]
TECHNICAL MANAGER: Jon Ander Gómez, [email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN
01/01/2019 - 30/06/2022


eFlows4HPC – Enabling dynamic and Intelligent workflows in the future EuroHPC ecosystem

Today, developers lack tools that enable the development of complex workflows involving HPC simulation and modelling with data analytics (DA) and machine learning (ML). The eFlows4HPC project aims to deliver a workflow software stack and an additional set of services to enable the integration of HPC simulation and modelling with big data analytics and machine learning in scientific and industrial applications. The software stack will allow developers to build innovative adaptive workflows that efficiently use the computing resources, as well as innovative data management solutions. In order to attract first-time users of HPC systems, the project will provide HPC Workflows as a Service (HPCWaaS), an environment for sharing, reusing, deploying and executing existing workflows on HPC systems. The project will leverage workflow technologies, machine learning and big data libraries from previous open-source European initiatives. Specific optimizations of application kernels tackling the use of accelerators (FPGAs, GPUs) and the European Processor Initiative (EPI) will be performed. To demonstrate the workflow software stack, use cases from three thematic pillars have been selected.
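To give a flavour of the intended programming model, the sketch below expresses a simulation-plus-ML pipeline as a task graph in the style of PyCOMPSs, one open-source European workflow technology of the kind the project builds on. The task bodies and names are invented; a real deployment would let the runtime map the tasks onto HPC resources.

```python
# Illustrative task-graph sketch in the style of PyCOMPSs (an open-source
# European workflow technology of the kind mentioned above). Task bodies
# and names are invented for illustration.
from pycompss.api.task import task
from pycompss.api.parameter import COLLECTION_IN
from pycompss.api.api import compss_wait_on


@task(returns=1)
def run_simulation(params):
    # Stand-in for an expensive HPC simulation kernel.
    return [p * 2.0 for p in params]


@task(returns=1, results=COLLECTION_IN)
def train_model(results):
    # Stand-in for an ML training step over all simulation outputs.
    return sum(sum(r) for r in results)


# The runtime extracts the data dependencies automatically and runs the
# independent simulations in parallel on the available resources.
simulations = [run_simulation([i, i + 1.0]) for i in range(10)]
model = compss_wait_on(train_model(simulations))
print(model)
```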


Pillar I focuses on the construction of DigitalTwins for the design of complex manufactured objects, integrating state-of-the-art adaptive solvers with machine learning and data-mining, contributing to the Industry 4.0 vision. Pillar II develops innovative adaptive workflows for climate and the study of Tropical Cyclones (TC) in the context of the CMIP6 experiment. Pillar III explores the modelling of natural catastrophes - in particular, earthquakes and their associated tsunamis shortly after such an event is recorded. Leveraging two existing workflows, Pillar III will work on integrating them with the eFlows4HPC software stack and on producing policies for urgent access to supercomputers. The results from the three Pillars will be demonstrated to the target community CoEs to foster adoption and receive feedback.

HPC and data centric environments and application platforms

eflows4hpc.eu
Twitter: @eFlows4HPC
LinkedIn: @eflows4hpc

COORDINATING ORGANISATION
Barcelona Supercomputing Center (BSC), Spain

OTHER PARTNERS
 Centre Internacional de Mètodes Numèrics a l'Enginyeria (CIMNE), Spain
 Forschungszentrum Jülich (FZJ), Germany
 Universidad Politécnica de Valencia (UPV), Spain
 Atos (Bull SAS), France
 DtoK Lab S.r.l., Italy
 Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici (CMCC), Italy
 Institut national de recherche en informatique et en automatique (INRIA), France
 Scuola Internazionale Superiore di Studi Avanzati di Trieste (SISSA), Italy
 Instytut Chemii Bioorganicznej Polskiej Akademii Nauk (PSNC), Poland
 Universidad de Málaga (UMA), Spain
 Istituto nazionale di geofisica e vulcanología (INGV), Italy
 Alfred-Wegener-Institut, Helmholtz-Zentrum für Polar- und Meeresforschung (AWI), Germany
 Eidgenössische Technische Hochschule Zürich (ETHZ), Switzerland
 SIEMENS AG, Germany
 Norges Geotekniske Institutt (NGI), Norway

CONTACT
Rosa M. Badia, [email protected]

CALL
EuroHPC-02-2019

PROJECT TIMESPAN
01/01/2021 - 31/12/2023


ENERXICO – Supercomputing and Energy for Mexico

The ENERXICO project is applying exascale HPC techniques to different energy industry simulations of critical interest for Mexico. ENERXICO works to provide solutions for the oil and gas industry, improve wind energy performance and increase the efficiency of biofuels for transportation. Composed of some of the best academic and industrial institutions in the EU and Mexico, ENERXICO demonstrates the power of cross-border collaboration to put supercomputing technologies at the service of the energy sector. The specific objectives of the project are:
• To develop beyond state-of-the-art high performance simulation tools for the energy industry
• To increase oil & gas reserves using geophysical exploration for subsalt reservoirs
• To improve refining and transport efficiency of heavy oil
• To develop a strong wind energy sector to mitigate oil dependency
• To improve fuel generation using biofuels
• To improve the cooperation between industries from the EU and Mexico
• To improve the cooperation between the leading research groups in the EU and Mexico

ENERXICO works to optimise numerical simulation codes used in the energy industry for the exascale, such as the Barcelona Subsurface Imaging Tools (BSIT), which simulates seismic wavefields in a complex topography region.


International cooperation

www.enerxico-project.eu
Twitter: @ENERXICOproject
LinkedIn: @enerxico-project

ENERXICO is composed of partners from Mexico, Spain, France and Germany.

COORDINATING ORGANISATIONS
 Barcelona Supercomputing Center (BSC), Spain
 Instituto Nacional de Investigaciones Nucleares (ININ), Mexico

OTHER PARTNERS
 Atos (Bull SAS), France
 Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional (CINVESTAV), Mexico
 Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Spain
 Instituto Mexicano del Petróleo (IMP), Mexico
 Instituto Politécnico Nacional (IPN-ESIA), Mexico
 Technische Universität München (TUM), Germany
 Universidad Autónoma Metropolitana (UAM-A), Mexico
 Instituto de Ingeniería de la Universidad Nacional Autónoma de México (UNAM-IINGEN), Mexico
 Universitat Politècnica de València (UPV), Spain
 Université Grenoble Alpes (UGA), France
 IBERDROLA, Spain
 REPSOL SA, Spain

PROJECT OBSERVER
Petróleos Mexicanos (PEMEX), Mexico

CONTACT
Rose Gregorio, [email protected]
Mireia Cos, [email protected]

CALL
FETHPC-01-2018

PROJECT TIMESPAN
01/06/2019 - 31/08/2021


EVOLVE – HPC and Cloud-enhanced Testbed for Extracting Value from Diverse Data at Large Scale

EVOLVE is a pan-European Innovation Action with 19 key partners from 10 European countries, introducing important elements of High-Performance Computing (HPC) and Cloud in Big Data platforms, and taking advantage of recent technological advancements to enable cost-effective applications in 7 different pilots to keep up with the unprecedented data growth we are experiencing. EVOLVE aims to build a large-scale testbed by integrating technology from:
• The HPC world: an advanced computing platform with HPC features and systems software.
• The Big Data world: a versatile big-data processing stack for end-to-end workflows.
• The Cloud world: ease of deployment, access, and use in a shared manner, while addressing data protection.

EVOLVE aims to take concrete and decisive steps in bringing together the Big Data, HPC, and Cloud worlds, and to increase the ability to extract value from massive and demanding datasets. EVOLVE aims to bring the following benefits for processing large and demanding datasets:
• Performance: reduced turn-around time for domain experts, industry (large and SMEs), and end-users.
• Experts: increased productivity when designing new products and services, by processing large datasets.
• Businesses: reduced capital and operational costs for acquiring and maintaining computing infrastructure.
• Society: accelerated innovation via faster design and deployment of innovative services that unleash creativity.

EVOLVE is now reaching its final stretch, as the project ends in November 2021.
• At this stage the hardware platform is fully integrated, with heterogeneity at the processing and storage level. Even if this part of the work plan is achieved, some infrastructure enhancements are still scheduled, such as the installation of nodes with the latest generation of GPUs and the addition of local NVMe to some computing nodes.
• Thanks to our implementation of a cooperation between Kubernetes and Slurm, the software stack is also complete, with the ability to run complex containerized workflows even if they include MPI stages (the sketch below illustrates the general pattern).
• Most of the focus is now on the integration of the pilot applications and on ecosystem development. The results collected on the fully ported pilot applications are more than encouraging, with speed-ups in the range of one to two orders of magnitude. More importantly, similar gains are observed on the manageability of the system, even though they are harder to quantify. HPC levels of performance are reached while keeping a DevOps, Cloud-oriented approach.

Therefore, the central promise of EVOLVE, delivering performance while keeping the user experience at the centre, will be fulfilled.
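As an illustration of the container/HPC convergence mentioned above, the following minimal sketch shows the general pattern of submitting one containerised MPI stage of a workflow to a Slurm-managed cluster from Python. It is purely illustrative: the function name is invented, Singularity is assumed as the container runtime, and EVOLVE's actual Kubernetes/Slurm integration is more elaborate.

import subprocess

def submit_mpi_stage(image: str, command: str, ntasks: int = 64) -> str:
    """Submit one containerised MPI stage via sbatch; returns the job ID."""
    # srun launches the MPI ranks; the container runtime (Singularity here,
    # an assumption) provides the reproducible user-level environment.
    wrapped = f"srun singularity exec {image} {command}"
    result = subprocess.run(
        ["sbatch", "--parsable", f"--ntasks={ntasks}", "--wrap", wrapped],
        capture_output=True, text=True, check=True,
    )
    # With --parsable, sbatch prints only the job ID on stdout.
    return result.stdout.strip()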

HPC and Big Data enabled Large-scale Test-beds and Applications

www.evolve-h2020.eu
Twitter: @Evolve_H2020
LinkedIn: @evolve-h2020

COORDINATING ORGANISATION
Datadirect Networks France, France

OTHER PARTNERS
• AVL LIST GmbH, Austria
• BMW - Bayerische Motoren Werke AG, Germany
• Atos (Bull SAS), France
• Cybeletech, France
• Globaz S.A., Portugal
• IBM Ireland Ltd, Ireland
• Idryma Technologias Kai Erevnas, Greece
• ICCS (Institute of Communication and Computer Systems), Greece
• KOOLA d.o.o., Bosnia and Herzegovina
• Kompetenzzentrum - Das Virtuelle Fahrzeug, Forschungsgesellschaft mbH, Austria
• MEMEX SRL, Italy
• MEMOSCALE AS, Norway
• NEUROCOM Luxembourg SA, Luxembourg
• ONAPP Ltd, Gibraltar
• Space Hellas SA, Greece
• Thales Alenia Space France SAS, France
• TIEMME SPA, Italy
• WEBLYZARD Technology GmbH, Austria

CONTACT
Catarina Pereira, [email protected]
Jean-Thomas Acquaviva, [email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN
01/12/2018 - 30/11/2021


EXA MODE – EXtreme-scale Analytics via Multimodal Ontology Discovery & Enhancement

Exascale volumes of diverse data from distributed sources are continuously produced. Healthcare data stand out in the size produced (more than 2,000 exabytes in 2020), heterogeneity (many media and acquisition methods), included knowledge (e.g. diagnostic reports) and commercial value. The supervised nature of deep learning models requires large labelled, annotated datasets, which prevents models from extracting knowledge and value.

ExaMode solves this by allowing easy and fast, weakly supervised knowledge discovery of heterogeneous exascale data provided by the partners, limiting human interaction. Its objectives include the development and release of extreme analytics methods and tools that are adopted in decision making by industry and hospitals. Deep learning naturally allows building semantic representations of entities and relations in multimodal data. Knowledge discovery is performed via semantic document-level networks in text and the extraction of homogeneous features in heterogeneous images. The results are fused, aligned to medical ontologies, visualized and refined. Knowledge is then applied using a semantic middleware to compress, segment and classify images, and it is exploited in decision support and semantic knowledge management prototypes.


ExaMode is relevant to ICT-12 in several aspects:
1. Challenge: it extracts knowledge and value from heterogeneous, quickly increasing data volumes.
2. Scope: the consortium develops and releases new methods and concepts for extreme-scale analytics to accelerate deep analysis, also via data compression, enabling precise predictions, decision support and the visualization of multi-modal knowledge.
3. Impact: the multi-modal/media semantic middleware makes heterogeneous data management and analysis easier and faster; it improves architectures for complex distributed systems, with better tools increasing the speed of data throughput and access, as resulting from tests in extreme analysis by industry and in hospitals.

Big Data technologies and extreme-scale analytics

www.examode.eu
Twitter: @examode
LinkedIn: @examode
Facebook: @examode.eu

COORDINATING ORGANISATION
University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland

OTHER PARTNERS
• Azienda Ospedaliera per l'Emergenza Cannizzaro, Italy
• MicroscopeIT Sp. z o.o., Poland
• SIRMA AI, Bulgaria
• Radboud University Medical Center, Nijmegen, Netherlands
• SURFsara BV, Netherlands
• Università degli Studi di Padova, Italy

CONTACT
Dr. Manfredo Atzori, [email protected]
Prof. Henning Müller, [email protected]

CALL ICT-12-2018-2020

PROJECT TIMESPAN 01/01/2019 - 31/12/2022


exaFOAM – Exploitation of Exascale Systems for Open-Source Computational Fluid Dynamics by Mainstream Industry

Computational Fluid Dynamics (CFD) has become a mature technology in engineering design, contributing strongly to industrial competitiveness and sustainability across a wide range of sectors (e.g. transportation, power generation, disaster prevention). Future growth depends upon the exploitation of massively parallel HPC architectures; however, this is currently hampered by performance scaling bottlenecks.

The ambitious exaFOAM project aims to overcome these limitations through the development and validation of a range of algorithmic improvements. Improvements across the entire CFD process chain (preprocessing, simulation, I/O, post-processing) will be developed. Effectiveness will be demonstrated via a suite of HPC Grand Challenge and Industrial Application Challenge cases. All developments will be implemented in the open-source CFD software OpenFOAM, one of the most successful open-source projects in the area of computational modelling, with a large industrial and academic user base.

To ensure success, the project mobilises a highly capable consortium of 12 beneficiaries consisting of experts in HPC, CFD algorithms and industrial applications, and includes universities, HPC centres, SMEs and the code release authority OpenCFD Ltd (openfoam.com) as a linked third party to the PI. Project management will be facilitated by a clear project structure, and quantified objectives enable tracking of the project's progress.

Special emphasis will be placed on ensuring a strong impact of the exaFOAM project. The project has been conceived to address all expected impacts set out in the Work Programme. All developed code and validation cases will be released as open source to the community in coordination with the OpenFOAM Governance structure. The involvement of 17 industrial supporters and stakeholders from outside the consortium underscores the industrial relevance of the project outcomes. A well-structured and multichannelled plan for dissemination and exploitation of the project outcomes further reinforces the expected impact.


Industrial software codes for extreme scale computing environments and applications

exafoam.eu

COORDINATING ORGANISATION ESI Group, France

OTHER PARTNERS
• CINECA, Consorzio Interuniversitario, Italy
• E4 Computer Engineering SpA, Italy
• Politecnico di Milano, Italy
• Fakultet strojarstva i brodogradnje, Sveučilište u Zagrebu, Croatia
• Upstream CFD GmbH, Germany
• Technische Universität Darmstadt, Germany
• Universität Stuttgart, Germany
• Barcelona Supercomputing Centre (BSC), Spain
• Wikki Gesellschaft für numerische Kontinuumsmechanik mbH, Germany
• National Technical University of Athens - NTUA, Greece
• Universidade do Minho, Portugal

CONTACT Fred Mendonca, [email protected]

CALL

EuroHPC-03-2019

PROJECT TIMESPAN 01/04/2021 - 31/03/2024


EXSCALATE4COV – EXaSCale smArt pLatform Against paThogEns for Corona Virus

The EXSCALATE4CoV (E4C) project aims to exploit the most powerful computing resources currently based in Europe to empower smart in-silico drug design. Advanced Computer-Aided Drug Design (CADD), in combination with high-throughput biochemical and phenotypic screening, allows the rapid evaluation of simulation results and reduces the time needed to discover new drugs.

Against a pandemic crisis, the immediate identification of effective treatments is of paramount importance. First, E4C selected, through the Dompé EXSCALATE platform, the most promising commercialized and developing drugs that are safe in man. Second, it selected new pan-coronavirus inhibitors from more than 500 billion molecules.

Huge computational resources are needed; therefore the activities are supported and empowered by four of the most powerful computer centres in Europe: CINECA, BSC, JÜLICH and Eni HPC5. The Swiss Institute of Bioinformatics (SIB) provides the homology 3D models for the viral proteins. The Fraunhofer IME provides the BROAD Repurposing Library and biochemical assays for the most relevant viral proteins. Phenotypic screenings have been run by KU Leuven to identify molecules capable of blocking virus replication in in-vitro models. IIMCB and Elettra determine the crystal structure of at least one coronavirus functional protein to evaluate the structural similarities with other viral proteins.

All the other partners support the project by studying the mechanism of action, synthesizing the most relevant candidate drugs and enhancing the HPC and AI infrastructure of the consortium. The worldwide biggest virtual screening simulation was performed by evaluating more than 1 trillion molecular dockings. Chelonia leads the communication and dissemination activities.

The EXSCALATE4CoV consortium identified safe-in-man drugs repurposed as 2019-nCoV antivirals and proposed them to the EMA Innovation Task Force (ITF) to define a preliminary development strategy and a proposal for a registration path. The E4C project promptly shared its scientific outcomes with the research community using established channels: the ChEMBL portal for the biochemical data, the SWISS-MODEL portal for the homology models of viral proteins (wild type and mutants), the Protein Data Bank, EUDAT for the data generated by in-silico simulations, and the E4C project website.


Combining HPC simulations and AI-driven system pharmacology, E4C has identified and experimentally validated in vitro a first promising drug for the treatment of mildly symptomatic Covid-19 patients: the Raloxifene Phase II clinical trial was completed in July 2021, with the Spallanzani hospital as Coordinating Investigator. Several initiatives have been launched in the framework of E4C, among them MEDIATE (also supported by SAS), SPIKE MUTANTS, VIRALSEQ and DRUGBOX. All of them are accessible through the main web portal.

Fight against COVID

www.exscalate4cov.eu
LinkedIn: @exscalate4cov
Twitter: @exscalate4cov
Facebook: @exscalate4cov

COORDINATING ORGANISATION
Dompé farmaceutici S.p.A., Italy

OTHER PARTNERS
• CINECA Consorzio Interuniversitario, Italy
• Politecnico di Milano, Italy
• Università degli Studi di Milano, Italy
• KU Leuven, Belgium
• International Institute of Molecular and Cell Biology, Poland
• Elettra Sincrotrone Trieste, Italy
• Fraunhofer-Institut für Molekularbiologie und Angewandte Oekologie IME, Germany
• Barcelona Supercomputing Center, Spain
• Forschungszentrum Jülich GmbH, Germany
• Università degli Studi di Napoli Federico II, Italy
• Università degli Studi di , Italy
• SIB Swiss Institute of Bioinformatics, Switzerland
• Kungliga Tekniska Hoegskolan (KTH), Sweden
• Istituto Nazionale per le Malattie Infettive Lazzaro Spallanzani, Italy
• Associazione Big Data, Italy
• Istituto Nazionale di Fisica Nucleare (INFN), Italy
• Chelonia SA, Switzerland

CONTACT
[email protected]

CALL
SC1-PHE-CORONAVIRUS-2020

PROJECT TIMESPAN
06/04/2020 – 05/10/2021


HEROES – Hybrid Eco-Responsible Optimized European Solution

Bridging the gap between HPC & AI/ML user communities and HPC centres is key to unleashing Europe's innovation potential. A lot of effort is being put into building the European technologies able to deliver centralised, petascale/exascale HPC & ML. Fostered by increasing access from the edge of the network, it is equally important to make such resources easily and responsibly consumable.

Modules of the HEROES architecture. End-users will be able to submit workflows to multiple HPC centres and Cloud providers, while taking informed decisions on the placement of the tasks based on their performance, energy and cost constraints.
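As a purely illustrative sketch of the kind of decision such a placement module automates, the snippet below ranks execution options by a weighted combination of predicted time, energy and cost. All names, numbers and weights are invented for the example; HEROES' actual decision module is considerably more sophisticated.

from dataclasses import dataclass

@dataclass
class Option:
    name: str          # an HPC centre or cloud provider
    runtime_h: float   # predicted time to solution, in hours
    energy_kwh: float  # predicted energy consumption
    cost_eur: float    # predicted price

def score(o: Option, w_time: float, w_energy: float, w_cost: float) -> float:
    # Lower is better: a simple weighted sum of the three user constraints.
    return w_time * o.runtime_h + w_energy * o.energy_kwh + w_cost * o.cost_eur

options = [
    Option("hpc-centre-a", runtime_h=2.0, energy_kwh=90.0, cost_eur=40.0),
    Option("cloud-b",      runtime_h=5.0, energy_kwh=60.0, cost_eur=25.0),
]

# A user who weighs energy efficiency three times as heavily as time or cost:
best = min(options, key=lambda o: score(o, w_time=1.0, w_energy=3.0, w_cost=1.0))
print(best.name)  # -> cloud-b, the lower-energy option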


The HEROES project aims at developing an innovative software solution addressing both the industrial and scientific user communities. The platform will allow end users to submit their complex simulation and ML workflows to both HPC data centres and Cloud infrastructures, and will provide them with the possibility to choose the best option to achieve their goals in time, within budget and with the best energy efficiency.

Although the project will be able, in the future, to provide value creation across multiple industrial sectors, the initial focus for the demonstrator will address workflows of strategic importance in the fields of Renewable Energy and Manufacturing, where HPC is involved in the design of more energy-efficient products (for example energy-efficient vehicles).

HEROES' major innovations reside in its platform selection decision module and its application of marketplace concepts to HPC. The consortium involves 4 European SMEs which bring HPC to their clients and face this market demand every day. A major supercomputing research centre complements the project with its specific expertise in energy management and resource optimisation. HEROES is supported by Meteosim, UL Renewables, EDF and Dallara, which will advise on workflows relevant to such use cases. At the end of the project, its outcomes will be commercialised and will further reinforce Europe's capacity to lead and nurture innovation in HPC.

HPC and data centric environments and application platforms

heroes-project.eu
Twitter: @heroes_hpc

COORDINATING ORGANISATION
UCit, France

OTHER PARTNERS
• Neovia Innovation, France
• HPCNow! CONSULTING SL, Spain
• Do IT Systems S.r.l., Italy
• Barcelona Supercomputing Centre (BSC), Spain

CONTACT
Philippe Bricard, [email protected]

CALL
EuroHPC-02-2019

PROJECT TIMESPAN
01/03/2021 - 28/02/2023


HPCWE – High performance computing for wind energy

Wind energy has become increasingly important as a clean and renewable alternative to fossil fuels in the energy portfolios of both Europe and Brazil. At almost every stage in wind energy exploitation, ranging from wind turbine design and wind resource assessment to wind farm layout and operations, the application of HPC is a must.

Simulation of the power generated by a wind turbine, which responds non-linearly to the incom- ing turbulent wind speed
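The cubic sensitivity behind this non-linear response is already visible in the classical power relation (a textbook estimate, far simpler than the high-fidelity simulations HPCWE targets):

\[ P = \tfrac{1}{2} \, \rho \, A \, C_p(\lambda, \beta) \, v^3 \]

where ρ is the air density, A the rotor swept area, v the incoming wind speed and C_p the power coefficient, itself a non-linear function of the tip-speed ratio λ and blade pitch β. A fluctuation in the turbulent inflow is thus amplified cubically in the generated power, which is why resolving the turbulence accurately matters so much.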


The goal of HPCWE is to address the following key open challenges in applying HPC to wind energy:
i. Efficient use of HPC resources in wind turbine simulations, via the development and implementation of novel algorithms. This leads to the development of methods for verification, validation and uncertainty quantification (VVUQ) and for in-situ scientific data interpretation.
ii. Accurate integration of meso-scale atmospheric dynamics and micro-scale wind turbine flow simulations, as this interface is the key to accurate wind energy simulations. In HPCWE a novel scale integration approach will be applied and tested through test cases at a Brazilian wind farm.
iii. Adjoint-based optimization, which implies large I/O consumption as well as storing data on large-scale file systems. HPCWE research aims at alleviating the bottlenecks caused by data transfer from memory to disk.

The HPCWE consortium consists of 11 partners representing top academic institutes, HPC centres and industries in Europe and Brazil. Through this collaboration, the consortium will develop novel algorithms, implement them in state-of-the-art codes and test the codes in academic and industrial cases, to benefit the wind energy industry and research in both Europe and Brazil.

International Cooperation

hpcwe-project.eu
Twitter: @HPCWE_project
LinkedIn: @hpcwe

COORDINATING ORGANISATION
The University of Nottingham, United Kingdom

OTHER PARTNERS
• Danmarks Tekniske Universitet, Denmark
• Electricité de France (EDF), France
• Imperial College of Science Technology and Medicine, United Kingdom
• The University of Edinburgh (EPCC), United Kingdom
• Universidade de Sao Paulo, Brazil
• Universidade Estadual de Campinas, Brazil
• Universidade Federal de Santa Catarina, Brazil
• Universität Stuttgart, Germany
• Universiteit Twente, Netherlands
• VORTEX Factoria de Calculs SL, Spain

CONTACT
Prof. Atanas Popov (Project Coordinator), [email protected]

CALL
FETHPC-01-2018

PROJECT TIMESPAN

01/06/2019 - 30/11/2021


LEXIS – Large-scale EXecution for Industry & Society

The ever-increasing quantity of data generated by modern industrial and business processes poses an enormous challenge for organisations seeking to glean knowledge and understanding from the data. Combinations of HPC, Cloud and Big Data technologies are key to meeting the increasingly diverse needs of large and small organisations alike. However, access to powerful computing platforms for SMEs, which has been difficult due to both technical and financial considerations, may now be possible.

The LEXIS (Large-scale EXecution for Industry & Society) project is building an advanced engineering platform at the confluence of HPC, Cloud and Big Data. To run application workflows, including Big Data analytics and simulations, it leverages large-scale geographically-distributed resources from the existing HPC infrastructure and augments them with Cloud services. Driven by the requirements of several pilot test cases, the LEXIS platform relies on best-in-class data management solutions (EUDAT) and advanced, distributed orchestration solutions (TOSCA), augmenting them with new, efficient hardware and platform capabilities (e.g., in the form of Data Nodes and federation, usage monitoring and accounting/billing support). Thus, LEXIS realises an innovative solution aimed at stimulating the interest of European industry and at creating an ecosystem of organisations that benefit from the LEXIS platform and its well-integrated HPC, HPDA and Data Management solutions.

The consortium is thus developing a demonstrator with a significant Open Source dimension, including validation, tests and documentation. Platform validation is in progress with large-scale pilots: applications from industrial and scientific sectors (Aeronautics, Earthquake and Tsunami, Weather and Climate), where significant improvements in KPIs, including job execution time and solution accuracy, are anticipated. To further stimulate adoption of the LEXIS framework and increase stakeholders' engagement with the targeted pilots, the project consortium has organised an Open Call for platform validation, attracting more use cases.

LEXIS is promoting its platform to the HPC, Cloud and Big Data sectors, maximising impact through targeted and qualified communication. The project consortium brings together partners with the skills and experience to deliver a complex, multi-faceted project spanning a range of complex technologies across seven European countries, including large industry, flagship HPC centres, industrial and scientific compute pilot users, technology providers and SMEs.

HPC and Big Data enabled Large-scale Test-beds and Applications

lexis-project.eu
Twitter: @LexisProject
Facebook: @LEXIS2020
LinkedIn: @lexis-project
YouTube: bit.ly/youtube_LEXIS

COORDINATING ORGANISATION
IT4Innovations, VSB – Technical University of Ostrava, Czech Republic

OTHER PARTNERS
• Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, Germany
• Atos (Bull SAS), France
• Bayerische Akademie der Wissenschaften / Leibniz Rechenzentrum der BAdW, Germany
• Bayncore Labs Ltd, Ireland
• CEA, Commissariat à l'Energie Atomique et aux énergies alternatives, France
• CYCLOPS Labs GmbH, Switzerland
• Centro Internazionale in Monitoraggio Ambientale - Fondazione CIMA, Italy
• ECMWF, European Centre for Medium-Range Weather Forecasts, United Kingdom
• EURAXENT, United Kingdom
• GE AVIO SRL, Italy
• Helmholtz Zentrum Potsdam Deutsches Geoforschungs Zentrum GFZ, Germany
• Irish Centre for High-End Computing ICHEC, National University of Ireland Galway, Ireland
• Information Technology for Humanitarian Assistance Cooperation and Action, Italy
• LINKS Foundation, Italy
• NUMTECH SARL, France
• OUTPOST24, France
• Teseo Eiffage Energie Systèmes, Italy

CONTACT
Jan Martinovic, [email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN
01/01/2019 - 30/12/2021


LIGATE – LIgand Generator and portable drug discovery platform AT Exascale

Today the digital revolution is having a dramatic impact on the pharmaceutical industry and the entire healthcare system. The implementation of machine learning, extreme-scale computer simulations, and big data analytics in the drug design and development process offers an excellent opportunity to lower the risk of investment and reduce the time to patent and time to patient.

LIGATE aims to integrate and co-design best-in-class European open-source components together with European Intellectual Properties (whose development has already been co-funded by previous Horizon 2020 projects). It will support Europe in keeping worldwide leadership on Computer-Aided Drug Design (CADD) solutions, exploiting today's high-end supercomputers and tomorrow's exascale resources, while fostering European competitiveness in this field. The project will enhance the CADD technology of the drug discovery platform EXSCALATE.

The proposed LIGATE solution makes it possible to deliver the results of a drug design campaign with the highest speed along with the highest accuracy. This predictability, together with the full automation of the solution and the availability of exascale systems, will allow a full in-silico drug discovery campaign to run in less than one day, to respond promptly, for example, to a worldwide pandemic crisis. The platform will also support European projects in repurposing drugs, natural products and nutraceuticals with therapeutic indications to answer high unmet medical needs like rare, metabolic, neurological and cancer diseases, as well as emerging ones such as new non-infective pandemics.

Since the evolution of HPC architectures is heading toward specialization and extreme heterogeneity, including future exascale architectures, the LIGATE solution also focuses on code portability, with the possibility to deploy the CADD platform on any available type of architecture in order to avoid hardware lock-in.

The project plans to make the platform available and open to support the discovery of novel treatments to fight virus infections and multidrug-resistant bacteria. The project will also make the outcome of a final simulation available to the research community.


Industrial software codes for extreme scale computing environments and applications

www.ligateproject.eu
Twitter: @ligateproject
LinkedIn: @ligateproject
Facebook: @ligateproject

COORDINATING ORGANISATION Dompé farmaceutici S.p.A., Italy

OTHER PARTNERS
• Politecnico di Milano, Italy
• CINECA, Consorzio Interuniversitario, Italy
• Kungliga Tekniska Högskolan, Sweden
• Università degli Studi di Salerno (UNISA), Italy
• Universität Innsbruck, Austria
• E4 Computer Engineering SpA, Italy
• Chelonia SA, Switzerland
• tofmotion GmbH, Austria
• IT4Innovations, VSB – Technical University of Ostrava, Czech Republic
• Universität Basel, Switzerland

CONTACT [email protected]

CALL

EuroHPC-03-2019

PROJECT TIMESPAN

01/01/2021 - 31/12/2023


MICROCARD – Numerical modelling of cardiac electrophysiology at the cellular scale

The MICROCARD project will develop an exascale simulation platform to study the mechanisms of cardiac arrhythmia with models that represent the heart cell by cell.

Cardiac arrhythmia, which are among the most frequent causes of death, are disorders of the electrical synchronization system of the heart. Numerical models of this complex system are already highly sophisticated and widely used, but to match observations in aging and diseased hearts they need to move from a continuum approach to a representation of individual cells and their interconnections. This implies a different, harder numerical problem and a 10,000-fold increase in problem size, and consequently a transition from tier-1 to exascale systems. Moreover, the project aims to limit its climate impact by developing energy-efficient code, and operates in a domain where numerical simulation is all but commonplace.
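In such models, the electrical state of each cell's membrane evolves according to a reaction equation of the classical form (a generic membrane model shown for illustration, not necessarily the project's exact formulation):

\[ C_m \frac{\mathrm{d} V_m}{\mathrm{d} t} = -I_{\mathrm{ion}}(V_m, \mathbf{s}) + I_{\mathrm{stim}} \]

where C_m is the membrane capacitance, V_m the transmembrane voltage, I_ion the sum of non-linear ionic currents with gating variables s, and I_stim an applied stimulus. In a cell-by-cell model, billions of such units are coupled through gap junctions and the extracellular space, which is what produces the much harder numerical problem described above.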


To meet these challenges the platform is co-designed by HPC experts, numerical scientists, biomedical engineers, and biomedical scientists, from academia and industry. We develop numerical schemes suitable for exascale parallelism together with matching linear-system solvers and preconditioners, and a dedicated compiler to translate high-level model descriptions into optimized, energy-efficient system code for heterogeneous computing systems. The code will be resilient to hardware failures and will use an energy-aware task placement strategy.

At the same time, we develop highly parallel mesh generation software to build the extremely large and complex models that we will need for our simulations. These meshes will be based on large volumes of confocal microscopy data that were recently acquired with the latest clearing and labelling techniques.

Our simulation platform will be applied to real-life use cases and will be made accessible for a wide range of users. It will be adaptable to similar biological systems such as nerves, and some of the components and techniques will be reusable in a wide range of applications.

MICROCARD is coordinated by the IHU Liryc, an institute dedicated to research on cardiac arrhythmia that is part of the University of Bordeaux (France) and its university hospital. The project also has strong ties with the Institute for Experimental Cardiovascular Medicine in Freiburg (Germany), and several other clinical and experimental research groups. Together with an end-user advisory board composed of prominent experimental and clinical cardiologists, these links will help us to have a real impact on cardiology research and ultimately on patient care.

HPC and data centric environments and application platforms

microcard.eu

COORDINATING ORGANISATION
Université de Bordeaux, France

OTHER PARTNERS
• Université de Strasbourg, France
• Simula Research Laboratory AS, Norway
• Università degli Studi di Pavia, Italy
• USI: Università della Svizzera italiana, Switzerland
• KIT - Karlsruhe Institute for Technology, Germany
• Konrad-Zuse-Zentrum für Informationstechnik Berlin, Germany
• MEGWARE Computer Vertrieb und Service GmbH, Germany
• NumeriCor GmbH, Austria
• Orobix srl, Italy

CONTACT
Mark Potse, [email protected] (scientific coordinator)

CALL
EuroHPC-02-2019

PROJECT TIMESPAN
01/04/2021 - 30/09/2024


NextSim – CODA: Next generation of industrial aerodynamic simulation code

NextSim partners, as fundamental European players in Aeronautics and Simulation, recognise that there is a need to increase the capabilities of current Computational Fluid Dynamics tools for aeronautical design by re-engineering them for extreme-scale parallel computing platforms.

The backbone of NextSim is centred on the fact that, today, the capabilities of leading-edge emerging HPC architectures are not fully exploited by industrial simulation tools. Current state-of-the-art industrial solvers do not take sufficient advantage of the immense capabilities of new hardware architectures, such as streaming processors or many-core platforms. A combined research effort focusing on algorithms and HPC is the only way to make it possible to develop and advance simulation tools to meet the needs of the European aeronautical industry.

NextSim will focus on the development of the numerical flow solver CODA (Finite Volume and high-order discontinuous Galerkin schemes), which will be the new reference solver for aerodynamic applications inside the AIRBUS group, having a significant impact in the aeronautical market. To demonstrate NextSim's market impact, AIRBUS has defined a series of market-relevant problems. The numerical simulation of those problems is still a challenge for the aeronautical industry, and their solution, at the required accuracy and an affordable computational cost, is still not possible with current industrial solvers.

Following this idea, three additional working areas are proposed in NextSim: algorithms for numerical efficiency, algorithms for data management, and the efficient implementation of those algorithms on the most advanced HPC platforms.

Finally, NextSim will provide access to project results through the "mini-apps" concept: small pieces of software, seeking synergies with open-source components, which demonstrate the use of the novel mathematical methods and algorithms developed in CODA and which will be freely distributed to the scientific community.
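Both discretisations in CODA start from the integral form of the flow conservation laws; for a finite-volume cell Ω_i this reads (a textbook statement, independent of CODA's specific implementation):

\[ \frac{\mathrm{d}}{\mathrm{d}t} \int_{\Omega_i} \mathbf{U} \, \mathrm{d}V + \oint_{\partial \Omega_i} \mathbf{F}(\mathbf{U}) \cdot \mathbf{n} \, \mathrm{d}S = 0 \]

where U collects the conserved flow variables and F their fluxes through the cell boundary. Discontinuous Galerkin schemes generalise this by evolving a high-order polynomial representation of U inside each cell, trading more arithmetic per degree of freedom for higher accuracy; that dense, local work is precisely what streaming processors and many-core platforms execute well.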


Industrial software codes for extreme scale computing environments and applications

nextsimproject.eu
Twitter: @NextSim_EU
LinkedIn: @nextsim-eu

COORDINATING ORGANISATION Barcelona Supercomputing Center (BSC), Spain

OTHER PARTNERS
• Universidad Politécnica de Madrid, Spain
• CIMNE - Centre Internacional de Mètodes Numèrics a l'Enginyeria, Spain
• Office National d'Etudes et de Recherches Aérospatiales (ONERA), France
• Deutsches Zentrum für Luft- und Raumfahrt (DLR), Germany
• Centre européen de recherche et de formation avancée en calcul scientifique (CERFACS), France
• Airbus Operations SAS, France

CONTACT

Oriol Lehmkuhl, [email protected]

Jordi Pons, [email protected]

CALL EuroHPC-03-2019

PROJECT TIMESPAN 01/03/2021 - 29/02/2024


OPTIMA – Optimizing Industrial Applications for Heterogeneous HPC systems

In order to support the expanding demands for processing power from emerging HPC applications, within a pragmatic energy envelope, future HPC systems will incorporate accelerators. One promising approach towards this end is the utilization of FPGAs: since these devices can be reconfigured at any time so as to implement tailor-made application accelerators, their energy efficiency and/or performance is, in most cases, much higher than that of CPUs and GPUs.
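The efficiency claims below are stated in terms of the Energy Delay Product (EDP), a standard metric (the definition here is the textbook one, not OPTIMA-specific):

\[ \mathrm{EDP} = E \times T = \bar{P} \times T^2 \]

where E is the energy consumed by a run, T its execution time and P̄ the average power draw; lower values are better, and the squared time term rewards accelerators that are fast as well as frugal. For instance, a device that halves both average power and runtime improves the EDP by a factor of 8 (2 from the power, 4 from the squared runtime).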


OPTIMA is an SME-driven project aiming to port and optimize a number of industrial applications, as well as a set of open-source libraries utilized in at least 3 different application domains, to two novel FPGA-populated HPC systems: a Maxeler MPC-X node located at the Jülich Supercomputing Centre and the ExaNeSt prototype located at the Institute of Computer Science of FORTH. Several innovative programming environments, toolchains and runtimes will be utilized, including Maxeler's MaxCompiler, Xilinx Runtime (XRT) and Fraunhofer's GASPI/GPI-2. It is expected that the applications and the libraries will be executed on those heterogeneous HPC systems at significantly higher energy efficiency as described by the Energy Delay Product (EDP) metric; in particular, the EDP of the OPTIMA applications and libraries, when executed on the targeted FPGA-based HPC systems, is expected to be more than 10x better than that achieved on CPU-based systems and more than 3x better than that of GPU-based ones.

The main outcomes of OPTIMA will be that:
• the participating SMEs will gain a significant advantage, since they will be able to execute their applications much more efficiently than the competition,
• it will be further proved that Europe is at the forefront of developing efficient FPGA-populated HPC systems and applications/libraries taking advantage of them,
• the open-source libraries as well as the open-source applications developed within OPTIMA will allow third parties to easily target FPGA-based HPC systems for their application developments,
• there will be an open-to-use HPC infrastructure supported by a specially formed sustainability body.

Industrial software codes for extreme scale computing environments and applications

optima-hpc.eu
Twitter: @optima_hpc
LinkedIn: @optima-hpc

COORDINATING ORGANISATION
Telecommunication Systems Research Institute, Greece

OTHER PARTNERS
• Cyberbotics SARL, Switzerland
• Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V., Germany
• Exapsys | Exascale Performance Systems, Greece
• Institute of Communications and Computer Systems, Greece
• M3E S.r.l., Italy
• Maxeler IoT-Labs B.V., Netherlands
• Forschungszentrum Jülich, Germany
• EnginSoft SpA, Italy
• Appentra Solutions SL, Spain

CONTACT
[email protected]

CALL
EuroHPC-03-2019

PROJECT TIMESPAN
01/03/2021 - 30/11/2023


Phidias – Prototype of HPC/Data Infrastructure for On-demand Services

In today's world, more and more data are constantly being generated, changing the nature of computing, with an increasing number of data-intensive critical applications.

As the Council adopted a regulation establishing the European High Performance Computing Joint Undertaking (EuroHPC) in July 2021, the regulation paves the way for the development within Europe of the next generation of supercomputers, and strengthens research and innovation capabilities, the development of a supercomputing infrastructure ecosystem and the acquisition of world-class supercomputers.

The PHIDIAS project addresses a big challenge for Earth sciences. Today, many datasets are disseminated, and researchers are not always aware of their existence, their level of quality and, more generally, how to access and reuse them for analysis, in combination with their own datasets, to produce new information and knowledge.

The PHIDIAS project, funded by the European Union's Connecting Europe Facility (CEF), is built within this framework, aiming to become a reference point for the Earth science community, enabling them to discover, manage and process spatial and environmental data through the development of a set of high-performance computing (HPC) based services and tools exploiting large satellite datasets.

The project foresees the development of three use cases:
• Intelligent screening of a large amount of satellite data for the detection and identification of anomalous atmospheric composition events;
• Processing on-demand services for environmental monitoring; and
• Improving the use of cloud services for marine data management.


"With the efficient support of the consortium, I am working to ensure that the PHIDIAS objectives are being consistently pursued, to deliver a catalogue that will implement interoperable services for the discovery, access and processing of data, guaranteeing the largest possible degree of reusability of data and the improvement of the FAIRisation of satellite and environmental datasets. Essentially, paving the way and making life easier for the next generation of HPC and the computational scientific community."

Boris Dintrans, Director of CINES and PHIDIAS project coordinator

By federating different data sources, PHIDIAS will provide, on the one hand, an interoperable and easy-to-use catalogue of environmental resources, accessible and browsable by anybody, and, on the other hand, a platform on top of which researchers and practitioners can develop new knowledge and new business models.

Synergies between Earth science communities, high-performance computing and data-driven operations have the potential to enable novel innovation opportunities and pave the way for the realisation of brand-new applications and services. PHIDIAS foresees a large social impact in terms of overall environmental monitoring capabilities, enabled by the pooling of different stakeholders, namely researchers, public authorities, private players and citizen scientists.

Finally, it is also worth stressing that the social dimension cannot be underestimated. It is, however, our standpoint that the transparent access to environmental data enabled by PHIDIAS can allow us to provide due care to the players and countries that could be negatively affected by the low-carbon economy.

www.phidias-hpc.eu
Twitter: @PhidiasHpc
LinkedIn: @phidias-hpc

COORDINATING ORGANISATION
Centre Informatique National de l'Enseignement Supérieur (CINES), France

OTHER PARTNERS
• Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), France
• Centre National de la Recherche Scientifique (CNRS), France
• IT Centre for Science Ltd. (CSC), Finland
• Geomatys, France
• Institut de Recherche pour le Développement (IRD), France
• Institut Français de Recherche pour l'exploitation de la Mer (IFREMER), France
• Mariene Informatie Service MARIS BV (MARIS), Netherlands
• Neovia Innovation, France
• SPASCIA, France
• Finnish Environment Institute (SYKE), Finland
• Trust-IT, Italy
• Université de Liège, Belgium

CONTACT
Boris Dintrans (Project Coordinator)
[email protected]
[email protected]

CALL
CEF-TC-2018-5

PROJECT TIMESPAN
01/09/2019 - 30/09/2022


REGALE – An open architecture to equip next generation HPC applications with exascale capabilities

Supercomputers in the near future will be much more than just larger and more powerful machines than the ones used today. We are approaching exascale systems: High Performance Computers that will be able to perform 10¹⁸ floating point operations per second. With that, some paradigms of HPC will become obsolete, and energy consumption will become one of the key factors for the systems to come.

These systems will be very heterogeneous and complex at node level. They will leverage very different technologies, such as general-purpose CPUs as well as GPU and FPGA accelerators. For energy-efficient computing without significant trade-offs on the performance side, applications will have to leverage these technologies dynamically. To achieve this, a new software stack is needed.


One of the new challenges for exascale is the total cost of ownership (TCO) and, more specifically: the operating costs of exascale systems; the environmental footprint associated with the energy and power consumed, not only by the machine itself but also by the infrastructure supporting it; and power and performance inhomogeneity stemming from manufacturing variability. The current state of practice in system-wide resource management is rather passive, in the sense that it merely serves user resource requests and applies brute-force enforcement of administrator-driven policies. What is needed is a holistic coordinator able to support power, energy, performance and throughput optimizations across the whole system in a scalable manner.

In the race to greater performance using less energy, the focus has often been on hardware. However, system software can now play a crucial role towards a controlled balance between application performance, power constraints and energy consumption, taking advantage of software interfaces for hardware configuration without neglecting application needs. REGALE intends to realize these benefits under real operational environments by equipping the HPC system software stack with the mechanisms and policies for effective performance, power and energy control and optimization, bridging the gap between research and practice.

The REGALE project started in April 2021. It aims to build a software stack that is not only capable of bringing efficiency to tomorrow's HPC systems, but is also open and scalable for massive supercomputers. REGALE brings together leading supercomputing stakeholders, prestigious academics, top European supercomputing centres and end users from critical target sectors, covering the entire value chain in system software and applications for extreme-scale technologies.

HPC and data centric environments and application platforms

regale-project.eu
Twitter: @EuRegale

COORDINATING ORGANISATION
Institute of Communication and Computer Systems (ICCS), Greece

OTHER PARTNERS
• Technische Universität München (TUM), Germany
• Leibniz Supercomputing Centre, Germany
• ANDRITZ HYDRO GMBH, Austria
• Université Grenoble Alpes, France
• Atos (Bull SAS), France
• Barcelona Supercomputing Centre (BSC), Spain
• CINECA, Consorzio Interuniversitario, Italy
• E4 Computer Engineering SpA, Italy
• National Technical University of Athens - NTUA, Greece
• Ryax Technologies, France
• Electricité de France (EDF), France
• Ubitech, Greece
• SCiO Private Company, Greece
• TWT GmbH Science & Innovation, Germany
• Alma Mater Studiorum - Università di Bologna, Italy

CONTACT
Georgios Goumas, [email protected]

CALL
EuroHPC-02-2019

PROJECT TIMESPAN
01/04/2021 - 31/03/2024


SCALABLE – SCAlable LAttice Boltzmann Leaps to Exascale

In SCALABLE, eminent industrial and academic partners team up to improve the performance, scalability, and energy efficiency of an industrial LBM-based computational fluid dynamics (CFD) software.

Lattice Boltzmann methods (LBM) have already evolved to become trustworthy alternatives to conventional CFD. In several engineering applications, they are shown to be roughly an order of magnitude faster than Navier-Stokes approaches in a fair comparison and in comparable scenarios. In the context of EuroHPC, LBM is especially well suited to exploit advanced supercomputer architectures through vectorization, accelerators, and massive parallelization.

In the public-domain research code waLBerla, superb performance and outstanding scalability have been demonstrated, reaching more than a trillion (10¹²) lattice cells already on petascale systems. waLBerla's performance excels because of its uncompromising, unique, architecture-specific automatic generation of optimized compute kernels, together with carefully designed parallel data structures. However, it is not suited to industrial applications, due to the lack of a geometry engine and of user friendliness for non-HPC experts. On the other hand, the industrial CFD software LaBS already has such industrial capabilities at a proven high level of maturity, but its performance still leaves room for improvement.
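For readers unfamiliar with the method, the core LBM update in its standard single-relaxation-time (BGK) form (a textbook expression, not the specific kernels of waLBerla or LaBS) is:

\[ f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\; t + \Delta t) = f_i(\mathbf{x}, t) - \frac{\Delta t}{\tau} \left[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right] \]

where the particle distributions f_i stream along a fixed set of lattice directions c_i and relax toward a local equilibrium f_i^eq with relaxation time τ. Because every cell update touches only nearest neighbours, the scheme maps naturally onto vector units, GPUs and distributed memory, which explains its suitability for exascale architectures.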

The Karolina supercomputer (IT4Innovations, VSB-TUO) – 3 CFD numerical simulations on aircraft from various perspectives


SCALABLE will make the most of these two existing CFD tools (waLBerla and LaBS, the ProLB software), notably by transferring cutting-edge technology and aiming to break down the silos between the scientific computing world and that of physical flow modelling, in order to deliver improved efficiency and scalability for the upcoming European exascale systems.

The project outcomes will directly benefit the European industry, as confirmed by the active involvement of Renault and Airbus in the SCALABLE project, and will additionally contribute to fundamental research.

Industrial software codes for extreme scale computing environments and applications

scalable-hpc.eu
Twitter: @scalable_hpc
LinkedIn: @scalable-hpc

COORDINATING ORGANISATION
CS Group, France

OTHER PARTNERS
• Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
• Centre européen de recherche et de formation avancée en calcul scientifique (CERFACS), France
• Forschungszentrum Jülich, Germany
• Neovia Innovation, France
• IT4Innovations, VSB – Technical University of Ostrava, Czech Republic
• AIRBUS Operations GmbH, Germany
• RENAULT SAS, France

CONTACT Jérôme Texier, Project Coordinator, [email protected]

CALL EuroHPC-03-2019

PROJECT TIMESPAN

01/01/2021 - 31/12/2023



Ecosystem development


CASTIEL – Connecting the countries' NCCs to each other and the European Ecosystem

WHAT IS CASTIEL?

The Coordination and Support Action (CSA) CASTIEL drives cross-European networking activities between National Competence Centres (NCCs) in HPC-related topics addressed through the EuroCC project. CASTIEL emphasises training, industrial interaction and cooperation, business development, and raising awareness of high-performance computing (HPC)-related technologies and expertise. As a hub for information exchange and training, CASTIEL promotes networking among NCCs and strengthens the exchange of ideas by developing best practices. The identification of synergies, challenges and possible solutions is implemented through the close cooperation of the NCCs at a European level.

WHAT IS THE RELATIONSHIP BETWEEN CASTIEL AND EUROCC?

CASTIEL, the Coordination and Support Action (CSA) closely associated with EuroCC, combines the National Competence Centres (NCCs) formed in EuroCC into a pan-European network. The aggregation of HPC, high-performance data analytics (HPDA), and artificial intelligence (AI) competencies demonstrates the global competitiveness of the EU partners. The two activities are the beginning of a strategic positioning of European HPC competence and will contribute to the comprehensive independence of the mentioned technologies in Europe.

EuroHPC Innovating and Widening the HPC use and skills base

eurocc-access.eu
Twitter: @CASTIEL_project
LinkedIn: @CASTIEL-project

COORDINATING ORGANISATION
High-Performance Computing Center Stuttgart (USTUTT/HLRS), Germany

OTHER PARTNERS
• Gauss Centre for Supercomputing e.V. (GCS), Germany
• Consorzio Interuniversitario (CINECA), Italy
• TERATEC, France
• Barcelona Supercomputing Center – Centro Nacional De Supercomputación (BSC), Spain
• Partnership for Advanced Computing in Europe AISBL (PRACE), Belgium

CONTACT Dr.-Ing. Bastian Koller, [email protected]

CALL EuroHPC-04-2019

PROJECT TIMESPAN 01.09.2020 – 31.08.2022

EuroCC – European Competence Centres

Within the EuroCC project under the European Union's Horizon 2020 (H2020), participating countries are tasked with establishing a single National Competence Centre (NCC) in the area of high-performance computing (HPC) in their respective countries. These NCCs coordinate activities in all HPC-related fields at the national level and serve as a contact point for customers from industry, science, (future) HPC experts, and the general public alike. The EuroCC project is funded 50 percent through H2020 (EuroHPC Joint Undertaking [JU]) and 50 percent through national funding programmes within the partner countries.

WHAT ARE THE NCCS' TASKS?

The tasks of the National Competence Centres include mapping existing HPC, high-performance data analytics (HPDA) and artificial intelligence (AI) competences; HPC training; the identification of HPC, HPDA and AI experts; and the coordination of services such as business development, application support, technology transfer, training and education, and access to expertise. Researchers from academia and industry both benefit from this concentration of competence, and more efficient research ultimately benefits state and national governments and society as a whole.

The National Competence Centres shall act as the first points of contact for HPC, HPDA and AI in their respective countries. The NCCs will bundle information and provide experts, offering access to expertise and to HPC training and education.

The overall objective of EuroCC, to create a European base of excellence in HPC by filling existing gaps, comes with a clear vision: offering a broad portfolio of services in all HPC, HPDA, and AI-related areas, tailored to the needs of industry, science, and public administrations.

COLLABORATION BETWEEN EUROCC AND CASTIEL

EuroCC works closely together with the Coordination and Support Action (CSA) CASTIEL, which links the national centres throughout Europe and ensures successful transfer and training possibilities between the participating countries.


EuroHPC Innovating and Widening the HPC use and skills base

eurocc-access.eu
Twitter: @EuroCC_project
LinkedIn: @EuroCC
Newsletter: subscribe on eurocc-access.eu

COORDINATING ORGANISATION
High-Performance Computing Center Stuttgart (USTUTT/HLRS), Germany

OTHER PARTNERS
• Gauss Centre for Supercomputing (GCS), Germany
• Institute of Information and Communication Technologies at Bulgarian Academy of Sciences (IICT-BAS), Bulgaria
• Universität Wien (UNIVIE), Austria
• University of Zagreb University Computing Centre (SRCE), Croatia
• Computation-based Science and Technology Research Centre, The Cyprus Institute (CaSToRC-CyI), Cyprus
• IT4Innovations, VSB – Technical University of Ostrava, Czech Republic
• Technical University of Denmark (DTU), Denmark
• University of Tartu HPC Center (UTHPC), Estonia
• CSC – IT Center for Science Ltd (CSC), Finland
• National Infrastructures for Research and Technology S.A. (GRNET S.A.), Greece
• Kormányzati Informatikai Fejlesztési Ügynökség (KIFÜ), Hungary
• National University of Ireland, Galway (NUI Galway), Ireland
• CINECA – Consorzio Interuniversitario (CINECA), Italy
• Vilnius University (LitGrid-HPC), Lithuania
• Riga Technical University (RTU), Latvia
• UNINETT Sigma2 AS (Sigma2), Norway
• Norwegian Research Centre AS (NORCE), Norway
• SINTEF AS (SINTEF), Norway
• Academic Computer Centre Cyfronet AGH (CYFRONET), Poland
• Fundação para a Ciência e a Tecnologia (FCT), Portugal
• National Institute for Research-Development in Informatics – ICI Bucharest (ICIB), Romania
• Academic and Research Network of Slovenia (ARNES), Slovenia
• Barcelona Supercomputing Center – Centro Nacional de Supercomputación (BSC), Spain
• Uppsala University (UU), Sweden
• Eidgenössische Technische Hochschule Zürich (ETH Zurich), Switzerland
• The Scientific and Technological Research Council of Turkey (TUBITAK), Turkey
• The University of Edinburgh (EPCC), United Kingdom
• TERATEC (TERATEC), France
• SURFSARA BV (SURFSARA), Netherlands
• Centre de recherche en aéronautique asbl (Cenaero), Belgium
• Luxinnovation GIE (LXI), Luxembourg
• Center of Operations of the Slovak Academy of Sciences (CC SAS), Slovak Republic
• University of Ss. Cyril and Methodius, Faculty of computer science and engineering (UKIM), Republic of North Macedonia
• Háskóli Íslands – University of Iceland (UI-CE), Iceland
• University of Donja Gorica (UDG), Montenegro

CONTACT
Dr.-Ing. Bastian Koller, [email protected]

CALL
EuroHPC-04-2019

PROJECT TIMESPAN
01.09.2020 – 31.08.2022

FF4EUROHPC – Connecting SMEs with HPC

BASE PILLARS OF THE FF4EUROHPC PROJECT

FF4EuroHPC has the general objective of supporting the EuroHPC initiative to promote the industrial uptake of high-performance computing (HPC) technology, in particular by small and medium-sized enterprises (SMEs), and thus increase the innovation potential of European industry. In order to achieve that general objective, FF4EuroHPC has defined a set of specific objectives.

THE FF4EUROHPC GENERAL OBJECTIVES ARE:
• Realise a portfolio of business-oriented application "experiments" that are driven by the SME end-users' needs and executed by teams covering all required actors in the value chain, with the innovation potential for HPC use and service provision of utmost priority.


• Support the future national HPC Competence Centres (resulting from the call EuroHPC-04-2019) to collaborate more effectively with SMEs, through involvement in the FF4EuroHPC experiment portfolio and integration in related outreach activities.
• Support the participating SMEs in the establishment of HPC-related innovation, either by using HPC infrastructures or services for their business needs or by providing new HPC-based services.
• Facilitate the widening of industrial HPC user communities and service providers in Europe by delivering compelling success stories for the use of HPC by SMEs, ensuring maximal awareness via communication and dissemination in collaboration with relevant Digital Innovation Hubs (DIHs) and industry associations.

The open calls target highest-quality experiments involving innovative, agile SMEs and putting forward work plans built around innovation targets arising from the use of advanced HPC services. Proposals are sought that address business challenges from European SMEs from varied application domains, with strong preference given to engineering and manufacturing, or to sectors able to demonstrate fast economic growth or particular economic impact for Europe. The highest priority is given to proposals directly addressing the business challenges of manufacturing SMEs.

EuroHPC Innovating and Widening the HPC use and skills base

ff4eurohpc.eu
Twitter: @FF4EuroHPC
LinkedIn: @FF4EuroHPC
Newsletter: subscribe on ff4eurohpc.eu

COORDINATING ORGANISATION
High-Performance Computing Center Stuttgart (USTUTT/HLRS), Germany

OTHER PARTNERS
• Scapos AG, Germany
• Consorzio Interuniversitario (CINECA), Italy
• Teratec (TERATEC), France
• Fundacion publica gallega centro tecnologico de supercomputacion de galicia (CESGA), Spain
• ARCTUR racunalniski inzeniring doo (ARCTUR), Slovenia

CONTACT
Dr.-Ing. Bastian Koller, [email protected]

CALL
EuroHPC-04-2019

PROJECT TIMESPAN
01.09.2020 – 31.08.2023


HiPEAC – High Performance Embedded Architecture and Compilation

HiPEAC is a coordination and support action (CSA) that aims to structure, connect and cross-fertilise the European academic and industrial research and innovation communities in Embedded Computing and Cyber-Physical Systems: (i) by attracting members from the Cyber-Physical Systems community, industry and the innovation community; (ii) by organising quarterly networking activities to connect the different communities; (iii) by attracting computing talent and retaining it in Europe; (iv) by producing a high-quality vision on the future of computing systems in Europe; (v) by producing an impact analysis; and (vi) by professionally disseminating the research outcomes in and beyond the European computing systems community.

Extract from the HiPEAC comic "Past, present and future of the internet and digitally-augmented humanity – A HiPEAC vision"

This HiPEAC CSA project is the continuation of five successful projects with the same name (HiPEAC 1-5). HiPEAC will leverage the existing community, the expertise and the set of instruments that have been developed since 2004, and work on the objectives of call ICT-01-2019, mainly in the field of Cyber-Physical Systems of Systems (CPSoS).

The overall approach of the HiPEAC CSA is that it brings together all actors and stakeholders in the CPSoS and computing systems community in Europe in one well-managed structure where they can interact, disseminate and share information, transfer knowledge and technology, exchange human resources, think about future challenges, experiment with ideas to strengthen the community, etc.

The HiPEAC CSA will support its members and projects with tasks that are too difficult or complex to carry out individually, like vision building, professional communications, recruitment and event management at the European level, and will cross-fertilise European academic and industrial research. By offering such services, a burden is taken away from the projects and members. They can then focus on the content, and the impact of their efforts is amplified.

European network in computing systems

hipeac.net
Twitter: @hipeac
LinkedIn: hipeac.net/linkedin
YouTube: hipeac.net/youtube

COORDINATING ORGANISATION
Universiteit Gent, Belgium

OTHER PARTNERS
• Barcelona Supercomputing Centre (BSC), Spain
• Chalmers Tekniska Högskola AB, Sweden
• Idryma Technologias Kai Erevnas, Greece
• Institut National de Recherche en Informatique et en Automatique (INRIA), France
• The University of Edinburgh (EPCC), United Kingdom
• Universita di Pisa, Italy
• Rheinisch-Westfälische Technische Hochschule Aachen (RWTH-AACHEN), Germany
• Commissariat à l'énergie atomique et aux énergies alternatives (CEA), France
• Arm Ltd, United Kingdom
• THALES, France
• Artemisia Vereniging, Netherlands

CONTACT
Koen De Bosschere, [email protected]

CALL
ICT-01-2019

PROJECT TIMESPAN
01/12/2019 - 28/02/2023


HPC-EUROPA3
Pan-European Research Infrastructure on High Performance Computing

TRANSNATIONAL ACCESS

The core activity of HPC-Europa3 is the Transnational Access programme, which funds short collaborative research visits, in any scientific domain, using High Performance Computing (HPC). Visitors gain access to some of Europe's most powerful HPC systems, as well as technical support from the relevant HPC centre to enable them to make the best use of these facilities.

Applicants identify a "host" researcher working in their own field of research, and are integrated into the host department during their visit. Visits can be made to any research institute in Finland, Germany, Greece, Ireland, Italy, the Netherlands, Spain, Sweden or the UK. (Note that project partner CNRS does not participate in the Transnational Access activity, and therefore it is not possible to visit research groups in France.) HPC-Europa3 visits can last between 3 and 13 weeks.

The programme is open to researchers of any level, from academia or industry, working in any area of computational science. Priority is given to researchers working in the EU and Associated States (see http://bit.ly/AssociatedStates), but limited places are available for researchers from third countries.

HPC-Europa3 has introduced a "Regional Access Programme" to encourage applications from researchers working in the Baltic and South-East Europe regions. It specifically targets researchers with little or no HPC experience who need more powerful computing facilities than they currently have, but not the most powerful systems offered by HPC-Europa3. Such researchers can apply to KTH-PDC in Sweden (Baltic region) or GRNET in Athens (South-East Europe), and will be given priority over more experienced applicants.

NETWORKING AND JOINT RESEARCH ACTIVITIES

The main HPC-Europa3 Networking Activity, External co-operation for enhancing the best use of HPC, aims to build stronger relationships with other European HPC initiatives, e.g. PRACE, ETP4HPC, and the HPC Centres of Excellence. The goal is to provide HPC services in a more integrated way across Europe. This activity also aims to increase awareness of HPC within SMEs: a series of workshops targeted at SMEs has been organised to encourage their uptake of HPC.


The Joint Research Activity, Container-as-a-service for HPC, aims to enable easy portability of end-user applications across different HPC centres, providing clear benefits for HPC-Europa3 visitors. This approach enables different applications and runtime environments to be supported simultaneously on the same hardware resources with no significant decrease in performance. It also provides the capability to fully capture and package all dependencies of an application, so that the application can be migrated to other machines, or preserved to enable a reproducible environment in the future.

The results from this JRA can be found in various deliverables, available at http://www.hpc-europa.eu/public_documents.
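As a concrete illustration, the sketch below shows the kind of container recipe such an approach builds on, written as a definition file for Singularity/Apptainer, a container runtime widely used on HPC systems. It is a minimal sketch under assumptions, not an extract from the project's deliverables: the base image, the installed packages and the my_solver application are hypothetical.

    Bootstrap: docker
    From: ubuntu:20.04

    %files
        # Copy the (hypothetical) application source from the host into the image
        my_solver.c /opt/my_solver.c

    %post
        # Capture every dependency inside the image: compiler and MPI stack
        apt-get update
        apt-get install -y build-essential openmpi-bin libopenmpi-dev
        # Build the application so that the binary and its libraries travel together
        cc -O2 -o /opt/my_solver /opt/my_solver.c

    %runscript
        # Entry point when the image is run; arguments are passed through
        exec /opt/my_solver "$@"

Built once (singularity build my_solver.sif my_solver.def), the resulting image can be executed unchanged (singularity run my_solver.sif) at any centre that provides the runtime, which is what makes the packaged application both portable and reproducible.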

www.hpc-europa.eu
Twitter: @hpceuropa3
Facebook: @hpceuropa
LinkedIn: @hpc-europa3-eu-project
YouTube: www.youtube.com

COORDINATING ORGANISATION
CINECA Consorzio Interuniversitario, Italy

OTHER PARTNERS
• BSC - Barcelona Supercomputing Centre, Spain
• CNRS - Centre National de la Recherche Scientifique, France
• CSC - IT Center for Science Ltd., Finland
• EPCC - The University of Edinburgh, United Kingdom
• GRNET - Ethniko Diktyo Erevnas Technologias AE, Greece
• HLRS - Universität Stuttgart, Germany
• KTH - Kungliga Tekniska Hoegskolan, Sweden
• ICHEC - National University of Ireland Galway, Ireland
• SURF BV, The Netherlands

CONTACT
[email protected]

CALL
INFRAIA-01-2016-2017

PROJECT TIMESPAN
01/05/2017 - 30/04/2022

Figure: Flow of a circular synthetic jet, represented using the Q-criterion and coloured by the magnitude of the velocity (from research carried out by HPC-Europa3 visitor Arnau Miró, Universitat Politècnica de Catalunya)

Index of on-going projects

ACROSS, 104
ADMIRE, 10
ASPIDE, 12
BioExcel-2, 72
CASTIEL, 144
ChEESE, 74
CoEC, 76
CompBioMed2, 78
CYBELE, 106
DCoMEX, 14
DEEP-EST, 16
DeepHealth, 108
DEEP-SEA, 18
E-CAM, 80
eFlows4HPC, 110
ENERXICO, 112
EoCoE-II, 82
EPEEC, 20
EPI, 22
EPiGRAM-HS, 26
eProcessor, 28
ESCAPE-2, 30
ESiWACE2, 84
EuroCC, 146
EuroExa, 32
European Processor Initiative, 22
EVOLVE, 114
EXA MODE, 116
EXA2PRO, 34
exaFOAM, 118
ExaQUte, 36
EXCELLERAT, 86
EXSCALATE4COV, 120
FF4EUROHPC, 148
FocusCoE, 70
HEROES, 122
HiDALGO, 88
HiPEAC, 150
HPC-EUROPA3, 152
HPCWE, 124
IO-SEA, 38
LEXIS, 126
LIGATE, 128
MAELSTROM, 40
MAESTRO, 42
MaX, 90
MEEP, 44
MICROCARD, 130
Mont-Blanc 2020, 46
NEASQC, 48
NextSim, 132
NOMAD, 92
OPTIMA, 134
PerMedCoE, 94
Phidias, 136
POP2, 96
RAISE, 98
RECIPE, 50
RED-SEA, 52
REGALE, 138
Sage2, 54
SCALABLE, 140
SODALITE, 56
SPARCITY, 58
TEXTAROSSA, 60
TIME-X, 62
TREX, 100
VECMA, 64
VESTEC, 66

Index of completed projects

Projects completed before 1st January 2021 are not featured in this edition of the Handbook, but you can find the descriptions of completed projects in the 2018, 2019 or 2020 Handbooks, available on our website:

Project name | Project description | Handbook edition
ALLScale | An Exascale Programming, Multi-objective Optimisation and Resilience Management Environment Based on Nested Recursive Parallelism | 2018
ANTAREX | AutoTuning and Adaptivity appRoach for Energy efficient eXascale HPC systems | 2018
AQMO | Air Quality and MObility | 2020
ARGO | WCET-Aware Parallelization of Model-Based Applications for Heterogeneous Parallel Systems | 2019
BioExcel 1 | Centre of Excellence for Biomolecular Research | 2019
COEGSS | Centre of Excellence for Global Systems Science | 2018
ComPat | Computing Patterns for High Performance Multiscale Computing | 2018
CompBioMed 1 | A Centre of Excellence in Computational Biomedicine | 2018
ECOSCALE | Energy-efficient heterogeneous COmputing at exaSCALE | 2019
EoCoE 1 | Energy oriented Centre of Excellence for computer applications | 2018
ESCAPE 1 | Energy-efficient SCalable Algorithms for weather Prediction at Exascale | 2018
ESIWACE 1 | Centre of Excellence in Simulation of Weather and Climate in Europe | 2019
EUROLAB-4-HPC 2 | Consolidation of European Research Excellence in Exascale HPC Systems | 2020
ExaFLOW | Enabling Exascale Fluid Dynamics Simulations | 2018
ExaHyPE | An Exascale Hyperbolic PDE Engine | 2019
ExaNeST | European Exascale System Interconnect and Storage | 2019
ExaNoDe | European Exascale Processor and Memory Node Design | 2019
ExCAPE | Exascale Compound Activity Prediction Engine | 2018
EXDCI-2 | European eXtreme Data and Computing Initiative | 2020
EXTRA | Exploiting eXascale Technology with Reconfigurable Architectures | 2018
greenFLASH | Green Flash, energy efficient high performance computing for real-time science | 2018
INTERTWINE | Programming Model INTERoperability ToWards Exascale (INTERTWinE) | 2018
MANGO | Exploring Manycore Architectures for Next-GeneratiOn HPC systems | 2018
MaX 1 | Materials design at the eXascale | 2019
M2DC | Modular Microserver DataCentre | 2019
Mont-Blanc 3 | European scalable and power efficient HPC platform based on low-power embedded technology | 2019
NEXTGenIO | Next Generation I/O for the Exascale | 2018
NLAFET | Parallel Numerical Linear Algebra for Future Extreme Scale Systems | 2018
NoMaD 1 | The Novel Materials Discovery Laboratory | 2018
OPRECOMP | Open Transprecision COmputing | 2020
POP 1 | Performance Optimisation and Productivity | 2018
PROCESS | PROviding Computing solutions for ExaScale ChallengeS | 2020
READEX | Runtime Exploitation of Application Dynamism for Energy-efficient eXascale computing | 2018
SAGE 1 | Percipient StorAGe for Exascale Data Centric Computing | 2018