European High-Performance Computing Projects
Handbook 2020

www.etp4hpc.eu
www.etp4hpc.eu/euexascale
: @ETP4HPC
Contact: [email protected]

Dear Reader,

I am told that the idea of this publication goes back to 2015, when ETP4HPC, now a partner in EuroHPC, was looking for a mechanism to provide a comprehensive overview of all projects in our arena. That first issue was a simple summary, photocopied and stapled manually.

The number of pages and the appearance of this year’s issue mirror the growth and maturity of our ecosystem: 50 on-going projects are listed in the current edition, and references are included to 30 projects that have been finalised. Also, the current project landscape is characterised by its diversity: from the early FETHPC projects, we have progressed to application-domain specific projects (e.g. the Centres of Excellence), integrated co-design projects, as well as test beds and processor development projects.

I am glad to note that the EuroHPC Joint Undertaking can claim part of the credit for the size and breadth of this effort. In the coming years, the role of our JU will become even more visible as more and more EuroHPC-funded projects get under way.

I would also like to emphasise the emergence of a new concept: the TransContinuum, which is being developed by ETP4HPC and other similar organisations. A number of projects already fit the definition of this term, which captures the role HPC plays in complex workflows in conjunction with Big Data, Artificial Intelligence, Internet of Things and other technologies. I am sure we will see more work of this sort – the application of HPC being at the core of the digital future.

I trust this Handbook will continue to serve as an efficient repository of our project work, helping you to orientate yourself in this complex landscape. In these difficult times, it is important to keep in touch and keep the light of progress lit.

Best regards,
Jean-Pierre Panziera
ETP4HPC Chairman

Table of Contents

6. European Exascale Projects

9. Technology Projects
10. ASPIDE
12. DEEP projects
14. EPEEC
16. EPiGRAM-HS
18. ESCAPE-2
20. EuroExa
22. EXA2PRO
24. ExaQUte
26. MAESTRO
28. RECIPE
30. Sage2
32. VECMA
34. VESTEC

36. Centres of Excellence in computing applications
38. FocusCoE
40. BioExcel
42. ChEESE
44. CoEC
46. CompBioMed2
48. E-cam
50. EoCoE-II
52. ESiWACE2
54. EXCELLERAT
56. HiDALGO
58. MaX
60. NOMAD
62. PerMedCoE
64. POP2
66. TREX

68. European Processor
70. EPI
74. Mont-Blanc 2020

76. Applications in the HPC Continuum
78. AQMO
80. CYBELE
82. DeepHealth
84. EVOLVE
86. LEXIS
88. Phidias

90. International Cooperation
92. ENERXICO
94. HPCWE

96. Quantum Computing for HPC
98. NEASQC

100. Other projects
102. EXA MODE
104. EXSCALATE4COV
106. OPRECOMP
108. PROCESS
110. SODALITE

112. Ecosystem development
114. CASTIEL
116. EuroCC
118. EUROLAB-4-HPC-2
120. EXDCI-2
122. FF4EUROHPC
124. HPC-EUROPA3

126. Appendix - Completed projects

European Exascale Projects

Our Handbook features a number of projects which, though not financed by HPC cPPP or EuroHPC-related calls, are closely related to HPC and have their place here. For ease of use, we have grouped projects by scope or objective, instead of by call. However, we also provide in the table below an overview of the different calls, indicating where you will find the corresponding projects in our Handbook. Finally, this Handbook only features on-going projects. Projects that ended before January 2020 are listed in the Annex, and described in detail in previous editions of the Handbook (available on our website at www.etp4hpc.eu/european-hpc-handbook.html).

CHAPTER: Technology Projects
(All projects in this chapter are hardware and/or software oriented)
• FETHPC-01-2016 - Co-design of HPC systems and applications: 2 projects starting in 2017 with a duration of around 3 years
• FETHPC-02-2017 - Transition to Exascale: 11 projects starting in 2018 with a duration of around 3 years

CHAPTER: Centres of Excellence in computing applications
• E-INFRA-5-2015 - Centres of Excellence for computing applications: 9 projects starting in 2016 with a duration of around 3 years, 1 of which was still on-going in 2020
• INFRAEDI-02-2018 - Centres of Excellence for computing applications: 10 projects and 1 Coordination and Support Action starting in 2018
• INFRAEDI-05-2020 - Centres of Excellence in exascale computing: 4 projects starting in 2020 with a duration of around 3 years

CHAPTER: European Processor
• ICT-42-2017 - Framework Partnership Agreement in European low-power microprocessor technologies: the European Processor Initiative (EPI)
• SGA-LPMT-01-2018: Specific Grant Agreement under the Framework Partnership Agreement "EPI FPA"
• ICT-05-2017 - Customised and low energy computing (including low power processor technologies): prequel to EPI

CHAPTER: Applications in the HPC Continuum
• ICT-11-2018-2019, Subtopic (a) - HPC and Big Data enabled Large-scale Test-beds and Applications: 4 Innovation Actions starting in 2018
• CEF-TC-2018-5: 1 project related to HPC
• CEF/ICT/A2017: 1 project related to HPC

CHAPTER: International Cooperation
• FETHPC-01-2018 - International Cooperation on HPC: 2 projects starting June 2019

CHAPTER: Quantum Computing for HPC
• FETFLAG-05-2020 - Complementary call on Quantum Computing: 1 project related to HPC starting in 2020

CHAPTER: Other projects
(A selection of technology projects related to HPC (hardware/software) from various ICT, FET and INFRA calls, as well as from special calls)
• ICT-12-2018-2020 - Big Data technologies and extreme-scale analytics
• FETPROACT-01-2016 - Proactive: emerging themes and communities
• EINFRA-21-2017 - Platform-driven e-infrastructure innovation
• SC1-PHE-CORONAVIRUS-2020 - Advancing knowledge for the clinical and public health response to the 2019-nCoV epidemic
• ICT-16-2018 - Software Technologies

CHAPTER: Ecosystem development
• FETHPC-2-2014 - HPC Ecosystem Development: 2 Coordination and Support Actions
• FETHPC-03-2017 - Exascale HPC ecosystem development: 2 Coordination and Support Actions
• INFRAIA-01-2016-2017 - Integrating Activities for Advanced Communities: HPC-EUROPA3
• EuroHPC-04-2019 - Innovating and Widening the HPC use and skills base: 2 projects starting in 2020
• EuroHPC-05-2019 - Innovating and Widening the HPC use and skills base: 1 project related to HPC

Technology Projects

ASPIDE
exAScale ProgramIng models for extreme Data processing

Objective
Extreme Data is an incarnation of the Big Data concept, distinguished by the massive amounts of data that must be queried, communicated and analysed in (near) real-time using a very large number of memory/storage elements and Exascale computing systems. Immediate examples are the scientific data produced at a rate of hundreds of gigabits per second that must be stored, filtered and analysed, the millions of images per day that must be mined (analysed) in parallel, and the one billion social data posts queried in real-time on an in-memory database. Traditional disks and commercial storage cannot handle the extreme scale of such application data today.

Following the need to improve current concepts and technologies, ASPIDE's activities focus on data-intensive applications running on systems composed of up to millions of computing elements

www.aspide-project.eu
: @ASPIDE_PROJECT

COORDINATING ORGANISATION
Universidad Carlos III de Madrid, Spain

OTHER PARTNERS
• Institute e-Austria Timisoara, Romania
• University of Calabria, Italy
• Universität Klagenfurt, Austria
• Institute of Bioorganic Chemistry of Polish Academy of Sciences, Poland
• Servicio Madrileño de Salud, Spain
• INTEGRIS SA, Italy
• Bull SAS (Atos Group), France

(Exascale systems). Practical results will include the methodology and software prototypes that will be designed and used to implement Exascale applications.

The ASPIDE project will contribute with the definition of new programming paradigms, APIs, runtime tools and methodologies for expressing data-intensive tasks on Exascale systems, which can pave the way for the exploitation of massive parallelism over a simplified model of the system architecture, promoting high performance and efficiency, and offering powerful operations and mechanisms for processing extreme data sources at high speed and/or in real-time.

CONTACT: Javier GARCIA-BLAS, [email protected]
CALL: FETHPC-02-2017
PROJECT TIMESPAN: 15/06/2018 - 14/12/2020

DEEP projects
Dynamical Exascale Entry Platform

There is more than one way to build a supercomputer, and meeting the diverse demands of modern applications, which increasingly combine data analytics and artificial intelligence (AI) with simulation, requires a flexible system architecture. Since 2011, the DEEP series of projects (DEEP, DEEP-ER, DEEP-EST) has pioneered an innovative concept known as the Modular Supercomputer Architecture (MSA), whereby multiple modules are coupled like building blocks. Each module is tailored to the needs of a specific class of applications, and all modules together behave as a single machine. Connected by a high-speed, federated network and programmed in a uniform system software and programming environment, the supercomputer allows an application to be distributed over several hardware modules, running each code component on the one which best suits its particular needs.

Specifically, DEEP-EST, the latest project in the series, has built a prototype with three modules: a general-purpose Cluster Module (CM) for low or medium scalable codes, the highly scalable Extreme Booster Module (ESB) comprising a cluster of accelerators, and a Data Analytics Module (DAM), which will be tested with six applications combining high-performance computing (HPC) with high-performance data analytics (HPDA) and machine learning (ML).

The DEEP approach is part of the trend towards using accelerators to improve performance and overall energy efficiency – but with a twist. Traditionally, heterogeneity is done within the node, combining a central processing unit (CPU) with one or more accelerators. In DEEP-EST the resources are segregated and pooled into compute modules, as this makes it possible to flexibly adapt the system to very diverse application requirements.

www.deep-projects.eu
: @DEEPprojects

COORDINATING ORGANISATION
Forschungszentrum Jülich (Jülich Supercomputing Centre)

OTHER PARTNERS
Current partners in DEEP-EST:
• Intel
• Leibniz Supercomputing Centre
• Barcelona Supercomputing Centre (BSC), Spain
• Megware Computer Vertrieb und Service GmbH
• Heidelberg University
• The University of Edinburgh
• Fraunhofer ITWM
• Astron
• KU Leuven
• National Center For Supercomputing Applications (Bulgaria)
• Norges Miljo-Og Biovitenskaplige Universitet
• Haskoli Islands
• European Organisation for Nuclear Research (CERN)
• ParTec
• EXTOLL


In addition to usability and flexibility, the sustained performance made possible by following this approach aims to reach exascale levels.

One important aspect that makes the DEEP architecture stand out is the co-design approach, which is a key component of the project. In DEEP-EST, six ambitious HPC/HPDA applications will be used to define and evaluate the hardware and software technologies developed. Careful analysis of the application codes allows a fuller understanding of their requirements, which informed the prototype's design and configuration.

In addition to traditional compute-intensive HPC applications, the DEEP-EST DAM includes leading-edge memory and storage technology tailored to the needs of data-intensive workloads, which occur in data analytics and ML. Through the DEEP projects, researchers have shown that combining resources in compute modules efficiently serves applications from multi-physics simulations to simulations integrating HPC with HPDA, to complex heterogeneous workflows such as those in artificial intelligence applications.

The next step in the series of DEEP projects will be DEEP-SEA (Software for Exascale Architectures), which will deliver the programming environment for future European exascale systems.

Partners in DEEP and DEEP-ER:
• CERFACS
• CGG
• CINECA
• Eurotech
• EPFL
• Seagate
• INRIA, Institut National de Recherche en Informatique et Automatique
• Mellanox
• The Institute
• Universität Regensburg

CONTACT: Dr. Estela Suarez, [email protected]
CALL: FETHPC-01-2016
PROJECT TIMESPAN (DEEP-EST): 01/07/2017 - 31/03/2021

EPEEC
European joint Effort toward a Highly Productive Programming Environment for Heterogeneous Exascale Computing

Objective
EPEEC's main goal is to develop and deploy a production-ready parallel programming environment that turns upcoming overwhelmingly-heterogeneous exascale supercomputers into manageable platforms for domain application developers. The consortium is significantly advancing and integrating existing state-of-the-art components based on European technology (programming models, runtime systems, and tools) with key features enabling 3 overarching objectives: high coding productivity, high performance, and energy awareness.

An automatic generator of compiler directives provides outstanding coding productivity from the very beginning of the application developing/porting process. Developers will be able to leverage either shared memory or distributed-shared memory programming flavours, and code in their preferred language: C, Fortran, or C++. EPEEC ensures the composability and interoperability of its programming models and runtimes, which incorporate specific features to handle data-intensive and extreme-data applications. Enhanced leading-edge performance tools offer integral profiling, performance prediction, and visualisation of traces.

EPEEC exploits results from past FET projects that led to the cutting-edge software components it builds upon, and pursues influencing the most relevant parallel programming standardisation bodies. The consortium is composed of European institutions and individuals with the highest expertise in their field, including not only leading research centres and universities but also SME/start-up companies, all of them recognised as high-tech innovators worldwide. Adhering to the Work Programme's guidelines, EPEEC features

www.epeec-project.eu
: #EPEECproject

COORDINATING ORGANISATION
Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
• Fraunhofer Gesellschaft zur Förderung der angewandten Forschung E.V., Germany
• INESC ID - Instituto de Engenharia de Sistemas e Computadores, Investigacao e Desenvolvimento em Lisboa, Portugal
• Uppsala Universitet, Sweden
• INRIA, Institut National de Recherche en Informatique et Automatique, France
• Appentra Solutions SL, Spain
• CINECA Consorzio Interuniversitario, Italy
• ETA SCALE AB, Sweden
• CERFACS, France
• Interuniversitair Micro-Electronica Centrum, Belgium

EPEEC Programming Environment

the participation of young and high-potential researchers, and includes careful dissemination, exploitation, and public engagement plans.

APPLICATIONS
Five applications representative of different relevant scientific domains serve as part of a strong inter-disciplinary co-design approach and as technology demonstrators:
• AVBP
• DIOGENeS
• OSIRIS
• Quantum ESPRESSO
• SMURFF

PROGRAMMING ENVIRONMENT
The EPEEC programming environment for exascale is being developed to achieve high programming productivity, high execution efficiency and scalability, energy awareness, and smooth composability/interoperability. It is formed by five main components:
• Parallelware
• OmpSs
• GASPI
• ArgoDSM
• BSC performance tools (Extrae, Paraver, Dimemas)

Intermediate software prototypes are available for download on the EPEEC website.

CONTACT: Antonio J. Peña, [email protected]
CALL: FETHPC-02-2017
PROJECT TIMESPAN: 01/10/2018 - 30/09/2021

EPiGRAM-HS
Exascale Programming Models for Heterogeneous Systems

MOTIVATION
• Increasing presence of heterogeneous technologies on pre-exascale supercomputers
• Need to port key HPC and emerging applications to these systems in time for exascale

OBJECTIVES
• Extend the programmability of large-scale heterogeneous systems with GPUs, FPGAs, HBM and NVM
• Introduce new concepts and functionalities, and implement them in two widely-used HPC programming systems for large-scale supercomputers: MPI and GASPI

epigram-hs.eu
: @EpigramHs

COORDINATING ORGANISATION
KTH, Sweden

OTHER PARTNERS
• EPCC, UK
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• Fraunhofer, Germany
• Cray, Switzerland
• ECMWF, UK

• Maximize the productivity of application development on heterogeneous supercomputers by providing:
- auto-tuned collective communication
- a framework for automatic code generation for FPGAs
- a memory abstraction device comprised of APIs
- a runtime for automatic data placement on diverse memories
- a DSL for large-scale deep-learning frameworks

CONTACT: Stefano Markidis, [email protected]
CALL: FETHPC-02-2017
PROJECT TIMESPAN: 01/09/2018 - 31/08/2021

ESCAPE-2
Energy-efficient SCalable Algorithms for weather and climate Prediction at Exascale

Objective
ESCAPE-2 will develop world-class, extreme-scale computing capabilities for European operational numerical weather and climate prediction, and provide the key components for weather and climate domain benchmarks to be deployed on extreme-scale demonstrators and beyond. This will be achieved by developing bespoke and novel mathematical and algorithmic concepts, combining them with proven methods, and thereby reassessing the mathematical foundations forming the basis of Earth system models. ESCAPE-2 also invests in significantly more productive programming models for the weather-climate community, through which novel algorithm development will be accelerated and future-proofed. Eventually, the project aims at providing exascale-ready production benchmarks to be operated on extreme-scale demonstrators (EsD) and beyond.

ESCAPE-2 combines cross-disciplinary uncertainty quantification tools (URANIE) for high-performance computing, originating from the energy sector, with ensemble-based weather and climate models to quantify the effect of model and data related uncertainties on forecasting – a capability which weather and climate prediction has pioneered since the 1960s.

The mathematics and algorithmic research in ESCAPE-2 will focus on implementing data structures and tools supporting

www.ecmwf.int/escape

COORDINATING ORGANISATION
ECMWF - European Centre for Medium-range Weather Forecasts, UK

OTHER PARTNERS
• DKRZ - Deutsches Klimarechenzentrum GmbH, Germany
• Max-Planck-Gesellschaft zur Förderung der Wissenschaften EV, Germany
• Eidgenössisches Departement des Innern, Switzerland
• Barcelona Supercomputing Centre (BSC), Spain
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• Loughborough University, United Kingdom
• Institut Royal Météorologique de Belgique, Belgium
• Politecnico di Milano, Italy
• Danmarks Meteorologiske Institut, Denmark
• Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici, Italy
• Bull SAS (Atos group), France

parallel computation of dynamics and physics on multiple scales and multiple levels. Highly-scalable spatial discretization will be combined with proven large time-stepping techniques to optimize both time-to-solution and energy-to-solution. Connecting multi-grid tools, iterative solvers, and overlapping computations with flexible-order spatial discretization will strengthen algorithm resilience against soft or hard failure. In addition, machine learning techniques will be applied for accelerating complex sub-components. The sum of these efforts will aim at achieving performance, resilience, accuracy and portability at the same time.

CONTACT: Peter Bauer, [email protected]
CALL: FETHPC-02-2017
PROJECT TIMESPAN: 01/10/2018 - 30/09/2021

EuroEXA
The Largest Group of ExaScale Projects and Biggest Co-Design Effort

The Horizon 2020 EuroEXA project proposes a ground-breaking design for mind-blowing results: over four times higher performance and four times higher energy efficiency than today's high-performance platforms. Originally the informal name for a group of H2020 research projects (ExaNeSt, EcoScale and ExaNoDe), EuroEXA has its own EU investment as a co-design project to further develop technologies from the project group and support the EU in its bid to deliver EU-based ExaScale supercomputers.

Vision:
• First testbed architecture will be shown to be capable of scaling to world-class peak performance in excess of 400 PFLOPS with an estimated system power of around 30 MW peak
• A compute-centric 250 PFLOPS per 15 MW by 2019
• Show that an exascale machine could be built in 2020 within 30 shipping containers with an edge-to-edge distance of less than 40m

This project has a €20m investment over a 42-month period and is part of a total €50m investment made by the European Commission across the EuroEXA group of projects, supporting research, innovation and action across applications, system software, hardware, networking, storage, liquid cooling and data centre technologies.

The project is also supported by a high-value donation of IP from ARM and Synopsys. Funded under H2020-EU.1.2.2. FET Proactive (FETHPC-2016-01) as a result of a competitive selection process, the consortium partners bring a rich mix of key applications from across climate/weather, physics/energy and life-science/

www.euroexa.eu
: @EuroEXA

COORDINATING ORGANISATION
ICCS, Institute of Communication and Computer Systems, Greece

OTHER PARTNERS
• Arm, UK
• The University of Manchester, UK
• Barcelona Supercomputing Centre (BSC), Spain
• FORTH (Foundation for Research and Technology Hellas), Greece
• IMEC, Belgium
• Iceotope, UK
• Maxeler Technologies, UK
• Neurasmus, Netherlands
• INFN (Istituto Nazionale Di Fisica Nucleare), Italy
• INAF (Istituto Nazionale Di Astrofisica), Italy
• ECMWF (European Centre for Medium-Range Weather Forecasts), International
• The Hartree Centre of STFC, UK
• Fraunhofer, Germany
• ZeroPoint Technologies, SE
• Synelixis Solutions Ltd., Greece

The EuroEXA Blade

bioinformatics. The project objectives include developing and deploying an ARM Cortex technology processing system with FPGA acceleration at peta-flop level, ahead of an Exa-scale procurement for deployment in 2022/23.

CONTACT: Peter Hopton (Dissemination & Exploitation Lead), [email protected], +44 1142 245500
CALL: FETHPC-01-2016
PROJECT TIMESPAN: 01/09/2017 - 28/02/2021

EXA2PRO
Enhancing Programmability and boosting Performance Portability for Exascale Computing Systems

The vision of EXA2PRO is to develop a programming environment that will enable the productive deployment of highly parallel applications in exascale computing systems. The EXA2PRO programming environment will integrate tools that will address significant exascale challenges. It will support a wide range of scientific applications, provide tools for improving source code quality, enable efficient exploitation of exascale systems' heterogeneity and integrate tools for data and memory management optimization. Additionally, it will provide fault-tolerance mechanisms, both user-exposed and at runtime system level, and performance monitoring features. EXA2PRO will be evaluated using 4 applications from 3 different domains, which will be deployed in the JUELICH supercomputing centre: high energy physics, materials and supercapacitors. The applications will leverage the EXA2PRO toolchain and we expect:
1. Increased programmability that enables the efficient exploitation of the heterogeneity of modern supercomputing systems, which allows the evaluation of more complex problems.
2. Effective deployment in an environment that provides features critical for exascale computing systems, such as fault tolerance, flexibility of execution and performance monitoring based on EXA2PRO optimization tools.
3. Identification of trade-offs between design qualities (source code maintainability/reusability) and run-time constraints (performance/energy consumption).

www.exa2pro.eu

COORDINATING ORGANISATION
ICCS, Institute of Communication and Computer Systems, Greece

OTHER PARTNERS
• CNRS, Centre National de la Recherche Scientifique, France
• Linköpings universitet, Sweden
• UoM, University of Macedonia, Greece
• CERTH, Ethniko Kentro Erevnas kai Technologikis Anaptyxis, Greece
• INRIA, Institut National de Recherche en Informatique et Automatique, France
• Forschungszentrum Jülich GmbH, Germany
• Maxeler Technologies Ltd, United Kingdom

The EXA2PRO outcome is expected to have major impact on:
1. the scientific and industrial community that focuses on application deployment in supercomputing centres: the EXA2PRO environment will allow efficient application deployment with reduced effort.
2. application developers that target exascale systems: EXA2PRO will provide tools for improving source code maintainability/reusability, which will allow application evolution with reduced developers' effort.
3. the scientific community and the industry relevant to the EXA2PRO applications: significant impact is expected on the materials and processes design for CO2 capture and on the supercapacitors industry.

CONTACT: Prof. Dimitrios Soudris, [email protected]
CALL: FETHPC-02-2017
PROJECT TIMESPAN: 01/05/2018 - 30/04/2021

ExaQUte
EXAscale Quantification of Uncertainties for Technology and Science Simulation

Objective
The ExaQUte project aims at constructing a framework to enable Uncertainty Quantification (UQ) and Optimization Under Uncertainties (OUU) in complex engineering problems using computational simulations on Exascale systems.

The stochastic problem of quantifying uncertainties will be tackled by using a Multi Level MonteCarlo (MLMC) approach that allows a high number of stochastic variables. New theoretical developments will be carried out to enable its combination with adaptive mesh refinement, considering both octree-based and anisotropic mesh adaptation.

Gradient-based optimization techniques will be extended to consider uncertainties by developing methods to compute stochastic sensitivities. This requires new theoretical and computational developments. With a proper definition of risk measures and constraints, these methods allow high-performance robust designs, also maximizing the solution reliability.

The description of complex geometries will be possible by employing embedded methods, which guarantee a high robustness in the mesh generation and adaptation steps, while preserving the exact geometry representation.

The efficient exploitation of Exascale systems will be addressed by combining

www.exaqute.eu
: @ExaQUteEU

COORDINATING ORGANISATION
CIMNE - Centre Internacional de Metodes Numerics en Enginyeria, Spain

OTHER PARTNERS
• Barcelona Supercomputing Centre (BSC), Spain
• Technische Universität München, Germany
• INRIA, Institut National de Recherche en Informatique et Automatique, France
• Vysoka Skola Banska - Technicka Univerzita Ostrava, Czech Republic
• Ecole Polytechnique Fédérale de Lausanne, Switzerland
• Universitat Politecnica de Catalunya, Spain
• str.ucture GmbH, Germany

State-of-the-Art dynamic task-scheduling technologies with space-time accelerated solution methods, where parallelism is harvested both in space and time.

The methods and tools developed in ExaQUte will be applicable to many fields of science and technology. The chosen application focuses on wind engineering, a field of notable industrial interest for which currently no reliable solution exists. This will include the quantification of uncertainties in the response of civil engineering structures to the wind action, and the shape optimization taking into account uncertainties related to wind loading, structural shape and material behaviour.

All developments in ExaQUte will be open-source and will follow a modular approach, thus maximizing future impact.
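The Multi Level MonteCarlo idea underpinning ExaQUte can be sketched in a few lines. This is an illustrative toy, not ExaQUte's implementation: the function names and the synthetic "model" (a uniform random input with a level-dependent discretisation bias standing in for a PDE solve) are invented for this example. The estimator exploits the telescoping identity E[Q_L] = E[Q_0] + Σ E[Q_l − Q_{l−1}]: many cheap coarse-level samples fix the mean, while coupled coarse/fine pairs give low-variance corrections, so few expensive fine-level samples are needed.

```python
import random

def mlmc_estimate(sample_q, num_levels, samples_per_level, seed=0):
    """Multilevel Monte Carlo estimator of E[Q].
    sample_q(level, u) maps a random input u to the quantity of
    interest computed at the given mesh level."""
    rng = random.Random(seed)
    estimate = 0.0
    for level in range(num_levels):
        n = samples_per_level[level]
        acc = 0.0
        for _ in range(n):
            u = rng.random()  # same random input reused on both levels
            if level == 0:
                acc += sample_q(0, u)
            else:
                # Coupled coarse/fine samples keep the variance of the
                # correction term small, so few fine samples suffice.
                acc += sample_q(level, u) - sample_q(level - 1, u)
        estimate += acc / n
    return estimate

# Toy "model": the exact quantity is u**2; level l adds a
# discretisation bias that halves with each refinement.
def toy_q(level, u):
    return u * u + 0.5 ** (level + 1)

# Sample counts decrease sharply with level, as in real MLMC.
est = mlmc_estimate(toy_q, num_levels=4,
                    samples_per_level=[4000, 1000, 250, 60])
```

In this toy run, `est` lands near E[u²] = 1/3 plus the finest-level bias 0.5⁴, illustrating how the corrections telescope away the coarse-level error.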

CONTACT: Cecilia Soriano, [email protected]; Riccardo Rossi, [email protected]
CALL: FETHPC-02-2017
PROJECT TIMESPAN: 01/06/2018 - 31/05/2021

MAESTRO
Middleware for memory and data-awareness in workflows

Maestro will build a data-aware and memory-aware middleware framework that addresses ubiquitous problems of data movement in complex memory hierarchies and at many levels of the HPC software stack. Though HPC and HPDA applications pose a broad variety of efficiency challenges, it would be fair to say that the performance of both has become dominated by data movement through the memory and storage systems, as opposed to floating point computational capability. Despite this shift, current software technologies remain severely limited in their ability to optimise data movement. The Maestro project addresses what it sees as the two major impediments of modern HPC software:
1. Moving data through memory was not always the bottleneck. The software stack that HPC relies upon was built through decades of a different situation – when the cost of performing floating point operations (FLOPS) was paramount. Several decades of technical evolution built a software stack and programming models highly fit for optimising floating point operations but lacking in basic data handling functionality. We characterise this set of technical issues as missing data-awareness.
2. Software rightfully insulates users from hardware details, especially as we move higher up the software stack. But HPC applications, programming environments and systems software cannot make key data movement decisions without some understanding of the hardware, especially the increasingly complex memory hierarchy. With the exception of runtimes, which treat memory in a domain-specific manner, software typically must make hardware-neutral decisions which can often leave performance on the table. We characterise this issue as missing memory-awareness.

www.maestro-data.eu
: @maestrodata

COORDINATING ORGANISATION
Forschungszentrum Jülich GmbH, Germany

OTHER PARTNERS
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• Appentra Solutions SL, Spain
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• ECMWF - European Centre for Medium-range Weather Forecasts, United Kingdom
• Seagate Systems UK Ltd, United Kingdom
• Hewlett-Packard (Schweiz) GmbH, Switzerland

Maestro proposes a middleware framework that enables memory- and data-awareness. Maestro has developed new data-aware abstractions that can be used at any level of software, e.g. compiler, runtime or application. Core elements are the Core Data Objects (CDO), which are handed to the middleware through give/take semantics. The middleware is being designed such that it enables modelling of memory and storage hierarchies, allowing it to reason about data movement and placement based on the costs of moving data objects. The middleware will support automatic movement and promotion of data in memories and storage, and allow for data transformations and optimisation.

Maestro follows a co-design methodology using a set of applications and workflows from diverse areas including numerical weather forecasting, earth-system modelling, materials sciences and in-situ data analysis pipelines for computational fluid dynamics simulations.
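The give/take hand-over described above can be pictured with a small sketch. All names here (the classes, methods and the `size_hint` field) are invented for illustration; Maestro's actual middleware exposes a C API and makes real placement decisions across memory and storage tiers. The essential idea is an ownership transfer: once a producer *gives* a CDO, the middleware is free to move it, and a consumer later *takes* it by name.

```python
class CoreDataObject:
    """Toy stand-in for a Maestro-style Core Data Object: a named
    payload plus metadata the middleware could use for placement."""
    def __init__(self, name, payload):
        self.name = name
        self.payload = payload
        self.size_hint = len(payload)

class Middleware:
    """Sketch of give/take semantics."""
    def __init__(self):
        self._pool = {}

    def give(self, cdo):
        # Ownership transfer: after this call the producer must not
        # touch the payload; the real middleware may migrate or
        # transform it between memory tiers behind the scenes.
        self._pool[cdo.name] = cdo

    def take(self, name):
        # Hand-over to the consumer; a blocking operation in a real
        # workflow, here a simple lookup that removes the CDO.
        return self._pool.pop(name)

mw = Middleware()
mw.give(CoreDataObject("forecast-step-1", b"\x00" * 1024))
cdo = mw.take("forecast-step-1")
```

Because producer and consumer only agree on a name, not on an address or a memory tier, the middleware sitting between them is free to place the object wherever the modelled movement costs are lowest.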

CONTACT: Dirk Pleiter, [email protected]
CALL: FETHPC-02-2017
PROJECT TIMESPAN: 01/09/2018 - 31/08/2021


RECIPE
REliable power and time-ConstraInts-aware Predictive management of heterogeneous Exascale systems

OBJECTIVE
The current HPC facilities will need to grow by an order of magnitude in the next few years to reach the Exascale range. The dedicated middleware needed to manage the enormous complexity of future HPC centres, where deep heterogeneity is needed to handle the wide variety of applications within reasonable power budgets, will be one of the most critical aspects in the evolution of HPC infrastructure towards Exascale. This middleware will need to address the critical issue of reliability in the face of the increasing number of resources, and therefore decreasing mean time between failures.

To close this gap, RECIPE provides: a hierarchical runtime resource management infrastructure optimizing energy efficiency and ensuring reliability for both time-critical and throughput-oriented computation; a predictive reliability methodology to support the enforcement of QoS guarantees in the face of both transient and long-term hardware failures, including thermal, timing and reliability models; and a set of integration layers allowing the resource manager to interact with both the application and the underlying deeply heterogeneous architecture, addressing them in a disaggregate way.

www.recipe-project.eu
: @EUrecipe

COORDINATING ORGANISATION
Politecnico di Milano, Italy

OTHER PARTNERS
• Universitat Politecnica de Valencia, Spain
• Centro Regionale Information Communication Technology scrl, Italy
• Barcelona Supercomputing Centre (BSC), Spain
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, Poland
• EPFL - Ecole Polytechnique Fédérale de Lausanne, Switzerland
• Intelligence behind Things Solutions SRL, Italy
• Centre Hospitalier Universitaire Vaudois, Switzerland

Quantitative goals for RECIPE include: a 25% increase in energy efficiency (performance/watt), with a 15% MTTF improvement due to proactive thermal management; energy-delay product improved by up to 25%; and a 20% reduction of faulty executions.

The project will assess its results against a set of real-world use cases, addressing key application domains ranging from well-established HPC applications such as geophysical exploration and meteorology, to emerging application domains such as biomedical machine learning and data analytics.

To this end, RECIPE relies on a consortium composed of four leading academic partners (POLIMI, UPV, EPFL, CeRICT); two supercomputing centres, BSC and PSNC; a research hospital, CHUV; and an SME, IBTS, which provide effective exploitation avenues through industry-based use cases.

CONTACT: Prof. William Fornaciari, [email protected]
CALL: FETHPC-02-2017
PROJECT TIMESPAN: 01/05/2018 - 30/04/2021


Sage2
Percipient Storage for Exascale Data Centric Computing 2

Objective
The landscape for High Performance Computing is changing with the proliferation of enormous volumes of data created by scientific instruments and sensors, in addition to data from simulations. This data needs to be stored, processed and analysed, and existing storage system technologies in the realm of extreme computing need to be adapted to deliver higher scientific throughput with reasonable efficiency. We started on the journey to address this problem with the SAGE project. The HPC use cases and the technology ecosystem are now further evolving, and new requirements and innovations are being brought to the forefront. It is extremely critical to address them today without “reinventing the wheel”, leveraging existing initiatives and know-how to build the pieces of the Exascale puzzle as quickly and efficiently as we can.

The SAGE paradigm already provides a basic framework to address the extreme scale data aspects of High Performance Computing on the path to Exascale. Sage2 (Percipient StorAGe for Exascale Data Centric Computing 2) intends to validate a next-generation storage system building on top of the existing SAGE platform to address new use case requirements in the areas of extreme scale computing, scientific workflows and AI/deep learning,

www.sagestorage.eu
: @SageStorage

COORDINATING ORGANISATION
Seagate Systems UK Ltd, United Kingdom

OTHER PARTNERS
• Bull SAS (Atos Group), France
• CEA - Commissariat à l’Energie Atomique et aux énergies alternatives, France
• Arm Ltd, United Kingdom
• Kungliga Tekniska Hoegskolan (KTH), Sweden
• Forschungszentrum Jülich GmbH, Germany
• The University of Edinburgh, United Kingdom
• Kitware SAS, France
• United Kingdom Atomic Energy Authority, United Kingdom

leveraging the latest developments in storage infrastructure software and the storage technology ecosystem.

Sage2 aims to provide significantly enhanced scientific throughput, improved scalability, and time & energy to solution for the use cases at scale. Sage2 will also dramatically increase the productivity of developers and users of these systems.

Sage2 will provide a highly performant and resilient, QoS-capable multi-tiered storage system, with data layouts across the tiers managed by the Mero Object Store, which is capable of handling in-transit/in-situ processing of data within the storage system, accessible through the Clovis API. The 3-rack prototype system is now available for use at Jülich Supercomputing Centre.
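As a loose illustration of what managing data layouts across storage tiers involves, the toy policy below greedily keeps the most frequently accessed objects on the fastest tier. The tier names, capacities and policy are assumptions made for this sketch; they do not describe Mero/Clovis behaviour.

```python
# Toy cost-aware placement in a multi-tiered store (illustrative only;
# not the Mero Object Store's actual layout logic).

TIERS = [            # fastest first: (name, capacity in objects)
    ("nvram", 2),
    ("ssd", 4),
    ("disk", 100),
]

def place(objects):
    """Greedily place the hottest objects on the fastest tier with room."""
    layout = {name: [] for name, _ in TIERS}
    # Hot (frequently accessed) objects are placed first.
    for obj, heat in sorted(objects.items(), key=lambda kv: -kv[1]):
        for name, capacity in TIERS:
            if len(layout[name]) < capacity:
                layout[name].append(obj)
                break
    return layout

objects = {"checkpoint": 1, "mesh": 9, "field_a": 8, "field_b": 7, "log": 2}
layout = place(objects)
print(layout)  # the hottest objects land on "nvram"; the rest spill downwards
```

A real multi-tiered store would weigh object size, bandwidth and the cost of moving data between tiers rather than a single "heat" score, and would revise the layout as access patterns change.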

CONTACT: Sai Narasimhamurthy, [email protected]
CALL: FETHPC-02-2017
PROJECT TIMESPAN: 01/09/2018 - 31/08/2021


VECMA
Verified Exascale Computing for Multiscale Applications

The purpose of this project is to enable a diverse set of multiscale, multiphysics applications -- from fusion and advanced materials through climate and migration, to drug discovery and the sharp end of clinical decision making in personalised medicine -- to run on current multi-petascale computers and emerging exascale environments with high fidelity such that their output is «actionable». That is, the calculations and simulations are certifiable as validated (V), verified (V) and equipped with uncertainty quantification (UQ) by tight error bars, such that they may be relied upon for making important decisions in all the domains of concern. The central deliverable is an open source toolkit for multiscale VVUQ based on generic multiscale VV and UQ primitives, to be released in stages over the lifetime of this project, fully tested and evaluated in emerging exascale environments, actively promoted, and made widely available in European HPC centres.

The project includes a fast track that will ensure applications are able to apply available multiscale VVUQ tools as soon as possible, while guiding the deep track development of new capabilities and their integration into a wider set of production applications by the end of the project. The deep track includes the development of more disruptive and automated algorithms, and their exascale-aware implementation in a more intrusive way with respect to the underlying and pre-existing multiscale modelling and simulation schemes.

The potential impact of these certified multiscale simulations is enormous, and we have already been promoting the VVUQ toolkit (VECMAtk) across a wide range of scientific and social scientific domains,

www.vecma.eu
: @VECMA4

COORDINATING ORGANISATION
University College London, UK

OTHER PARTNERS
• Max-Planck-Gesellschaft zur Förderung der Wissenschaften EV, Germany
• Brunel University London, United Kingdom
• Bull SAS (Atos Group), France
• Stichting Nederlandse Wetenschappelijk Onderzoek Instituten, Netherlands
• CBK Sci Con Ltd, United Kingdom
• Universiteit van Amsterdam, Netherlands
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, Poland
• Bayerische Akademie der Wissenschaften, Germany

INTERNATIONAL PARTNER
Rutgers University, USA

as well as within computational science more broadly. We have made ten releases of the toolkit, with a dedicated website at https://www.vecma-toolkit.eu/ available for public users to access, download and use.

To further develop and disseminate VECMAtk, we have been working with the Alan Turing Institute to jointly run the planned event Reliability and Reproducibility in Computational Science: Implementing Verification, Validation and Uncertainty Quantification in silico, which included the first VECMA training workshop in January 2020. Ahead of this event, we had run a VECMA hackathon in association with VECMAtk in September 2019. We held an online conference, Multiscale Modelling, Uncertainty Quantification and the Reliability of Computer Simulations, in June 2020; the conference was a combination of three events that were due to take place at the SIAM Conference on Uncertainty Quantification (UQ20) and the International Conference on Computational Science (ICCS) 2020. These events associated with VECMAtk drew participation from external users in industry and government who use the tools we are developing on HPC at supercomputer centres in Europe.
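The core VVUQ idea of equipping a simulated quantity of interest with tight error bars can be sketched generically as an ensemble run over sampled inputs. The code below is an illustrative toy, not the VECMAtk API: a stand-in "simulation" is run over inputs sampled around a nominal value, and the ensemble is summarised by a mean with an error bar of two standard errors.

```python
# Generic sketch of an ensemble-based UQ run (illustrative only; the function
# names and the toy model are assumptions, not VECMAtk primitives).
import random
import statistics

def simulate(parameter):
    """Hypothetical stand-in for an expensive multiscale simulation."""
    return parameter ** 2

def uq_ensemble(nominal, spread, runs, seed=42):
    rng = random.Random(seed)
    # Sample the uncertain input around its nominal value and run the ensemble.
    outputs = [simulate(rng.gauss(nominal, spread)) for _ in range(runs)]
    mean = statistics.fmean(outputs)
    stderr = statistics.stdev(outputs) / runs ** 0.5
    return mean, 2 * stderr  # quantity of interest with its error bar

mean, bar = uq_ensemble(nominal=3.0, spread=0.1, runs=200)
print(f"QoI = {mean:.2f} +/- {bar:.2f}")
```

Validation and verification then check, respectively, that the model with its error bar brackets reality and that the numerical implementation solves the model correctly.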

CONTACT: Xuanye Gu, [email protected]
CALL: FETHPC-02-2017
PROJECT TIMESPAN: 15/06/2018 - 14/12/2021


VESTEC
Visual Exploration and Sampling Toolkit for Extreme Computing

Objective
Technological advances in high performance computing are creating exciting new opportunities that move well beyond improving the precision of simulation models. The use of extreme computing in real-time applications with high velocity data and live analytics is within reach. The availability of fast growing social and sensor networks raises new possibilities in monitoring, assessing and predicting environmental, social and economic incidents as they happen. Add in grand challenges in data fusion, analysis and visualization, and extreme computing hardware has an increasingly essential role in enabling efficient processing workflows for huge heterogeneous data streams.

VESTEC will create the software solutions needed to realise this vision for urgent decision making in various fields with high impact for the European community. VESTEC will build a flexible toolchain to combine multiple data sources, efficiently extract essential features, enable flexible scheduling and interactive supercomputing, and realise 3D visualization environments for interactive explorations by stakeholders and decision makers. VESTEC will develop and evaluate methods and interfaces to integrate high-performance data analytics processes into running simulations and real-time data environments. Interactive ensemble management will launch new simulations for new data,

www.vestec-project.eu/
: @VESTECproject

COORDINATING ORGANISATION
DLR - Deutsches Zentrum für Luft- und Raumfahrt EV, Germany

OTHER PARTNERS
• The University of Edinburgh, United Kingdom
• Kungliga Tekniska Hoegskolan (KTH), Sweden
• Sorbonne Université, France
• Kitware SAS, France
• Intel Deutschland GmbH, Germany
• Fondazione Bruno Kessler, Italy
• Université Paul Sabatier Toulouse III, France
• Tecnosylva SL, Spain

building up statistically more and more accurate pictures of emerging, time-critical phenomena. Innovative data compression approaches, based on topological feature extraction and data sampling, will result in considerable reductions in storage and processing demands by discarding domain-irrelevant data.

Three emerging use cases will demonstrate the immense benefit for urgent decision making: wildfire monitoring and forecasting; analysis of risk associated with mosquito-borne diseases; and the effects of space weather on technical supply chains. VESTEC brings together experts in each domain to address the challenges holistically.

CONTACT: Andreas Gerndt, [email protected]
CALL: FETHPC-02-2017
PROJECT TIMESPAN: 01/09/2018 - 31/08/2021

Centres of Excellence in computing applications


FocusCoE
Concerted action for the European HPC CoEs

Objective
FocusCoE contributes to the success of the EU HPC Ecosystem and the EuroHPC Initiative by supporting the EU HPC Centres of Excellence (CoEs) to more effectively fulfil their role within the ecosystem and initiative: ensuring that extreme scale applications result in tangible benefits for addressing scientific, industrial or societal challenges. It achieves this by creating an effective platform for the CoEs to coordinate strategic directions and collaboration (addressing possible fragmentation of activities across the CoEs and coordinating interactions with the overall HPC ecosystem) and provides support services for the CoEs in relation to both industrial outreach and promotion of their services and competences, by acting as a focal point for users to discover those services.

The specific objectives and achievements of FocusCoE are:
• The creation of the HPC CoE Council (HPC3) in May 2019 in Poznań that allows all European HPC CoEs to collectively define an overriding strategy and collaborative implementation for interactions with, and contributions to, the EU HPC Ecosystem.
• To support the HPC CoEs to achieve enhanced interaction with industry, and SMEs in particular, through concerted out-reach and business development actions. FocusCoE has already initiated

www.focus-coe.eu
: @FocusCoE
: @focus-coe

COORDINATING ORGANISATION
scapos AG, Germany

OTHER PARTNERS
• CEA - Commissariat à l’Energie Atomique et aux Energies Alternatives, France
• Kungliga Tekniska Högskolan (KTH), Sweden
• Höchstleistungsrechenzentrum der Universität Stuttgart (HLRS), Germany
• Barcelona Supercomputing Center (BSC), Spain
• Max Planck Gesellschaft zur Forderung der Wissenschaft eV, Germany
• Agenzia nazionale per le nuove tecnologie, l’energia e lo sviluppo economico sostenibile (ENEA), Italy
• National University of Ireland, Galway, Ireland
• Teratec, France
• Forschungszentrum Jülich GmbH, Germany
• Partnership for advanced computing in Europe (PRACE), Belgium
• University College London, UK

contacts between CoEs and industry and presented CoEs at sectorial events. A first tranche of industrial success stories from the CoEs is available.
• To instigate concerted action on training by and for the complete set of HPC CoEs: providing a consolidating vehicle for user training offered by the CoEs and by PRACE (PATCs) and providing cross-area training to the CoEs (e.g. on sustainable business development). An HPC training stakeholder meeting was organised in October 2019 in Brussels, defining training and education needs and assessing requirements for HPC training programmes in the EU. Access to a CoE Training registry is provided through the FocusCoE website.
• To promote the capabilities of, and services offered by, the HPC CoEs and development of the EU HPC CoE “brand”, raising awareness with stakeholders and both academic and industrial users. This has been done by creating regular newsletters¹, highlighting CoE achievements in social media and handbooks. Amongst the next steps, a dedicated website presenting the set of service offerings by CoEs to industrial and academic users will be developed.

1. www.focus-coe.eu/index.php/newsletter/

CONTACT: Guy Lonsdale, [email protected]
CALL: INFRAEDI-02-2018
PROJECT TIMESPAN: 01/12/2018 - 30/11/2021


BioExcel
Centre of Excellence for Computational Biomolecular Research

BioExcel is the leading European Centre of Excellence for Computational Biomolecular Research. We develop advanced software, tools and full solutions for high-end and extreme computing for biomolecular research. The centre supports academic and industrial research by providing expertise and advanced training to end-users and promoting best practices in the field. The centre aspires to operate in the long term as the core facility for advanced biomolecular modelling and simulations in Europe.

Overview
Much of current Life Sciences research relies on intensive biomolecular modelling and simulation. As a result, both academia and industry are facing significant challenges when applying best practices for optimal usage of compute infrastructures. At the same time, increasing the productivity of researchers will be of high importance for achieving reliable scientific results at a faster pace.

High-performance computing (HPC) and high-throughput computing (HTC) techniques have now reached a level of maturity for many of the widely-used codes and platforms, but taking full advantage of them requires that researchers have the necessary training and access to guidance by experts. The necessary ecosystem of services in the field is presently inadequate, and a suitable infrastructure needed to be set up in a sustainable, long-term operational fashion.

BioExcel CoE was thus established as the go-to provider of a full range of software applications and training services. These cover fast and scalable software, user-friendly automation workflows and a support base of experts. The main services offered include hands-on training, tailored customization of code and personalized consultancy support. BioExcel actively

bioexcel.eu/
: @BioExcelCoE
: @bioexcel
: https://www.youtube.com/c/bioexcelcoe

COORDINATING ORGANISATION
KTH Royal Institute of Technology, Sweden

OTHER PARTNERS
• The University of Edinburgh, UK
• University of Utrecht, the Netherlands
• Barcelona Supercomputing Centre (BSC), Spain
• European Molecular Biology Laboratory (EMBL-EBI), UK
• Forschungszentrum Jülich GmbH, Germany
• University of Jyvaskylä, Finland
• The University of Manchester, UK
• Max Planck Gesellschaft, Germany
• Across Limits, Malta
• Institute of Research in Biomedicine (IRB), Spain
• Ian Harrow Consulting, UK
• Norman Consulting, Norway
• Nostrum Biodiscovery, Spain

works with:
• academic and non-profit researchers,
• industrial researchers,
• software vendors and academic code providers,
• non-profit and commercial resource providers, and
• related international projects and initiatives.

The centre was established and operates through funding by the EC Horizon 2020 programme (Grant agreements 675728 and 823830).

CONTACT: Rossen Apostolov, [email protected]
CALL: INFRAEDI-02-2018
PROJECT TIMESPAN: 01/01/2019 - 31/12/2021


ChEESE
Centre of Excellence for Exascale and Solid Earth

ChEESE addresses extreme-computing scientific and societal challenges in Solid Earth by harnessing European institutions in charge of operational monitoring networks, tier-0 supercomputing centres, academia, hardware developers and third-parties from SMEs, industry and public governance. The scientific challenge is to prepare 10 open-source flagship codes to solve Exascale capacity and capability problems in computational seismology, magnetohydrodynamics, physical volcanology, tsunamis, and data analysis and predictive techniques from monitoring earthquake and volcanic activity. The selected codes are periodically audited and optimized at both intranode level (including heterogeneous computing nodes) and internode level on heterogeneous hardware prototypes for the upcoming Exascale architectures, thereby ensuring commitment to a co-design approach. Preparation for Exascale also considers aspects such as workflows, data management and sharing, I/O, post-processing and visualization. Additionally, ChEESE has developed WMS-light, a free and open source workflow management system for the geoscience community, with the goal of supporting the integration of different applications and services. Hence, WMS-light supports a wide range of typical HPC application scenarios and is designed for easy usage and integration of existing workflow setups.

In the illustration below are featured, left, the simulation of a tsunami propagating from the Calabrian Arc towards southern Italy and, right, a tsunami inundation using a high-resolution map of an area offshore Catania, Italy. In ChEESE, we will use High Performance Computing to simulate thousands of these kinds of scenarios and use these to represent the coastal

www.cheese-coe.eu
: @Cheese_CoE
: @ChEESE-CoE

COORDINATING ORGANISATION
Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
• Istituto Nazionale di Geofisica e Vulcanologia (INGV), Italy
• Ludwig-Maximilians Universität München (LMU), Germany
• University of Málaga (UMA), Spain
• Swiss Federal Institute of Technology (ETH), Switzerland
• Universität Stuttgart - High Performance Computing Center (HLRS), Germany
• CINECA Consorzio Interuniversitario, Italy
• Technical University of Munich (TUM), Germany
• Icelandic Meteorological Office (IMO), Iceland
• Norwegian Geotechnical Institute (NGI), Norway
• Institut de Physique du Globe de Paris (IPGP), France
• CNRS - Centre National de la Recherche Scientifique, France
• Bull SAS (Atos Group), France

tsunami hazard at high resolution.
In parallel with these transversal activities, ChEESE supports three vertical pillars. First, it develops pilot demonstrators for challenging scientific problems requiring Exascale computing, in alignment with the vision of European Exascale roadmaps. This includes near real-time seismic simulations and full-wave inversion, ensemble-based volcanic ash dispersal forecasts, faster-than-real-time tsunami simulations and physics-based hazard assessments for earthquakes, volcanoes and tsunamis.
Second, some pilots are also intended to enable operational services requiring extreme HPC, such as urgent computing, early-warning forecasting of geohazards, hazard assessment and data analytics. Pilots with a higher Technology Readiness Level (TRL) will be tested in an operational environment in collaboration with a broader user community. Additionally, and in collaboration with the European Plate Observing System (EPOS) and the Collaborative Data Infrastructure (EUDAT), ChEESE will promote and facilitate the integration of HPC services to widen access to codes and foster the transfer of know-how to the Solid Earth community.
Finally, the third pillar of ChEESE acts as a hub to foster HPC across the Solid Earth community and related stakeholders and to provide specialized training on services and capacity building measures. Training courses ChEESE has organized include: Modelling tsunamis and volcanic plumes using European flagship codes, Advanced training on HPC for computational seismology, and Tools and techniques to quickly improve performances of HPC applications in Solid Earth.

CONTACT: Arnau Folch, [email protected]; Rose Gregorio, [email protected]
CALL: INFRAEDI-02-2018
PROJECT TIMESPAN: 01/11/2018 - 31/10/2021


CoEC
Center of Excellence in Combustion

The Centre of Excellence in Combustion (CoEC) is a collaborative effort to exploit Exascale computing technologies to address fundamental challenges encountered in combustion systems. The CoEC vision is aligned with the goals of decarbonization of the European power and transportation sectors and Europe’s vision to achieve net-zero greenhouse gas emissions (GHG) by 2050. The CoEC initiative is a contribution of the European HPC combustion community to long-term GHG emissions reduction in accordance with the Paris Agreement. The consortium is composed of leading European institutions in the fields of computational combustion and High-Performance Computing, and promotes a core of scientific and technological activities aiming to extend the state-of-the-art in combustion simulation capabilities through advanced methodologies enabled by Exascale computing. These advances will contribute to increasing the TRL of representative codes from the EU combustion community and to increasing EU competitiveness and leadership in power and propulsion technologies. The outcomes of the project will be the basis of a service portfolio to support academic and industrial activities in the power and transportation sectors. The Centre will also stimulate the consolidation of the European combustion community, by combining the knowledge of representative actors from the scientific and industrial sectors with the capacity of the HPC community, including research institutions and supercomputer centres. CoEC will promote the usage of combustion simulations through dissemination, training and user support services.

www.coec-project.eu
: @CoEC_EU
: @coec-eu

COORDINATING ORGANISATION
Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
• CERFACS - Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique, France
• Rheinisch-Westfälische Technische Hochschule Aachen (RWTH-AACHEN), Germany
• Technische Universiteit Eindhoven, Netherlands
• University of Cambridge, United Kingdom
• CNRS - Centre National de la Recherche Scientifique, France
• Technische Universität Darmstadt, Germany
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• Aristotelio Panepistimio Thessalonikis, Greece
• Forschungszentrum Jülich GmbH, Germany
• Association «National Centre for Supercomputing Applications», Bulgaria

Large eddy simulation of hydrogen flames in practical configurations.

Concept and approach
The technical activities integrated into the CoEC are defined from the fundamental challenges of relevant application areas where combustion is involved. These areas include transportation (aviation and automotive), energy and power generation, aerospace, and fire and safety. All these sectors share common challenges associated with scientific and technological problems in combustion simulation, e.g. instabilities, emissions, noise, plasma, nanoparticles, multiphase flow, autoignition or the use of alternative fuels in practical applications. These research topics have a long tradition in combustion science, and fundamental progress has been made in the last decade in understanding these phenomena using numerical simulations. However, the degree of integration in industry still remains low, either because of the incomplete fidelity/accuracy of the models or due to insufficient computing power, which hinders the maturity of the technology for deployment to industrial standards. Exascale computing technologies will enable a new paradigm in physical and numerical modelling, which requires a profound revision of models. For this purpose, two important building blocks, regarding Simulation Technologies and Exascale Methodologies, will also be incorporated in the Centre of Excellence.

CONTACT: Daniel Mira, [email protected]
CALL: INFRAEDI-05-2020
PROJECT TIMESPAN: 01/10/2020 – 30/09/2023


CompBioMed2
Centre of Excellence in Computational Biomedicine

CompBioMed is a user-driven Centre of Excellence (CoE) in Computational Biomedicine, designed to nurture and promote the uptake and exploitation of high performance computing within the biomedical modelling community. Our user communities come from academia, industry and clinical practice.
The first phase of the CompBioMed CoE has already achieved notable successes in the development of applications, training and efficient access mechanisms for using HPC machines and clouds in computational biomedicine. We have brought together a growing list of HPC codes relevant for biomedicine, which have been enhanced and scaled up to larger machines. Our codes (such as Alya, HemeLB, BAC, Palabos and HemoCell) are now running on several of the world’s fastest supercomputers and investigating challenging applications ranging from defining clinical biomarkers of arrhythmic risk to the impact of mutations on cancer treatment.
Our work has provided the ability to integrate clinical datasets with HPC simulations through fully working computational pipelines designed to provide clinically relevant patient-specific models. The reach of the project beyond the funded partners is manifested by our highly effective Associate Partner Programme (all of whom have played an active role in our activities), with a cost-free, lightweight joining mechanism, and an Innovation Exchange Programme that has brought upward of thirty scientists, industrialists and clinicians into the project from the wider community.
Furthermore, we have developed and implemented a highly successful training programme, targeting the full range of medical students, biomedical engineers, biophysicists and computational scientists. This programme contains a mix of tailored

www.compbiomed.eu
: @bio_comp

COORDINATING ORGANISATION
University College London, UK

OTHER PARTNERS
• University of Amsterdam, Netherlands
• The University of Edinburgh, UK
• Barcelona Supercomputing Centre (BSC), Spain
• SURFsara, Netherlands
• University of Oxford, UK
• University of Geneva, Switzerland
• The University of Sheffield, UK
• CBK Sci Con Ltd, UK
• Universitat Pompeu Fabra, Spain
• Bayerische Akademie der Wissenschaften, Germany
• Acellera, Spain
• Evotec Ltd, UK
• Atos (Bull SAS), France
• Janssen Pharmaceutica, Belgium

courses for specific groups, webinars and winter schools, which are now being packaged into an easy-to-use training package.
In CompBioMed2 we are extending the CoE to serve the community for a total of 7 years. CompBioMed has established itself as a hub for practitioners in the field, successfully nucleating a substantial body of research, education, training, innovation and outreach within the nascent field of Computational Biomedicine. This emergent technology will enable clinicians to develop and refine personalised medicine strategies ahead of their clinical delivery to the patient. Medical regulatory authorities are currently embracing the prospect of using in silico methods in the area of clinical trials, and we intend to be in the vanguard of this activity, laying the groundwork for the application of HPC-based Computational Biomedicine approaches to a greater number of therapeutic areas.
The HPC requirements of our users are as diverse as the communities we represent. We support both monolithic codes, potentially scaling to the exascale, and complex workflows requiring support for advanced execution patterns. Understanding the complex outputs of such simulations requires both rigorous uncertainty quantification and the embrace of the convergence of HPC and high-performance data analytics (HPDA). CompBioMed2 seeks to combine these approaches with the large, heterogeneous datasets from medical records and from the experimental laboratory to underpin clinical decision support systems. CompBioMed2 will continue to support, nurture and grow our community of practitioners, delivering incubator activities to prepare our most mature applications for wider usage, providing avenues that will sustain CompBioMed2 well beyond the proposed funding period.

CONTACT: Emily Lumley, [email protected]
CALL: INFRAEDI-02-2018
PROJECT TIMESPAN: 01/10/2019 - 30/09/2023


E-CAM

The E-CAM Centre of Excellence is an e-infrastructure for software development, training and industrial discussion in simulation and modelling. E-CAM is based around the Centre Européen de Calcul Atomique et Moléculaire (CECAM) distributed network of simulation science nodes, and the physical computational e-infrastructure of PRACE. We are a partnership of 13 CECAM Nodes, 4 PRACE centres, 12 industrial partners and one Centre for Industrial Computing (Hartree Centre).

E-CAM aims at:
• Developing software to solve important simulation and modelling problems in industry and academia, with applications from drug development to the design of new materials;
• Tuning those codes to run on HPC, through application co-design and the provision of HPC-oriented libraries and services;
• Training scientists from industry and academia;
• Supporting industrial end-users in their use of simulation and modelling, via workshops and direct discussions with experts in the CECAM community.

Our approach is focused on four scientific areas, critical for HPC simulations relevant to key societal and industrial challenges. These areas are classical molecular dynamics, electronic structure, quantum dynamics and meso- and multi-scale modelling. E-CAM develops new scientific ideas and transfers them to algorithm development, optimisation and parallelization in these four respective areas, and delivers the related training. Postdoctoral researchers are employed under each scientific area, working closely with the scientific programmers to create, oversee and implement the different software codes, in collaboration with our industrial partners.

www.e-cam2020.eu
: @ECAM2020
: @e-cam

COORDINATING ORGANISATION
Ecole Polytechnique Fédérale de Lausanne, Switzerland

OTHER PARTNERS
• ENS - École Normale Supérieure de Lyon, France
• University College Dublin, Ireland
• Forschungszentrum Jülich, Germany
• Freie Universität Berlin, Germany
• Universita degli Studi di Roma La Sapienza, Italy
• Universitat de Barcelona, Spain
• Daresbury Laboratory, Scientific and Technology Facilities Council, UK
• CNRS - Centre National de la Recherche Scientifique, France
• Technische Universität Wien, Austria
• University of Cambridge, UK
• Scuola Internazionale Superiore Di Trieste, Italy
• Max-Planck-Institut für Polymerforschung, Germany
• Universiteit van Amsterdam, Netherlands

CENTRES OF EXCELLENCE IN COMPUTING APPLICATIONS

So far E-CAM has:
• Certified more than 160 software modules that are open access and easily available for the industrial and academic communities through our software repository
• Trained 300 people at our Extended Software Development Workshops in advanced computational methods and good practices for code development, documentation and maintenance
• Deployed an online training infrastructure to support the development of software for extreme-scale hardware
• Worked on 10 pilot projects in direct collaboration with our industrial partners
• Organized 8 State of the Art workshops and 6 Scoping workshops that brought together industrialists, software developers and academic researchers to discuss the challenges they face
• Worked on software development projects that enable an HPC practice with potential for transfer into industry. Examples include: a GPU re-write of the DL_MESO code for mesoscale simulations; the development of a load-balancing library specifically targeting MD applications; and an HTC library capable of handling thousands of tasks each requiring peta-scale computing; among other efforts.

[Figure: Positioning of the E-CAM infrastructure]
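The task-farming pattern behind such an HTC library (a manager distributing many independent tasks to a worker pool and collecting results as they complete) can be sketched with Python's standard concurrent.futures. This is a generic illustration, not E-CAM's library, and run_task stands in for what would really be a large parallel job:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_task(params):
    """Stand-in for one independent task; in a real HTC setting each
    task would itself launch a large (even peta-scale) parallel job."""
    x = params["x"]
    return x * x

def task_farm(all_params, max_workers=4):
    """Manager: submit independent tasks, gather results as they finish."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(run_task, p): i for i, p in enumerate(all_params)}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()   # re-raises worker errors
    return [results[i] for i in range(len(all_params))]

# task_farm([{"x": i} for i in range(4)]) returns [0, 1, 4, 9]
```

Because the tasks share no state, the same pattern scales from a laptop thread pool to thousands of batch jobs on an HPC system.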

• Scuola Normale Superiore Pisa, Italy
• Aalto University, Finland
• CSC-IT Centre for Science Ltd, Finland
• Irish Centre for High-End Computing, Ireland

CONTACT
Luke Drury (Project Chair)
[email protected]
+353 1 6621333 ext 321
Ana Catarina Mendonça (Project Coordinator)
[email protected]
+41 21 693 0395

CALL
EINFRA-5-2015

PROJECT TIMESPAN
01/10/2015 - 31/03/2021

EoCoE-II
Energy Oriented Centre Of Excellence, Phase II

The Energy-oriented Centre of Excellence (EoCoE) applies cutting-edge computational methods in its mission to accelerate the transition to the production, storage and management of clean, decarbonized energy. EoCoE is anchored in the High Performance Computing (HPC) community and targets research institutes, key commercial players and SMEs who develop and enable energy-relevant numerical models to be run on exascale supercomputers, demonstrating their benefits for low-carbon energy technology. The present project draws on a successful proof-of-principle phase of EoCoE-I, where a large set of diverse computer applications from four such energy domains achieved significant efficiency gains thanks to its multidisciplinary expertise in applied mathematics and supercomputing. During this 2nd round, EoCoE-II channels its efforts into 5 scientific Exascale challenges in the low-carbon sectors of Energy Meteorology, Materials, Water, Wind and Fusion. This multidisciplinary effort harnesses innovations in computer science and mathematical algorithms within a tightly integrated co-design approach to overcome performance bottlenecks and to anticipate future HPC hardware developments. A world-class consortium of 18 complementary partners from 7 countries forms a unique network of expertise in energy science, scientific computing and HPC, including 3 leading European supercomputing centres. New modelling capabilities in selected energy sectors will be created at unprecedented scale, demonstrating the potential benefits to the energy industry, such as accelerated design of storage devices, high-resolution probabilistic wind and solar forecasting for the power grid and quantitative understanding of plasma core-edge interactions in ITER-scale tokamaks. These flagship

www.eocoe.eu
: @hpc-energy

COORDINATING ORGANISATION
Maison de la Simulation (MdlS) and Institut de Recherche sur la Fusion par confinement Magnétique (IRFM) at CEA, France

OTHER PARTNERS
• Forschungszentrum Jülich (FZJ), with RWTH Aachen University, Germany
• Agenzia nazionale per le nuove tecnologie, l’energia e lo sviluppo economico sostenibile (ENEA), Italy
• Fraunhofer Gesellschaft, Germany
• Friedrich-Alexander Univ. Erlangen-Nürnberg (FAU), Germany
• Institut National de Recherche en Informatique et Automatique (INRIA), France
• Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), France
• Max-Planck Gesellschaft (MPG), Germany
• Barcelona Supercomputer Centre (BSC), Spain
• Centre National de la Recherche Scientifique (CNRS) with Inst. Nat. Polytechnique Toulouse (INPT), France

applications will provide a high-visibility platform for high-performance computational energy science, cross-fertilized through close working connections to the EERA and EUROfusion consortia.

EoCoE is structured around a central Franco-German hub coordinating a pan-European network, gathering a total of 7 countries and 21 teams. Its partners are strongly engaged in both the HPC and energy fields. The primary goal of EoCoE is to create a new, long lasting and sustainable community around computational energy science. EoCoE resolves current bottlenecks in application codes; it develops cutting-edge mathematical and numerical methods, and tools to foster the usage of Exascale computing. Dedicated services for laboratories and industries are established to leverage this expertise and to develop an ecosystem around HPC for energy.

We are interested in collaborations in the area of HPC (e.g. programming models, exascale architectures, linear solvers, I/O) and also with people working in the energy domain and needing expertise for carrying out ambitious simulation. See our service page for more details: http://www.eocoe.eu/services

• Consiglio Nazionale delle Ricerche (CNR), with Univ. Rome Tor Vergata (UNITOV), Italy
• Università degli Studi di Trento (UNITN), Italy
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk (PSNC), Poland
• Université Libre de Bruxelles (ULB), Belgium
• University of Bath (UBAH), United Kingdom
• Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Spain
• IFP Energies Nouvelles (IFPEN), France
• DDN Storage

CONTACT
Edouard Audit
[email protected]
+33 1 69 08 96 35

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/01/2019 - 31/12/2021

ESiWACE2
Centre of Excellence in Simulation of Weather and Climate in Europe, Phase 2

ESiWACE2 will leverage the potential of the envisaged EuroHPC pre-exascale systems to push European climate and weather codes to world-leading spatial resolution for production-ready simulations, including the associated data management and data analytics workflows. For this goal, the portfolio of climate models supported by the project has been extended with respect to the prototype-like demonstrators of ESiWACE1. Besides, while the central focus of ESiWACE2 lies on achieving scientific performance goals with HPC systems that will become available within the next four years, research and development to prepare the community for the systems of the exascale era constitutes another project goal.

These developments will be complemented by the establishment of new technologies such as domain-specific languages and tools to minimise data output for ensemble runs. Co-design between model developers, HPC manufacturers and HPC centres is to be fostered, in particular through the design and employment of High-Performance Climate and Weather benchmarks, with the first version of these benchmarks feeding into ESiWACE2 through the FET-HPC research project ESCAPE-2. Additionally, training and dedicated services will be set up to enable the wider community to efficiently use upcoming pre-exascale and exascale supercomputers.

www.esiwace.eu
: @ESIWACE

COORDINATING ORGANISATION
Deutsches Klimarechenzentrum (DKRZ), Germany

OTHER PARTNERS
• CNRS - Centre National de la Recherche Scientifique, France
• National University of Ireland Galway (Irish Centre for High End Computing), Ireland
• Met Office, United Kingdom
• Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici, Italy
• Sveriges meteorologiska och hydrologiska institut, Sweden
• CERFACS - Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique, France
• ECMWF - European Centre for Medium-Range Weather Forecasts, United Kingdom
• Barcelona Supercomputing Center (BSC), Spain
• Max-Planck-Institut für Meteorologie, Germany
• The University of Reading, United Kingdom

• STFC - Science and Technology Facilities Council, United Kingdom
• Atos (Bull SAS), France
• Seagate Systems UK Limited, United Kingdom
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• The University of Manchester, United Kingdom
• Netherlands eScience Centre, Netherlands
• Federal Office of Meteorology and Climatology, Switzerland
• DataDirect Networks, France
• Mercator Océan, France

CONTACT
Joachim Biercamp
[email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/01/2019 - 31/12/2022

EXCELLERAT

Engineering applications, in academia and in industry, will be among the first to exploit Exascale. In fact, the industrial engineering field is the field with the highest Exascale potential. Thus the goal of EXCELLERAT is to enable the European engineering industry to advance towards Exascale technologies and to create a single entry point to services and knowledge for all stakeholders (industrial end users, ISVs, technology providers, HPC providers, academics, code developers, engineering experts) of HPC for engineering. In order to achieve this goal, EXCELLERAT brings together key players from industry, research and HPC to provide all necessary services. This is in line with the European HPC Strategy as implemented through the EuroHPC Joint Undertaking.

To fulfil its mission, EXCELLERAT focuses its developments on six carefully chosen reference applications (Nek5000, Alya, AVBP, TPLS, FEniCS, Coda), which were analysed for their potential to support the aim of achieving Exascale performance in HPC for Engineering. They are thus promising candidates to act as showcases for

[Figure: Core competences of the EXCELLERAT Centre of Excellence]

www.excellerat.eu
: @EXCELLERAT_CoE
: @excellerat

COORDINATING ORGANISATION
Universität Stuttgart (HLRS), Germany

OTHER PARTNERS
• The University of Edinburgh (EPCC), United Kingdom
• CINECA Consorzio Interuniversitario, Italy
• SICOS BW GmbH, Germany
• SSC-Services GmbH, Germany
• Fraunhofer Gesellschaft zur Förderung der Angewandten Forschung E.V., Germany
• TERATEC, France
• Rheinisch-Westfälische Technische Hochschule Aachen, Germany
• Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), France
• Barcelona Supercomputing Centre (BSC), Spain
• Kungliga Tekniska Hoegskolan (KTH), Sweden
• ARCTUR DOO, Slovenia
• DLR - Deutsches Zentrum für Luft- und Raumfahrt EV, Germany

the evolution of applications towards execution on Exascale Demonstrators, Pre-Exascale Systems and Exascale Machines. EXCELLERAT addresses the setup of a Centre as an entity, acting as a single hub, covering a wide range of issues, from «non-pure-technical» services such as access to knowledge or networking up to technical services, e.g. co-design, scalability enhancement or code porting to new (Exa) hardware.

As the consortium contains key players in HPC, HPDA, AI and experts for the reference applications, impact (e.g. code improvements and awareness raising) is guaranteed. The scientific excellence of the EXCELLERAT consortium enables evolution, optimisation, scaling and porting of applications towards disruptive technologies and increases Europe’s competitiveness in engineering. Within the frame of the project, EXCELLERAT will prove the applicability of the results not only for the six chosen reference applications, but even beyond. EXCELLERAT extends the recipients of its developments beyond the consortium and interacts via diverse mechanisms to integrate external stakeholders of its value network into its evolution.

CONTACT
Dr.-Ing. Bastian Koller
[email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021

HiDALGO
HPC and Big Data Technologies for Global Challenges

OBJECTIVE
Developing evidence and understanding concerning Global Challenges and their underlying parameters is rapidly becoming a vital challenge for modern societies. Various examples, such as health care, the transition to green technologies or the evolution of the global climate up to hazards and stress tests for the financial sector, demonstrate the complexity of the involved systems and underpin their interdisciplinarity as well as their globality. This becomes even more obvious if coupled systems are considered: problem statements and their corresponding parameters depend on each other, which results in interconnected simulations with a tremendous overall complexity. Although the process of bringing together the different communities has already started within the Centre of Excellence for Global Systems Science (CoeGSS), assisted decision making for global, multi-dimensional problems is more important than ever.

Global decisions with their dependencies cannot be based on incomplete problem assessments or gut feelings anymore, since impacts cannot be foreseen without an accurate problem representation and its systemic evolution. Therefore, HiDALGO bridges that shortcoming by enabling highly accurate simulations, data analytics

www.hidalgo-project.eu
: @EU_HiDALGO

COORDINATING ORGANISATION
Atos Spain, Spain

OTHER PARTNERS
• Universität Stuttgart – High Performance Computing Center Stuttgart, Germany
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk – Poznan Supercomputing and Networking Center, Poland
• ICCS (Institute of Communication and Computer Systems), Greece
• Brunel University London, United Kingdom
• Know Center GmbH, Austria
• Szechenyi Istvan University, Hungary
• Paris-Lodron-Universität Salzburg, Austria
• European Centre for Medium-Range Weather Forecasts, United Kingdom
• MoonStar Communications GmbH, Germany

and data visualisation, but also by providing technology as well as knowledge on how to integrate the various workflows and the corresponding data.

• DIALOGIK Gemeinnützige Gesellschaft für Kommunikations- und Kooperationsforschung MBH, Germany
• Magyar Kozut Nonprofit Zrt., Hungary
• ARH Informatikai Zrt., Hungary

CONTACT
Francisco Javier Nieto
[email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021

MaX
Materials design at the Exascale

MaX - Materials design at the eXascale is a Centre of Excellence with focus on driving the evolution and exascale transition of materials science codes and libraries, and creating an integrated ecosystem of codes, data, workflows and analysis tools for high-performance (HPC) and high-throughput computing (HTC). Particular emphasis is on co-design activities to ensure that future HPC architectures are well suited for the materials science applications and their users.

The focus of MaX is on first principles materials science applications, i.e. codes that allow predictive simulations of materials and their properties from the laws of quantum physics and chemistry, without resorting to empirical parameters. The exascale perspective is expected to boost the massive use of these codes in designing materials structures and functionalities for research and manufacturing.

MaX works with code developers and experts from HPC centres to support such transition. It focuses on selected complementary open-source codes: BigDFT, CP2k, FLEUR, Quantum ESPRESSO, SIESTA, YAMBO. In addition, it contributes to the development of the AiiDA materials informatics infrastructure and the Materials Cloud environment.

The main actions of the project include:
1. code and library restructuring: including modularizing and adapting for heterogeneous architectures, as well as adoption of new algorithms;
2. co-design: working on codes and providing feedback to architects and integrators; developing workflows and turn-key solutions for properties calculations and curated data sharing;
3. enabling convergence of HPC, HTC and

www.max-centre.eu
: @max_center2
: @max_centre

COORDINATING ORGANISATION
CNR Nano, Modena, Italy

OTHER PARTNERS
• CSCS ETH Zürich, Switzerland (Joost VandeVondele)
• SISSA Trieste, Italy (Stefano Baroni)
• CEA Grenoble, France (Thierry Deutsch)
• ICN2 Barcelona, Spain (Pablo Ordejón)
• UGent, Belgium (Stefaan Cottenier)
• FZJ Jülich, Germany (Stefan Blügel, Dirk Pleiter)
• E4 Computer Engineering, Italy (Fabrizio Magugliani)
• EPFL Lausanne, Switzerland (Nicola Marzari)
• Arm, UK (Conrad Hillairet)
• CINECA Bologna, Italy (Fabio Affinito)
• ICTP UNESCO Trieste, Italy (Ivan Girotto)
• Trust-IT Pisa, Italy (Silvana Muscella)
• Barcelona Supercomputing Centre (BSC), Spain (Stephan Mohr)

high-performance data analytics;
4. addressing the skills gap: widening access to codes, training, and transferring know-how to user communities;
5. serving end-users in industry and research: providing dedicated services, support, and high-level consulting.

Results and lessons learned on MaX flagship codes and projects are made available to the developers and user communities at large.

CONTACT
Elisa Molinari, Director
[email protected]
+39 059 2055 629

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021

NOMAD
Novel Materials Discovery

Predicting novel materials with specific desirable properties is a major aim of ab initio computational materials science (aiCMS) and an urgent requirement of basic and applied materials science, engineering and industry. Such materials can have immense impact on the environment and on society, e.g. on the energy, transport, IT and medical-device sectors and much more. Currently, however, precisely predicting complex materials is computationally infeasible.

NOMAD CoE will develop a new level of materials modelling, enabled by upcoming HPC exascale computing and extreme-scale data hardware.

In close contact with the R&D community, including industry, we will
• develop exascale algorithms to create accurate predictive models of real-world, industrially-relevant, complex materials;
• provide exascale software libraries for all code families (not just selected codes), enhancing today’s aiCMS to take advantage of tomorrow’s HPC computing platforms;
• develop energy-to-solution as a fundamental part of new models. This will be achieved by developing novel artificial-intelligence (AI) guided work-flow engines that optimise the modelling calculations and provide significantly more information per calculation performed;

www.nomad-coe.eu
: @NoMaDCoE

COORDINATING ORGANISATION
Max-Planck-Gesellschaft zur Förderung der Wissenschaften EV, Germany

OTHER PARTNERS
• Humboldt-Universität zu Berlin, Germany
• Danmarks Tekniske Universitet, Denmark
• Technische Universität Wien, Austria
• Aalto University Helsinki, Finland
• University of Latvia, Latvia
• Barcelona Supercomputing Centre (BSC), Spain
• CSC – IT Center for Science Ltd., Finland
• University of Cambridge, United Kingdom
• CEA - Commissariat à l’Energie Atomique et aux Energies Alternatives, France
• The University of Warwick, United Kingdom
• Université Catholique de Louvain, Belgium

• offer extreme-scale data services (data infrastructure, storage, retrieval and analytics/AI);
• test and demonstrate our results in two exciting use cases, solving urgent challenges for energy and the environment that cannot be computed properly with today’s hard- and software (water splitting and novel thermoelectric materials);
• train the next generation of students, also in countries where HPC studies are not yet well developed.

NOMAD CoE will push the limits of aiCMS to unprecedented capabilities, speed and accuracy, serving basic science, industry and thus society.

NOMAD CoE is working closely together with POP, and it is synergistically complementary to and closely coordinated with the EoCoE, ECAM, BioExcel and MaX CoEs.

CONTACT
[email protected]

CALL
INFRAEDI-05-2020

PROJECT TIMESPAN
01/10/2020 – 30/09/2023

PerMedCoE
HPC/Exascale Centre of Excellence in Personalised Medicine

Personalised Medicine (PerMed) opens unexplored frontiers to treat diseases at the individual level, combining clinical and omics information. However, the performance of current simulation software is still insufficient to tackle medical problems such as tumour evolution or patient-specific treatments. The challenge is to develop a sustainable roadmap to scale up the essential software for cell-level simulation to the new European HPC/Exascale systems. Simulations of cellular mechanistic models are essential for the translation of omics data into medically relevant actions, and these should be accessible to end-users in an environment appropriate for the big confidential data specific to PerMed.

The goal of the HPC/Exascale Centre of Excellence in Personalised Medicine (PerMedCoE) is to provide an efficient and sustainable entry point to the HPC/Exascale-upgraded methodology to translate omics analyses into actionable models of cellular functions of medical relevance. It will accomplish this by:
1. optimising four core applications for cell-level simulations on the new pre-exascale platforms;
2. integrating PerMed into the new European HPC/Exascale ecosystem, by offering access to HPC/Exascale-adapted and optimised software;
3. running a comprehensive set of PerMed use cases;

www.permedcoe.eu
: @PerMedCoE
: @permedcoe
: youtube.com/channel/UClZV7luI2oU-TV-jKtEPzKw

COORDINATING ORGANISATION
Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
• CSC-IT Centre for Science Ltd, Finland
• Université du Luxembourg, Luxembourg
• Institut Curie, France
• Universitätsklinikum Heidelberg, Germany
• IBM Research GmbH, Switzerland
• Atos Spain, Spain
• Kungliga Tekniska Hoegskolan (KTH), Sweden
• European Molecular Biology Laboratory, Germany
• Fundacio Centre de Regulacio Genomica, Spain
• Max Delbrück Centre for Molecular Medicine, Germany
• University of Ljubljana, Slovenia
• ELEM BIOTECH SL, Spain

4. building the basis for the sustainability of the PerMedCoE by coordinating PerMed and HPC communities, and reaching out to industrial and academic end-users, with use cases, training, expertise, and best practices.

The PerMedCoE cell-level simulations will fill the gap between the molecular- and organ-level simulations from the CompBioMed and BioExcel CoEs, with which this proposal is aligned at different levels. It will connect methods’ developers with HPC, HTC and HPDA experts (at the POP and HiDALGO CoEs). Finally, the PerMedCoE will work with biomedical consortia (i.e. ELIXIR, the LifeTime initiative) and pre-exascale infrastructures (BSC and CSC), including a substantial co-design effort.

CONTACT
Alfonso Valencia
[email protected]

CALL
INFRAEDI-05-2020

PROJECT TIMESPAN
01/10/2020 – 30/09/2023

POP2
Performance Optimisation and Productivity 2

The growing complexity of parallel computers is leading to a situation where code owners and users are not aware of the detailed issues affecting the performance of their applications. The result is often an inefficient use of the infrastructures. Even where the need for further performance and efficiency is perceived, code developers may not have enough insight into its detailed causes to properly address the problem. This may lead to blind attempts to restructure codes in ways that might not be the most productive.

POP2 extends and expands the activities successfully carried out by the POP Centre of Excellence since October 2015. The effort in the POP and POP2 projects has so far resulted in more than 200 assessment services provided to customers in academia, research and industry, helping them to better understand the behaviour and improve the performance of their applications. The external view, advice and help provided by POP have been extremely useful for many of these customers, allowing them to improve the performance of their codes significantly: by 20% in some cases, and by factors of 2x or even 10x in others. The POP experience has also been extremely valuable in identifying issues in methodologies and tools that, if improved, will reduce the assessment cycle time.
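POP assessments rest on a small hierarchy of efficiency metrics. As a simplified illustration of the publicly documented POP methodology (a sketch, not project tooling), parallel efficiency can be decomposed into load balance and communication efficiency from per-process "useful computation" times and the total runtime:

```python
def pop_efficiencies(useful_compute, runtime):
    """Simplified POP-style metrics from per-process useful-compute times.

    useful_compute: seconds each process spent in useful computation
    runtime: total wall-clock time of the parallel region (seconds)
    """
    avg = sum(useful_compute) / len(useful_compute)
    peak = max(useful_compute)
    load_balance = avg / peak        # 1.0 means perfectly balanced ranks
    comm_eff = peak / runtime        # loss to communication and waiting
    parallel_eff = load_balance * comm_eff   # equals avg / runtime
    return {"load_balance": load_balance,
            "communication_efficiency": comm_eff,
            "parallel_efficiency": parallel_eff}

# Four ranks, 10 s runtime: imbalance and communication each cost ~10%
metrics = pop_efficiencies([8.0, 9.0, 7.5, 8.5], runtime=10.0)
# metrics["parallel_efficiency"] ≈ 0.825
```

A value well below 1.0 in one factor tells the analyst where to look first: poor load balance points at work distribution, poor communication efficiency at message traffic or synchronisation.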

www.pop-coe.eu
: @POP_HPC

COORDINATING ORGANISATION
Barcelona Supercomputing Centre (BSC), Spain

OTHER PARTNERS
• Universität Stuttgart HLRS, Germany
• Forschungszentrum Jülich GmbH, Germany
• Numerical Algorithms Group Ltd, UK
• Rheinisch-Westfälische Technische Hochschule Aachen (RWTH-AACHEN), Germany
• Teratec, France
• Université de Versailles Saint Quentin, France
• Vysoka Skola Banska - Technicka Univerzita Ostrava, Czech Republic

POP2 continues to focus on a transversal (horizontal) service to offer highly specialized expertise and know-how to all other CoEs, facilitating cross-fertilisation between them and other sectors. This could lead to wider access to codes, including specific and targeted measures for industry and SMEs. Collaborations with other CoEs and projects will help to reach potential users of the services and promote support to the governance of HPC infrastructures. Centres could adopt the POP methodology, helping it embed in the wider HPC community and leveraging the use of existing infrastructures.

CONTACT
Jesus Labarta
[email protected]
Judit Gimenez
[email protected]

CALL
INFRAEDI-02-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021

TREX
Targeting Real chemical accuracy at the EXascale

TREX federates European scientists, HPC stakeholders, and SMEs to develop and apply quantum mechanical simulations in the framework of stochastic quantum Monte Carlo methods. This methodology encompasses various techniques at the high end in the accuracy ladder of electronic structure approaches and is uniquely positioned to fully exploit the massive parallelism of the upcoming exascale architectures. The marriage of these advanced methods with exascale will enable simulations at the nanoscale of unprecedented accuracy, targeting a fully consistent description of the quantum mechanical electron problem.

TREX’s main focus will be the development of a user-friendly and open-source software suite in the domain of stochastic quantum chemistry simulations, which integrates TREX community codes within an interoperable, high-performance platform. In parallel, TREX will work on show-cases to leverage this methodology for commercial applications as well as develop and implement software components and services that make it easier for commercial operators and user communities to use HPC resources for these applications.
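Quantum Monte Carlo's fit for massive parallelism comes from its structure: many independent walkers sample electron configurations, and their local-energy values are simply averaged. A self-contained toy example (an illustration of the general method, not TREX code) is variational Monte Carlo for the 1D harmonic oscillator, H = -1/2 d²/dx² + 1/2 x², with trial wavefunction psi_alpha(x) = exp(-alpha x²/2); the exact ground state is alpha = 1 with energy 0.5:

```python
import math
import random

def local_energy(x, alpha):
    # E_L(x) = (H psi)/psi for psi = exp(-alpha x^2 / 2):
    # E_L = alpha/2 + (1 - alpha^2) x^2 / 2
    return alpha / 2.0 + 0.5 * (1.0 - alpha**2) * x * x

def vmc_energy(alpha, n_steps=200_000, step=1.0, seed=42):
    """Metropolis sampling of |psi|^2, averaging the local energy."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # accept with probability |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-alpha * (x_new * x_new - x * x)):
            x = x_new
        total += local_energy(x, alpha)
    return total / n_steps

# At alpha = 1 the trial function is exact: E_L is constant, so the
# estimate is 0.5 with zero statistical variance.
# A non-optimal alpha gives an energy slightly above 0.5 (variational bound).
```

In production QMC the same average is taken over billions of configurations distributed across many walkers and nodes, which is why the method maps so naturally onto exascale machines.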

trex-coe.eu
: @trex_eu
: @trex-eu

COORDINATING ORGANISATION
University of Twente, Netherlands

OTHER PARTNERS
• Université de Versailles Saint Quentin-en-Yvelines, France
• CNRS - Centre National de la Recherche Scientifique, France
• Megware Computer Vertrieb und Service GmbH, Germany
• Scuola Internazionale Superiore Di Studi Avanzati Di Trieste, Italy
• Slovenska Technicka Univerzita V Bratislave, Slovakia
• CINECA Consorzio Interuniversitario, Italy
• Universität Wien, Austria
• Forschungszentrum Jülich GmbH, Germany
• Politechnika Lodzka, Poland
• TRUST-IT SRL, Italy
• Max-Planck-Gesellschaft zur Förderung der Wissenschaften EV, Germany

CONTACT
Claudia Filippi, Project Coordinator
[email protected]

CALL
INFRAEDI-05-2020

PROJECT TIMESPAN
01/10/2020 – 30/09/2023

European Processor

EPI
European Processor Initiative

Europe has an ambitious plan to become a main player in supercomputing. One of the core components for achieving that goal is a processor. The European Processor Initiative (EPI) is part of a broader strategy to develop an independent European HPC industry based on domestic and innovative technologies, as presented in the EuroHPC Joint Undertaking proposed by the European Commission. The general objective of EPI is to design a roadmap and develop the key Intellectual Property for future European low-power processors addressing extreme-scale computing (exascale), high-performance big data and emerging verticals (e.g. automotive computing) and other fields that require highly efficient computing infrastructure. More precisely, EPI aims at establishing the key technologies to reach three fundamental goals:
1. Developing low-power processor technology to be included in advanced experimental pilot platforms towards exascale systems for Europe in 2021-2022;
2. Ensuring that a significant part of that technology and intellectual property is European;
3. Ensuring that the application areas of the technology are not limited only to HPC, but cover other areas, such as automotive and data centres, thus ensuring the economic viability of the initiative.

EPI gathers 27 partners from 10 European countries, with a wide range of expertise and background: HPC, supercomputing centres, automotive computing, including researchers and key industrial players. The fact that the envisioned European processor is planned to be based on already existing tools, either owned by the partners or

being offered as open-source with a large community of users, provides three key exploitation advantages: (1) the time-to-market will be reduced, as most of these tools are already used in industry and well known; (2) it will enable EPI partners to incorporate the results in their commercial portfolio or in their scientific roadmap; (3) it fundamentally reduces the technological risk associated with the advanced technologies developed and implemented in EPI.

EPI covers a complete range of expertise, skills and competencies needed to design and execute a sustainable roadmap for research and innovation in processor and computing technology, fostering future exascale HPC and emerging applications, including Big Data and automotive computing for autonomous vehicles. Development of a new processor to be at the core of future computing systems will be divided into several streams:
• Common Platform and global architecture [stream 1]
• HPC general purpose processor [stream 2]
• Accelerator [stream 3]
• Automotive platform [stream 4]

The results from the two streams related to the general-purpose processor and accelerator chips will generate a heterogeneous, energy-efficient CPU for use in both standard and non-traditional, compute-intensive segments, e.g. automotive, where the state of the art in autonomous driving requires significant computational resources.

The EPI timeline

EPI strives to maximize the synergies between the two streams and will work with existing EU initiatives on technology, infrastructure and applications, to position Europe as a world leader in HPC and in emerging markets for the exascale era, such as automotive computing for autonomous driving.

www.european-processor-initiative.eu
: @EUProcessor
: @european-processor-initiative
: www.youtube.com/channel/UCGvQcTosJdWhd013SHnIbpA/featured

COORDINATING ORGANISATION
Bull SAS (Atos group), France

OTHER PARTNERS
• BSC, Barcelona Supercomputing Center – Centro Nacional de Supercomputación, Spain
• IFAG, Infineon Technologies AG, Germany
• SemiDynamics, SemiDynamics Technology Services SL, Spain
• CEA, Commissariat à l'Energie Atomique et aux énergies alternatives, France
• CHALMERS, Chalmers tekniska högskola, Sweden
• ETH Zürich, Eidgenössische Technische Hochschule Zürich, Switzerland
• FORTH, Foundation for Research and Technology Hellas, Greece
• GENCI, Grand Equipement National de Calcul Intensif, France
• IST, Instituto Superior Técnico, Portugal
• JÜLICH, Forschungszentrum Jülich GmbH, Germany
• UNIBO, Alma Mater Studiorum - Università di Bologna, Italy
• UNIZG-FER, Sveučilište u Zagrebu, Fakultet elektrotehnike i računarstva, Croatia
• Fraunhofer, Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V., Germany
• ST-I, STMicroelectronics SRL, Italy
• E4, E4 Computer Engineering SpA, Italy
• UNIPI, Università di Pisa, Italy
• SURFsara BV, Netherlands
• CINECA, Cineca Consorzio Interuniversitario, Italy
• BMW Group, Bayerische Motoren Werke Aktiengesellschaft, Germany
• Elektrobit Automotive GmbH, Germany
• KIT, Karlsruher Institut für Technologie, Germany
• Menta SAS, France
• Prove & Run, France
• SIPEARL SAS, France
• Kalray SA, France
• Extoll GmbH, Germany

CONTACT
[email protected]

CALL
SGA-LPMT-01-2018

PROJECT TIMESPAN
01/12/2018 - 30/11/2021

Mont-Blanc 2020
European scalable, modular and power-efficient HPC processor

Following on from the three successive Mont-Blanc projects since 2011, the three core partners Arm, Barcelona Supercomputing Center and Atos have united again to trigger the development of the next generation of industrial processor for Big Data and High Performance Computing. The Mont-Blanc 2020 consortium also includes CEA, Forschungszentrum Jülich, Kalray, and SemiDynamics.

The Mont-Blanc 2020 project intends to pave the way to the future low-power European processor for Exascale. To improve the economic sustainability of the processor generations that will result from the Mont-Blanc 2020 effort, the project includes the analysis of the requirements of other markets. The project's strategy, based on modular packaging, would make it possible to create a family of SoCs targeting different markets, such as "embedded HPC" for autonomous driving. The project's actual objectives are to:

• define a low-power System-on-Chip architecture targeting Exascale;
• implement new critical building blocks (IPs) and provide a blueprint for its first-generation implementation;
• deliver initial proof-of-concept demonstration of its critical components on real life applications;

The Mont-Blanc 2020 physical architecture

montblanc-project.eu
: @MontBlanc_Eu
: @mont-blanc-project

COORDINATING ORGANISATION
Atos (Bull SAS), France

OTHER PARTNERS
• Arm, United Kingdom
• Barcelona Supercomputing Centre (BSC), Spain
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• Jülich Forschungszentrum, Germany
• Kalray, France
• SemiDynamics, Spain

A Key Output: the MB2020 demonstrator

• explore the reuse of the building blocks to serve other markets than HPC, with methodologies enabling a better time-predictability, especially for mixed-critical applications where guaranteed execution & response times are crucial.

Mont-Blanc 2020 will provide new IPs, such as a new low-power mesh interconnect based on the Coherent Hub Interface (CHI) architecture. Mont-Blanc 2020 will demonstrate the performance with a prototype implementation in RTL for some of its key components, and demonstration on an emulation platform.

The Mont-Blanc 2020 project is at the heart of the European exascale supercomputer effort, since most of the IP developed within the project will be reused and productized in the European Processor Initiative (EPI).

CONTACT
Etienne Walter
[email protected]

CALL
ICT-05-2017

PROJECT TIMESPAN
01/12/2017 - 31/03/2021

Applications in the HPC Continuum

AQMO
Air Quality and Mobility

Many metropolises are struggling to reduce air pollution. At the core of this issue are two important requirements: getting better temporal and spatial measurement coverage, and having the ability to use simulations to answer "what-if" questions. The AQMO project provides an end-to-end urban platform based on an edge-to-HPC and cloud hybrid computing model fuelled by Open Data.

The AQMO platform is deployed in the Rennes metropolitan area (Brittany, France), which consists of 43 cities and is the 12th largest metropolis in France in terms of population.

A transversal approach was chosen for the design, spanning from sensors to supercomputers in order to deliver day-to-day data as well as capabilities to help handle catastrophic events, denoted as "urgent computing". The AQMO project explores the use of High-Performance Computing (HPC) both centrally in supercomputing centres and distributed on enhanced sensors (edge computing) or cloud resources. HPC capabilities are necessary to perform accurate numerical simulations of pollution dispersion, including sensor data assimilation.

The dissemination of the platform and its use by other cities will be eased by its modular design: systems are transferable and replicable, and additional data and capabilities can be added thanks to open Application Program Interfaces.

To achieve rigorous air quality measurements over a wide area in a cost-effective manner, the local transportation bus network is enhanced with mobile sensors. For measurements of catastrophic events, the use of drones is explored, in connection with the UAV-Retina project

aqmo.irisa.fr
: @aqmoproject

COORDINATING ORGANISATION
Université Rennes 1, France

OTHER PARTNERS
• Air Breizh
• AmpliSIM
• GENCI
• IDRIS (CNRS)
• Keolis Rennes
• Neovia Innovation
• Rennes Métropole
• Ryax Technologies
• UCit


supported by EIT Digital. AQMO is implementing HPC as a service as a means to provide both routine and on-demand air quality simulations. The resulting data will be made available to citizens thanks to the Open-Data Metropolitan Service that is being developed by Rennes Métropole. Metadata will be published on the French national Open Data Portal and the European Data Portal.

The AQMO platform has been under test for over a year now in a few of the Rennes metropolis' buses and has remained stable during the full experiment. The design choices have proven to fit the project needs, with new sensors needed to provide measurement context. For instance, the team wants to be able to explain the detection of peaks of particles; the partners have implemented a smart camera that will be added to the platform to help with these detections.

The year 2020 is dedicated to deploying the platform in 20 buses to start monitoring air quality (based on AlphaSense OPC-N3 sensors) across the whole Rennes metropolis.
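The kind of particle-peak detection the team wants to explain can be illustrated with a toy screen over a sensor stream. The sketch below is purely illustrative and not AQMO code; the function name, window size and threshold factor are assumptions, and a real platform would use far richer context (traffic, weather, camera data):

```python
from statistics import median

def detect_peaks(values, window=5, factor=3.0, floor=1.0):
    """Flag indices where a reading exceeds `factor` times the median
    of the preceding `window` readings. A deliberately simple,
    illustrative screen for particle-concentration spikes."""
    peaks = []
    for i in range(window, len(values)):
        baseline = max(median(values[i - window:i]), floor)
        if values[i] > factor * baseline:
            peaks.append(i)
    return peaks

# A short synthetic PM10 series (µg/m³) with one obvious spike:
readings = [12, 14, 13, 15, 12, 13, 90, 14, 12]
print(detect_peaks(readings))  # -> [6]
```

A rolling median is preferred here over a rolling mean because a single spike would otherwise inflate its own baseline.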

CONTACT
François BODIN

CALL
INEA/CEF/ICT/A2017

PROJECT TIMESPAN
01/09/2018 – 31/12/2020

CYBELE
Fostering precision agriculture and livestock farming through secure access to large-scale HPC-enabled virtual industrial experimentation environment empowering scalable big data analytics

CYBELE generates innovation and creates value in the domain of agri-food, and its verticals in the sub-domains of Precision Agriculture (PA) and Precision Livestock Farming (PLF) specifically. This will be demonstrated by real-life industrial cases, empowering capacity building within the industrial and research community. CYBELE aims at demonstrating how the convergence of HPC, Big Data, Cloud Computing and the Internet of Things can revolutionise farming, reduce scarcity and increase food supply, bringing social, economic, and environmental benefits.

CYBELE intends to ensure that stakeholders have integrated, unmediated access to a vast amount of large-scale datasets of diverse types from a variety of sources. By providing secure and unmediated access to large-scale HPC infrastructures supporting data discovery, processing, combination and visualisation services, stakeholders shall be enabled to generate more value and deeper insights in operations.

CYBELE develops large scale HPC-enabled test beds and delivers a distributed big data management architecture and a data management strategy providing:

www.cybele-project.eu
: @CYBELE_H2020
: @cybeleh2020
: @Cybele-project-766977186/

COORDINATING ORGANISATION
Waterford Institute of Technology, Ireland

OTHER PARTNERS
• Barcelona Supercomputing Centre (BSC), Spain
• Bull SAS (Atos group), France
• CINECA Consorzio Interuniversitario, Italy
• Instytut Chemii Bioorganicznej Polskiej Akademii Nauk, Poland
• RYAX Technologies, France
• SUITE5 Data Intelligence Solutions Ltd, Cyprus
• EXODUS Anonymos Etaireia Pliroforikis, Greece
• LEANXCALE SL, Spain
• ENGINEERING - Ingegneria Informatica SPA, Italy
• Centre for Research and Technology Hellas - Information Technologies Institute (ITI), Greece
• Tampereen Korkeakoulusaatio SR, Finland
• UBITECH, Greece
• University of Piraeus Research Center, Greece
• Universität Stuttgart, Germany
• INTRASOFT International SA, Luxembourg
• ICCS (Institute of Communication and Computer Systems), Greece

1. integrated, unmediated access to large scale datasets of diverse types from a multitude of distributed data sources,
2. a data and service driven virtual HPC-enabled environment supporting the execution of multi-parametric agri-food related impact model experiments, optimising the features of processing large scale datasets, and
3. a bouquet of domain-specific and generic services on top of the virtual research environment facilitating the elicitation of knowledge from big agri-food related data, addressing the issue of increasing responsiveness and empowering automation-assisted decision making, empowering the stakeholders to use resources in a more environmentally responsible manner, improve sourcing decisions, and implement circular-economy solutions in the food chain.

• Wageningen University, Netherlands
• Stichting Wageningen Research, Netherlands
• BIOSENSE Institute, Serbia
• Donau Soja Gemeinnutzige GmbH, Austria
• Agroknow IKE, Greece
• GMV Aerospace and Defence SA, Spain
• Federacion de Cooperativas Agroalimentares de la Comunidad Valenciana, Spain
• University of Strathclyde, United Kingdom
• ILVO - Instituut voor Landbouw- en Visserijonderzoek, Belgium
• VION Food Nederland BV, Netherlands
• Olokliromena Pliroforiaka Sistimata AE, Greece
• Kobenhavns Universitet, Denmark
• EVENFLOW, Belgium
• Open Geospatial Consortium (Europe) Ltd, United Kingdom

CONTACT
[email protected]
[email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN
01/01/2019 - 31/12/2021

DeepHealth
Deep-Learning and HPC to Boost Biomedical Applications for Health

The main goal of the DeepHealth project is to put HPC computing power at the service of biomedical applications, and to apply Deep Learning (DL) and Computer Vision (CV) techniques to large and complex biomedical datasets to support new and more efficient ways of diagnosis, monitoring and treatment of diseases.

The DeepHealth toolkit: a key open-source asset for Health AI-based solutions

At the centre of the proposed innovations is the DeepHealth toolkit, a free open-source software that will provide a unified framework to exploit heterogeneous HPC and big data architectures assembled with DL and CV capabilities to optimise the training of predictive models. The toolkit will be ready to be integrated in current and new biomedical software platforms or applications.

The toolkit is composed of two core libraries, the European Distributed Deep Learning Library (EDDLL) and the European Computer Vision Library (ECVL), plus a back-end offering a RESTful API to other applications, and a dedicated front-end that interacts with the back-end to facilitate the use of the libraries by computer and data scientists without the need of writing code. DeepHealth will also develop HPC infrastructure support for an efficient execution of the libraries, with a focus on usability and portability.

The DeepHealth concept – application scenarios

The DeepHealth concept focuses on scenarios where image processing is needed for diagnosis. In the training environment, IT experts work with image datasets for training predictive models. In the production

deephealth-project.eu
: @DeepHealthEU
: @DeepHealthEU
: @deephealtheu

COORDINATING ORGANISATION
Everis Spain SL, Spain

OTHER PARTNERS
• Azienda Ospedaliera Citta della Salute e della Scienza di Torino, Italy
• Barcelona Supercomputing Centre (BSC), Spain
• Centre Hospitalier Universitaire Vaudois, Switzerland
• Centro di Ricerca, Sviluppo e Studi Superiori in Sardegna SRL, Italy
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• EPFL - Ecole Polytechnique Fédérale de Lausanne, Switzerland
• Fundacion para el Fomento de la Investigacion Sanitaria y Biomedica de la Comunitat Valenciana, Spain
• Karolinska Institutet, Sweden
• Otto-von-Guericke-Universität Magdeburg, Germany
• Philips Medical Systems Nederland BV, Netherlands
• Pro Design Electronic GmbH, Germany

environment, the medical personnel ingests an image coming from a scan into a platform or biomedical application that uses predictive models to get clues to support them during diagnosis. The DeepHealth toolkit will allow IT staff to train models and run the training algorithms over hybrid HPC + big data architectures without profound knowledge of DL, CV, HPC or big data, increasing their productivity and reducing the time required to do it.

14 pilots and seven platforms to validate the DeepHealth proposed innovations

The DeepHealth innovations will be validated in 14 pilot test-beds through the use of seven different biomedical and AI software platforms that integrate and exploit the libraries: everis Lumen, PHILIPS Open Innovation Platform, THALES PIAF, CEA's ExpressIF™, CRS4's Digital Pathology, WINGS MigraineNet and UNITO's OpenDeepHealth.

The use cases cover three main areas: (1) neurological diseases, (2) tumour detection and early cancer prediction, and (3) digital pathology and automated image annotation. The pilots will evaluate the performance of the proposed solutions in terms of the time needed for pre-processing images, the time needed to train models and the time to put models in production. In some cases, it is expected to reduce these times from days or weeks to just hours. This is one of the major expected impacts of the DeepHealth project.

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825111.
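The back-end's RESTful API is only described in outline here, so the following is a purely hypothetical sketch of how an application might ask such a back-end to launch a training run. The base URL, endpoint path and JSON field names are invented for illustration and are not the actual DeepHealth API:

```python
import json
from urllib import request

# Hypothetical endpoint -- the real DeepHealth back-end API may differ.
API_BASE = "https://deephealth.example.org/api"

def build_training_request(dataset_id, model, epochs=10, use_gpu=True):
    """Assemble (but do not send) a POST request that would ask a
    REST back-end to launch a training run over the DL/CV libraries."""
    body = json.dumps({
        "dataset": dataset_id,
        "model": model,
        "epochs": epochs,
        "hardware": "gpu" if use_gpu else "cpu",
    }).encode("utf-8")
    return request.Request(
        f"{API_BASE}/trainings",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )

req = build_training_request("mri-brain-001", "resnet18", epochs=5)
print(req.get_method(), req.full_url)
```

The point of the pattern is that the client never touches the HPC scheduler directly: it posts a declarative job description and the back-end maps it onto whatever hybrid HPC + big data resources are available.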

• SIVECO Romania SA, Romania
• Spitalul Clinic Prof Dr Theodor Burghele, Romania
• STELAR Security Technology Law Research UG, Germany
• Thales SIX GTS France SAS, France
• TREE Technology SA, Spain
• Università degli Studi di Modena e Reggio Emilia, Italy
• Università degli Studi di Torino, Italy
• Universitat Politècnica de València, Spain
• WINGS ICT Solutions Information & Communication Technologies IKE, Greece

CONTACT
PROJECT MANAGER: Monica Caballero
[email protected]
TECHNICAL MANAGER: Jon Ander Gómez
[email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN
01/01/2019 - 31/12/2021

EVOLVE
HPC and Cloud-enhanced Testbed for Extracting Value from Diverse Data at Large Scale

EVOLVE is a pan-European Innovation Action with 19 key partners from 10 European countries, introducing important elements of High-Performance Computing (HPC) and Cloud in Big Data platforms, taking advantage of recent technological advancements to enable cost-effective applications in 7 different pilots to keep up with the unprecedented data growth we are experiencing.

EVOLVE aims to build a large-scale testbed by integrating technology from:
• The HPC world: an advanced computing platform with HPC features and systems software.
• The Big Data world: a versatile big-data processing stack for end-to-end workflows.
• The Cloud world: ease of deployment, access, and use in a shared manner, while addressing data protection.

EVOLVE aims to take concrete and decisive steps in bringing together the Big Data, HPC, and Cloud worlds, and to increase the ability to extract value from massive and demanding datasets. EVOLVE aims to bring the following benefits for processing large and demanding datasets:
• Performance: reduced turn-around time for domain experts, industry (large and SMEs), and end-users.
• Experts: increased productivity when designing new products and services, by processing large datasets.
• Businesses: reduced capital and operational costs for acquiring and maintaining computing infrastructure.
• Society: accelerated innovation via faster design and deployment of innovative services that unleash creativity.

At its mid-term, EVOLVE has successfully delivered a heterogeneous platform with CPU, GPU, FPGA and a deep storage

www.evolve-h2020.eu
: @Evolve_H2020
: @evolve-h2020

COORDINATING ORGANISATION
Datadirect Networks France, France

OTHER PARTNERS
• AVL LIST GmbH, Austria
• BMW - Bayerische Motoren Werke AG, Germany
• Atos (Bull SAS), France
• Cybeletech, France
• Globaz S.A., Portugal
• IBM Ireland Ltd, Ireland
• Idryma Technologias Kai Erevnas, Greece
• ICCS (Institute of Communication and Computer Systems), Greece
• KOOLA d.o.o., Bosnia and Herzegovina
• Kompetenzzentrum - Das Virtuelle Fahrzeug, Forschungsgesellschaft mbH, Austria
• MEMEX SRL, Italy
• MEMOSCALE AS, Norway
• NEUROCOM Luxembourg SA, Luxembourg

architecture from local NVMe to disaggregated burst-buffer and a capacity-oriented parallel file system. The platform is online with more than 50 active users. The system is accessible with notebooks and runs complex containerized workflows using big data components such as Kafka or Spark, which are accelerated with HPC-grade improvements. Activity from CPU to storage is monitored at CPU or job granularity thanks to an extended version of Prometheus. Currently 5 out of the 7 pilot applications are ported to the platform, and run on a daily basis with a speed-up of over 100x in some cases.

As a first step toward broader market penetration, on top of its initial set of 7 pilots, EVOLVE has attracted additional Proof-of-Concept (PoC) applications. These PoCs take advantage of EVOLVE technology and allow EVOLVE to enlarge and optimize its portability aspects. Some of these PoCs are already running on the platform.

In the coming months EVOLVE plans to fulfil its vision of a user-centric converged platform: a new grade of GPUs, improved containerization and deeper workflow implementation.
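Prometheus, mentioned above as the monitoring layer, exposes a standard HTTP API for instant queries, which is how per-job metrics can be pulled programmatically. The sketch below uses the documented `/api/v1/query` endpoint and response format; the metric name, label and server address are illustrative assumptions, not EVOLVE's actual instrumentation:

```python
import json
from urllib.parse import urlencode

def build_query_url(base, metric, job):
    """Build a Prometheus instant-query URL selecting a metric by job label."""
    query = f'{metric}{{job="{job}"}}'
    return f"{base}/api/v1/query?" + urlencode({"query": query})

def extract_values(response_json):
    """Pull (labels, value) pairs out of a Prometheus instant-query reply."""
    result = response_json["data"]["result"]
    return [(r["metric"], float(r["value"][1])) for r in result]

# Hypothetical server and metric names, for illustration only:
url = build_query_url("http://prom.example:9090",
                      "node_disk_written_bytes_total", "pilot-3")

# A canned reply in the documented Prometheus response format:
reply = json.loads('{"status":"success","data":{"resultType":"vector",'
                   '"result":[{"metric":{"job":"pilot-3"},"value":[1.6e9,"42.0"]}]}}')
print(extract_values(reply))  # -> [({'job': 'pilot-3'}, 42.0)]
```

In a real deployment the URL would be fetched with an HTTP client and the JSON body fed to `extract_values`; here the reply is canned so the sketch runs standalone.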

• ONAPP Ltd, Gibraltar
• Space Hellas SA, Greece
• Thales Alenia Space France SAS, France
• TIEMME SPA, Italy
• WEBLYZARD Technology GmbH, Austria

CONTACT
Catarina Pereira
[email protected]
Jean-Thomas Acquaviva
[email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN
01/12/2018 - 30/11/2021

LEXIS
Large-scale EXecution for Industry & Society

The ever-increasing quantity of data generated by modern industrial and business processes poses an enormous challenge for organisations seeking to glean knowledge and understanding from the data. Combinations of HPC, Cloud and Big Data technologies are key to meeting the increasingly diverse needs of large and small organisations alike. However, access to powerful computing platforms, which has been difficult for SMEs due to both technical and financial considerations, may now be possible.

The LEXIS (Large-scale EXecution for Industry & Society) project is building an advanced engineering platform at the confluence of HPC, Cloud and Big Data, which leverages large-scale geographically-distributed resources from the existing HPC infrastructure, employs Big Data analytics solutions and augments them with Cloud services. Driven by the requirements of several pilot test cases, the LEXIS platform relies on best-in-class data management solutions (EUDAT) and advanced, distributed orchestration solutions (TOSCA), augmenting them with new, efficient hardware and platform capabilities (e.g. in the form of Data Nodes and federation, usage monitoring and accounting/billing support). Thus, LEXIS realises an innovative solution aimed at stimulating the interest of the European industry and at creating an ecosystem of organisations that benefit from the LEXIS platform and its well-integrated HPC, HPDA and Data Management solutions.

The consortium is thus developing a demonstrator with a significant Open Source dimension, including validation, test and documentation. It will be validated in large-scale pilots in industrial and scientific sectors (Aeronautics, Earthquake and Tsunami, Weather and Climate), where

www.lexis-project.eu
: @LexisProject
: @LEXIS2020
: @lexis-project
: bit.ly/youtube_LEXIS

COORDINATING ORGANISATION
IT4Innovations, VSB – Technical University of Ostrava, Czechia

OTHER PARTNERS
• Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, Germany
• Atos BDS (Bull), France
• Bayerische Akademie der Wissenschaften / Leibniz Rechenzentrum der BAdW, Germany
• Bayncore Labs Ltd, Ireland
• CEA, Commissariat à l'Energie Atomique et aux énergies alternatives, France
• CYCLOPS Labs GmbH, Switzerland
• Centro Internazionale in Monitoraggio Ambientale - Fondazione CIMA, Italy
• ECMWF, European Centre for Medium-Range Weather Forecasts, United Kingdom
• GE AVIO SRL, Italy
• Helmholtz Zentrum Potsdam Deutsches GeoForschungsZentrum GFZ, Germany

significant improvements in KPIs including job execution time and solution accuracy are anticipated. LEXIS will organise a specific call that will stimulate adoption of the project framework, increasing stakeholders' engagement with the targeted pilots. LEXIS will promote this solution to the HPC, Cloud and Big Data sectors, maximising impact through targeted and qualified communication. LEXIS brings together a consortium with the skills and experience to deliver a complex multi-faceted project, spanning a range of complex technologies across seven European countries, including large industry, flagship HPC centres, industrial and scientific compute pilot users, technology providers and SMEs.

• Information Technology for Humanitarian Assistance, Cooperation and Action, Italy
• LINKS Foundation, Italy
• NUMTECH SARL, France
• OUTPOST24, France
• Teseo Eiffage Energie Systèmes, Italy

CONTACT
Jan Martinovic
[email protected]

CALL
ICT-11-2018-2019

PROJECT TIMESPAN
01/01/2019 - 31/12/2021

PHIDIAS
Prototype of HPC/Data Infrastructure for On-demand Services

PHIDIAS intends to develop consolidated and shared HPC and Data services by building on existing and emerging infrastructure, in order to create a coordinated system of systems and a federation of "infrastructure to infrastructure" and "user to infrastructure" services. Coordinated actions of this kind respond to the need to harmonise existing resources, remove existing silos, and collectively leverage established e-infrastructure to support excellence and global impact of European science and research through AI and high-performance cloud computing and data management facilities. The project, with its development of HPC and Data services and use cases from the scientific communities in the Earth, Ocean, and Atmospheric sciences, will demonstrate how an HPC workflow-based suite of components can address and provide solutions to support data-driven science throughout the research lifecycle, from data acquisition, to research data management, to Open Data sharing.

By engaging with the EOSC as a service provider, promoting the results from the use cases and leveraging the extensive collective and individual expertise and networks of the project partners, PHIDIAS will expand its range of stakeholders and users to other scientific communities where HPC and Data play a pivotal role in delivering excellence in science and research. In this framework, PHIDIAS aims to become a reference point for the Earth science community, enabling it to discover, manage and process spatial and environmental data, through the development of a set of High-Performance Computing based services and tools exploiting large satellite datasets. To that end, the project foresees the development of three use cases:

Intelligent screening of large amounts of satellite data for detection and identification of anomalous atmospheric composition events, aiming to use HPC and high-performance data management services for the development of intelligent screening

www.phidias-hpc.eu
: @PhidiasHpc
: @phidias-hpc

COORDINATING ORGANISATION
Centre Informatique National de l'Enseignement Supérieur (CINES), France

OTHER PARTNERS
• Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), France
• Centre National de la Recherche Scientifique (CNRS), France
• IT Centre for Science Ltd. (CSC), Finland
• Geomatys, France
• Institut de Recherche pour le Développement (IRD), France
• Institut Français de Recherche pour l'Exploitation de la Mer (IFREMER), France
• Mariene Informatie Service MARIS BV (MARIS), Netherlands
• Néovia Innovation, France
• SPASCIA, France
• Finnish Environment Institute (SYKE), Finland
• Trust-IT, Italy
• Université de Liège, Belgium

approaches for the exploitation of large amounts of satellite atmospheric data in an operational context, implementing a prototype service on the already available Sentinel-5 Precursor (S5P) European atmospheric sounding mission.

Improving the use of cloud services for marine data management, aiming to improve data services to users in a FAIR perspective and data processing on demand, while taking into account the European Open Science Cloud (EOSC) challenge and the Copernicus Data and Information Access Services (DIAS). In those terms, this use case can be seen as one of the prototypes, for marine environmental data, of the future Blue Cloud foreseen by the European Commission.

Processing on-demand services for environmental monitoring, aiming to provide the academic and land management community with an interactive environment to ensure systematic or on-demand production of new knowledge useful for the environmental monitoring of territories. This will be achieved by relying on the algorithmic developments carried out within the THEIA land data centre, on machine and deep learning techniques adapted to spatial data, and by taking advantage of the complementarities (spatial and temporal resolution) of large spatial data sets from very high-resolution sensors (SPOT, PLEIADES) and SENTINEL 1 and 2 time series.

In line with the European strategy for Open Science, the data generated and services created will be available on relevant EU portals, such as the EU Open Data Portal, EUDAT, and EOSC, and will be preserved using the long-term preservation services of the EOSC. The participation in the EOSC is essential to promote the uptake of the PHIDIAS services across a broad range of scientific communities and to initiate partnerships with the public and private sectors.

CONTACT
Boris Dintrans, Project Coordinator
[email protected]
[email protected]

CALL
CEF-TC-2018-5

PROJECT TIMESPAN
01/09/2019 - 30/09/2022

International Cooperation

ENERXICO
Supercomputing and Energy for Mexico

The ENERXICO project applies exascale HPC techniques to different energy industry simulations of critical interest for Mexico. ENERXICO will provide solutions for the oil and gas industry, improve wind energy performance and increase the efficiency of biofuels for transportation. Composed of some of the best academic and industrial institutions in the EU and Mexico, ENERXICO demonstrates the power of cross-border collaboration to put supercomputing technologies at the service of the energy sector.

The specific objectives of the project are:
• To develop beyond state-of-the-art high performance simulation tools for the energy industry
• To increase oil & gas reserves using geophysical exploration for subsalt reservoirs
• To improve refining and transport efficiency of heavy oil
• To develop a strong wind energy sector to mitigate oil dependency
• To improve fuel generation using biofuels
• To improve the cooperation between industries from the EU and Mexico
• To improve the cooperation between the leading research groups in the EU and Mexico

www.enerxico-project.eu
: @ENERXICOproject
: @enerxico-project

COORDINATING ORGANISATIONS
• Barcelona Supercomputing Centre (BSC), Spain
• Instituto Nacional de Investigaciones Nucleares (ININ), Mexico

OTHER PARTNERS
• Atos (Bull SAS), France
• Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional (CINVESTAV), Mexico
• Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT), Spain
• Instituto Mexicano del Petroleo (IMP), Mexico
• Instituto Politecnico Nacional (IPN-ESUA), Mexico
• Technische Universität München (TUM), Germany
• Universidad Autonoma Metropolitana (UAM-A), Mexico
• Instituto de Ingenieria de la Universidad Nacional Autonoma de Mexico (UNAM-IINGEN), Mexico
• Universitat Politecnica de Valencia (UPV), Spain

• Université Grenoble Alpes (UGA), France
• IBERDROLA, Spain
• REPSOL SA, Spain

PROJECT OBSERVER
• Petroleos Mexicanos (PEMEX), Mexico

CONTACT
Jose Maria Cela
[email protected]
Rose Gregorio
[email protected]

CALL
FETHPC-01-2018

PROJECT TIMESPAN
01/06/2019 - 31/05/2021

HPCWE
High performance computing for wind energy

Wind energy has become increasingly important as a clean and renewable alternative to fossil fuels in the energy portfolios of both Europe and Brazil. At almost every stage in wind energy exploitation, ranging from wind turbine design and wind resource assessment to wind farm layout and operations, the application of HPC is a must.

The goal of HPCWE is to address the following key open challenges in applying HPC to wind energy:

1. Efficient use of HPC resources in wind turbine simulations, via the development and implementation of novel algorithms. This leads to the development of methods for verification, validation, uncertainty quantification (VVUQ) and in-situ scientific data interpretation.

2. Accurate integration of meso-scale atmosphere dynamics and micro-scale wind turbine flow simulations, as this interface is the key for accurate wind energy simulations. In HPCWE a novel scale integration approach will be applied and tested through test cases in a Brazilian wind farm.

3. Adjoint-based optimization, which implies large I/O consumption as well as storing data on large-scale file systems. HPCWE research aims at alleviating the bottlenecks caused by data transfer from memory to disk.

The HPCWE consortium consists of 11 partners representing top academic institutes, HPC centres and industries in Europe and Brazil. Through this collaboration, the consortium will develop novel algorithms, implement them in state-of-the-art codes and test the codes in academic and industrial cases, to benefit wind energy industry and research in both Europe and Brazil.
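A common way to relieve the adjoint I/O burden described in the third challenge is checkpointing: store only a subset of forward states and recompute the missing ones from the nearest checkpoint during the backward sweep, trading CPU time for storage and disk traffic. The toy sketch below illustrates that trade-off with a uniform checkpoint stride; it is a generic illustration with a dummy one-variable forward model, not HPCWE's actual algorithm:

```python
def forward_step(u):
    # Toy forward-model step (stand-in for one step of a flow solver).
    return 0.5 * u + 1.0

def adjoint_with_checkpoints(u0, n_steps, stride):
    """Run n_steps of the forward model, storing only every `stride`-th
    state; during the backward sweep, recompute missing states from the
    nearest checkpoint instead of reading every state from disk.
    Returns (number of states stored, number of recomputed steps)."""
    checkpoints = {0: u0}
    u = u0
    for i in range(1, n_steps + 1):
        u = forward_step(u)
        if i % stride == 0:
            checkpoints[i] = u
    recomputed = 0
    for i in range(n_steps, 0, -1):          # backward (adjoint) sweep
        if i - 1 not in checkpoints:
            base = (i - 1) // stride * stride
            v = checkpoints[base]
            for _ in range(base, i - 1):     # recompute the missing state
                v = forward_step(v)
                recomputed += 1
        # ... accumulate adjoint sensitivities for step i here ...
    return len(checkpoints), recomputed

print(adjoint_with_checkpoints(u0=0.0, n_steps=12, stride=4))  # -> (4, 18)
```

With stride 4 over 12 steps, only 4 states are stored instead of 13, at the cost of 18 extra forward steps; schemes such as binomial checkpointing refine this trade-off further.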

hpcwe-project.eu
@HPCWE_project

COORDINATING ORGANISATION
• The University of Nottingham, United Kingdom

OTHER PARTNERS
• Danmarks Tekniske Universitet, Denmark
• Electricité de France (EDF), France
• Imperial College of Science, Technology and Medicine, United Kingdom
• The University of Edinburgh (EPCC), United Kingdom
• Universidade de Sao Paulo, Brazil
• Universidade Estadual de Campinas, Brazil
• Universidade Federal de Santa Catarina, Brazil
• Universität Stuttgart, Germany
• Universiteit Twente, Netherlands
• VORTEX Factoria de Calculs SL, Spain

EU and Brazil - Together Bringing Wind Energy Simulation towards Exascale

Simulation of the wake flow downstream of a wind turbine

CONTACT
Xuerui Mao, Project Coordinator
[email protected]
+441157486020

CALL
FETHPC-01-2018

PROJECT TIMESPAN
01/06/2019 - 31/05/2021

Quantum Computing for HPC

NEASQC
NExt ApplicationS of Quantum Computing

The NEASQC project brings together academic experts and industrial end-users to investigate and develop a new breed of Quantum-enabled applications that can take advantage of NISQ (Noisy Intermediate-Scale Quantum) systems in the near future. NEASQC is use-case driven, addressing practical problems such as drug discovery, CO2 capture, smart energy management, natural language processing, breast cancer detection, probabilistic risk assessment for energy infrastructures or hydrocarbon well optimisation. NEASQC aims to initiate an active European community around NISQ Quantum Computing by providing a common toolset that will attract new industrial users.

NEASQC aims at demonstrating that, though the millions of qubits that would guarantee fully fault-tolerant quantum computing are still far away, there are practical use cases for the NISQ devices that will be available in the near future. NISQ computing can deliver significant advantages when running certain applications, thus bringing game-changing benefits to users, and particularly industrial users. The NEASQC consortium has chosen a wide selection of NISQ-compatible industrial and financial use-cases, and will develop new quantum software techniques to solve those use-cases with a practical quantum advantage. To achieve this, the project brings together a multidisciplinary consortium of academic and industry experts in Quantum Computing, High Performance Computing, Artificial Intelligence, chemistry, etc.

The ultimate ambition of NEASQC is to encourage European user communities to investigate NISQ quantum computing. For this purpose, the project consortium will define and make available a complete and common toolset that new industrial actors can use to start their own practical investigation and share their results.

NEASQC also aims to build a much-needed bridge between Quantum Computing hardware activities, particularly those of the Quantum Technologies Flagship, and the end-user community. Even more than in classical IT, NISQ computing demands strong cooperation between hardware teams and software users. We expect our work on use cases to provide strong directions for the development of NISQ machines, which will be very valuable to the nascent quantum hardware industry. NEASQC is one of the projects selected within the second wave of Quantum Flagship projects.

NEASQC OBJECTIVES
1. Develop 9 industrial and financial use cases with a practical quantum advantage for NISQ machines.
2. Develop open source NISQ programming libraries for industrial use cases, with a view to facilitating quantum computing experimentation for new users.
3. Build a strong user community dedicated to industrial NISQ applications.
4. Develop software stacks and benchmarks for the Quantum Technology Flagship hardware platforms.

neasqc.eu
@neasqc
@neasqc-project

COORDINATING ORGANISATION
• Atos (Bull SAS), France

OTHER PARTNERS
• AstraZeneca AB, Sweden
• CESGA (Fundación Publica Gallega Centro Tecnológico de Supercomputación de Galicia), Spain
• Electricité de France (EDF), France
• HQS Quantum Simulations GmbH, Germany
• HSBC Bank Plc, United Kingdom
• Irish Centre for High-End Computing (ICHEC), Ireland
• Leiden University, Netherlands
• TILDE SIA, Latvia
• TOTAL S.A., France
• Universidade da Coruña – CITIC, Spain
• Université de Lorraine (LORIA), France
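The variational, quantum-classical style of computation typically targeted at NISQ devices can be illustrated with a toy statevector calculation: a single parameterised rotation whose parameter is tuned by a classical outer loop to minimise a measured energy. This is a hedged, self-contained sketch in plain NumPy, not NEASQC's toolset; the one-qubit observable Z is chosen purely for illustration:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])  # toy observable ("Hamiltonian")

def energy(theta):
    """Expectation <psi(theta)|Z|psi(theta)> for |psi> = RY(theta)|0>."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return float(psi @ Z @ psi)

# Classical outer loop: sweep the parameter and keep the minimum,
# mimicking the quantum-classical feedback of variational algorithms.
thetas = np.linspace(0.0, 2.0 * np.pi, 201)
best = min(thetas, key=energy)
print(f"E_min = {energy(best):+.3f} at theta = {best:.3f}")
```

Here `energy(theta)` equals cos(theta), so the sweep finds the minimum of -1 near theta = pi; on real NISQ hardware the expectation would be estimated from noisy shot statistics rather than computed exactly.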

CONTACT
Cyril Allouche
[email protected]

CALL
FETFLAG-05-2020

PROJECT TIMESPAN
01/09/2020 – 31/08/2024

Other projects

EXAMODE
EXtreme-scale Analytics via Multimodal Ontology Discovery & Enhancement

Exascale volumes of diverse data from distributed sources are continuously produced. Healthcare data stand out in the size produced (>2000 exabytes in 2020), heterogeneity (many media and acquisition methods), included knowledge (e.g. diagnostic reports) and commercial value. The supervised nature of deep learning models requires large labelled, annotated datasets, which prevents models from extracting knowledge and value. ExaMode solves this by allowing easy and fast, weakly supervised knowledge discovery of heterogeneous exascale data provided by the partners, limiting human interaction. Its objectives include the development and release of extreme analytics methods and tools that are adopted in decision making by industry and hospitals.

Deep learning naturally allows building semantic representations of entities and relations in multimodal data. Knowledge discovery is performed via semantic document-level networks in text and the extraction of homogeneous features in heterogeneous images. The results are fused, aligned to medical ontologies, visualized and refined. Knowledge is then applied using a semantic middleware to compress, segment and classify images, and is exploited in decision support and semantic knowledge management prototypes.
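The weakly supervised idea, deriving image-level labels from the free text of diagnostic reports instead of from manual annotation, can be sketched in a few lines. The label names and matching rules below are invented for illustration and are not ExaMode's ontology or method:

```python
# Map phrases found in a diagnostic report to a coarse image-level label.
# Rules are checked in order; the first match wins.
REPORT_RULES = [
    ("no evidence of tumour", "normal"),
    ("carcinoma", "malignant"),
    ("adenoma", "benign"),
]

def weak_label(report: str) -> str:
    """Assign a weak label to the image(s) attached to a free-text report."""
    text = report.lower()
    for phrase, label in REPORT_RULES:
        if phrase in text:
            return label
    return "unknown"

reports = [
    "Invasive ductal carcinoma, grade 2.",
    "Tubular adenoma with low-grade dysplasia.",
    "No evidence of tumour in the sampled tissue.",
]
print([weak_label(r) for r in reports])  # ['malignant', 'benign', 'normal']
```

Labels obtained this way are noisy, which is why weakly supervised training at scale, rather than per-image manual annotation, is the point of the approach.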

www.examode.eu
@examode
@examode.eu

COORDINATING ORGANISATION
• University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland

OTHER PARTNERS
• Azienda Ospedaliera per l'Emergenza Cannizzaro, Italy
• MicroscopeIT Sp. z o.o., Poland
• SIRMA AI, Bulgaria
• Radboud University Medical Center, Nijmegen, Netherlands
• SURFsara BV, Netherlands
• Università degli Studi di Padova, Italy

ExaMode is relevant to ICT12 in several aspects:
1. Challenge: it extracts knowledge and value from heterogeneous, quickly increasing data volumes.
2. Scope: the consortium develops and releases new methods and concepts for extreme scale analytics to accelerate deep analysis, also via data compression, for precise predictions, support of decision making and visualization of multi-modal knowledge.
3. Impact: the multi-modal/media semantic middleware makes heterogeneous data management and analysis easier and faster; it improves architectures for complex distributed systems with better tools, increasing the speed of data throughput and access, as resulting from tests in extreme analysis by industry and in hospitals.

CONTACT
Dr. Manfredo Atzori
[email protected]
Prof. Henning Müller
[email protected]

CALL
ICT-12-2018-2020

PROJECT TIMESPAN
01/01/2019 - 31/12/2022

EXSCALATE4COV
EXaSCale smArt pLatform Against paThogEns for Corona Virus

The EXSCALATE4CoV (E4C) project aims to exploit the most powerful computing resources currently based in Europe to empower smart in-silico drug design. Advanced Computer-Aided Drug Design (CADD) in combination with high-throughput biochemical and phenotypic screening will allow the rapid evaluation of simulation results and a reduction of the time needed for the discovery of new drugs. Against a pandemic crisis, the immediate identification of effective treatments is of paramount importance. First, E4C will select, through the EXSCALATE platform, the most promising commercialized and in-development drugs that are safe in man. Second, it will select new pan-coronavirus inhibitors from >500 billion molecules.

Huge computational resources are needed; therefore the activities will be supported and empowered by four of the most powerful computer centres in Europe: CINECA, BSC, JÜLICH and Eni HPC5. The Swiss Institute of Bioinformatics (SIB) will provide the homology 3D models for the viral proteins. The Fraunhofer IME will provide the BROAD Repurposing Library and biochemical assays for the most relevant viral proteins. Phenotypic screenings will be run by KU Leuven to identify molecules capable of blocking virus replication in in vitro models. IIMCB and Elettra will determine the crystal structure of at least one coronavirus functional protein to evaluate the structural similarities with other viral proteins. All the other partners support the project by studying the mechanism of action, synthesising the most relevant candidate drugs and enhancing the HPC and AI infrastructure of the consortium.

The EXSCALATE4CoV consortium will identify safe-in-man drugs repurposed as 2019-nCoV antivirals and will propose to the EMA Innovation Task Force (ITF) a preliminary development strategy and a proposal for a registration path. The E4C project will promptly share its scientific outcomes with the research community through established channels: the ChEMBL portal for the biochemical data, the SWISS-MODEL portal for the homology models of viral proteins (wild type and mutants), the Protein Data Bank for the experimentally resolved protein structures, EUDAT for the data generated by in-silico simulations, and the E4C project website.

Combining HPC simulations and AI-driven systems pharmacology, E4C has identified and experimentally validated in vitro a first promising drug for the treatment of mildly symptomatic Covid-19 patients. The Raloxifene clinical trial application has already been submitted for approval. The Spallanzani hospital will be the Coordinating Investigator in the clinical trial.

exscalate4cov.eu
@exscalate4cov

COORDINATING ORGANISATION
• Dompé farmaceutici S.p.A., Italy

OTHER PARTNERS
• CINECA Consorzio Interuniversitario, Italy
• Barcelona Supercomputing Center, Spain
• Forschungszentrum Jülich GmbH, Germany
• International Institute of Molecular and Cell Biology, Poland
• Fraunhofer Gesellschaft zur Förderung der Angewandten Forschung E.V., Germany
• Katholieke Universiteit Leuven, Belgium
• Politecnico di Milano, Italy
• Universita degli Studi di Milano, Italy
• Universita degli Studi di Napoli Federico II, Italy
• Universita degli Studi di Cagliari, Italy
• Elettra – Sincrotrone Trieste S.C.p.A., Italy

• SIB Institut Suisse de Bioinformatique, Switzerland
• Kungliga Tekniska Hoegskolan, Sweden
• Istituto Nazionale per le Malattie Infettive Lazzaro Spallanzani – Istituto di Ricovero e Cura a Carattere Scientifico, Italy
• Associazione Big Data, Italy
• Istituto Nazionale di Fisica Nucleare, Italy
• Chelonia SA, Switzerland

CONTACT
[email protected]

CALL
SC1-PHE-CORONAVIRUS-2020

PROJECT TIMESPAN
01/10/2020 – 30/09/2023

OPRECOMP
Open transPREcision COMPuting

Guaranteed numerical precision of each elementary step in a complex computation has been the mainstay of traditional computing systems for many years. This era, fueled by Moore's law and the constant exponential improvement in computing efficiency, is at its twilight: from tiny nodes of the Internet-of-Things to large HPC computing centers, sub-picoJoule/operation energy efficiency is essential for practical realizations. To overcome the "power wall", a shift from traditional computing paradigms is now mandatory.

OPRECOMP aims at demolishing the ultra-conservative "precise" computing abstraction and replacing it with a more flexible and efficient one, namely transprecision computing. OPRECOMP will investigate the theoretical and practical understanding of the energy efficiency boost obtainable when accuracy requirements on data being processed, stored and communicated can be lifted for intermediate calculations. While approximate computing approaches have been used before, in OPRECOMP, for the first time ever, a complete framework for transprecision computing will be developed, covering devices, circuits, software tools and algorithms, along with the mathematical theory and physical foundations of the ideas. It will not only provide error bounds with respect to full-precision results, but will also enable major energy efficiency improvements even when there is no freedom to relax end-to-end application quality-of-results.

The mission of OPRECOMP is to demonstrate, by using physical demonstrators, that this idea holds in a huge range of application scenarios in the domains of IoT, Big Data Analytics, Deep Learning, and HPC simulations: from the sub-milliWatt to the MegaWatt range, spanning nine orders of magnitude. In view of industrial exploitation, we will prove quality and reliability and demonstrate that transprecision computing is the way to think about future systems.

www.oprecomp.eu
@oprecompProject

COORDINATING ORGANISATION
• IBM Research GmbH, Switzerland

OTHER PARTNERS
• Alma Mater Studiorum - Università di Bologna, Italy
• CINECA Consorzio Interuniversitario, Italy
• CEA - Commissariat à l'Energie Atomique et aux énergies alternatives, France
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• GreenWaves Technologies, France
• Technische Universität Kaiserslautern, Germany
• The Queen's University of Belfast, United Kingdom
• Università degli Studi di Perugia, Italy
• Universitat Jaume I de Castellon, Spain
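The core intuition, that intermediate results often need far less precision than the guaranteed final accuracy suggests, can be seen in a few lines of NumPy. This is an illustrative sketch only (formats and sizes chosen arbitrarily), not OPRECOMP's framework, which spans hardware, tools and theory:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.random(100_000)  # inputs in [0, 1)

# Full-precision reference: float64 end to end.
exact = x.sum(dtype=np.float64)

# Transprecision-style variant: hold the data in a narrow format
# (float16) and accumulate in float32, widening only at the end.
narrow = float(x.astype(np.float16).sum(dtype=np.float32))

# The narrow pipeline moves a quarter of the bytes, yet the relative
# error stays small; transprecision aims to make such bounds explicit.
rel_err = abs(narrow - exact) / exact
print(f"relative error with narrow intermediates: {rel_err:.1e}")
```

On real hardware the narrow formats also shrink memory traffic and arithmetic energy, which is where the energy efficiency boost described above comes from.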

CONTACT
Cristiano Malossi, PhD
[email protected]

CALL
FETPROACT-01-2016

PROJECT TIMESPAN
01/01/2017 - 31/12/2020

PROCESS
PROviding Computing solutions for ExaScale ChallengeS

The PROCESS demonstrators will pave the way towards exascale data services that will accelerate innovation and maximise the benefits of these emerging data solutions. The main tangible outputs of PROCESS are very large data service prototypes, implemented using a mature, modular, generalizable open source solution for user-friendly exascale data. The services will be thoroughly validated in real-world settings, both in scientific research and in advanced industry deployments.

To achieve these ambitious objectives, the project consortium brings together the key players in the new data-driven ecosystem: top-level HPC and big data centres; communities, such as LOFAR, with unique data challenges that the current solutions are unable to meet; and experienced e-Infrastructure solution providers with an extensive track record of rapid application development.

In addition to providing service prototypes that can cope with very large data, PROCESS addresses the work programme goals by using tools and services with heterogeneous use cases, including exascale learning on medical images, airline revenue management and agricultural simulations based on Copernicus data, also enabling an SME to use the PROCESS ecosystem. This diversity from academia and industry ensures that, in addition to supporting communities that push the envelope, the solutions will also ease the learning curve for the broadest possible range of user communities. In addition, the chosen open source strategy maximises the potential for uptake and reuse, together with mature software engineering practices that minimise the effort needed to set up and maintain services based on the PROCESS software releases.

www.process-project.eu
@PROCESS_H2020

COORDINATING ORGANISATION
• Ludwig-Maximilians-Universität München, Germany

OTHER PARTNERS
• Ustav Informatiky, Slovenska Akademia Vied, Slovakia
• Universiteit van Amsterdam, Netherlands
• Akademia Gorniczo-Hutnicza IM. Stanislawa Staszica W Krakowie, Poland
• Stichting Netherlands eScience Centre, Netherlands
• Haute Ecole Spécialisée de Suisse Occidentale, Switzerland
• Lufthansa Systems GmbH & CO KG, Germany
• Inmark Europa SA, Spain

(Figure: the PROCESS architecture, from the end-user portals (application repository, LOFAR portal, Jupyter) through the orchestration layer (Rimrock, Cloudify, Kubernetes) and the LOBCDER data infrastructure management layer with DataNet and micro-infrastructure containers, down to the data sources, e.g. HDFS, HDF5, HBase, dCache, DICOM archives, the LOFAR LTA tape archive and the Copernicus Open Access Hub, connected via DTN agents.)

The figure shows the detailed PROCESS architecture, with its main components: the end-user portal, orchestration, and the data and compute services. The heart of the data service is LOBCDER and its micro-infrastructure containers, which enable a low-overhead solution for the data processing challenge. The already connected centres allow PROCESS to include applications based on HPC, Cloud and accelerator systems, and enable deployments also on resources available through the European Open Science Cloud.

The PROCESS project has achieved its objectives and accelerated development in the use cases' communities. Implementations for LOFAR will also be valuable for the upcoming Square Kilometre Array (SKA) project, which increases the underlying challenge in the coming years to exa- and zettabytes. The PROCESS solutions are deployed in three supercomputing centres across Europe, in Munich, Amsterdam and Krakow, providing already today a testbed for the exascale applications of tomorrow.

CONTACT
Dieter Kranzlmüller, Coordinator
Maximilian Höb, Technical Coordinator and Contact
[email protected]

CALL
EINFRA-21-2017

PROJECT TIMESPAN
01/11/2017 - 31/10/2020

SODALITE
SOftware Defined AppLication Infrastructures managemenT and Engineering
Empowering the DevOps World

In recent years the global market has seen a tremendous rise in utility computing, which serves as the back-end for practically any new technology, methodology or advancement, from healthcare to aerospace. We are entering a new era of heterogeneous, software-defined, high-performance computing environments. In this context, SODALITE aims to address this heterogeneity by considering environments that comprise accelerators/GPUs, configurable processors, and non-x86 CPUs such as ARMv8. General-purpose GPUs are becoming common currency in data centers, while specialized FPGA accelerators, ranging from deep-learning-specific accelerators to burst buffer technologies, are becoming "the big coin", enormously speeding up application execution and likely to become common in the near future.

OUR MODAK SOLUTION ON HPC
Developing and deploying applications across heterogeneous infrastructures like HPC or Cloud with diverse hardware is a complex problem. SODALITE targets complex applications and workflows that are deployed on heterogeneous environments such as virtual machines, containerized HPC clusters, Cloud and Edge devices. In this context, deploying an application on these infrastructures usually requires command-line-based access and expert knowledge in the application domain. Even further, optimizing HPC applications also requires a good background in networking technologies and parallel programming. With SODALITE, there is a stack on top of the low-level layer which makes it easier for non-experts to use VMs and HPC and get optimal performance.

www.sodalite.eu
@SODALITESW
@sodalite-eu

COORDINATING ORGANISATION
• XLAB, Slovenia

OTHER PARTNERS
• Adaptant, Germany
• Atos Spain, Spain
• IBM, Israel
• Politecnico di Milano (POLIMI), Italy
• Jheronimus Academy of Data Science (JADS), Netherlands
• High Performance Computing Center - Universität Stuttgart (HLRS), Germany
• CRAY-HPE, Switzerland
• ITI-CERTH, Greece

THE APPLICATION OF SODALITE TO REAL-CASE SCENARIOS

Water availability prediction with mountain images
An innovative tool demonstrator which enables the capillary observation of the continuous health status of mountain environments. The SODALITE semantic IDE is key to mastering configuration heterogeneity and complexity, and SODALITE Machine Learning optimization allows fast classifier retraining as more data become available and dynamic provision of resources (GPUs), with speed-ups in the 20% range for image processing throughput.

In-silico clinical trials for spinal operations
An assessment and decision-support system for spinal operations, consisting of a data store component capable of providing efficient data access from heterogeneous compute resources, and a simulation process chain facilitating comprehensive data analytics for in-silico clinical trials. SODALITE will enhance the application by increasing the effectiveness of component deployment with the IDE and by easing the adaptation to different hardware/software platforms.

IoT autonomous vehicle
An innovative system demonstrator that enables data from heterogeneous sources (principally IoT devices) to be spread across a distributed processing architecture in line with end-user expectations. SODALITE image variants enable application containers to be generated for multiple accelerator types, between which refactoring and redeployment can switch at run-time. SODALITE enables service continuity across heterogeneous resources, minimizing service disruption, and leverages node monitoring at the Edge to trigger reconfiguration and redeployment based on application-defined alerting.

CONTACT
Project Coordinator: Daniel Vladušič
[email protected]
+38640352527
Communications Manager: María Carbonell
[email protected]
+34 910 476 494

CALL
H2020-ICT-2018-2

PROJECT TIMESPAN
01/01/2019 - 01/01/2022

Ecosystem development

CASTIEL
Connecting the countries' NCCs to each other and the European Ecosystem

WHAT IS CASTIEL?
The Coordination and Support Action (CSA) CASTIEL leads cross-European networking activities between the National Competence Centres (NCCs) in HPC-related topics addressed through the EuroCC project. CASTIEL emphasises training, industrial interaction and cooperation, business development, and raising awareness of high-performance computing (HPC)-related technologies and expertise. As a hub for information exchange and training, CASTIEL promotes networking among NCCs and strengthens idea exchange by developing best practices. The identification of synergies, challenges, and possible solutions is implemented through the close cooperation of the NCCs at a European level.

WHAT IS THE RELATIONSHIP BETWEEN CASTIEL AND EUROCC?
CASTIEL, the Coordination and Support Action (CSA) closely associated with EuroCC, combines the National Competence Centres (NCCs) formed in EuroCC into a pan-European network. The aggregation of HPC, high-performance data analytics (HPDA), and artificial intelligence (AI) competencies demonstrates the global competitiveness of the EU partners. The two activities are the beginning of a strategic positioning of European HPC competence and will contribute to the comprehensive independence of the mentioned technologies in Europe.

www.castiel-project.eu
@CASTIEL_project
@castiel-project

COORDINATING ORGANISATION
• High-Performance Computing Center Stuttgart (USTUTT/HLRS), Germany

OTHER PARTNERS
• Gauss Centre for Supercomputing e.V. (GCS), Germany
• Consorzio Interuniversitario (CINECA), Italy
• TERATEC, France
• Barcelona Supercomputing Center – Centro Nacional De Supercomputación (BSC), Spain
• Partnership for Advanced Computing in Europe AISBL (PRACE), Belgium

(Diagram: CASTIEL links the NCCs through training, twinning, mentoring, interaction with industry, research applications and business development.)

CONTACT
Dr.-Ing. Bastian Koller
[email protected]

CALL
EuroHPC-04-2019

PROJECT TIMESPAN
01.09.2020 – 31.08.2022

EUROCC
European Competence Centres

Within the EuroCC project under the European Union's Horizon 2020 (H2020) programme, participating countries are tasked with establishing a single National Competence Centre (NCC) in the area of high-performance computing (HPC) in their respective countries. These NCCs will coordinate activities in all HPC-related fields at the national level and serve as a contact point for customers from industry, science, (future) HPC experts, and the general public alike. The EuroCC project is funded 50 percent through H2020 (EuroHPC Joint Undertaking [JU]) and 50 percent through national funding programmes within the partner countries.

WHAT ARE THE NCCS' TASKS?
The tasks of the National Competence Centres include mapping existing HPC, high-performance data analytics (HPDA) and artificial intelligence (AI) competences, HPC training, and the identification of HPC, HPDA and AI experts, as well as the coordination of services such as business development, application support, technology transfer, training and education, and access to expertise. Researchers from academia and industry both benefit from this concentration of competence, and more efficient research ultimately benefits state and national governments and society as a whole.

www.eurocc-project.eu
@EuroCC_project
@eurocc

COORDINATING ORGANISATION
• High-Performance Computing Center Stuttgart (USTUTT/HLRS), Germany

OTHER PARTNERS
• Gauss Centre for Supercomputing (GCS), Germany
• Universität Wien (UNIVIE), Austria
• Institute of Information and Communication Technologies at Bulgarian Academy of Sciences (IICT-BAS), Bulgaria
• University of Zagreb University Computing Centre (SRCE), Croatia
• Computation-based Science and Technology Research Centre, The Cyprus Institute (CaSToRC-CyI), Cyprus
• IT4Innovations National Supercomputing Centre, VSB – Technical University of Ostrava (IT4I), Czech Republic
• Technical University of Denmark (DTU), Denmark
• University of Tartu HPC Center (UTHPC), Estonia
• CSC – IT Center for Science Ltd (CSC), Finland
• National Infrastructures for Research and Technology S.A. (GRNET S.A.), Greece
• Kormányzati Informatikai Fejlesztési Ügynökség (KIFÜ), Hungary
• National University of Ireland, Galway (NUI Galway), Ireland
• CINECA – Consorzio Interuniversitario (CINECA), Italy
• Vilnius University (LitGrid-HPC), Lithuania
• Riga Technical University (RTU), Latvia
• UNINETT Sigma2 AS (Sigma2), Norway
• Norwegian Research Centre AS (NORCE), Norway

The National Competence Centres will act as the first points of contact for HPC, HPDA and AI in their respective countries. The NCCs will bundle information, provide experts, and offer access to expertise and to HPC training and education.

The overall objective of EuroCC, to create a European base of excellence in HPC by filling existing gaps, comes with a clear vision: offering a broad portfolio of services in all HPC-, HPDA- and AI-related areas, tailored to the needs of industry, science, and public administrations.

COLLABORATION BETWEEN EUROCC AND CASTIEL
EuroCC works closely together with the Coordination and Support Action (CSA) CASTIEL, which links the national centres throughout Europe and ensures successful transfer and training possibilities between the participating countries.

• SINTEF AS (SINTEF), Norway
• Academic Computer Centre Cyfronet AGH (CYFRONET), Poland
• Fundação para a Ciência e a Tecnologia (FCT), Portugal
• National Institute for Research-Development in Informatics – ICI Bucharest (ICIB), Romania
• Academic and Research Network of Slovenia (ARNES), Slovenia
• Center of Operations of the Slovak Academy of Sciences (CC SAS), Slovak Republic
• University of Ss. Cyril and Methodius, Faculty of computer science and engineering (UKIM), Republic of North Macedonia
• Barcelona Supercomputing Center – Centro Nacional de Supercomputación (BSC), Spain
• Uppsala University (UU), Sweden
• Eidgenössische Technische Hochschule Zürich (ETH Zurich), Switzerland
• The University of Edinburgh (EPCC), United Kingdom
• TERATEC (TERATEC), France
• SURFSARA BV (SURFSARA), Netherlands
• Centre de recherche en aéronautique a.s.b.l. (Cenaero), Belgium
• Luxinnovation GIE (LXI), Luxembourg
• Háskóli Íslands – University of Iceland (UICE), Iceland
• The Scientific and Technological Research Council of Turkey (TUBITAK), Turkey
• University of Donja Gorica (UDG), Montenegro

CONTACT
Dr.-Ing. Bastian Koller
[email protected]

CALL
EuroHPC-04-2019

PROJECT TIMESPAN
01.09.2020 – 31.08.2022

EUROLAB-4-HPC-2
Consolidation of European Research Excellence in Exascale HPC Systems

Europe has made significant progress in becoming a leader in large parts of the HPC ecosystem: from industrial and scientific application providers via system software to Exascale systems, bringing together technical and business stakeholders. Despite such gains, excellence in high-performance computing systems research is fragmented across Europe and opportunities for synergy are missed.

At this moment, there is fierce international competition to sustain long-term leadership in HPC technology, and there remains much to do. High-performance computing (HPC) is a vital technology for European economic competitiveness and scientific excellence: it is indispensable for innovating enterprises, scientific researchers and government agencies, and it generates new discoveries and breakthrough products and services. By uniting the HPC community, Eurolab4HPC aims to build a robust ecosystem, hence building a sustainable foundation for HPC innovation in Europe. This will be realised by means of four short-term objectives.

eurolab4hpc.eu
@eurolab4hpc

COORDINATING ORGANISATION
• Chalmers University of Technology, Sweden

OTHER PARTNERS
• Barcelona Supercomputing Centre (BSC), Spain
• Ecole Polytechnique Fédérale de Lausanne, Switzerland
• ETHZ - Eidgenössische Technische Hochschule Zürich, Switzerland
• FORTH Institute for Computer Science, Greece
• INRIA, Institut National de Recherche en Informatique et Automatique, France
• RWTH Aachen, Rheinisch-Westfälische Technische Hochschule Aachen, Germany
• TECHNION - Israel Institute of Technology, Israel
• The University of Edinburgh, United Kingdom
• The University of Manchester, United Kingdom
• Universität Augsburg, Germany
• Universität Stuttgart, Germany
• Universiteit Gent, Belgium

OPEN SOURCE
Another mission of Eurolab4HPC is to bring together researchers, industry and users for the development and use of open source hardware and software, creating a community to work on long-term projects. We believe that open source projects provide a natural forum for testing ideas and discovering innovation opportunities, thereby promoting entrepreneurship, fostering industry take-up, and providing an avenue to train more experts in Exascale hardware and software.

LONG-TERM VISION ON HPC
Radical changes in computing are foreseen for the near future. The next Eurolab4HPC Long-Term Vision on High-Performance Computing, with a planned release in January 2020, will present an assessment of potential changes for High-Performance Computing in the next decade. You will be able to download it here: https://www.eurolab4hpc.eu/vision/.

CONTACT
[email protected]
Per Stenström, [email protected]

CALL
FETHPC-03-2017

PROJECT TIMESPAN
01/05/2018 - 30/04/2020

EXDCI-2
European eXtreme Data and Computing Initiative – 2

The project EXDCI-2 builds upon the success of EXDCI and will continue the coordination of the HPC ecosystem, with important enhancements to better address the convergence of big data, cloud and HPC.

The strategic objectives of EXDCI-2 are:
a) Development and advocacy of a competitive European HPC Exascale Strategy
b) Coordination of the stakeholder community for European HPC at the Exascale

EXDCI-2 mobilizes the European HPC stakeholders through the joint action of PRACE and ETP4HPC. It will promote global community structuring and synchronization in HPC, Big Data, Cloud and embedded computing, for a more competitive related value chain in Europe. It will develop an HPC technology roadmap addressing the convergence with HPDA and the emergence of new HPC uses. It will deliver application and applied mathematics roadmaps that will pave the road towards exascale simulation in academic and industrial domains. It will develop a shared vision for the future of HPC that increases the synergies and prepares for targeted research collaborations.

EXDCI-2 will work to increase the impact of the H2020 HPC research projects by identifying synergies and supporting market acceptance of the results. At the international level, EXDCI-2 will contribute to the international visibility of Europe, develop contacts with the world-leading HPC ecosystems and increase European impact on HPC standards.

exdci.eu
: @exdci_eu

COORDINATING ORGANISATION
Partnership for Advanced Computing in Europe (PRACE) AISBL, Belgium

OTHER PARTNERS
• European Technology Platform for High Performance Computing (ETP4HPC), Netherlands

EXDCI-2 will improve HPC awareness by developing international events such as the EuroHPC Summit Week and by targeting specific audiences through dedicated media, disseminating the achievements of the European HPC ecosystem.

CONTACT
Marjolein Oorsprong, [email protected]

CALL
FETHPC-03-2017

PROJECT TIMESPAN
01/03/2018 - 31/12/2020

FF4EUROHPC
Connecting SMEs with HPC

BASE PILLARS OF THE FF4EUROHPC PROJECT

FF4EuroHPC has the general objective of supporting the EuroHPC initiative to promote the industrial uptake of high-performance computing (HPC) technology, in particular by small and medium-sized enterprises (SMEs), and thus increase the innovation potential of European industry. In order to achieve that general objective, FF4EuroHPC has defined a set of specific objectives.

THE FF4EUROHPC GENERAL OBJECTIVES ARE:
• Realise a portfolio of business-oriented application “experiments” that are driven by the SME end-users’ needs and executed by teams covering all required actors in the value chain, with the innovation potential for HPC use and service provision of utmost priority.
• Support the future national HPC Competence Centres (resulting from the call EuroHPC-04-2019) to more effectively collaborate with SMEs through involvement in the FF4EuroHPC experiment portfolio and integration in related outreach activities.
• Support the participating SMEs in the establishment of HPC-related innovation, either by using HPC infrastructures or services for their business needs or by providing new HPC-based services.
• Facilitate the widening of industrial HPC user communities and service providers in Europe by delivering compelling success stories for the use of HPC by SMEs, ensuring maximal awareness via communication and dissemination in collaboration with relevant Digital Innovation Hubs (DIHs) and industry associations.

www.ff4eurohpc.eu
: @FF4EuroHPC
: @ff4eurohpc

COORDINATING ORGANISATION
High Performance Computing Center Stuttgart (USTUTT/HLRS), Germany

OTHER PARTNERS
• scapos AG, Germany
• Consorzio Interuniversitario (CINECA), Italy
• Teratec (TERATEC), France
• Fundacion publica gallega centro tecnologico de supercomputacion de galicia (CESGA), Spain
• ARCTUR racunalniski inzeniring doo (ARCTUR), Slovenia

CONTACT
Dr.-Ing. Bastian Koller, [email protected]

CALL
EuroHPC-04-2019

PROJECT TIMESPAN
01/09/2020 – 31/08/2023

HPC-EUROPA3
Pan-European Research Infrastructure on High Performance Computing

TRANSNATIONAL ACCESS

The core activity of HPC-Europa3 is the Transnational Access programme, which funds short collaborative research visits in any scientific domain using High Performance Computing (HPC).

Visitors gain access to some of Europe’s most powerful HPC systems, as well as technical support from the relevant HPC centre to enable them to make the best use of these facilities.

Applicants identify a “host” researcher working in their own field of research, and are integrated into the host department during their visit. Visits can be made to any research institute in Finland, Germany, Greece, Ireland, Italy, the Netherlands, Spain, Sweden or the UK. (Note that project partner CNRS does not participate in the Transnational Access activity and therefore it is not possible to visit research groups in France.)

HPC-Europa3 visits can last between 3 and 13 weeks. The programme is open to researchers of any level, from academia or industry, working in any area of computational science. Priority is given to researchers working in the EU and Associated States (see http://bit.ly/AssociatedStates), but limited places are available for researchers from third countries.

HPC-Europa3 has introduced a “Regional Access Programme” to encourage applications from researchers working in the Baltic and South-East Europe regions. The Regional Access Programme specifically targets researchers with little or no HPC experience who need more powerful computing facilities than they have, but not the most powerful systems offered by HPC-Europa3. Such researchers can apply respectively to KTH-PDC in Sweden or GRNET in Athens, and will be given priority over more experienced applicants.

www.hpc-europa.eu
: @HPCEuropa3
: @hpceuropa
: @hpc-europa3-eu-project
: www.youtube.com/channel/UC9uOpFQGP9V0TQPXFUOgs_A

COORDINATING ORGANISATION
CINECA Consorzio Interuniversitario, Italy

OTHER PARTNERS
• Barcelona Supercomputing Centre (BSC), Spain
• CNRS - Centre National de la Recherche Scientifique, France
• CSC – IT Center for Science Ltd., Finland
• Ethniko Diktyo Erevnas Technologias AE (GRNET), Greece
• Kungliga Tekniska Hoegskolan (KTH), Sweden
• National University of Ireland Galway (ICHEC), Ireland
• SURFsara BV, Netherlands
• The University of Edinburgh (EPCC), United Kingdom
• Universität Stuttgart (HLRS), Germany

NETWORKING AND JOINT RESEARCH ACTIVITIES

The main HPC-Europa3 Networking Activity, External co-operation for enhancing the best use of HPC, aims to build stronger relationships with other European HPC initiatives, e.g. PRACE, ETP4HPC, and the HPC Centres of Excellence. The goal is to provide HPC services in a more integrated way across Europe. This activity also aims to increase awareness of HPC within SMEs. A series of workshops targeted at SMEs has been organised to encourage their uptake of HPC.

The Joint Research Activity, Container-as-a-service for HPC, aims to enable easy portability of end-user applications to different HPC centres, providing clear benefits for HPC-Europa3 visitors. This approach enables different applications and runtime environments to be supported simultaneously on the same hardware resources with no significant decrease in performance. It also provides the capability to fully capture and package all dependencies of an application so that the application can be migrated to other machines or preserved to enable a reproducible environment in the future.

The results from this JRA can be found in various deliverables which are available at http://www.hpc-europa.eu/public_documents.
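As a rough illustration of the container idea described above, an application and all of its dependencies can be captured in a single portable image. The JRA deliverables do not prescribe a particular runtime; Singularity/Apptainer is a common choice on HPC systems and is used here purely as a sketch, with the base image, package and application name (`myapp`) being hypothetical placeholders:

```
Bootstrap: docker
From: ubuntu:20.04

%post
    # capture the application's dependencies inside the image
    apt-get update && apt-get install -y libopenmpi-dev

%runscript
    # the packaged application then runs identically on any HPC
    # centre that provides a compatible container runtime
    exec myapp "$@"
```

Because the resulting image file bundles the application with its full user-space environment, it can be copied between centres or archived, which is precisely the portability and reproducibility benefit the JRA targets.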

CONTACT
[email protected]

CALL
INFRAIA-01-2016-2017

PROJECT TIMESPAN
01/05/2017 - 31/10/2021

Appendix
Completed projects

Projects completed before 1 January 2020 are not featured in this edition of the Handbook, but you can find the descriptions of these projects in the 2018 or 2019 Handbooks, available on our website.

AllScale An Exascale Programming, Multi-objective Optimisation and Resilience Management Environment Based on Nested Recursive Parallelism

ANTAREX AutoTuning and Adaptivity appRoach for Energy efficient eXascale HPC systems

ARGO WCET-Aware Parallelization of Model-Based Applications for Heterogeneous Parallel Systems

BioExcel Centre of Excellence for Biomolecular Research

COEGSS Centre of Excellence for Global Systems Science

ComPat Computing Patterns for High Performance Multiscale Computing

CompBioMed A Centre of Excellence in Computational Biomedicine

ECOSCALE Energy-efficient heterogeneous COmputing at exaSCALE

EoCoE Energy oriented Centre of Excellence for computer applications

ESCAPE Energy-efficient SCalable Algorithms for weather Prediction at Exascale

ESIWACE Centre of Excellence in Simulation of Weather and Climate in Europe

ExaFLOW Enabling Exascale Fluid Dynamics Simulations

ExaHyPE An Exascale Hyperbolic PDE Engine

ExaNeST European Exascale System Interconnect and Storage

ExaNoDe European Exascale Processor and Memory Node Design

ExCAPE Exascale Compound Activity Prediction Engine

EXTRA Exploiting eXascale Technology with Reconfigurable Architectures

greenFLASH Green Flash, energy efficient high performance computing for real-time science

INTERTWinE Programming Model INTERoperability ToWards Exascale

MANGO Exploring Manycore Architectures for Next-GeneratiOn HPC systems

MaX Materials design at the eXascale

M2DC Modular Microserver DataCentre

Mont-Blanc 3 Mont-Blanc 3, European scalable and power efficient HPC platform based on low-power embedded technology

NEXTGenIO Next Generation I/O for the Exascale

NLAFET Parallel Numerical Linear Algebra for Future Extreme Scale Systems

NoMaD The Novel Materials Discovery Laboratory

POP Performance Optimisation and Productivity

READEX Runtime Exploitation of Application Dynamism for Energy-efficient eXascale computing

SAGE Percipient StorAGe for Exascale Data Centric Computing

Imprint: © ETP4HPC

Text: ETP4HPC

Graphic design and layout: www.maiffret.net

Printed and bound in November 2020: EasyFlyer

Contact [email protected]

The EXDCI-2 project has received funding from the European Union’s Horizon 2020 research and innovation programme under the grant agreement No 800957.
