Exascale Computing
Meeting exascale potential with research on highly parallel systems, efficient scalability, energy-efficient algorithms, and large-scale simulations.

HLRS

High Performance Computing Center Stuttgart

The High Performance Computing Center Stuttgart (HLRS) of the University of Stuttgart is the first national supercomputing center in Germany and offers services to both academic users and industry. Apart from the operation of supercomputers, HLRS activities include teaching and training in distributed systems, software engineering, and programming models, as well as the development of new technologies. HLRS is an active player in the European research arena, with a special focus on Scientific Excellence and Industrial Leadership initiatives.

Our Network: HLRS is tightly connected to academia and industry through long-term partnerships with global market players such as Porsche and T-Systems, as well as worldwide companies, HPC centres, and universities. Particular attention is given to collaboration with Small and Medium Enterprises (SMEs).

Our Infrastructure: HLRS operates a Cray XC40 supercomputer (peak performance > 7 PetaFlops), as well as a variety of smaller systems, ranging from clusters to cloud resources.

Featured Topics: Big Data Analytics & Management | Programming Models & Tools | Cloud Computing | Visualization | Optimization & Scalability | Exascale Computing | Energy Efficiency | Services

Our Experience: HLRS has been at the forefront of regional, national, and European research and innovation over the last 20 years. During this time, HLRS has participated successfully in more than 90 European research and innovation projects.

Director HLRS: Prof. Dr. Michael Resch

Our Expertise: HLRS is a leading innovation center, applying software methods to HPC and cloud for the benefit of multiple application domains such as automotive, engineering, health, mobility, security, and energy. Thanks to the close interaction with industry, the center's capabilities and expertise support the whole lifecycle of simulation, covering research aspects, pre-competitive development, and preparation for production. The HLRS innovation group, which actively examines and tests new technologies, can bring into projects expertise on leading-edge hardware technologies and techniques for scaling up data analysis.

Exascale Computing

The shift from petascale computing to exascale computing, a thousandfold increase in computing power, constitutes the start of a new era within the community of High-Performance Computing (HPC). The paradigm shift from petascale to exascale will not only provide faster HPC systems, but also influence the path of designing hardware components, software, applications, and platforms. These aspects of supercomputing will need to be adapted, optimized, or, in some cases, even reinvented. After all, the ultimate goal is to efficiently solve computational problems that are still too complex for current systems. To this end, the High-Performance Computing Center Stuttgart (HLRS) takes part in various research activities that are topics of interest on the path to exascale. Our research activities improve the scalability of applications and enable them to run on massively parallel systems. We tackle large problems with high numeric complexity and work toward energy-efficient algorithms, reducing the power consumption of highly parallelized systems.

With this brochure, we invite you to discover not only how traditional HPC applications, such as computational fluid dynamics (CFD), can be improved on their path to exascale, but also how improvements need to be delivered, such as supporting the evolution of application-specific codes. Furthermore, there is a clear need to discover the full potential to manage the increasingly large data volumes arising at exascale, leading to the emergence of High Performance Data Analytics (HPDA), which will become more and more important in the future.

Project Overview

POP - Performance Optimisation and Productivity (A Centre of Excellence in Computing Applications)

Mont-Blanc 2/3

EXPERTISE - EXperiments and high PERformance computing for Turbine mechanical Integrity and Structural dynamics in Europe

EXASOLVERS - Extreme Scale Solvers for Coupled Problems

ExaFLOW - Enabling Exascale Fluid Dynamics Simulations

CATALYST - Combining HPC and High Performance Data Analytics for Academia and Industry

POP

Performance Optimisation and Productivity (A Centre of Excellence in Computing Applications)

High performance computing is a fundamental tool for the progress of science and engineering and, as such, for economic competitiveness. The growing complexity of parallel computers is leading to a situation where code owners and users are not aware of the detailed issues affecting the performance of their applications. The result is often an inefficient use of computing resources. Code developers often do not have sufficient insight into the detailed causes in order to address the problem properly.

The objective of POP is to operate a Centre of Excellence in performance optimisation and productivity and to share our expertise in the field with the computing community. In particular, POP will offer the service of precisely assessing the performance of computing applications of any sort, running on anything from a few hundred to many thousands of processors. Also, POP will show users the specific issues affecting the performance of their code and the best way to alleviate them. POP will target and offer such services to code owners and users from all domains, including infrastructure operators as well as academic and industrial users.

The estimated population of such applications in Europe is 1500, and within the project lifetime POP has the ambition of serving over 150 such codes. The added value of POP's services is the savings generated in the operation and use of a code, which will result in a significant return on investment (fixing a code costs less than running it below its optimal levels) by employing best-in-class services and releasing capacity for resolving other priority issues. POP will be a best-in-class centre. By bringing together European world-class expertise in the area and combining excellent academic resources with a practical, hands-on approach, it will improve access to computing applications, thus allowing European researchers and industry to be more competitive.

Project Partners
- Barcelona Supercomputing Center, Spain
- Numerical Algorithm Group, UK
- RWTH Aachen
- HLRS
- Teratec, FR
- Forschungszentrum Jülich

Project Information
- Funding Organisation: EU H2020
- Runtime: 10.2015 - 03.2018
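To give a flavour of what such a performance assessment looks at, the following sketch derives simple load-balance and communication-efficiency figures from per-rank timings. The input format, function name, and example numbers are illustrative assumptions; this is not POP's actual analysis tooling, only a minimal sketch of the kind of metrics such an audit reports.

```python
# Illustrative sketch (not POP's tooling): given per-rank compute and total
# times extracted from a trace, derive simple efficiency figures of the kind
# a performance audit reports.

def efficiency_metrics(compute_times, total_times):
    """compute_times / total_times: seconds of useful computation and of
    total runtime (compute + communication + wait) per MPI rank."""
    n = len(compute_times)
    avg_compute = sum(compute_times) / n
    max_compute = max(compute_times)
    max_total = max(total_times)

    load_balance = avg_compute / max_compute       # 1.0 = perfectly balanced
    comm_efficiency = max_compute / max_total      # share not lost to communication/wait
    parallel_efficiency = load_balance * comm_efficiency
    return {
        "load_balance": load_balance,
        "communication_efficiency": comm_efficiency,
        "parallel_efficiency": parallel_efficiency,
    }

if __name__ == "__main__":
    # Hypothetical timings for a 4-rank run (seconds).
    compute = [9.1, 8.7, 9.4, 6.2]
    total = [10.0, 10.0, 10.0, 10.0]
    for name, value in efficiency_metrics(compute, total).items():
        print(f"{name}: {value:.2f}")
```

Read this way, a load balance well below 1.0 points at uneven work distribution, while a low communication efficiency points at time lost waiting in MPI.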

Contact
Dr. José Gracia, Phone: +49 (0) 711/685-87208, E-Mail: [email protected]
Christoph Niethammer, Phone: +49 (0) 711/685-87203, E-Mail: [email protected]

Further Information
www.pop-coe.eu

Mont-Blanc 2/3

European approach towards Energy Efficient High Performance

Mont-Blanc 2
The limiting factor in the development of an exascale high performance system is power consumption. The Mont-Blanc 2 project therefore focused on developing a next-generation HPC system using embedded technologies to tackle this challenge. After the development of the hardware architecture in the first phase of the Mont-Blanc project, Mont-Blanc 2 focused more on the development of the necessary system software stack and the evolution of the system design. It examined a new programming model that allows writing efficient code for the new computer architecture. It emphasized tools for the programmer, such as debuggers and performance analysis tools, which increase the usability of such a system for its users.

The main contribution of HLRS is the development of scalable debugging tools. In particular, HLRS extended the task-based graphical debugger Temanejo with support for the OmpSs programming model and support for multi-node debugging. In addition, HLRS also contributed to the evaluation of the programming model and prototype system by porting and benchmarking an application from the engineering domain.

Funding Organisation: EC FP7
Runtime: 01.10.2013 – 31.01.2017

Mont-Blanc 3
The Mont-Blanc project aims to design a new type of computer architecture capable of setting future HPC standards, built from energy-efficient solutions used in embedded and mobile devices. The project has been running since 2011 and was extended in 2013 (Mont-Blanc 2) and 2015 (Mont-Blanc 3), respectively.

In particular, Mont-Blanc 3 will enable further development of the OmpSs programming model to automatically exploit multiple cluster nodes, transparent application checkpointing for fault tolerance, support for ARMv8 64-bit processors, and the initial design of the Mont-Blanc exascale architecture.

The HLRS contribution to the project is twofold. Firstly, we will participate in the development of the programming model, in particular combining MPI and OmpSs into a hybrid, task-aware MPI/OmpSs. This will allow overlapping MPI communication with computation with minimal effort for the application programmer. Secondly, HLRS will contribute to the evaluation of the programming model and the architecture by porting a representative scientific application.
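As a rough illustration of what overlapping MPI communication with computation means, the sketch below posts non-blocking sends and receives with mpi4py and does independent work before waiting for the messages. It is a hand-written stand-in, assuming a simple ring exchange, for what the task-aware MPI/OmpSs hybrid described above would arrange automatically.

```python
# Minimal sketch of communication/computation overlap with non-blocking MPI
# (mpi4py). A task-aware MPI/OmpSs runtime would schedule this overlap
# automatically; here it is written out by hand for illustration.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Halo buffers exchanged with the right/left neighbour on a periodic ring.
send_buf = np.full(1000, rank, dtype=np.float64)
recv_buf = np.empty(1000, dtype=np.float64)
right = (rank + 1) % size
left = (rank - 1) % size

# Start the exchange, but do not wait for it yet.
requests = [
    comm.Isend(send_buf, dest=right, tag=0),
    comm.Irecv(recv_buf, source=left, tag=0),
]

# Do work that does not depend on the halo while the messages are in flight.
interior = np.sin(np.arange(100000) * 1e-3).sum()

# Only block once the halo data is actually needed.
MPI.Request.Waitall(requests)
boundary = recv_buf.sum()

print(f"rank {rank}: interior={interior:.3f} boundary={boundary:.1f}")
```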

Funding Organisation: EC H2020
Runtime: 01.10.2015 – 30.09.2018

Contact
Dr. José Gracia, Phone: +49 (0) 711/685-87208, E-Mail: [email protected]

Further Information
www.montblanc-project.eu

EXPERTISE
EXperiments and high PERformance computing for Turbine mechanical Integrity and Structural dynamics in Europe

EXPERTISE is a European Training Network (ETN) that will contribute to training the next generation of mechanical and computer science engineers. Within the network, 15 Early Stage Researchers (ESRs) will work on the big challenges on the way to a fully validated nonlinear dynamic model of turbomachinery components. Along their way they are supervised by experts at world-leading institutions from across Europe in this multidisciplinary project. The ultimate research objective of EXPERTISE is to develop advanced tools for the dynamic analysis of large-scale models of turbine components to pave the way towards the virtual testing of the entire machine. Key aspects addressed thereby will be the understanding and accurate modeling of the physics of frictional contact interfaces, new, highly efficient and accurate nonlinear dynamic analysis tools, as well as the integration of all this into high performance computing (HPC) techniques, enabling for the first time the accurate dynamic analysis of a large-scale turbomachinery model.

The research program of EXPERTISE is based on the following Work Packages (WPs):
- WP1 – Advanced modeling of friction contacts (a minimal textbook sketch follows after this list)
- WP2 – Identification of contact interfaces
- WP3 – Structural dynamics of the turbine and its components
- WP4 – High Performance Computing for structural dynamics
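To illustrate, at the simplest possible level, what modeling a frictional contact interface involves (see WP1), the sketch below implements a one-dimensional elastic-slip (Jenkins) friction element driven by a prescribed displacement cycle. It is a textbook toy model with made-up parameter values, not one of the EXPERTISE analysis tools.

```python
# Textbook illustration (not an EXPERTISE tool): a 1D elastic-slip (Jenkins)
# friction element, the simplest model of a frictional contact interface.
# The tangential force builds up elastically until it reaches the Coulomb
# limit, then the interface slips; the resulting hysteresis damps vibrations.
import numpy as np

def jenkins_force(u_history, k_t=1.0e6, mu=0.3, normal_load=500.0):
    """Return the tangential contact force for a given displacement history."""
    f_limit = mu * normal_load          # Coulomb slip limit
    f, u_prev, forces = 0.0, 0.0, []
    for u in u_history:
        f_trial = f + k_t * (u - u_prev)    # elastic (stick) predictor
        if abs(f_trial) > f_limit:          # slip: clamp to the limit
            f = np.sign(f_trial) * f_limit
        else:                               # stick: keep the elastic force
            f = f_trial
        forces.append(f)
        u_prev = u
    return np.array(forces)

# One cycle of harmonic relative motion at the interface.
t = np.linspace(0.0, 2.0 * np.pi, 200)
u = 1.0e-3 * np.sin(t)
f = jenkins_force(u)

# Energy dissipated per cycle = area of the hysteresis loop (trapezoid rule).
dissipated = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u)))
print(f"dissipated energy per cycle: {dissipated:.3f} J")
```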

HLRS, as an expert in the field of high performance computing (HPC), will lead the HPC activities in EXPERTISE. HLRS will also have a key role in the network by training all the researchers in modern HPC techniques, and will furthermore add its own research project, addressing the tremendous problem of handling the huge amounts of data that are produced during these full-model simulations and that bring HPC systems to their limits (one common ingredient, parallel I/O, is sketched after the project information below).

Beneficiaries
Imperial College of Science Technology and Medicine London | Universität Stuttgart | University of Oxford | CRAY UK Limited | École Centrale de Lyon | Middle East Technical University | Vysoka Skola Banska – Technicka Univerzita Ostrava | Barcelona Supercomputing Center – Centro Nacional de Supercomputacion | Mavel AS | Technische Universität München

Project Information
- Runtime: 03.2017 – 02.2021
- Funding Organisation: Horizon 2020, Marie Sklodowska-Curie Actions, Innovative Training Network (H2020-MSCA-ITN)
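One ingredient commonly used to cope with such data volumes is parallel I/O, where every process writes its share of a result directly into a single shared file instead of funnelling everything through one node. The sketch below shows this with MPI-IO via mpi4py; the file name and data layout are illustrative assumptions, and the example is not taken from the EXPERTISE work plan.

```python
# Illustrative sketch (not from the EXPERTISE work plan): each MPI rank
# writes its slice of a distributed result into one shared file with MPI-IO.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank owns a contiguous block of the global result vector.
local_n = 1_000_000
local_data = np.full(local_n, rank, dtype=np.float64)

fh = MPI.File.Open(comm, "result.bin",
                   MPI.MODE_WRONLY | MPI.MODE_CREATE)

# Collective write at a rank-dependent byte offset: no gathering on one node.
offset = rank * local_n * local_data.itemsize
fh.Write_at_all(offset, local_data)
fh.Close()
```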

Contact
Dr. José Gracia, Phone: +49 (0) 711/685-87208, E-Mail: [email protected]
Christoph Niethammer, Phone: +49 (0) 711/685-87203, E-Mail: [email protected]

Further Information
www.msca-expertise.eu

EXASOLVERS

Extreme Scale Solvers for Coupled Problems

Exascale systems will be characterized by billion-way parallelism. Computing on such extreme scales requires suitable methods. The ExaSolvers 2 project hence investigates such methods:

Parallel adaptive multigrid (G-CSC, University of Frankfurt)
The multigrid method is of optimal complexity and hence suited for extreme-scale parallelism. The group from Frankfurt develops its own parallel multigrid framework, ug4, which also adapts the mesh resolution in order to increase the solution efficiency.

Time parallelization (ICS, USI Lugano)
In transient simulations, not only the simulation domain but also the investigated time frame can be divided and handled on different execution units in parallel, in order to efficiently use the massive parallelism of future systems.

Optimization and inverse problems (Trier University)
By means of inverse problems, it is possible to determine simulation parameters that cannot be measured due to, e.g., subminiature structures or inaccessible environments. However, using the aforementioned methods for optimization and inverse problems provides further potential to use exascale systems efficiently.

Uncertainty quantification (RWTH Aachen)
The group from Aachen uses low-rank hierarchical tensors to quantify the uncertainties of simulations, which allows a further increase in the amount of parallelism that can be used efficiently.

Energy efficiency (HLRS, University of Stuttgart)
Due to their massive parallelism, exascale systems will require huge amounts of energy. We hence investigate methods to increase the energy efficiency of such systems on multiple levels, i.e. algorithmic efficiency, efficiency-aware implementation, as well as adaption of hardware parameters (e.g. reducing the CPU's core frequency, known as Dynamic Voltage and Frequency Scaling); a small sketch of this last level follows below.

A collaboration with the Japanese ADVENTURE project has been established in order to deploy the performance engineering expertise of the project partners from Japan on codes developed by the ExaSolvers 2 project. In return, ADVENTURE is going to integrate our methods into their framework. In order to assess the developed methods, a simulation of transdermal drug delivery through the human skin, with detailed resolution of the lipid scale, is used as the benchmark application.

Project Information
- Runtime: 05.2016 - 04.2019
- Funding Organisation: DFG
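As a small illustration of the hardware-parameter level mentioned under energy efficiency, the snippet below reads per-core frequency settings from the Linux cpufreq sysfs interface; lowering scaling_max_freq through the same interface (root privileges required) is one way to trade clock speed for power. This is a generic Linux sketch, not the measurement or tuning tooling used in ExaSolvers 2.

```python
# Illustrative sketch: inspect per-core DVFS settings via the Linux cpufreq
# sysfs interface. Writing scaling_max_freq (root only) lowers the allowed
# clock and thereby the power draw; this is not the project's own tooling.
from pathlib import Path

def read_khz(path):
    return int(path.read_text().strip())

for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    cpufreq = cpu_dir / "cpufreq"
    if not cpufreq.is_dir():
        continue  # core without a cpufreq interface
    cur = read_khz(cpufreq / "scaling_cur_freq") / 1000
    fmin = read_khz(cpufreq / "scaling_min_freq") / 1000
    fmax = read_khz(cpufreq / "scaling_max_freq") / 1000
    print(f"{cpu_dir.name}: {cur:.0f} MHz (allowed {fmin:.0f}-{fmax:.0f} MHz)")
```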

Contact
Björn Dick, Phone: +49 (0) 711/685-87189, E-Mail: [email protected]
Dr. Ralf Schneider, Phone: +49 (0) 711/685-87236, E-Mail: [email protected]

Further Information
www.hlrs.de/about-us/research/current-projects/exasolvers

ExaFLOW

Enabling Exascale Fluid Dynamics Simulations

We are surrounded by moving fluids (gases and liquids), be it breathing or the blood flow in our arteries; the flow around cars, ships, and airplanes; the changes in cloud formations or plankton transport in the oceans; even formations of stars and galaxies are modelled as phenomena in fluid dynamics. Fluid dynamics simulations provide a powerful tool for the analysis of fluid flows and are an essential element of many industrial and academic problems. In fluid dynamics there is almost no limit to the size of the systems to be studied via numerical simulations. The complexities and nature of fluid flows, often combined with problems set in open domains, imply that the resources needed to computationally model problems of industrial and academic relevance are almost unbounded. The main goal of this project is to address algorithmic challenges to enable the use of more accurate simulation models in exascale environments.

The main goal of ExaFLOW is to address key algorithmic challenges in CFD (Computational Fluid Dynamics) to enable simulation at exascale, guided by a number of use cases of industrial relevance, and to provide open-source pilot implementations. Thus, driven by problems of practical engineering interest, we focus on important simulation aspects, including:
- error control and adaptive mesh refinement in complex computational domains (a toy refinement indicator is sketched after this list)
- resilience and fault tolerance in complex simulations
- solver efficiency via mixed discontinuous and continuous Galerkin methods and appropriate optimised preconditioners
- heterogeneous modelling to allow for different solution algorithms in different domain zones
- evaluation of energy efficiency in solver design
- parallel input/output and in-situ compression for extreme data
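As a toy illustration of the first aspect, error control and adaptive mesh refinement, the sketch below marks 1D cells for refinement where a simple jump-based indicator exceeds a threshold. The indicator, threshold, and test function are illustrative choices and do not represent ExaFLOW's actual error estimators.

```python
# Toy sketch of a refinement indicator (not ExaFLOW's error estimators):
# split 1D cells whose solution jump to the neighbour is large, so that
# resolution concentrates where the field varies strongly.
import numpy as np

def refine(edges, values, threshold):
    """edges: sorted cell-edge coordinates; values: one value per cell."""
    new_edges = [edges[0]]
    for i in range(len(values)):
        left, right = edges[i], edges[i + 1]
        # Jump to the previous cell acts as a crude error indicator.
        jump = abs(values[i] - values[i - 1]) if i > 0 else 0.0
        if jump > threshold:
            new_edges.append(0.5 * (left + right))   # split the cell in two
        new_edges.append(right)
    return np.array(new_edges)

# A sharp layer around x = 0.5 on an initially uniform mesh.
edges = np.linspace(0.0, 1.0, 21)
centres = 0.5 * (edges[:-1] + edges[1:])
u = np.tanh(50.0 * (centres - 0.5))

refined = refine(edges, u, threshold=0.2)
print(f"{len(edges) - 1} cells -> {len(refined) - 1} cells after one pass")
```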

In ExaFLOW, the High-Performance Computing Center Stuttgart (HLRS), in cooperation with the Institute of Aerodynamics and Gas Dynamics (IAG) of the University of Stuttgart, forms the second biggest partner in the ExaFLOW consortium. In terms of data reduction, HLRS is especially responsible for the evaluation and development of data reduction algorithms based on dynamic mode decomposition (DMD) and emerging new ideas related to the Koopman operator (a toy DMD sketch follows below). Additionally, the task of researching energy efficiency and awareness is located at HLRS. Within this scope, the power consumption of different implementations is measured, using both high-resolution component-level and lower-resolution node-level measurement methods.

Project Information
- Runtime: 10.2015 - 09.2018
- Funding Organisation: EU H2020
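Dynamic mode decomposition compresses a sequence of flow snapshots into a few spatial modes with associated eigenvalues that encode frequencies and growth rates. The sketch below shows the standard SVD-based formulation on synthetic data; it illustrates the idea only and is not the project's implementation.

```python
# Illustrative sketch of SVD-based dynamic mode decomposition (DMD) for data
# reduction: a snapshot sequence is compressed into r modes plus eigenvalues.
import numpy as np

def dmd(snapshots, r):
    """snapshots: (n_points, n_times) array; r: number of retained modes."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh.conj().T[:, :r]
    # Reduced operator approximating the one-step linear map X -> Y.
    A_tilde = U.conj().T @ Y @ V @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V @ np.diag(1.0 / s) @ W      # exact DMD modes
    return eigvals, modes

# Synthetic "flow" data: two travelling waves plus a little noise.
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 4 * np.pi, 80)
data = (np.outer(np.sin(x), np.cos(2 * t))
        + 0.5 * np.outer(np.cos(3 * x), np.sin(5 * t))
        + 0.01 * np.random.randn(200, 80))

eigvals, modes = dmd(data, r=4)
print("dominant DMD eigenvalues:", np.round(eigvals, 3))
```

Storing only the retained modes and eigenvalues instead of every snapshot is what turns DMD into a data reduction tool for large simulations.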

Contact
Dr. Ralf Schneider, Phone: +49 (0) 711/685-87236, E-Mail: [email protected]

Further Information
www.exaflow-project.eu

CATALYST

Combining HPC and High Performance Data Analytics for Academia and Industry

At the High Performance Computing Center Stuttgart (HLRS), customers tend to execute more and more data-intensive applications. Since it is no longer feasible for data to be processed and analysed manually by domain experts, HLRS and Cray Inc. have launched the CATALYST project to advance the field of data-intensive computing by converging HPC and data analytics, in order to allow a seamless workflow between compute-intensive simulations and data-intensive analytics. For that purpose, Cray Inc. designed the Urika-GX data analytics hardware, which supports Big Data technologies and furthermore enhances the analysis of semantic data. This system has been installed as an extension of Hazel Hen, the current HPC flagship system of HLRS. The main objective of CATALYST is to evaluate the hardware as well as the software stack of the Urika-GX and its usefulness, with a particular focus on applications from the engineering domain.

As the majority of today's data analytics algorithms are oriented towards text processing (e.g. business analytics) and graph analysis (e.g. social network studies), we further need to evaluate existing algorithms with respect to their applicability to the engineering domain. Thus, CATALYST will examine future concepts for both hardware and software.

The first case study, conducted in collaboration with Cray Inc., addresses the performance variations of our Cray XC40 system. Performance variability on HPC platforms is a critical issue with serious implications for the users: irregular runtimes prevent users from correctly assessing performance and from efficiently planning allocated machine time. Consequently, monitoring today's IT infrastructures has actually become a big data challenge of its own. The analysis workflow used to identify the causes of runtime variations consists of three steps with different configuration parameters.

With the help of this workflow, 470 so-called "victim" applications have been identified that suffered from the particular behaviour of 3 "aggressors" (a toy version of such an analysis is sketched below). Consequently, HLRS took this information and approached the responsible stakeholders in order to optimise their applications. As a result, not only the performance of these applications has been improved, but also the overall system performance in production.

Outlook
- Big Data application evaluation
- Close cooperation with partners from both industry and academia
- Seamless integration of the Big Data system into our existing HPC infrastructure
- Develop and evaluate practical case studies to advertise the solution

Project Information
- Runtime: 10.2016 – 09.2019
- Funding Organisation: Ministry of Science, Research and the Arts Baden-Württemberg
- Partners: HLRS, Cray Inc. & Daimler AG (associated)
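To make the victim/aggressor idea concrete, the following toy sketch flags applications whose runtimes vary strongly and checks how often their slow runs overlap in time with a candidate aggressor job. The job records, thresholds, and metric are purely illustrative assumptions and do not reproduce the project's actual monitoring workflow.

```python
# Illustrative sketch (not the CATALYST workflow): flag applications whose
# runtimes vary strongly and check how often their slow runs coincide with a
# candidate "aggressor" job being active on the system.
from statistics import mean, stdev

# Hypothetical job records: (start time, end time) in hours per application.
jobs = {
    "cfd_solver": [(0.0, 2.0), (3.0, 6.4), (7.0, 9.1), (9.5, 12.4)],
    "md_code":    [(1.0, 1.5), (4.0, 4.5), (7.5, 8.0), (10.0, 10.5)],
}
aggressor_windows = [(3.0, 6.0), (9.5, 12.0)]   # when the suspect job ran

def overlaps(run, windows):
    start, end = run
    return any(start < w_end and w_start < end for w_start, w_end in windows)

for app, runs in jobs.items():
    durations = [end - start for start, end in runs]
    cv = stdev(durations) / mean(durations)      # coefficient of variation
    if cv < 0.1:
        print(f"{app}: runtime is stable (CV={cv:.2f})")
        continue
    slow_cut = mean(durations) * 1.1
    slow_runs = [run for run, d in zip(runs, durations) if d > slow_cut]
    hits = sum(overlaps(run, aggressor_windows) for run in slow_runs)
    print(f"{app}: runtime CV={cv:.2f}, "
          f"{hits}/{len(slow_runs)} slow runs overlap the aggressor")
```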

Contact
Michael Gienger, Phone: +49 (0) 711/685-63824, E-Mail: [email protected]

Further Information
www.hlrs.de/en/about-us/research/current-projects/data-analytics-for-hpc

High Performance Computing Center Stuttgart (HLRS)
Editor: Lena Bühler, Eric Gedenk, Dr. Bastian Koller

University of Stuttgart
Nobelstrasse 19 | 70569 Stuttgart | Germany
Phone: +49 (0)711 / 685 87 269
Fax: +49 (0)711 / 685 87 209

Design: Janine Jentsch, Ellen Ramminger

Picture Credits:
Cover and interior shot: Bohris Lehner for HLRS
Back cover shot: Simon Sommer for HLRS

Mail: [email protected]
www.hlrs.de

© HLRS 2018