ANNUAL REPORT 2018

Table of contents

2. Member logos
3. Editorial
4. Our association
4. Our Organisation
5. The ETP4HPC Steering board
5. Office
6. Our 96 members represent 23 countries
6. Breakdown of our members by statute and profile
7. News from our Working Groups
   - Working group on technological road mapping: 2018 Workshops and Future Plans
   - Energy Efficiency working group
   - Industrial User Working Group
8. How will EuroHPC benefit from European HPC technology?
10. New members
17. FET-HPC 2015 projects: A Wealth of Results!
20. Snapshot: The technology stacks of High Performance Computing and Big Data Computing
20. What they can learn from each other
20. Stack overview and profiles
23. What Big Data Computing can Learn from HPC
23. What HPC can Learn from Big Data Computing
24. Horizon Europe HPC Vision
26. New Centres of Excellence in computing applications
28. Our main events

Contact ETP4HPC [email protected] www.etp4hpc.eu

@etp4h

Text: ETP4HPC
ETP4HPC Chairman: Jean-Pierre Panziera
Editorial Team: Pascale Bernier-Bruna (Coordinator) - Maike Gilliot - Michael Malms - Marcin Ostasz
External contributions:
• Article on EuroHPC: Gustav Kalbe
• New members: CoolIT, Constelcom Ltd, Infineon, Inspur, LINKS Foundation, Mellanox, Neovia, Saiver SRL, Submer, TS-JFL, UCit, University of A Coruña
• Article on FET-HPC 2015 projects: Jean-François Lavignon
• New Centres of Excellence: Cheese, Excellerat, Hidalgo
Graphic design and layout: Antoine Maiffret (www.maiffret.net)
Printed and bound by Snel Grafics – 4041 Vottem, Belgium

Member logos
[Full-page display of the logos of ETP4HPC member organisations]

Editorial

JEAN-PIERRE PANZIERA, ETP4HPC CHAIR

Dear ETP4HPC Members and Partners,

It may be a fact known to only a few of you, but I am an avid mountaineer. I am one of a team of old friends, and each year we try to do at least one challenging Alpine route – 'challenging' according to our abilities, with some of us finding the 'challenge' part of it more demanding with the passing of time. When you scale a mountain such as the Dôme de Neige des Écrins or Pic Coolidge, the most beautiful moment is not when you reach the summit – the most poignant moment is when you realise that the summit is within your reach, that you are going to make it to the top, sound and in one piece. It usually happens when you pass the crux, the most difficult part, and you still feel strong enough to keep going and have the end of the journey in full view.

If you are part of the European HPC ecosystem, you should consider yourself very lucky, because we are past that inflexion point. The efforts made since the beginning of our journey – i.e. since the establishment of ETP4HPC and the beginnings of the construction of the European HPC ecosystem – are paying off. We have a vibrant ecosystem in place meeting the needs of all users, and we have over 150 technology project results to build on. Our instinct was right when we began working with BDVA – we are now working with our Big Data counterparts on the main advisory body of EuroHPC, and we are extending this collaboration pattern to other areas: the Internet of Things and Artificial Intelligence. The EXDCI-2 project constitutes an excellent mechanism to develop those collaborations.

As we speak, we are finishing our Vision document and embarking on the preparation of our next Strategic Research Agenda (SRA). This SRA will be the source of priorities for the research projects driven by EuroHPC. I encourage you all to take part in the writing of the SRA. We will need to work very closely with all stakeholders and projects, in particular with the European Processor Initiative, in order to deliver a vision that will allow EuroHPC to reach its main objective: independent access to HPC resources for European users.

We also need to think about the way back from the summit – to use another mountaineering analogy. We should focus on the areas where Europe can achieve long-term leadership. I encourage our members to reflect on the new challenges emerging from the discussions associated with the SRA, e.g. end-to-end computing, complex workflows, etc.

The rest of our journey will take place in close cooperation with EuroHPC, and we should focus on ensuring that we combine efforts for the best benefit of the European HPC value chain and its stakeholders. Please participate actively in this complex process, as such a chance will not happen again. This is the time to make it happen, to shape the future of European HPC.

Thank you for all your support to date.


Our association

ETP4HPC, the European Technology Platform in the area of High-Performance Computing (HPC), was incorporated as a Dutch association in June 2012. We are an industry-led think tank gathering European HPC technology stakeholders: technology vendors, ISVs, service providers, research centres and end users.

Our main objective is to increase the global market share of the European HPC technology vendors. Our main deliverable is our Strategic Research Agenda (SRA), which defines the priorities of the European HPC technology research programme managed by the European Commission.

Until 2018, the ETP4HPC association represented the European HPC industry in the dialogue with the European Commission within the HPC contractual Public-Private Partnership (cPPP). As of 2019, the role of the HPC cPPP is taken over by the EuroHPC Joint Undertaking (JU), in particular regarding HPC R&I funding and steering. In this new context, ETP4HPC will continue to help shape Europe's HPC Research Programme. ETP4HPC is a private member of the EuroHPC JU and is well represented in its Research and Innovation Advisory Group.

OUR ORGANISATION
[Organisation chart: Members and Full Members, ETP4HPC's Working Groups, External contributors, Steering Board, Office]

The ETP4HPC Steering board

The current Steering Board was elected at the Annual General Assembly on 13 March 2018, for 2 years. Its current members represent:
• European Research centres (5 seats): BSC, CEA, Cineca, Fraunhofer, Forschungszentrum Jülich
• European SMEs (4 seats): E4 Computer Engineering, ParTec, Megware, ClusterVision
• European-controlled corporations (4 seats): Atos, ESI Group, Infineon, Seagate
• International companies with R&D in Europe (2 seats): Fujitsu, Intel

The Steering Board appointed the following Steering Committee:
• Chairman: Atos - Jean-Pierre Panziera
• Vice-chair for Research: CEA - Jean Gonnord
• Vice-chair for Industry: Seagate - Sai Narasimhamurthy
• Secretary: ParTec - Hugo Falter
• Treasurer: ClusterVision - Frank van der Hout

Office

ETP4HPC has a distributed Office team, based in France, Spain, Germany and the Netherlands. The Office is in charge of day-to-day operations, under the supervision of the Steering Board.
• Pascale Bernier-Bruna (Atos) is our communication leader and community manager.
• Carolien Deklerk (ClusterVision) is our accountant, covering all financial aspects.
• Ina Schmitz (ParTec) is in charge of logistics related to the Steering Board meetings. Since January 2019, Chris Ebell (ParTec) has taken over this role.
• Maike Gilliot (Teratec) manages ETP4HPC's contributions to the H2020-funded support actions EXDCI-2 and HPC-GIG and contributes to outbound activities, such as workshops.
• Michael Malms (IBM) is leading our SRA efforts.
• Jean-Philippe Nominé (CEA) coordinates the Office team and supports the Chairman.
• Marcin Ostasz (BSC) co-manages our road-mapping effort and industrial relations.


OUR 96 MEMBERS REPRESENT 23 COUNTRIES

[Map: geographic distribution of our members; the countries represented are France, Germany, the United Kingdom, Italy, Spain, the Netherlands, Poland, Ireland, Norway, Sweden, Denmark, Finland, the Czech Republic, Hungary, Slovenia, Croatia, Greece, Luxembourg, Israel, the USA, Canada, China and Japan]

BREAKDOWN OF OUR MEMBERS BY STATUTE AND PROFILE

Our membership grew by 12% over the last year, with the majority of new members coming from the private sector.

[Charts: by statute, 58 Full members and 38 Associated members; by profile, 54 Private sector and 42 Public sector members; each group is broken down into SMEs, industrial organisations, research organisations, and associations or individuals]

News from our Working Groups

WORKING GROUP ON TECHNOLOGICAL ROAD MAPPING: 2018 WORKSHOPS AND FUTURE PLANS

This Working Group continued in 2018 the main activity of the association - the elaboration of research priorities and related recommendations - by providing input to the HPC-related Work Programme 2019 and 2020 and by preparing our HPC Vision for the new 2021-2027 EC research programme ("Horizon Europe", the 9th Framework Programme).

The Working Group organised many workshops throughout the year to develop our recommendations, jointly with our members and external experts from BDVA, HiPEAC and AIOTI.

Contact: Michael Malms

ENERGY EFFICIENCY WORKING GROUP

The Working Group on Energy Efficiency organised a half-day workshop during the European HPC Summit Week in Ljubljana, tackling the energy efficiency problem from different perspectives, covering hardware aspects as well as software-based solutions. For 2019, a similar workshop is planned within ISC.

Contact: Michael Ott

INDUSTRIAL USER WORKING GROUP

The Industrial User Working Group is actively seeking to engage more HPC users in ETP4HPC, focusing at this stage on large companies with solid experience in HPC usage. The group is engaging with some key users in order to understand their needs and to validate ETP4HPC's approach.

Contact: Ingolf Stärk


EuroHPC: leveraging the assets of European HPC technology?

Gustav Kalbe, Head of the High Performance Computing and Quantum Technologies Unit of the European Commission and also the Interim Executive Director of the EuroHPC Joint Undertaking

It is a pleasure to address the members of the ETP4HPC Association. You have closed another successful year while our ecosystem passes over the inflection point of its growth. Your organisation has played an extremely important role in the building of Europe's HPC ecosystem. Looking back a decade, Europe's activities in the area of HPC were not fully coordinated. We did not have an entity representing our HPC technology providers, we had a limited number of technology projects in place, and our application expertise was not organised in the highly successful Centres of Excellence. Your association has been instrumental not only in helping us, the European Commission and the European politicians, understand the needs of the European HPC technology industry and the optimal path for its rapid development, but also in motivating the rest of the ecosystem, in particular our application expertise and SME technology providers.

For the last few years, we have been working together within the contractual Public-Private Partnership, which has proven to be an efficient forum for exchanging updates and executing commitments. The results of this work are tangible – the impressive outputs of the first FETHPC projects and the potential of the on-going ones, including the co-design initiatives. Your input – the ETP4HPC Strategic Research Agenda and related documents – has served us well, both in preparing the calls and in assessing the projects. I expect the Association not only to continue this work, but also to extend your support as a Private Member of the EuroHPC Joint Undertaking.

Now, Europe is changing gear. I look forward to working very closely with your representatives within the EuroHPC Research and Innovation Advisory Board. Let us have a look at how EuroHPC is going to benefit from the provision of European technology, a question dear to the hearts of many a member of ETP4HPC.

EuroHPC will continue the work of the previous programmes in the area of HPC technology research, but its main mission is different: to develop European technologies for the coming exascale era. EuroHPC combines funding from the European Union with funding from the European states participating in the EuroHPC initiative. We want to deliver a European HPC infrastructure that is competitive with the best in the world. The first step is acquiring next-generation supercomputers and putting them into service for user applications by 2020. Then, we will aim to support research and innovation activities to deploy exascale machines in 2022, so users can continue to benefit from world-leading HPC technologies. Our infrastructure should primarily serve European scientists, but there is a lot of interest among European companies in the application of European HPC technology. These certainly include car manufacturers and other large engineering companies, along with oil and gas firms, logistics companies, pharmaceutical, aerospace and IT companies.

We have two important assets: Europe has world-class strengths in applications and other HPC software, and we also have Centres of Excellence to ensure users always have access to the best available applications. Another European strength is our strong focus on co-design, on ensuring that hardware and software are developed together with application performance in mind.

Our main challenge in Europe today is that we do not produce some key technologies for HPC, including microprocessors. We should be able to produce globally competitive European HPC systems that are based on indigenous European technologies and components. Being able to produce them ourselves is critical for Europe's strategic independence in HPC and for advancing applications.

We have a number of initiatives in Europe to develop the missing technologies, and we will see whether they are competitive in time for the procurements of the supercomputers the EuroHPC Joint Undertaking plans to acquire jointly with the EuroHPC Participating States. With an extreme-scale demonstrator machine, we intend to test, for example, whether the design for the European processor is stable. The European Processor Initiative has established ambitious goals to achieve by 2025, including the development of an indigenous HPC processor, an embedded processor for automated vehicles and other AI applications, and an accelerator based on the RISC-V standard.

Our hope is that, by 2022, European technology will be mature enough to be globally competitive. European competitors will need to compete strictly on merit, although ideally we would like companies based in the EU to be able to develop the machines. However, we do not exclude per se cooperation with non-EU vendors if that can satisfy our overall ambition of deploying competitive HPC supercomputers and ensuring unrestricted access to strategic technology components.

Our expectation is that the European HPC ecosystem will keep growing at cruising speed over the next years. Our vision is long-term. In 2025, we should have a fully interconnected HPC infrastructure across Europe, connecting national and European machines. This fully meshed network will be available to any user in Europe, with a single point of entry: the user sends a request, it goes to the most appropriate machine, and the process is completely transparent to the user. The user will not need to care where the application runs, whether on a national or a European machine. The second expectation is that we manage to keep European science and industry competitive because they have had access to world-class HPC resources.

I wish you all the best in your endeavours. I look forward to seeing the technology developed by your teams push European HPC towards global leadership.

The EuroHPC Joint Undertaking (JU) brings together the EU (represented by the European Commission), 25 EU Member States and countries associated to Horizon 2020 that have decided to join, and the private members ETP4HPC and BDVA. The EuroHPC JU is a legal and funding entity which enables the pooling of EU and national resources in High-Performance Computing.
More information: https://eurohpc-ju.europa.eu
EuroHPC on Twitter: @EuroHPC_JU


New members

CoolIT
SME/ASSOCIATED MEMBER

CoolIT specializes in scalable liquid cooling solutions for the world's most demanding data centres. Through its modular, rack-based Direct Liquid Cooling technology, Rack DCLC™, CoolIT enables dramatic increases in rack densities, component performance and power efficiencies. From Passive Coldplate Loops specifically designed for the latest high-TDP processors from Intel, NVIDIA and AMD, through to Coolant Distribution Units, CoolIT's reliable technology installs into any server or rack, ensuring ease of adoption and maintenance.

https://www.coolitsystems.com/

Constelcom Ltd
SME/ASSOCIATED MEMBER

Constelcom's mission is to provide easy, secure and private access to on-premises HPC resources to anyone, regardless of size and skills and for any application, from its web-accessible Constellation™ platform. Constellation™ empowers HPC Centres and users by providing a collaborative environment to manage and access supercomputing resources when and where they are needed.

Constelcom was founded after our founding team had experienced first-hand the barriers to investigation that a shortfall in compute power can mean to any ground-breaking research. As users of HPC ourselves, we understand deeply the impact of hassle-free and self-managed access to HPC, strong collaboration amongst multi-skilled teams, and what that can mean for discovery and, in turn, economics. Empowering engineers, scientists and innovators to run projects and remain in control of HPC is at the heart of Constellation™. As close collaborators to HPC Centres, we also understand the challenges of managing clients, allocations, accounts and systems. For HPC Centres, licensing Constellation™ means freedom and visibility, both to manage internal resources and to optimise usage for external clients. Our team of expert scientists and engineers provides assistance to HPC Centres with their management, and supports industrial users with software acceleration and adaptation to HPC, simulations and workflows.

https://www.constelcom.com/

Infineon
EUROPEAN CORPORATION/FULL MEMBER

Infineon Technologies AG is a world leader in semiconductor solutions that make life easier, safer and greener. Microelectronics from Infineon is the key to a better future. In the 2018 fiscal year (ending 30 September), the Company reported sales of around €7.6 billion with about 40,100 employees worldwide. Infineon is listed on the Frankfurt Stock Exchange (ticker symbol: IFX) and in the USA on the over-the-counter market OTCQX International Premier (ticker symbol: IFNNY).

https://www.infineon.com/

Inspur
GLOBAL CORPORATION/ASSOCIATED MEMBER

Inspur is a global leader in computing technology innovation, renowned for building customized platforms that converge HPC, AI and cloud computing. Gartner has ranked Inspur as China's largest server manufacturer and among the Top 3 in the world. With a focus on innovation, Inspur has proven capabilities in HPC system research, design and deployment, after-sale service, operations, and the maintenance of more than 17% of the HPC systems in the TOP500 list, which ranks Inspur as the No. 2 global manufacturer by total number of systems in the TOP500 supercomputers. Inspur offers complete HPC hardware and software product lines and application development services. Inspur has accrued extensive experience in complete Petascale HPC system designs, including the development of many cutting-edge HPC applications. Inspur HPC has helped many businesses and organizations achieve, for the first time, precise calculation results that simulate untestable real-world scenarios to predict complex future outcomes. These solutions have been used to make people's lives healthier with the use of precision medicine for early diagnosis; to model and predict large-scale catastrophes and natural disasters; and to help large companies and SMEs manufacture intelligently to deliver world-class products and services. Inspur has pushed the innovation envelope by converging HPC with other cutting-edge disciplines like AI and machine learning to solve a myriad of critical business challenges for its clients – a fact that steers Inspur ahead of several renowned industry peers. Inspur is devoted to providing the European HPC community with extreme HPC hardware design and heterogeneous acceleration technologies for Parallel Computing, Big Data, Cloud Services, Artificial Intelligence, etc. As a complete, end-to-end HPC solution provider, Inspur specializes in delivering "turnkey projects" to customers, from the underlying HPC hardware infrastructure, cluster management system and software stack, to HPC application development and performance tuning services.

http://en.inspur.com/


LINKS Foundation
RESEARCH ORGANISATION/FULL MEMBER

The LINKS Foundation was founded in 2016, exploiting and enhancing the experience gained - from the Istituto Superiore Mario Boella (ISMB) and from the Istituto Superiore on Territorial Systems for Innovation (SiTI) - in the field of applied research and technology transfer, adding a fundamental element: the enhancement of the research result. As part of the knowledge chain, the Foundation intends to strengthen the lever of applied research, contributing to the growth of the socio-economic system by connecting the academic world with the public and private sectors and by activating large-scale processes and projects with significant impacts on the territory. The Foundation is an ideal partner both for SMEs, which can enhance their ability to develop innovation and gain competitiveness using the experience and expertise of the Foundation, and for large enterprises, which can develop new ideas and new technologies in partnership with it. The Foundation today relies on the technological and process competences of around 150 researchers working in close cooperation with companies, academia and public administration. LINKS is organized in research areas focused on some core sectors of ICT. Within the field of HPC systems, LINKS is active in the following two main topics.
1) Applied research in Advanced Computing: the study and design of distributed computing architectures based on public and private cloud platforms; low-power architectures from a high-performance perspective; orchestration of heterogeneous architectures in the computing continuum; resource and application management in distributed environments; application analysis, design and development for machine learning in HPC-embedded contexts through many-core architectures.
2) Applied research in Data Science for extreme-scale analysis of Big Data, including the design of distributed databases and processing architectures for heterogeneous types of data, and the development of distributed Data Mining and Machine Learning algorithms for different applications.

https://linksfoundation.com/

Mellanox
GLOBAL CORPORATION/FULL MEMBER

Mellanox Technologies is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox intelligent interconnect solutions increase data centre efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox offers a choice of high-performance solutions - network and multicore processors, network adapters, switches, cables, software and silicon - that accelerate application runtime and maximize business results for a wide range of markets, including high performance computing, enterprise data centres, Web 2.0, cloud, storage, network security, telecom and financial services.

http://www.mellanox.com/

Neovia
SME/ASSOCIATED MEMBER

NEOVIA Innovation is an SME service company based in Paris and Lyon, France. NEOVIA Innovation supports the development of innovative structures - either private or public - and their ecosystems. From conception to market implementation, NEOVIA Innovation offers services covering all the phases of complex collaborative projects. The company has a long record of acting as a Project Management Office, working side by side with the project coordinator to bring support to the partnership, with strong experience in EU-funded projects. NEOVIA Innovation has specific knowledge of Applied Mathematics, Numerical Analysis and High Performance Computing, Big Data and Extreme Computing. The company has been involved for years in the European High Performance Computing ecosystem, notably through its participation in past and present European HPC initiatives such as EESI, EESI-2, EXDCI and EXDCI-2. NEOVIA Innovation regularly takes part in the BDEC international initiative.

https://www.linkedin.com/company/neovia-innovation/

Saiver SRL
SME/ASSOCIATED MEMBER

Saiver designs and manufactures a multi-purpose, highly energy-efficient, custom and "self-contained" Prefabricated Modular Data Center (MDC) solution. The Saiver solution is the perfect choice for universities, research centers, private companies or government organizations that need to build micro, small or large HPC facilities but have budget and timing constraints. Saiver was founded in 1959 in Monza (Italy) and has been designing and manufacturing high-quality, custom Air Handling Units for more than half a century now. Saiver Series A1 AHUs are delivered in more than 44 countries around the world, making Saiver recognized as a technology and quality leader. Further confirmation of this full commitment to quality is the fact that 100% of Saiver AHUs are Eurovent certified. Thanks to its long experience in precision air treatment, adiabatic cooling, modular structures and insulated sandwich panels, more than 10 years ago Saiver developed an innovative, flexible and "self-contained" Prefabricated MDC solution with unparalleled energy- and water-saving performance in any climatic zone. Important research centers around the world have selected the Saiver MDC solution to host their HPC equipment, instead of building rigid, inefficient and expensive "brick and mortar" facilities. Among the customers running their supercomputers in a Saiver MDC we can mention: NASA AMES Research Center (USA), LLNL (USA), NETL (USA), CSC (Finland), CCFE (UK) and Observatoire de Paris (France).

"Efficiency by Design": as witnessed by Saiver HPC customers, the naturally "chiller-free" Saiver MDC solution saves them millions of kWh each year thanks to its very low mechanical PUE (down to 1.02), as well as millions of liters of water that would otherwise be wasted. Moreover, the Saiver MDC dramatically reduces the CapEx, the time and the space needed for the construction of an HPC facility, compared with the traditional "brick and mortar" approach.

http://www.saiver.com/en/home/


Submer
SME/FULL MEMBER

Submer, the Cleantech company, designs, manufactures and installs the SmartPods, a liquid Immersion Cooling solution for any IT infrastructure, hardware and application: HPC, Hyperscalers, Data Centers, Edge, AI, Deep Learning, blockchain, etc. The SmartPods are the most practical, resilient and highly efficient Immersion Cooling solution, standing out for:
• Modular, compact design.
• Being an all-in-one solution.
• Using a dielectric fluid (SmartCoolant) with a long lifecycle, completely biodegradable and non-hazardous for people and the environment.
• Saving between 95% and 99% of cooling costs (which represent 50% of the electricity bill).
• Saving 85% of physical space.
• Guaranteeing a PUE <1.03 anywhere in the world.
• Achieving unprecedented IT hardware densities in a limited space (>100 kW per rack footprint).
• Reducing building cost.
• Increasing deployment speed with Edge-ready solutions.
Submer was founded in 2015 by Daniel Pope and Pol Valls to make operating and constructing Data Centers more sustainable. Both Daniel and Pol have previous track records of taking tech ventures from startups to high-value companies (notably creating, growing and operating the Data Center Datahouse Internet, later acquired by the Telefónica Group in 2011). With offices, an R&D lab and a manufacturing plant located in Barcelona, the multi-disciplined, world-class team brings together skills and experience in Data Center operations, thermodynamics engineering, HW & SW development, service delivery and innovation management, working with leading companies around the globe.

https://submer.com/

TS-JFL
SME/ASSOCIATED MEMBER

Technology Strategy is a consulting company. Its main competence is to analyse the potential of new IT technologies (how they can be successful in the IT market) and the impacts of IT innovations on the strategy of large companies and SMEs. Technology Strategy is a third party in the EXDCI-2 coordination action. It currently works on the analysis of:
• new nanoelectronics and photonics technologies relevant for the future of HPC,
• the current landscape of HPC research projects,
• the international development of HPC.

http://www.ts-jfl.net/

UCit
SME/ASSOCIATED MEMBER

UCit* is a new kind of HPC Solution Provider. At UCit, we focus on usage rather than on the technology itself. We firmly believe that scientists, engineers and analysts should not spend their time trying to understand how the infrastructure works, nor trying to fix it. We know that, more than ever, they need to collaborate - share and access data across HPC resources which are distributed across organisations and geographies. We understand that flexibility and ease of use are critical to support their evolving requirements, from Machine Learning and Decision Making to traditional HPC workloads. We acknowledge that Computer Science is a Science, and Exascale (or more…) is a critical resource needed to support the most important challenge humanity is facing. Simultaneously, at a smaller scale, we will continue to act to democratize HPC through a true move towards HPC-as-a-Service.

UCit helps public and private organisations of all sizes to seamlessly access and use HPC resources on-premises, in HPC Centers or on public clouds.
• We develop data-analysis (Analyze-IT) and machine learning (Predict-IT) software tools to understand how to optimize on-premises HPC usage.
• We enable HPC-as-a-Service to integrate Batch and Interactive workloads and to secure access to data and applications.
• Through our WorkCloud methodology, we determine which workloads can effectively move to the public cloud and at what cost; we prototype them and run them in production.
• We work towards the development of an "HPC Wind Tunnel" to enable the creation of such services.
Our approach is European and Collaborative. We look forward to future opportunities to work on projects together.
*Pronounce "You See it!!"

https://www.ucit.fr/

University of A Coruña
RESEARCH ORGANISATION/FULL MEMBER

The University of A Coruña (UDC) is a public institution whose primary objective is the generation, management and dissemination of culture and of scientific, technological and professional knowledge through the development of research and teaching. The ICT Research Center CITIC is a singular research center of the University of A Coruña, whose main objective is to promote advancement and excellence in research, development and innovation in Information and Communications Technologies (ICT) and to promote the transfer of knowledge and results to society. CITIC, with more than 250 researchers including senior, postdoctoral and pre-doctoral researchers, has its research activity organized around three main technology areas:
• Data and Information Science: big data, machine learning, statistics, mathematical models and simulation, information retrieval, GIS, document management systems and web information, quantum computing.
• Perception, cognition and interaction: artificial intelligence, artificial vision, recommender systems and gamification, cybersecurity.
• Computer architecture, sensors and communications: High Performance Computing, Internet of Things, wireless communications, cloud computing, distributed systems.
With more than 30 University-Industry collaborative projects per year, CITIC is a meeting point between the UDC research groups and the ICT companies of the Galicia region and Spain. CITIC researchers have conducted a wide range of work on HPC topics, such as developing parallel applications for different parallel architectures and programming paradigms (clusters, shared-memory multiprocessors, GPUs), optimizing code by exploiting the underlying processor architecture, designing fault-tolerant applications, deploying cloud computing applications and processing massive data sets. Multiple funded research projects have been developed around these topics at national and international level.

https://www.citic.udc.es/


FET-HPC 2015 projects: A Wealth of Results!
Jean-François Lavignon, EXDCI2

Most of the European FET-HPC projects from the September 2015 call have now been completed. It is a good time to analyse their outcomes and identify recommendations for maximizing their impact. First, let us have a look at the topics addressed by the 19 projects! The following classification reflects well the range of their topics:

HPC system focused projects
- From package to system: ExaNoDe - ExaNeSt - ECOSCALE
- ARM based HPC: Mont-Blanc
- Reconfigurable systems: EXTRA - MANGO - Green Flash
- IO: Sage - NEXTGenIO

HPC stack and application-oriented projects
- Energy efficiency: ANTAREX - READEX
- Programming model: INTERTWinE
- Multiscale: ComPat - AllScale
- Generic applications (Hyperbolic PDE, Machine learning, Fluid dynamics, Numerical linear algebra, Weather models): ExaHyPE - ExCAPE - ExaFLOW - NLAFET - ESCAPE

All the projects have taken part in a dedicated survey (run by EXDCI2), out of which a list of the most relevant outputs (i.e. 'IPs': intellectual property elements) generated by the projects has been produced. A simple quantitative analysis shows that most of the results are in the field of software: out of the 171 IPs listed, 114 (two thirds) are software and 20 are hardware related. The other types of results are APIs, application optimisations, benchmark suites, trainings and demonstrators. Interestingly, most of the IPs produced could be exploited. They address the needs of different types of users or stakeholders, as represented in the following table:

IP types (table columns): API, application optimisation, benchmark suite, demonstrator, HW, report, SW, training.

Totals per IP type: API 7, application optimisation 7, benchmark suite 6, demonstrator 8, HW 20, report 2, SW 114, training 7 – 171 IPs in total.

Totals per targeted stakeholder: Application developer 50; Application developer / computing centre 13; Application developer / end user 10; Computing centre 10; End user 44; ESD 9; HPC system customer 1; HPC system provider 22; HPC system provider / application developer 4; HPC system provider / computing centre 6; HPC system provider / processor provider 2.


It is clear that the FET-HPC 2015 projects have generated a lot of IPs that can be useful for end users and application developers. Some of them target HPC system providers and computing centres, or could provide valuable technologies for integration projects (such as ESDs, Extreme Scale Demonstrators).

Qualitatively, in the hardware area, one should note the development of several processor or FPGA boards, active interposer technology, interconnect technologies (one using photonics) and cooling technology. The system-oriented projects have developed 8 demonstrators, most of them open to experiment by external teams. The larger ones in terms of computing power come from ExaNeSt/EcoScale, Mont-Blanc and Mango. The IO-related projects, Sage and NextGenIO, have also produced demonstrators that can be used for testing new storage hierarchies or object file systems.

Some APIs have been proposed by the projects in domains such as FPGA management, object file systems, energy efficiency and interaction between runtimes.

In the software area, besides the enhancement of several applications or application kernels, there are results in domains such as FPGA programming, file systems, runtimes, energy efficiency, time-constrained computing, tuning/debugging tools…

The table below lists, for each technology area, the projects contributing results, targeted at ESD integration projects, HPC system providers, computing centres, application developers and end users:
- Computing node/board: EcoScale, ExaNoDe, ExaNeSt, Mont-Blanc 3, Green Flash
- Interconnect: NextGenIO
- Memory hierarchy: ExaNeSt, EcoScale
- Storage/file system: SAGE, ExaNeSt, NextGenIO
- Tools for FPGA: MANGO, EXTRA, EcoScale
- Software stack: Green Flash, MANGO, Readex, EcoScale, NextGenIO, Antarex, Mont-Blanc 3, ComPat
- Programming model/tool: ExaNoDe, AllScale, InterTwinE, MANGO, ExaFlow, Green Flash, Mont-Blanc 3
- Optimization tools: EcoScale, Green Flash, NLAFET, Mont-Blanc 3, EXTRA, Readex, Antarex
- Library: ExaFlow, NLAFET, ExCAPE, Readex, Antarex
- Application: NextGenIO, ExaNeSt, ComPat, ExaFlow, Antarex, ESCAPE, ExaHyPE, ExCAPE, NLAFET, Readex

The complete set of results is broad and diverse. The exploitation of this basis could be enabled by the new FET-HPC 2017 or other projects as they will continue some of the work (e.g. EuroExa, MontBlanc2020, Sage2, Escape 2 or Recipe). In addition to these projects, it would be good to launch integration projects that could use and further develop some of the IPs generated. This vertical integration was one of the objectives of the ESDs proposed by the HPC ecosystem. Based on the current results, we also suggest horizontal projects that would integrate some of the results that are closely related and complementary in one common framework. At least in domains such as FPGA programming, runtimes and energy efficiency, it would be valuable to have open environments for the European HPC ecosystem.

To summarise, the FET-HPC 2015 call has produced an impressive set of IPs that could be reinforced by vertical or horizontal integration projects, thus pushing these new technologies towards industrialisation.

EXDCI, and now EXDCI2 (European Extreme Data & Computing Initiative), supports the development and implementation of a common strategy for the European HPC ecosystem. PRACE and ETP4HPC are both involved in this transversal project. More information: https://exdci.eu/ – EXDCI on Twitter: @exdci_eu

The technology stacks of High Performance Computing and Big Data Computing
What they can learn from each other

The paper this snapshot has been taken from reflects an undertaking between two distinct ecosystems – the European associations for HPC (www.ETP4HPC.eu) and Big Data Value (www.BDVA.eu). This and other similar collaborations have been supported by the EXDCI and EXDCI2 projects.

STACK OVERVIEW AND PROFILES

At ICT 2018, ETP4HPC and BDVA signed a Memorandum of Understanding to officialise their collaboration.

As some Big Data Computing (BDC) workloads are increasing in computational intensity (traditionally an HPC trait) and some High Performance Computing (HPC) workloads are stepping up data intensity (traditionally a BDC trait), there is a clear trend towards innovative approaches that will create significant value by adopting certain techniques of the other. This is akin to a form of "transfer learning" or "gene splicing" – where specific BDC techniques are infused into the HPC stack and vice-versa.

To aid consistent interpretability across the High Performance Computing (HPC) and Big Data Computing (BDC) ecosystems, key concepts need to be explicitly explored upfront before addressing the potential of cross-stack transfer learning opportunities. We begin by examining the top-level characteristics and objectives of both the HPC and BDC stacks (and extending this to the related workloads of Machine Learning (ML) and High Performance Data Analytics (HPDA)). Be it on a leadership-class supercomputer, a small institutional cluster, or in the Cloud – HPC and Big Data Computing (BDC) stacks traditionally have distinct features, characteristics and capabilities.

HPC stacks are designed for modelling and simulation workloads which are compute intense, and focus on the interaction amongst parts of a system and on the system as a whole. Typical application areas include physics, biology, chemistry, and Computer Aided Engineering (CAE). HPC targets extremely large amounts of compute operations, which often operate on large sets of data and are executed in parallel on a potentially high number of compute nodes. Data is usually partitioned between the nodes, and any data sharing is effected by message exchanges over a high-speed interconnect between the nodes. HPC applications are usually iterative and closely coupled – the underlying mathematical models do cause dependencies between the data blocks owned by different nodes, and this requires frequent communication of data updates (usually of lower-dimensional parts of blocks) between the nodes. As a consequence, the interconnect fabric has to be very fast in terms of bandwidth and latency, essentially turning the entire set of nodes into one single "supercomputer". This requires expensive high-performance hardware, i.e. high floating-point processing power and a very fast (in terms of both latency and bandwidth) network fabric between the nodes. An HPC application operates in a closely coupled and synchronised manner over many nodes (up to hundreds of thousands) and accesses the data on a storage entity that is attached via the fast network to all the nodes. In effect, such applications use large parts of the system in "exclusive mode".

Big Data Computing stacks are designed for analytics workloads which are data intense, and focus on inferring new insights from big data sets. Typical application areas include search, data streaming, data preconditioning, and pattern recognition. In the case of a Hadoop-type architecture (which is one of the more prevalent ecosystems in Big Data Computing), the data set can be divided into multiple independent sub-problems (i.e. "embarrassingly parallel problems"), and those sub-problems can be solved by multiple "simple" nodes without need for significant communication between processes during the "map" step. Only the "reduce" step requires bringing the results of the parallelised parts together. As the problem size increases, the number of sub-problems increases accordingly. And as the number of sub-problems grows, the number of simple nodes to solve them also rises. For these architectures, the power lies in having a huge number of relatively simple nodes (rather than highly tuned nodes as for HPC), which do not have to be tightly coupled. Several applications (or instances of the same application) run simultaneously on multiple nodes (i.e. opposite to HPC, where a single application uses all the nodes in the cluster). Common practice today is that HPC uses a batch queuing system while BDC uses interactive Python interfaces – although this difference is diminishing with the introduction of newer HPC workloads.

So, in the case of HPC workloads, we are deploying a single large supercomputer to solve the entire problem. Adding more nodes for running the same problem ("strong scaling") will initially reduce runtime, yet at a diminishing rate due to the relative increase in communication and coordination overheads, and will later even increase runtime again. To achieve good strong scaling, the performance of the interconnect fabric also has to be improved. In the second case (i.e. BDC), just adding more simple nodes will reduce the amount of work per node and, since much less communication is required between the nodes, will also continue to reduce the runtime.

Figure 1 offers a more fine-grained exploration of the respective stacks – presenting a disaggregated profile view of the Supercomputing, Big Data Computing and Deep Learning stacks. We use the term Supercomputing here to denote a specific implementation of HPC; however, many aspects will also apply to institutional clusters. Deep Learning is a Big Data Computing workload that readily stands to benefit from HPC's experience due to its reliance on (dense) linear algebra, hence we examine it in more detail at this point. Deep Learning (aka artificial neural networks) algorithms typically scale well to produce increasingly better results where larger models can be fed with more data (i.e. Big Data); this in turn requires more computation to train (i.e. more compute is required) – typically using hand-crafted tuning of deep neural networks.

High Performance Data Analytics (HPDA) is not represented in Figure 1 – however, architecture-wise, it is close to the BDC column, with the Server and Network layers from the HPC column and potentially selected, more highly performant communication and I/O services. This "breed" of stack can bring performance and efficiency improvements for large-scale BDC problems, and it could open the door to tackling data analytics problems with inherent dependencies between nodes.
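The scaling contrast described above can be made concrete with a small back-of-the-envelope model. The sketch below is illustrative only (it is not taken from the white paper, and all constants are invented assumptions): a tightly coupled job pays a communication and synchronisation cost that grows with the number of nodes, so strong scaling flattens out and eventually reverses, while an embarrassingly parallel map/reduce job keeps gaining from additional nodes.

```python
# Illustrative toy model only - all constants are assumptions, not measurements.

def hpc_runtime(nodes, compute_hours=1000.0, comm_cost_per_node=0.05):
    """Tightly coupled job: compute shrinks with node count, but the
    communication/synchronisation overhead grows as more nodes take part."""
    return compute_hours / nodes + comm_cost_per_node * nodes

def bdc_runtime(nodes, map_hours=1000.0, reduce_hours=2.0):
    """Hadoop-style job: independent map tasks scale out freely and only
    the final reduce step stays (roughly) constant."""
    return map_hours / nodes + reduce_hours

if __name__ == "__main__":
    print("nodes   tightly coupled (HPC)   map/reduce (BDC)")
    for n in (10, 50, 100, 200, 500, 1000):
        print(f"{n:5d}   {hpc_runtime(n):18.1f} h   {bdc_runtime(n):13.1f} h")
```

With these made-up numbers, the tightly coupled job is fastest at roughly 140 nodes and slows down again beyond that point, whereas the map/reduce job keeps approaching the fixed cost of its reduce step.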

Figure 1. Disaggregated profile view of the Supercomputing (SC), Deep Learning (DL) and Big Data Computing (BDC) stacks, layer by layer:

Apps
- SC: in-house, commercial and OSS applications [e.g. Paraview]
- DL: framework-dependent applications [e.g. NLP, voice, image]
- BDC: framework-dependent applications [e.g. 2/3/4-D]

Interaction (services boundary)
- SC: remote desktop [e.g. Virtual Network Computing (VNC)], Secure Sockets Layer [e.g. SSL certificates]
- DL: web mechanisms [e.g. Google & Amazon Web Services], Secure Sockets Layer [e.g. SSL certificates]
- BDC: Secure Sockets Layer [e.g. SSL certificates]

Processing (middleware & management)
- SC: domain-specific frameworks [e.g. PETSc], batch processing of large, tightly coordinated parallel jobs [100s-10,000s of processes communicating frequently with each other]
- DL: DNN training & inference frameworks [e.g. Caffe, Tensorflow, Theano, Neon, Torch], DNN numerical libraries [e.g. dense LA]
- BDC: Machine Learning (traditional) [e.g. Mahout, Scikit-learn, BigDL], analytics/statistics [e.g. Python, ROOT, R, Matlab, SAS, SPSS, Sci-Py], iterative [e.g. Apache Hama], interactive [e.g. Dremel, Drill, Tez, Impala, Shark, Presto, BlinkDB, Spark], batch/MapReduce [e.g. MapReduce, YARN, Sqoop, Spark], real-time/streaming [e.g. Flink, YARN, Druid, Pinot, Storm, Samza, Spark]

Information management services
- SC: data storage: parallel file systems [e.g. Lustre, GPFS, BeeGFS, PanFS, PVFS], model/I/O libraries [e.g. HDF5, PnetCDF, ADIOS]
- DL: data storage [e.g. HDFS, Hbase, Amazon S3, GlusterFS, Cassandra, MongoDB, Hana, Vora]
- BDC: serialization [e.g. Avro], metadata [e.g. HCatalog], data ingestion & integration [e.g. Flume, Sqoop, Apache Nifi, Elastic Logstash, Kafka, Talend, Pentaho], data storage [e.g. HDFS, Hbase, Amazon S3, GlusterFS, Cassandra, MongoDB, Hana, Vora], cluster management [e.g. YARN, Mesos]

Messaging & coordination / communication services
- SC: [e.g. MPI/PGAS, direct fabric access], threading [e.g. OpenMP, task-based models]
- DL: [e.g. Machine Learning Scaling Library (MLSL)]
- BDC: messaging [e.g. Apache Kafka (streaming)]

Workflow / task services
- SC: conventional compiled languages [e.g. C/C++/Fortran], scripting languages [e.g. Python, Julia], domain numerical libraries [e.g. PETSc, ScaLAPACK, BLAS, FFTW…], performance & debugging [e.g. DDT, Vtune, Vampir], accelerator APIs [e.g. CUDA, OpenCL, OpenACC]
- DL: scripting languages [e.g. Python], batching for training [built into DL frameworks], reduced precision [e.g. inference engines], local distribution layer [e.g. round robin/load balancing for inference], accelerator APIs [e.g. CUDA, OpenCL], hardware optimization libraries [e.g. cuDNN, MKL-DNN, etc.], LA numerical libraries [e.g. BLAS, LAPACK, etc.]
- BDC: workflow & scheduling [e.g. Oozie], scripting languages [e.g. Keras, Mocha, Pig, Hive, JAQL, Python, Java, Scala], distributed coordination [e.g. ZooKeeper, Chubby, Paxos], provisioning, managing & monitoring [e.g. Ambari, Whirr, BigTop, Chukwa], SVM systems [e.g. Google Sofia, libSVM, svm-py…], hardware optimization libraries [e.g. DAAL, DPDK, MKL, etc.]

System software, management and security services
- SC: data protection [e.g. system AAA, OS/PFS file access control], batch scheduling [e.g. SLURM], cluster management [e.g. OpenHPC], container virtualization [e.g. Docker], operating system [e.g. Linux OS variant]
- DL: virtualization [e.g. Docker, Kubernetes, VMware, Xen, KVM, HyperX], operating system [e.g. Linux (RedHat, Ubuntu, etc.), Windows]
- BDC: virtualization [e.g. Docker, Kubernetes, VMware, Xen, KVM, HyperX], operating system [e.g. Linux (RedHat, Ubuntu, etc.), Windows]

Infrastructure hardware
- SC: local storage [e.g. storage & I/O nodes, NAS], servers [e.g. CPU & memory (general-purpose CPU nodes, GPU/FPGA)], network [e.g. InfiniBand & OPA fabrics]
- DL: local storage [e.g. local storage or NAS/SAN], servers [e.g. CPU & memory (general-purpose CPU + GPUs, TPU)], network [e.g. Ethernet]
- BDC: local storage [e.g. direct attached storage], servers [e.g. CPU & memory (general-purpose CPU, hyperconvergent nodes)], network [e.g. Ethernet fabrics]

What Big Data Computing can Learn from HPC
Big Data Computing applications are expected to move towards more compute-intensive algorithms to reap deeper insights across descriptive (explaining what is happening), diagnostic (explaining why it happened), prognostic (predicting what can happen) and prescriptive (proactive handling) analysis. HPC capabilities are expected to be of assistance for faster decision making across more complex data sets. As already mentioned, Deep Learning and automated deep neural network design creation are workloads that readily stand to benefit from HPC's experience in optimising and parallelising difficult optimisation problems. Major requirements include highly scalable performance, high memory bandwidth, low power consumption, and excellent reduced-precision arithmetic performance.

What HPC can Learn from Big Data Computing
HPC is now generating models of unprecedented realism and accuracy. At the most extreme, these models represent the real world using trillions of discrete degrees of freedom, which require huge memory and compute performance on extremely large (typically scale-out) systems. These simulations generate enormous amounts of output data. Researchers need to apply advanced and highly complex analytics and processing (including visualisation) to this data to generate insights, which means that off-loading to remote platforms is simply not an option. Thus, data analytics needs to take place in-situ, and perhaps in close coordination with tightly coupled synergistic computing platforms (e.g. visualisation engines). These investigations will have important ecosystem benefits for multi-scale, multi-physics coupled applications, where instances are running on tens to hundreds of thousands of nodes.

Therefore, analytics is expected to become a fully-fledged software component of the HPC ecosystem, used to process the massive results of large numerical simulations or to feed numerical models with complex data produced by scientific and industrial instruments/systems (e.g. telescopes, satellites, sequencers, particle accelerators, etc.) or by large-scale systems of systems (e.g. edge devices, smart sensors, IoT devices, cyber-physical systems, etc.).

In addition, HPC simulations could profit significantly from iterative refinements of their underlying models effected by advanced data analytics tools and machine learning techniques, e.g. by accelerated convergence. HPC can also benefit from Big Data management approaches, especially in the case of dynamic scenarios (Big Data Computing is much more flexible with the notions of data at rest, data on the move, data in change).
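As a purely illustrative companion to the in-situ analytics idea above, the following sketch (an invented example, not code from the white paper or from any project) reduces each simulation time step to a handful of summary statistics on the compute node itself, so that only those few numbers, rather than the full output field, ever need to leave the node.

```python
# Invented illustration of in-situ analytics: reduce each time step in place
# instead of off-loading the raw output to a remote analytics platform.
import statistics

def simulate_step(step, n_cells=100_000):
    """Stand-in for one time step of a numerical simulation (synthetic data)."""
    return [((i * 31 + step * 17) % 1000) / 1000.0 for i in range(n_cells)]

def in_situ_summary(field):
    """Reduce the full field to the few numbers an analyst actually needs."""
    return {"min": min(field), "max": max(field), "mean": statistics.fmean(field)}

if __name__ == "__main__":
    for step in range(5):
        field = simulate_step(step)        # raw data stays on the compute node
        summary = in_situ_summary(field)   # only this compact result is shared
        print(f"step {step}: {summary}")
```

In a real setting the same pattern would typically be coupled to the simulation through in-memory interfaces or visualisation engines rather than a print statement.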


Horizon Europe HPC Vision – 2018 Workshops and Future Plans

MARCIN OSTASZ, ETP4HPC OFFICE

The road mapping efforts of ETP4HPC during the course of Horizon 2020 have been centred on the publication of our Strategic Research Agenda (SRA). Each roadmap outlines the milestones needed to produce competitive HPC technology in Europe. However, there are two paradigm shifts that require us to change that approach.

First, it is clear that HPC will not operate in isolation. While the goals of achieving technological independence for Europe and increasing the global market share of the European HPC technology providers remain valid, we need to work together with other related areas such as Big Data, the Internet of Things and Artificial Intelligence in order to be able to deliver the end-to-end solutions required by the European scientific and industrial user. There is a need, therefore, to expand our thinking and synchronise our objectives with those of the related areas.

Secondly, a new approach is needed to define the scope of our road mapping activities in the new Horizon Europe programme. It is clear that we need to move beyond our classical SRA. We need to figure out where the world is going and in which areas of HPC technology provision and use Europe can excel and achieve long-term leadership.

The HPC Vision document our Association is working on addresses these paradigm shifts. The Vision will set the scene for our work within Horizon Europe. It will form the basis of our next Strategic Research Agendas (SRAs), which will allow EuroHPC to formulate a competitive research programme.

The first milestone in the process of preparing the Vision was a workshop we held at ISC 2018 in Frankfurt. We invited world-class HPC experts from Europe, the US and Japan and asked them to share their long-term visions of computing and HPC. Their input was very diverse and daring, ranging from a vision of future use models to new approaches to programming. It is clear that HPC will play an important role in addressing the challenges of the modern world, but the main task of our community is to build models for the interactions between HPC, Big Data, AI and IoT.

In the meantime, two sister organisations – BDVA (Big Data) and HiPEAC (architectures and compilers) – have embarked on a similar effort. The ICT'18 event in Vienna provided an opportunity to compare the foundations of our HPC Vision with the results of their thinking. Both the BDVA and HiPEAC Visions emphasise the importance of understanding use cases involving HPC, Big Data solutions and end-to-end systems. This is in line with the work we have been carrying out together with BDVA, the result of which is a set of scientific and industrial use scenarios. The task ahead of us is to identify the patterns and bottlenecks to be addressed in the future research work.

The Vision document is due by the end of March 2019. In parallel, we are starting to prepare our next SRA. In this process, we will interact closely with EuroHPC to understand its needs, and also with the European Processor Initiative in order to facilitate the technologies that turn its results into globally competitive solutions. This work has been supported by the EXDCI and EXDCI2 projects.

HiPEAC is a European network of almost 2,000 world-class computing systems researchers, industry representatives and students. It provides a platform for cross-disciplinary research collaboration, promotes the transformation of research results into products and services, and is an incubator for the next generation of world-class computer scientists. It is funded by Horizon 2020. ETP4HPC and HiPEAC have a long history of collaboration. In 2018, we co-organised a Vision workshop at ISC, as well as a networking session at ICT 2018.


New Centres of Excellence in computing applications

Ten new Centres of Excellence (CoEs) for computing applications were selected following the 2018 EC call under e-Infrastructures. Their mission: be user-focused, develop a culture of excellence, both scientific and industrial, and place computational science and the harnessing of "big data" at the centre of scientific discovery and industrial competitiveness. BioExcel, CompBioMed, EoCoE, Esiwace, MaX and POP have been granted funding for a second phase. They will continue to help strengthen Europe's competitiveness in HPC applications, covering domains such as life sciences, medicine, energy and materials science, or providing services to help scale HPC applications towards exascale. FocusCoE has a transversal mission, supporting the HPC CoEs to more effectively fulfil their role within the European HPC ecosystem and the EuroHPC JU. Three totally new CoEs are starting, addressing three new domains: geophysics, engineering and multidimensional challenges.

CHEESE – CENTRE OF EXCELLENCE FOR EXASCALE AND SOLID EARTH

ChEESE will address extreme computing scientific and societal challenges by harnessing European institutions in charge of operational monitoring networks, tier-0 supercomputing centres, academia, hardware developers and third parties from SMEs, industry and public governance. The scientific ambition is to prepare 10 open-source flagship codes to solve Exascale problems in computational seismology, magnetohydrodynamics, physical volcanology, tsunamis, and data analysis and predictive techniques, including machine learning and predictive techniques for monitoring earthquake and volcanic activity. The selected codes will be audited and optimized at both intranode level (including heterogeneous computing nodes) and internode level on heterogeneous hardware prototypes for the upcoming Exascale architectures, thereby ensuring commitment to a co-design approach. Preparation for Exascale will also consider inter-kernel aspects of simulation workflows such as data management and sharing, I/O, post-processing and visualization.

Coordinator: BSC, Spain. Other partners: Atos (Bull SAS), France; CINECA, Italy; CNRS Marseille, France; ETH Zürich, Switzerland; HLRS, Germany; IMO, Iceland; INGV, Italy; IPGP, France; LMU, Germany; NGI, Norway; TUM, Germany; University of Málaga, Spain.
www.cheese-coe.eu – @Cheese_CoE

EXCELLERAT

EXCELLERAT is a European Centre of Excellence for Engineering Applications. Several European High Performance Computing Centres teamed up to support several key engineering industries in Europe in dealing with complex applications using HPC technologies. We conduct research, provide leadership, guidance on good practice and user support mechanisms, as well as training and networking activities for our community. Our goal is to help Europe leverage scientific progress in HPC-driven engineering and address current economic and societal challenges. The setup of EXCELLERAT complements the current activities of each HPC centre involved and enables the development of next-generation engineering applications. The aim of EXCELLERAT is to support the engineering community at a level that no single HPC provider can.

Coordinator: HLRS, Germany
Other Partners: ARCTUR DOO, Slovenia; BSC, Spain; CERFACS, France; CINECA, Italy; DLR, Germany; EPCC, United Kingdom; Fraunhofer Gesellschaft, Germany; KTH, Sweden; RWTH Aachen, Germany; SICOS BW GmbH, Germany; SSC-Services GmbH, Germany; TERATEC, France
https://www.excellerat.eu/
Twitter: @EXCELLERAT_CoE

HIDALGO HPC AND BIG DATA TECHNOLOGIES FOR GLOBAL CHALLENGES

Developing evidence and understanding concerning Global Challenges and their underlying parameters is rapidly becoming a vital challenge for modern societies. Various examples, such as health care, the transition to green technologies or the evolution of the global climate, up to hazards and stress tests for the financial sector, demonstrate the complexity of the involved systems and underpin their interdisciplinarity as well as their globality. This becomes even more obvious if coupled systems are considered: problem statements and their corresponding parameters depend on each other, which results in interconnected simulations with a tremendous overall complexity. Although the process of bringing together the different communities has already started within the Centre of Excellence for Global Systems Science (CoeGSS), assisted decision making for global, multi-dimensional problems is more important than ever. Global decisions with their dependencies cannot be based on incomplete problem assessments or gut feelings anymore, since impacts cannot be foreseen without an accurate problem representation and its systemic evolution. HiDALGO therefore bridges that shortcoming by enabling highly accurate simulations, data analytics and data visualisation, but also by providing technology as well as knowledge on how to integrate the various workflows and the corresponding data.

Partners: ARH Informatikai Zrt., Hungary; Atos Spain, Spain; Brunel University London, United Kingdom; DIALOGIK, Germany; ECMWF, United Kingdom; HLRS, Germany; ICCS, Greece; Know Center GmbH, Austria; Magyar Kozut Nonprofit Zrt., Hungary; MoonStar Communications GmbH, Germany; PSNC, Poland; Szechenyi Istvan University, Hungary; Universität Salzburg, Austria


Our main events

8th General Assembly in Ireland

ETP4HPC held its 8th General Assembly on 13 March 2018. Intel Ireland hosted this event at their Leixlip Campus near Dublin, which is home to a semiconductor manufacturing location. The main objective of the event was to elect a new Steering Board. Besides the traditional activity report presented by the Chairman and Treasurer, Michael Malms gave an overview of the core of our activities, the SRA3, and future roadmaps and visions. The members who joined ETP4HPC in 2017 had the opportunity to introduce themselves to the GA. Invited speaker Gustav Kalbe (DG Connect) gave an update on the EC's organisation for HPC – a much expected presentation, as EuroHPC was only starting to take shape. Finally, a keynote by our host Intel presented their history, research and manufacturing activities in Ireland.

European HPC Summit Week in Ljubljana

This conference, organised by EXDCI and co-located with PRACEDays, took place from 28 May to 1 June 2018 at the University of Ljubljana (Slovenia). This edition offered a wide variety of workshops covering a number of application areas where supercomputers are key, as well as HPC technologies and infrastructures. It also offered a great opportunity to network with all relevant European HPC stakeholders, from technology suppliers and HPC infrastructures to scientific and industrial HPC users in Europe. ETP4HPC was very much part of this event, with a keynote by our Chairman Jean-Pierre Panziera, a workshop on Energy Efficiency in HPC organised by our Energy Efficiency Working Group, and several speakers among the members of our Steering Board.

Energy Efficiency workshop at European HPC Summit Week

Michael Ott (LRZ), leader of the ETP4HPC Work Group on Energy Efficiency, organised a Workshop on «Energy Efficiency in HPC» on 30 May 2018, within the European HPC Summit Week. After reporting on the recent activities of the Work Group, Michael Ott gave the floor to four speakers, who each shared with the audience their experience and best practice in terms of energy efficiency. Axel Auweter, Megware, gave an overview of the 100% warm-water cooling solution CoolMUC-3. Etienne Walter, Atos, representing project Mont-Blanc 3, explained how this project addresses all aspects of energy efficiency, from software to hardware. Matthias Maiterth, LMU and Intel, introduced the Global Extensible Open Power Manager – GeoPM. Antonio Libri, ETH, reported on the power monitoring infrastructure installed on the D.A.V.I.D.E. system, derived from work of the MULTITHERMAN project.

Promoting European HPC efforts at Teratec Forum

The Teratec Forum was held on 19-20 June 2018 on the campus of the prestigious Ecole Polytechnique near Paris. This conference and exhibition once again brought together the best international experts in HPC, Simulation and Big Data, with more than 1300 attendees representing both the academic community and industry. ETP4HPC was one of the 80 exhibitors, sharing a booth with EXDCI. Together we explained and promoted the European efforts to boost the HPC ecosystem.


SME members on our booth at ISC 2018

ETP4HPC was naturally present at Europe's largest HPC event, ISC High Performance, held in Frankfurt (Germany) on 24-28 June 2018. Just before the start of the conference, we organised a Vision workshop gathering world-class HPC experts from Europe, US and Japan (see the report in Horizon Europe HPC Vision – 2018 Workshops and Future Plans). We also had a booth, and this year we had decided to highlight the work of some of our SME members. We hosted Appentra, Asperitas, Iceotope and Scapos on our booth, giving them visibility in this large event. Following the success of this activity, we will certainly repeat it with a larger booth in 2019!

Birds-of-a-Feather session at SC18

The yearly BoF co-organised by ETP4HPC and EXDCI2 at SC in November has almost become an institution. This year it was entitled «Consolidating the European Exascale Effort» and was held on 14 November in Dallas. We presented to the international HPC community the results of the European Exascale programme to date (covering the entire HPC system stack and application expertise), the ambitions of the new round of basic technology, application, co-design (DEEP-EST/EuroExa), prototype and processor design (EPI) projects, the post-Exascale plans, collaboration with Big Data, IoT and other areas, and a selection of best-performing HPC technology SMEs (Iceotope and Megware), emphasising Europe's global contributions. The session also featured a presentation of EuroHPC.

Booth at ICT 2018 Conference in Vienna

The ICT 2018 event, co-organised by the European Commission and the Austrian Presidency of the Council of the European Union, was held on 4-6 December 2018 in Vienna (Austria). This large event (6000 attendees) included a conference, an exhibition of EU-funded research and innovation projects in the field of ICT, a series of networking activities, and an Innovation and start-ups village. ETP4HPC had a booth, strategically located in the main hall with the PPPs. We presented our Handbook and our Strategic Research Agenda, as well as products of some of our industrial members (especially SMEs): innovative pieces of HPC hardware and simulations, whose development was supported by H2020-funded projects.

Full house for our networking session at ICT Vienna

ETP4HPC, BDVA and HiPEAC joined forces to set up a networking session at the ICT Conference on 5 December 2018. Under the title "Common priorities of HPC, Big Data and HiPEAC for post-H2020 era", it aimed to present and get feedback on the common areas of the future technical and strategic research priorities for High-Performance Computing, Big Data and High-Performance Embedded Architecture and Compilation. ETP4HPC had also been invited to take part in another networking session, as a member of the panel of the Thematic Workshop on Artificial Intelligence held on 4 December 2018.

A large audience had the opportunity to exchange views with the speakers during the final panel. All attendees received their copy of the freshly printed 2018 Handbook of European HPC projects - some even took extra copies for colleagues!


Participation in events

Date | Event | Location | Participation of ETP4HPC
17/01/2018 | KPI revision meeting | Brussels, Belgium | JP Nominé for ETP4HPC
09/02/2018 | KPI revision meeting | Brussels, Belgium | JP Nominé for ETP4HPC
06/03/2018 | Workshop on Digitalising European Industry | Brussels, Belgium | H. Falter attended on behalf of ETP4HPC
12/03/2018 | ETP4HPC networking dinner | Maynooth, Ireland | Social event open to all ETP4HPC members
13/03/2018 | General Assembly of ETP4HPC | Leixlip, Ireland | General Assembly hosted by Intel Ireland
16/03/2018 | ICT Programme Committee | Brussels, Belgium | On invitation by the EC: H. Falter, JP Panziera and JP Nominé representing ETP4HPC
20/03/2018 | Workshop on SMEs and the uptake of HPC | Brussels, Belgium | On invitation by the EC: H. Falter for ETP4HPC and M. Gilliot on behalf of EXDCI
27-28/03/2018 | BDECng meeting | Chicago, USA | On invitation by the US colleagues: JP Panziera for ETP4HPC
17-19/04/2018 | EASC: Exascale Applications and Software Conference | Edinburgh, Scotland |
19/04/2018 | Shaping Europe's Digital Future: HPC for Extreme Scale Scientific and Industrial Applications | Sofia, Bulgaria | JP Panziera and JP Nominé representing ETP4HPC
25-26/04/2018 | Joint Workshop ETP4HPC-BDVA on use cases | Brussels, Belgium | Led by Michael Malms for ETP4HPC
27/04/2018 | EXDCI Final Review | Luxembourg, Luxembourg | ETP4HPC represented by Marcin Ostasz (WP2) and Maike Gilliot (WP7)
04/05/2018 | EuroHPC meeting with EC | Luxembourg, Luxembourg | Jean-Pierre Panziera, Jean Gonnord and Hugo Falter representing ETP4HPC
14-15/05/2018 | Open eIRG Workshop | Sofia, Bulgaria | Marcin Ostasz representing ETP4HPC
17/05/2018 | ENES/ESiWACE HPC Workshop | Lecce, Italy | Thomas Eickermann representing ETP4HPC (talk on SRA and EsDs)
28/05/2018-01/06/2018 | European HPC Summit Week | Ljubljana, Slovenia | Events and workshops (co-)organised by ETP4HPC
28/05/2018 | EXDCI Workshop | Ljubljana, Slovenia | As part of the HPC Summit Week
29/05/2018 | cPPP Partnership Board meeting | Ljubljana, Slovenia | During the HPC Summit Week
30/05/2018 | Workshop of the Energy Efficiency Workgroup | Ljubljana, Slovenia | As part of the HPC Summit Week
13/06/2018 | EuroHPC WG on "User Requirements" | Brussels, Belgium | Jean-Philippe Nominé representing ETP4HPC
19-20/06/2018 | Forum Teratec | Palaiseau, France | Booth at the Teratec Exhibition
24-28/06/2018 | ISC | Frankfurt, Germany | Booth at the ISC Exhibition
24/06/2018 | Open Workshop to all members on FP9 position paper | Frankfurt, Germany | During ISC 18
27/06/2018 | Closed Workshop on FP9 position paper | Frankfurt, Germany | During ISC 18
03/07/2018 | EuroHPC workshop on SMEs and the uptake of HPC (2nd Workshop) | Brussels, Belgium | Maike Gilliot representing EXDCI and ETP4HPC
10/07/2018 | cPPP Review: 2017 PMR | Brussels, Belgium | Jean-Philippe Nominé and Guy Lonsdale for ETP4HPC
13/07/2018 | EuroHPC preparation: joint meeting ETP4HPC-BDVA-EC | Brussels, Belgium | Jean-Pierre Panziera, Hugo Falter and Jean Gonnord representing ETP4HPC
01/08/2018 | cPPP: KPI review and PMR 2017 discussion | Brussels, Belgium | Maike Gilliot representing ETP4HPC
27/09/2018 | EXDCI-2 First technical meeting | Brussels, Belgium | Marcin Ostasz representing ETP4HPC
02/10/2018 | Portes Ouvertes Teratec | Bruyères le Châtel, France | Booth
23/10/2018 | cPPP meeting | Brussels, Belgium |
30-31/10/2018 | PRACE-CoEs-FET HPC-EXDCI Workshop | Brühl, Germany | Marcin Ostasz representing ETP4HPC
14/11/2018 | Big Data Value Forum (BDVA) | Vienna, Austria | Michael Malms in panel
14/11/2018 | SC BoF | Dallas, Texas, US |
04-06/12/2018 | ICT event | Vienna, Austria | Booth
05/12/2018 | ICT event: Joint ETP4HPC-BDVA Networking Session - Public Review of post-H2020 Vision | Vienna, Austria |