Academic Supercomputing in Europe (Seventh edition, fall 2003)

Rossend Llurba

Netherlands National Computing Facilities Foundation
Academic Supercomputing in Europe: Facilities & Policies - fall 2003
Seventh edition
The Hague/Den Haag, January 2004

Llurba, R. (Rossend)
ISBN 90-70608-37-5

Copyright © 2004 by Stichting Nationale Computerfaciliteiten
Postbus 93575, 2509 AN Den Haag, The Netherlands
http://www.nwo.nl/ncf

All rights reserved. No part of this publication may be reproduced, in any form or by any means, without permission of the author.

Contents

PREFACE
1 EUROPEAN PROGRAMMES AND INITIATIVES
1.1 EUROPEAN UNION
1.2 SUPRANATIONAL CENTRES AND CO-OPERATIONS
2 PAN-EUROPEAN RESEARCH NETWORKING
2.1 NETWORK ORGANISATIONS
2.2 OPERATIONAL NETWORKS
2.3 NETWORKING PROJECTS
3 SUPERCOMPUTING IN THE INDIVIDUAL EUROPEAN COUNTRIES
3.1 ALBANIA
3.2 ARMENIA
3.3 AUSTRIA
3.4 BELARUS
3.5 BELGIUM
3.6 BOSNIA-HERZEGOVINA
3.7 BULGARIA
3.8 CROATIA
3.9 CYPRUS
3.10 CZECH REPUBLIC
3.11 DENMARK
3.12 ESTONIA
3.13 FINLAND
3.14 FRANCE
3.15 GEORGIA
3.16 GERMANY
3.17 GREECE
3.18 HUNGARY
3.19 ICELAND
3.20 IRELAND
3.21 ITALY
3.22 LATVIA
3.23 LITHUANIA
3.24 LUXEMBOURG
3.25 MALTA
3.26 MOLDOVA
3.27 THE NETHERLANDS
3.28 NORWAY
3.29 POLAND
3.30 PORTUGAL
3.31 ROMANIA
3.32 RUSSIAN FEDERATION
3.33 SLOVAKIA
3.34 SLOVENIA
3.35 SPAIN
3.36 SWEDEN
3.37 SWITZERLAND
3.38 TURKEY
3.39 UNITED KINGDOM
3.40 YUGOSLAVIA (SERBIA AND MONTENEGRO)
4 COMPARISON OF EUROPEAN COUNTRIES
APPENDIX 1. SYSTEMS MENTIONED IN THE OVERVIEW

Preface

This is the 7th edition of this NCF‡ report, which presents an overview of the policies and infrastructures in Europe in the academic supercomputing field.

The idea to compile the first overview in 1994 originated from contacts of NCF with European sister organisations. Several of these organisations provided NCF with information about the national policy and facilities for academic supercomputing and high-speed networking in their respective countries. This information was bundled and expanded to cover not only more individual European countries but also major supranational initiatives and facilities. Examples of the latter were programmes and actions of the European Union and facilities available at multinational agencies or co-operations of research institutes of several countries.

The main goal of this report is to inform people involved in defining, outlining and implementing the national high-performance computing policy for the academic community about similar developments and initiatives in other European countries. The general idea is that this knowledge might be helpful during national, or regional, decision processes. We expect, however, that the information in this report will also be worthwhile for other people interested in high-performance computing.

Chapter 1 summarizes the major supranational European initiatives and programmes that are aimed at high-performance computing and networking or that are related to it. Furthermore, it includes a short description of the agencies involved. The substantial growth of grid-related (supra)national activities has continued in 2003. Therefore, in this edition, this chapter has been enlarged with a description of grid projects with a major infrastructural component.

Chapter 2 briefly presents the European organisations involved in academic networking at multinational level and the pan-European research networks. New in this edition is a description of international networking projects in South-East Europe and in the Caucasus region.

Chapter 3 is the main section of this overview. It describes the situation in the individual European countries where supercomputer facilities are available to the academic community. It addresses policy issues and on-going and planned initiatives, describes centres and facilities, lists the installed supercomputers, reports on planned upgrades, replacements or new systems ordered, describes the national academic and research network, explains how users are granted access to the resources, and provides contact information. New in this chapter is a description per country of national grid initiatives. Also new is the inclusion of a number of smaller countries. The limit for inclusion of a system in this chapter has been raised from 36 Gflop/s in the previous edition to 48 Gflop/s.

In Chapter 4 we compare the situation in the ten leading European countries.

Because of changing policies and the continuous upgrading of supercomputer systems, networks and grids, the information in this report is by definition outdated at the time of publishing. Its contents are merely a snapshot in time of the situation and its course in Europe. More up-to-date information can be found at the Arcade website (http://www.arcade-eu.org).

The readers are encouraged to provide the editor with corrections, updated descriptions of the situation in their countries, or other data relevant for inclusion in new releases of this report. Comments can be submitted via www.arcade-eu.org, where this document also can be downloaded.

‡ NCF – the National Computing Facilities Foundation – is an independent foundation under the umbrella of the Netherlands Organization for Scientific Research (NWO). NCF is in charge of the Dutch national academic supercomputing policy and infrastructure.


1 European programmes and initiatives

This chapter describes the European initiatives and programmes that are either aimed at high-performance computing and networking or that are related to these aspects. Of the many currently ongoing grid projects, only those that include the establishment of a computational infrastructure are mentioned. Furthermore, this chapter shortly introduces the agencies involved.

1.1 European Union

EU's policy
EU's policy is directed at attaining and maintaining competitiveness for the European industry. The R&D activities undertaken to this end are structured in so-called Framework Programmes, each of which spans a period of 4 years.
The EU sees national research and education infrastructures and facilities as the responsibility of the national authorities. EU's contribution is directed at stimulating the development of these infrastructures and linking the activities undertaken for that purpose.

EU's Sixth Framework Programme
The 6th Framework Programme (FP6) covers the period 2002-2006 and has a budget of € 16.27 billion. A major aim of FP6 is to realise a European Research Area (ERA) which can compete effectively with the research efforts of the USA and Japan.
FP6 is divided into two Specific Programmes:
- Integrating and strengthening the ERA;
- Structuring the ERA.
The first programme has 7 thematic priorities. The theme "Information Society Technologies (IST)" (budget € 3.6 billion) has an HPCN-related component. A strategic priority of this theme is "Grid-based systems for Complex Problem Solving", which focuses on computing and information grids and middleware to make use of large-scale, highly distributed computing and storage resources and to develop scalable, dependable and secure platforms.
The first Specific Programme favours co-ordination with other EU or transnational research projects such as those of ESA, EUREKA, COST or the ESF. This should translate into the creation of joint structures to co-ordinate R&D activities.
The second Specific Programme has four activity areas. One of them, "Research infrastructures" (budget € 655 million), promotes the development of a fabric of research infrastructures of the highest quality and performance in Europe. These include high-speed communications networks (e.g. GÉANT) and networks of computing facilities (e.g. Grids).
One of the goals of FP6 is to create a pan-European grid-based infrastructure to support the ERA. Therefore (research on) grid technology and its applications are major activities. The FP6 budget includes up to € 300 million for the further development of GÉANT and GRID.
http://www.cordis.lu/fp6

EU-funded GRID projects
A small selection of the EU-funded Grid projects is presented here. A detailed overview can be found via the IST pages of the EU website.
http://www.cordis.lu/ist/home.html

CrossGrid
This 3-year IST-funded project started in March 2002. It aims to deploy a distributed computing infrastructure allowing access to high-end computational resources across Europe. This environment will be used to develop interactive parallel applications addressing large-scale problems in several research fields. The project, in which 21 partners from 11 European countries participate, is co-ordinated by Cyfronet (Cracow, Poland). The project costs € 6.7 million; the EU funds € 4.86 million.
http://www.eu-crossgrid.org/

DATATAG
This 2-year IST-funded project started in January 2002. It will set up a transatlantic GRID testbed to focus upon advanced networking issues and interoperability between GRID domains. CERN co-ordinates the project. The other participants are INFN (Italy), INRIA (France), PPARC (UK) and UvA (the Netherlands). The testbed will be developed in conjunction with CrossGrid and EU-DataGrid and with the US GRID projects iVDGL, GriPhyN and PPDG. The project costs, fully funded by the EU, are € 4 million.
http://datatag.web.cern.ch/datatag/index.html

DEISA
DEISA (Distributed European Infrastructure for Supercomputing Applications) is a consortium of national supercomputing centres in Europe aiming to jointly build and operate a distributed terascale supercomputing facility using grid technologies. The core project of DEISA will provide a single system image of a homogeneous super-cluster of 148 32-way IBM p690 nodes. The integrated computing power will exceed 30 Tflop/s in early 2004. In an extended project the super-cluster will become heterogeneous.
The DEISA members are: CINECA (Italy), CSC (Finland), ECMWF (UK), EPCC (UK), FZJ (Germany), IDRIS (France), RZG (Germany) and SARA (the Netherlands). The early associate partners for the extended project are: CEPBA (Spain), the IBM Zurich Research Laboratory, and the Nordic Grid Consortium.
http://www.deisa.org
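As a rough plausibility check of the DEISA figures, the aggregate peak of the super-cluster can be estimated from the node count and an assumed per-processor peak. The Python sketch below uses an assumed POWER4-class clock rate and flops-per-cycle figure; the report itself only states the node count, the 32-way configuration and the >30 Tflop/s total.

    # Back-of-the-envelope check of the DEISA aggregate peak figure.
    # clock_ghz and flops_per_cycle are illustrative assumptions, not
    # numbers taken from this report.
    nodes = 148           # IBM p690 nodes in the core super-cluster
    cpus_per_node = 32    # "32-way" nodes
    clock_ghz = 1.7       # assumed POWER4-class clock rate
    flops_per_cycle = 4   # assumed: two fused multiply-add units per CPU

    peak_per_cpu = clock_ghz * flops_per_cycle            # Gflop/s per CPU
    total_tflops = nodes * cpus_per_node * peak_per_cpu / 1000

    print(f"{nodes * cpus_per_node} CPUs x {peak_per_cpu:.1f} Gflop/s "
          f"= {total_tflops:.1f} Tflop/s peak")
    # 4736 CPUs x 6.8 Gflop/s = 32.2 Tflop/s, consistent with ">30 Tflop/s"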

EGEE
The Enabling Grids for E-science and industry in Europe (EGEE) Integrated Infrastructure Initiative wants to create and deploy grid technologies to enable e-Science applications throughout the ERA. EGEE focuses on four key objectives:
- integrating grid technological developments from across Europe;
- establishing a Europe-wide grid infrastructure for science and industry with a focus on heterogeneity and interoperability;
- enabling the creation of e-Science applications from across the scientific and industrial spectrum;
- ensuring the timely delivery of the project's programme of work, guided by the needs of academic and industrial partners.
The proposed budget is € 32 million.
http://egee-ei.web.cern.ch/egee-ei/2003/index.html

EUROGRID
This 3-year IST-funded project started in November 2000. It aims to establish a computing GRID network of leading European HPC centres. The academic participants are: FZJ, TU Warsaw, […], DWD, ETHZ, CNRS and the University of Manchester. The commercial partners are: Pallas GmbH (which co-ordinates the project), debis, EADS, and Fujitsu.
The project has four domain-specific GRID subprojects: Bio-GRID, Meteo-GRID, CAE-GRID and HPC Research GRID. EUROGRID builds on the grid infrastructure developed in the German UNICORE project. The project costs € 3.45 million, of which the EU funds € 2.07 million.
http://www.eurogrid.org

EU-DataGrid (EDG)
The European DataGrid project aims for a computational and data-intensive grid of resources for the analysis of scientific data. The result should be a grid able to handle PBytes of distributed data, tens of thousands of computing resources (processors, disks, etc.), and thousands of simultaneous users from multiple research institutions.
The project, led by CERN, has 5 other main partners (ESA, CNRS, INFN, NIKHEF and PPARC) and 15 associated partners in the Czech Republic, Finland, France, Germany, Hungary, Italy, the Netherlands, Spain, Sweden and the United Kingdom.
The project runs from 2001 to 2003 and has € 9.8 million EU funding.
http://www.eu-datagrid.org

GRIDLAB
This IST-funded project started in January 2002 and will last 3 years. It will develop a Grid Application Toolkit enabling applications to make innovative use of global computing resources. The grid-enabled applications will be tested on testbeds constructed by linking a heterogeneous collection of supercomputers and other resources spanning Europe and the US.
The project is led by PSNC (Poznan, Poland). The other participants are 8 European academic institutions (from the Czech Republic, Germany, Greece, Hungary, Italy, the Netherlands and the UK), 3 US universities and two computer manufacturers.
The project costs are ~€ 6 million, of which the EU funds ~€ 5 million.
http://www.gridlab.org

EU-programmes, projects and activities with an HPCN component

IHP
The FP5 programme "Improving the Human Research Potential and Socio-Economic Knowledge Base" is the follow-up of the "Training and Mobility of Researchers" programme. As in previous programmes, several "Large Scale Facilities" (LSF) have been selected, among which are 4 HPCN sites (EPCC, CEPBA/CESCA, CINECA and Parallab). Researchers can visit an LSF for a period of 1 to 3 months. The EU pays travel and stay expenses.
http://improving-ari-fp5.sti.jrc.it/access

ENACTS
The European Network for Advanced Computing Technology for Science is a co-operation of HPC centres. Its aim is to evaluate future trends in the way computational science will be performed and their pan-European implications. The participants are: CECAM (France), CESCA-CEPBA (Spain), CINECA (Italy), CNR (Italy), CSC (Finland), CSCS (Switzerland), EPCC (UK), FORTH-IESL (Greece), ICCC (Czech Republic), NSC (Sweden), Parallab (Norway), PSNC (Poland), TCD (Ireland) and UNI·C (Denmark).
ENACTS has carried out several studies on key enabling technologies. The reports published include Grid Service Requirements (January 2002), HPC technology roadmap study (April 2002) and Grid Enabling Technologies (December 2002). The ENACTS project ends in 2005.
http://www.enacts.org

SERENATE
The SERENATE project (Study into European Research and Education Networking as Targeted by eEurope) carries out a series of strategic studies into the future of research and education networking in Europe. The participants are the NRENs of Europe, national governments and funding bodies, the EC, network operators, equipment manufacturers and the scientific and education community.
This IST project started in May 2001 and ended at the end of September 2003. The k€ 960 budget has been provided by the EU. The project has delivered 14 public reports.
http://www.serenate.org

1.2 Supranational centres and co-operations

European organisations that use supercomputers or that are involved in HPCN-related activities are briefly described in this subsection.

Supranational centres

CERN
CERN – the European Organisation for Nuclear Research, located in Geneva, Switzerland – is a co-operation of 20 member states. CERN's HPC facilities include a cluster farm of about 2000 processors.
CERN initiated in 2000 the DataGrid project to provide for the computational and data needs of the Large Hadron Collider (LHC) experiments that will start in 2007.
http://www.cern.ch

ECMWF
The European Centre for Medium-Range Weather Forecasts, located in Reading (UK), is a co-operation of 24 European national meteorological institutes. Since its creation ECMWF has always had state-of-the-art high-performance computers. Today the top system is an IBM pSeries 690/1920. This system will gradually be expanded to attain 20 Tflop/s in 2004.
http://www.ecmwf.int

Supranational co-operations

ARCADE
This consortium of national bodies involved in high-end computing infrastructures was founded in 1995 and provides a European platform for the exchange of knowledge and information on high-end computing policies and infrastructures.
http://www.arcade-eu.org

ERCIM
The European Research Consortium for Informatics and Mathematics is an organisation dedicated to the advancement of European R&D in the areas of Information Technology and Applied Mathematics. The 17 participants are the national research centres AARIT (Austria), CCLRC (UK), CNR (Italy), CRCIM (Czech Republic), CWI (The Netherlands), FhG (Germany), FNR (Luxembourg), FORTH (Greece), INRIA (France), NTNU (Norway), SARIT (Switzerland), SICS (Sweden), SpaRCIM (Spain), SRCIM (Slovak Republic), SZTAKI (Hungary), TCD (Ireland) and VTT (Finland).
http://www.ercim.org

EREA
The members of the Association of European Research Establishments in Aeronautics are CIRA (Italy), DERA (UK), DLR (Germany), FFA (Sweden), INTA (Spain), NLR (The Netherlands) and ONERA (France). All these organisations are active users of HPCN technology.
http://www.erea.org

Eureka
The European Research Coordination Agency, with 33 European member countries, provides a research framework. It promotes international co-operation in typically market-oriented projects – contrary to the EU, which emphasises precompetitive (fundamental) research. Industry plays a predominant role in Eureka projects.
http://www.eureka.be

Supranational Grid initiatives

Central-European Grid Consortium (CEGC)
CEGC, established in January 2003, is an open consortium, primarily for Central-European countries. Its main goals are:
- to co-ordinate the grid infrastructures of the partner countries;
- to jointly develop a grid infrastructure;
- to jointly participate in the EU FP6 grid projects as well as in other international grid projects.
The 6 partner countries are: Austria, Czech Republic, Hungary, Poland, Slovakia and Slovenia.
http://www.cyfronet.krakow.pl/~yemosurs/

Nordic Grid Consortium (NGC)
This consortium was founded in April 2002 by CSC (Finland), Parallab (Norway) and PDC (Sweden) to promote Nordic grid projects by establishing a laboratory for grid middleware and application development.
http://www.nordicgrid.net

Nordic DataGrid Facility
The Research Councils in Denmark, Finland, Norway and Sweden are planning a major investment in 2005 in a Nordic DataGrid Facility that will serve present and future multidisciplinary computational needs. A preparatory project will result in a proposal at the end of 2004. The Nordic Research Councils together allocate k€ 544 annually for the years 2003 and 2004.

NorduGrid
The Nordic Testbed for Wide Area Computing and Data Handling project (NorduGrid) started in 2001 as part of the Nordunet2 programme. It aims to establish an inter-Nordic GRID testbed to prepare for the future requirements of the CERN LHC project. Universities in Denmark, Finland, Norway and Sweden participate in this project, which will end in 2003. NorduGrid is also a testbed for the EDG project.
http://www.nordugrid.org

North European Grid
The North European Grid (NEG) consortium clusters activities and interests in the field of grids. The consortium partners are from Belgium, Denmark, Estonia, Finland, the Netherlands, Norway and Sweden. Other Northern European countries may join at a later stage. NEG participates as a consortium in the EGEE project.
http://www.northeuropeangrid.org

SEE-GRID
The South Eastern European Grid-enabled eInfrastructure Development proposal aims to initiate grid activities in South East Europe (Greece, Albania, Bulgaria, FYR of Macedonia, Serbia-Montenegro, Bosnia-Herzegovina, Romania, Turkey, and Croatia).
http://www.see-grid.org

2 Pan-European research networking

This chapter presents the European organisations involved in networking at multinational level and the pan-European research networks. National academic and research networks are described in chapter 3, where the situation in each country is depicted.

2.1 Network organisations

CEENet
CEENet is an association of national academic network operators that aims to co-ordinate the international aspects of academic and research networking in Central and Eastern Europe and in adjacent countries. The 26 member countries are Albania, Armenia, Austria, Belarus, Bulgaria, Croatia, Czech Republic, Estonia, Georgia, Greece, Hungary, Kazakhstan, Kyrgyzstan, Latvia, Lithuania, Macedonia, Moldova, Mongolia, Poland, Romania, Russia, Slovak Republic, Slovenia, Tajikistan, Turkey and Uzbekistan. CEENet is a member of TERENA.
http://www.ceenet.org

DANTE
DANTE is a not-for-profit company established in 1993 by European research networks to provide international network services for the research community. DANTE provides these services on the GÉANT network.
http://www.dante.net

ENPG
The European Network Policy Group has as its mission the development of strategies for European research networking. National funding agencies of 16 European countries are members of this platform.
http://www.enpg.org

RIPE
RIPE (Réseaux IP Européens) is a collaboration of more than 3000 European Internet providers with the objective of co-ordinating the operation of a pan-European IP network. RIPE's Network Co-ordination Centre (RIPE NCC) is one of the three Regional Internet Registries in the world.
http://www.ripe.net

TERENA
TERENA (Trans-European Research and Education Networking Association) promotes and participates in the development of a high-quality international network infrastructure for research and education.
http://www.terena.nl

2.2 Operational networks

GÉANT
GÉANT is the pan-European Gigabit research and education network. It became fully operational in November 2001, when its predecessor, the TEN-155 network, was phased out. GÉANT's DWDM-based backbone operates at 10 Gbps and 2.5 Gbps. National networks can couple at 2.5 Gbps. GÉANT interconnects more than 3,000 research and education institutions in 32 countries. Transatlantic connectivity is provided through 3×2.5 Gbps paths.
GÉANT received € 80 million funding from the European Commission for a 4-year period. The participants will contribute € 160 million to the project. Services on GÉANT are provided by DANTE.
http://www.dante.net/geant

NORDUnet
This network interconnects the national research networks of the five Nordic countries (Denmark, Finland, Iceland, Norway and Sweden) and provides their international connectivity. Stockholm has 2×2.5 Gbps links to Oslo, Helsinki and Copenhagen and 45 Mbps to Reykjavik. NORDUnet's international links include 2×2.5 Gbps to GÉANT and 2×622 Mbps to the USA. NORDUnet also provides connectivity to Poland (12 Mbps) and Ukraine (1 Mbps).
The Nordic governments finance the Nordunet2 programme, which focuses on network-based applications. One of the projects funded is the NorduGrid project.
http://www.nordu.net

2.3 Networking projects

SEEREN
The South-East European Research and Education Networking initiative will link the NRENs of Albania, Bosnia-Herzegovina, Bulgaria, the Former Yugoslav Republic of Macedonia (FYROM), and the Federal Republic of Yugoslavia (Serbia and Montenegro) to GÉANT. The NRENs will be linked at 2-34 Mbps in October 2003. GRNET (Greece) co-ordinates this project, which lasts until May 2004. The € 1.5 million budget is provided by the EU.

Virtual Silk Highway
This satellite-technology-based network connects, at 3 Mbps, the academic networks of eight countries in the Caucasus and Central Asia: Armenia, Azerbaijan, Georgia, Kazakhstan, Kyrgyz Republic, Tajikistan, Turkmenistan and Uzbekistan. SILK has a connection to GÉANT via DESY (Hamburg). This NATO project has a budget of US$ 3.4 million and will provide connectivity until the beginning of 2005.
http://www.silkproject.org
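The link speeds quoted in this chapter span three orders of magnitude, which translates directly into what kinds of scientific data sets can realistically be moved. The Python sketch below makes that concrete; the 100 GB data set size is a hypothetical example, and the calculation assumes the full nominal line rate with no protocol overhead or contention.

    # Rough transfer times for a data set over links mentioned in this chapter.
    def transfer_hours(gigabytes, mbps):
        """Hours needed to move `gigabytes` (decimal GB) over a `mbps` link."""
        bits = gigabytes * 8e9
        return bits / (mbps * 1e6) / 3600

    dataset_gb = 100  # hypothetical data set
    for name, mbps in [("GEANT national access, 2.5 Gbps", 2500),
                       ("NORDUnet link to Reykjavik, 45 Mbps", 45),
                       ("Virtual Silk Highway, 3 Mbps", 3)]:
        print(f"{name}: {transfer_hours(dataset_gb, mbps):.1f} h")
    # -> about 0.1 h at 2.5 Gbps, 4.9 h at 45 Mbps and 74 h at 3 Mbps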

List of abbreviations
CEENet Central and Eastern European Networking Association
DANTE Delivery of Advanced Network Technology to Europe
ENPG European Network Policy Group
GEANT Gigabit European Academic Network
NREN National Research and Education Network
RIPE Réseaux IP Européens
TERENA Trans-European Research and Education Networking Association

3 Supercomputing in the individual European countries

This chapter describes the situation in the European countries where supercomputer facilities are available to the academic community or where plans exist to provide these. In the context of this overview the academic community consists of universities, research institutes of national science foundations, governmental research institutes, weather forecast centres and other institutions that can be considered to be public.

For the description of the situation in a country the following template is used:

Statistics
Population and Gross Domestic Product (GDP).
Policy
Addresses the following questions:
- Is there a national policy on supercomputing, and if so, which organisation determines this policy and what is the policy?
- How is funding for carrying out the national supercomputing policy provided and what is the total amount?
- Is there a national policy on high-speed networking, and if so, which organisation determines this policy and what are the policy and the funding?
- Are there national HPCN initiatives, and if so, how are these organised, what are the targets and what is the funding?
Supercomputing facilities for the academia
Gives a description of national supercomputer centres and facilities and a list of supercomputers installed at universities and (government) research institutes. For countries with a large installed base of supercomputers (≥1 Tflop/s) a figure is included showing the growth since 1998.
National academic network
- Description of the national research network.
- Who is the operator of this network?
- Ongoing innovation projects.
Allocation of resources
How is the access of the users to the national facilities organised?
National GRIDs
Describes national GRID projects and initiatives.
Acquisition and upgrade plans
Upgrades or new systems ordered.
List of abbreviations
Contacts and Addresses

Only systems with a peak performance of ≥48 Gflop/s have been included; a worked example of this criterion is sketched below. Appendix 1 contains an overview of the systems installed in Europe that fall in this category. An exception has been made for countries where supercomputing is in a premature condition, by mentioning for these countries the most powerful systems installed. Also systems that can meet the 48 Gflop/s requirement after being upgraded have been included, if such an upgrade is a realistic event.
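The inclusion threshold refers to theoretical peak performance, which for the systems listed in this chapter is essentially the product of processor count, clock rate and floating-point operations per cycle. The Python sketch below illustrates the test; the two configurations are invented for illustration and are not entries from this report.

    # Illustrative test of the 48 Gflop/s inclusion threshold.
    def peak_gflops(n_cpus, clock_ghz, flops_per_cycle):
        """Vendor-style theoretical peak in Gflop/s."""
        return n_cpus * clock_ghz * flops_per_cycle

    THRESHOLD = 48.0  # Gflop/s, raised from 36 Gflop/s in the previous edition

    # Hypothetical configurations, not systems from this overview:
    for label, peak in [("32 x 500 MHz, 2 flops/cycle", peak_gflops(32, 0.5, 2)),
                        ("64 x 500 MHz, 2 flops/cycle", peak_gflops(64, 0.5, 2))]:
        verdict = "included" if peak >= THRESHOLD else "below threshold"
        print(f"{label}: {peak:.0f} Gflop/s -> {verdict}")
    # -> 32 Gflop/s (below threshold) and 64 Gflop/s (included)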

We could have omitted from this overview many Central and Eastern European countries which lack HPC facilities. We have chosen, however, to describe their national networks.
At the time of compilation no reliable information was available about Azerbaijan, Macedonia, and Ukraine. For obvious reasons no information is provided about Andorra, Gibraltar, Liechtenstein, Monaco, San Marino and Vatican City.

3.1 Albania

Statistics
Population: 3.6 million
GDP/capita: € 4,100

National academic network
The Institute of Informatics and Applied Mathematics (INIMA) of the Albanian Academy of Sciences acts as NREN, while ANA (Albanian Academic Network) is being organised. INIMA represents Albania in CEENet and SEEREN.

3.2 Armenia

Statistics
Population: 3.3 million
GDP/capita: € 3,400

National academic network
The Institute for Informatics and Automation Problems of the National Academy of Sciences started ASNET-AM – the Armenian academic scientific research network – in 1994. The network infrastructure was funded by the Armenian Government, three private foundations and NATO. In 2000 the Armenian Research and Education Networking Association Foundation (ARENA) was founded by three universities and three institutes of the National Academy of Sciences.
The network connects 30 scientific, research and educational organisations by radio links (640 kbps and 2 Mbps) through 16 main nodes in 4 cities. International connectivity is provided through the "Virtual Silk Highway".

3.3 Austria

Statistics
Population: 8.2 million
GDP/capita: € 24,900

Policy
There is no national HPC policy. The Federal Ministry for Education, Science and Culture is responsible for the national academic networking and provided a budget of € 6 million for this purpose for 2003.

Supercomputing facilities for the academia
Technical University […]: SGI Origin 2000/16, Beowulf cluster (50 PIII/866 MHz)
Technical University […]: IBM SP Power3/48, SGI Origin 2000/64
University of Linz: SGI Altix 3700/128, SGI Origin 3800/128
University of Leoben: IBM SP2/21
University of Vienna: Beowulf cluster (160 Athlon/1470), Beowulf cluster (64 PIII/700)

National academic network
ACOnet – the Austrian Academic Computer Network – interconnects the cities with universities (Vienna, Graz, Leoben, Klagenfurt, Innsbruck, Salzburg, Linz and Dornbirn). The technology used is DWDM with Gigabit Ethernet. ACOnet has a 622 Mbps connection with GÉANT.
ACOnet plays an important rôle in the international connectivity of Central and Eastern European countries by providing links to several of these countries.
http://www.aco.net/

National GRIDs
The AUSTRIAN GRID initiative aims to create a national grid testbed on top of the existing national computing and communication hardware in Austria. In June 2002 the project requested € 3 million in funding for the first two years.

Contacts and Addresses
Willy Weisz
VCPC
Liechtensteinstrasse 22
A-1090 Vienna
email: [email protected]

3.4 Belarus

Statistics
Population: 10.3 million
GDP/capita: € 7,400

National academic network
UNIBEL – the Belarussian network for science and education – connects 7 nodes in the Minsk area at 2-10 Mbps and 6 regional nodes at 64 kbps. Connectivity with the Internet is at 512 kbps. UNIBEL is operated by a centre falling under the Ministry of Education.
http://unibel.by/eng/home.htm
The institutes of the Belarussian Academy of Sciences use their own network, BAS-NET. This network has dedicated international connections to Moscow (256 kbps) and Hamburg (DESY, 64 kbps). The budget in 2003 is € 0.7 million.
http://www.bas-net.by

3.5 Belgium

Statistics
Population: 10.3 million
GDP/capita: € 26,100

Policy
The Federal Office for Scientific, Technical and Cultural Affairs (OSTC) funded a supercomputing programme in the early nineties. Since then no other national academic HPC initiatives have been undertaken.
OSTC is also the agency responsible for the financing, the operation and the innovation of the Belgian federal academic network. A Policy Board advises OSTC on the medium and long-range development of this network and on other aspects of its operation and organisation.
http://www.belspo.be/

Supercomputing facilities for the academia
KULeuven: IBM pSeries 690/8, IBM SP Power3/28
UCL: Compaq AlphaServer/44, Compaq cluster/83, Beowulf cluster (32 PIII/933 + 16 P4/2400), Beowulf cluster (24 PIII/1400)
University of Liège: SGI Origin 3800/60
VUB/ULB: Compaq AlphaServer/28

National academic network
BELNET – the Belgian federal academic network – connects universities and research institutes at speeds of 34 Mbps to 1 Gbps. BELNET has a 2.5 Gbps link to GÉANT and a 155 Mbps link to Abilene. The budget in 2003 is € 11.5 million.
http://www.belnet.be/

National GRIDs
A grid initiative called BEgrid started in February 2003 and has resulted in a small grid infrastructure of two clusters.

List of abbreviations
BELNET Belgian Research Network
KULeuven Katholieke Universiteit Leuven
OSTC Federal Office for Scientific, Technical and Cultural Affairs
UCL Université Catholique de Louvain
VUB/ULB Vrije Universiteit Brussel / Université Libre de Bruxelles

3.6 Bosnia-Herzegovina

Statistics
Population: 4.0 million
GDP/capita: € 1,700

National academic network
In 1998 BIHARNET (Academic Research Network of Bosnia and Herzegovina) was established by the University of Banja Luka, the two Mostar universities, the […] and the […], with funding from the government of Slovenia (!). The backbone of BIHARNET links nodes in Banja Luka, Bihac, Bijeljina, Foca-Srbinje, Lukavica, Mostar, Sarajevo, Tuzla and Zenica at speeds of 256 kbps up to 2 Mbps. International connectivity is provided through a 2 Mbps connection to the Slovenian ARNES network.

3.7 Bulgaria

Statistics
Population: 7.6 million
GDP/capita: € 6,000

Supercomputing facilities for the academia
The Central Laboratory for Parallel Processing (CLPP) of the Bulgarian Academy of Sciences in Sofia has a Cray Origin 2000.

National academic network
The UNICOM-B academic network links the universities in Sofia, Plovdiv, Varna and Rousse and all institutes of the Bulgarian Academy of Sciences. Link speeds are 128 kbps up to 2 Mbps. International connectivity is provided through a 6 Mbps link to the Greek network GRNET. The network is operated by CLPP.
In 2003 a new national academic network organisation, named IST Foundation, has been established. It is financed by the Ministry of Transport and Telecommunication, the Ministry of Education and Science and the United Nations Development Programme. The budget in 2003 is € 0.3 million.

National GRIDs
The Bulgarian Grid Consortium (BgGrid) has been established. It is co-ordinated by CLPP.

3.8 Croatia

Statistics
Population: 4.4 million
GDP/capita: € 7,900

National academic network
The national academic network CARNet connects 167 institutions in 24 cities. Its backbone is based on ATM (155 and 622 Mbps). CARNet has a 155 Mbps link to GÉANT.
CARNet is funded by the Croatian government. The budget in 2003 is € 10.3 million.
http://www.carnet.hr/

3.9 Cyprus

Statistics
Population: 0.8 million
GDP/capita: € 13,500

National academic network
The national academic network CYNET links the only university in Cyprus and a few educational establishments. Its backbone runs at 34 Mbps and has a 34 Mbps link to GÉANT (via GRNET). CYNET is operated by the computer and networking department of the university.
CYNET's 2003 budget is € 0.6 million, of which 90% is funded by the government and the rest by the users.
http://www.cynet.ac.cy/

National GRID
The CyGrid initiative at the University of Cyprus has been established as the local node of the CrossGrid project. The CyGrid testbed consists of 4 systems with a total of 6 processors.

3.10 Czech Republic

Statistics
Population: 10.3 million
GDP/capita: € 13,800

Policy
In 1992 the Czech Academy of Sciences (CAS) and the Ministry of Education (MoE) defined a national policy for supercomputing. A taskforce established by the MoE selected in 1994 the SGI PowerChallenge as the leading platform for the Czech universities, and 5 of these systems were installed at different sites. These systems have since been gradually replaced with funding from the MoE.
In 1992 MoE provided funds to create the national academic network, which became operational in 1993. In 1996 all Czech universities and the Czech Academy of Sciences established the CESNET association, which is responsible for the operation of the network and its development. The development of the network is covered by the budgets of the MoE and the Academy of Sciences and by grants from the Czech Grant Agency. CESNET's budget for 2003 is € 11.5 million.

Supercomputing facilities for the academia
Charles University, Prague: SGI Origin 2000/32
CESNET: Beowulf cluster (32 PIII/700)
Czech Hydrometeorological Institute, Prague: NEC SX-4/3A
Czech Technical University, Prague: IBM SP2/23
[…], Brno: SGI Origin 2000/40, Beowulf cluster (32 PIII/700 + 32 PIII/1000)
Technical University Ostrava: IBM SP Power3/4
University of Pilsen: 2×Compaq AlphaServer/8

National academic network
CESNET – the Czech academic network – links the networks of all public universities and the academic institutes with a backbone operating at 2.5 Gbps. The international connectivity of CESNET consists of a 1.2 Gbps link to GÉANT and a 622 Mbps link to the USA. An optical exchange called CzechLight is operational since 1 January 2003 and provides a 2.5 Gbps lambda to Amsterdam (NetherLight).
http://www.cesnet.cz/english

National GRIDs
The national grid initiative is centered around the MetaCenter, in which CESNET and the universities in Brno, Pilsen and Prague participate.

Contacts and Addresses
Dr. Jaroslav Nadrchal
Centre of Automation and Computers
Institute of Physics of the Academy of Sciences
Cukrovarnická 10
CZ-162 53 Praha 6
email: [email protected]

3.11 Denmark

Statistics
Population: 5.4 million
GDP/capita: € 26,100

Policy
A National Computer Board under the Ministry of Education deals, among other subjects, with questions on supercomputing and supercomputing policy. Until 2000 UNI·C co-ordinated the national supercomputing policy for the research and educational community. This role is now fulfilled by DCSC.
http://www.dcsc.dk

Supercomputing facilities for the academia
Aarhus University: IBM pSeries 690/80
DMI: NEC SX-6/64
DTU: Beowulf cluster (480×P4/2260), SunFire 15K/72, SunFire 6800 cluster/48, Sun Fire 12K/48, Sun Fire 6800 cluster/48, Beowulf cluster (94×EV67 + 46×EV6), SGI Altix 3000/64
University of Copenhagen: Beowulf cluster (24 Athlon/1400 + 6 AMD/700 + 2 PIII/933 + 1 PIII/700)
USD: Beowulf cluster (512×P4/2000 + 140×P4/2660)

The figure shows how the aggregate performance of the Danish HPC systems and the peak performance of the #1 system developed during the last 5 years.
[Figure: Denmark – total and #1 peak performance, 1999-2003, 0.01 to 10 Tflop/s, logarithmic scale]

National academic network
The Danish Research Network "Forskningsnettet" serves the Danish universities and research institutions. Its backbone between Lyngby and the universities in Odense, Aarhus and Ålborg operates at 622+200 Mbps. International connectivity is provided via NORDUNET. Forskningsnettet is run by UNI·C.
Forskningsnettet's 2003 budget is € 5.1 million. This is co-financed by the Ministry for Research and the users.
www.forskningsnettet.dk/netsek/

Allocation of resources
Scientists from the academia and public research centres who want to use the national academic supercomputers must submit proposals to the Danish Research Councils. These proposals are discussed in the so-called Bonuspoint Committee of the Research Councils and subjected to peer review by the members of this Committee. On approval the applicants receive grants.

National GRIDs
In August 2003 the Danish Center for Grid Computing (DCGC) was established by the Danish Natural Science Research Council (SNF) with DKK 7.5 million funding for a 3-year period. The DCGC participants are DTU, Copenhagen University, Aalborg University, Aarhus University and USD.

List of abbreviations
DCSC Danish Center for Scientific Computing, Copenhagen
DMI Danish Meteorological Institute, Copenhagen
DTU Technical University of Denmark, Lyngby
UNI·C Danish Computing Centre for Research and Education (Lyngby, Aarhus, Copenhagen)
USD University of Southern Denmark

Contacts and Addresses
Prof. K.V. Mikkelsen (Director DCSC)
University of Copenhagen/Department of Chemistry
Universitetsparken 5
DK-2100 Copenhagen Ø
email: [email protected]

Prof. Dr. Dorte Olesen (Director of UNI·C)
UNI·C
Vermundsgade 5
DK-2100 Copenhagen Ø
email: [email protected]

3.12 Estonia

Statistics
Population: 1.4 million
GDP/capita: € 10,000

National academic network
The Ministry for Culture and Education established in 1993 the EENet (Estonian educational and research network) organisation to manage and co-ordinate academic networking. This Ministry funds the use of EENet. The budget for 2003 is € 0.7 million.
The backbone of EENet consists of a 100 Mbps link between Tallinn and Tartu and connections to 16 other cities, usually at 2 Mbps. International connectivity is provided by a 155 Mbps link to Stockholm (GÉANT).
http://www.eenet.ee

3.13 Finland

Statistics
Population: 5.2 million
GDP/capita: € 23,600

Policy
Supercomputing policy is made by a partnership of the academic community (represented by the Ministry of Education (MoE)) and the Finnish Meteorological Institute (FMI). Funding of the national supercomputer facilities is provided mainly through these two organisations.
Center for Scientific Computing (CSC) is the organisation responsible for national supercomputing and networking services and support. It is a non-profit company whose shareholder is MoE. The Executive Board of CSC has representatives from academia, MoE, FMI and the Technical Research Centre of Finland (VTT), an important customer of CSC.
A Scientific Advisory Board, with representatives from all major universities as well as from various academic disciplines, supervises academic supercomputing and networking and sets the framework for advancing computational sciences in Finland.
http://www.csc.fi

Supercomputing facilities for the academia
Åbo Akademi: Beowulf cluster (44 PIII/866)
CSC: IBM pSeries 690/512, SGI Origin 2000/128, Compaq Alphacluster/96

The figure shows how the aggregate performance of the Finnish HPC systems and the peak performance of the #1 system developed during the last 5 years.
[Figure: Finland – total and #1 peak performance, 1999-2003, 0.1 to 10 Tflop/s, logarithmic scale]

National academic network
CSC is responsible for FUNET, the Finnish University and Research Network. FUNET connects all the universities, polytechnics and research institutes and has more than 200,000 nodes attached to the network. The backbone operates at 2.5 Gbps and 155 Mbps. The international connectivity of FUNET is provided through 2×2.5 Gbps links to NORDUNET in Stockholm and a 4 Mbps link to St. Petersburg.
FUNET policy and guidelines are set by the Scientific Advisory Board of CSC, with representatives from research and university libraries. FUNET is funded by MoE (50%, through CSC) and its member organisations (50%). The 2003 budget is € 8.1 million.
http://www.funet.fi

Allocation of resources
CSC, its Scientific Director and the Scientific Advisory Board allocate the resources. FMI has a fixed share of the resources; other projects are allocated resources based on their scientific and engineering merit.

National GRIDs
CSC is involved in the development of the grid infrastructure in Finland.

List of abbreviations
CSC Center for Scientific Computing, Espoo
FMI Finnish Meteorological Institute
MoE Ministry of Education
VTT Technical Research Center of Finland

Contacts and Addresses
Matti Ihamuotila
Managing Director
Center for Scientific Computing
P.O. Box 405
FIN-02101 Espoo, FINLAND
e-mail: [email protected]

3.14 France

Statistics
Population: 60.2 million
GDP/capita: € 23,200

Policy
The Centre National de la Recherche Scientifique (CNRS) and the universities determine the French national HPC policy for academic research. CNRS funds a national HPC centre (IDRIS, near Paris) and the Ministry of Universities a second national HPC centre (CINES, in Montpellier).
The national HPC policy is to support a hierarchical structure of resources. Users should find resources available at all levels of performance and should be granted access to them according to their needs. Resources are more and more centralised (and rare) as the level of performance increases.
A committee (Comité des Grands Equipements) advises the Minister of Science and Technology (MRT) on all questions concerning large installations.

Supercomputing facilities for the academia
CCH: SGI Origin 2000/64
CEA: Compaq AlphaServer ES40/318, Compaq AlphaServer ES40/166
CEA (Limeil): Cray T3E/188
CEA/CENG: Compaq AlphaServer ES40/232, Fujitsu VPP5000/16, Cray T3E/300
CEA/DAM: Compaq AlphaServer SC2560
CERFACS: Compaq AlphaServer SC24, SGI Origin 2000/32
CICT: SGI Origin 2000/64
CINES: SGI Origin 3800/768, IBM pSeries 690/32, IBM SP Power3/472
CNRS/IDRIS: IBM pSeries 690/256, NEC SX-5/40, Cray T3E/256, IBM SP Power3/160
CRIHAN: SGI Origin 2000/64
ENSC: SGI Origin 2000/64
ENSL: 2×Sun 6800/32
IFP: NEC SX-5/5
IIM: Beowulf cluster (20 G4/450)
INRIA (Grenoble): HP cluster (208 Itanium2/900), HP cluster (225 PIII/733)
Meteo France (Toulouse): Fujitsu VPP5000/64, VPP5000/31, VPP700/26
IN2P3: Beowulf cluster (384 Xeon/2400)
ONERA: NEC SX-5/16
University of Le Mans: Beowulf cluster (70 PIII/500)
University of Lille: IBM SP Power3/64

The figure shows the peak performance of the #1 French system and the total performance of the French HPC systems during the past 5 years.
[Figure: France – total and #1 peak performance, 1999-2003, 0.1 to 100 Tflop/s, logarithmic scale]

National academic network
In 1991 ME, MRT and the Minister of Posts and Telecommunications decided to establish the national high-speed research network RENATER. A legal structure (Groupement Renater) associates the various research and education institutions involved in the programme and acts as the legal operational unit in charge of the service.
RENATER has a distributed architecture in which regional networks are interconnected by a national backbone infrastructure with links at speeds of 34, 155 and 622 Mbps and 2.5 Gbps.
RENATER has a 2.5 Gbps link to GÉANT and a 2.5 Gbps link to the USA.
The financing of RENATER is: CNRS 42%, ME 42%; the rest is shared by smaller partners like INRIA, CEA, EDF, etc. The budget for 2003 is € 27 million.
http://www.renater.fr

Allocation of resources
Access to the resources of both national centres is free after peer review. Once a year there is a joint nationwide request for proposals to use the facilities at the centres. These proposals are examined by the appropriate Evaluation Committee (10 Committees, about 10 experts each). Resources are allocated by the Directors of the Centres, following the recommendations of their Scientific Councils.

National Grids
RNTL, an organisation supported by MRNT and the Ministry of the Economy, Finances and Industry, promotes software research in which private companies and public organisations cooperate. In 2001 it started the 2-year project e-Toile, which has created a high-performance data processing grid. e-Toile uses computers of seven French academic and industrial research centres. RNTL has provided € 2.7 million of this € 4.5 million project.

MRNT created in 2001 the ACI GRID programme to support the development of Grids in France. ACI GRID launched in 2003 two national grid programmes:
- "GRID'5000" will interconnect 10-15 Beowulf clusters, each with typically 500 processor nodes. At the end of 2003 it will be decided which 4-6 sites will be the main nodes. The budget in 2003 is € 1 million.
- The "Data Grid Explorer" will be an emulation environment consisting of a large cluster (>1000 processors, at IDRIS), a database of experimental conditions and tools to control and analyse experiments. The budget is € 1 million.

List of abbreviations
ACI Action Concertée Incitative
CCH Centre Charles Hermite, Nancy
CEA Commissariat à l'Energie Atomique
CENG Centre d'Etudes Nucléaires de Grenoble
CERFACS Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique, Toulouse
CICT Centre Interuniversitaire de Calcul de Toulouse
CINES Centre Informatique National de l'Enseignement Supérieur, Montpellier
CNRS Centre National de la Recherche Scientifique, Paris
CRIHAN Centre de Ressources Informatiques de Haute Normandie, Mont-Saint-Aignan
DAM Direction des Applications Militaires, Bruyères
ENSC Ecole Normale Supérieure de Cachan
ENSL Ecole Normale Supérieure de Lyon
IDRIS Institut du Développement et des Ressources en Informatique Scientifique, Orsay
IFP Institut Français de Pétrole, Rueil
IIM International Institute of Multimedia
INRIA Institut National de Recherche en Informatique et en Automatique
IN2P3 Institut National de Physique Nucléaire et de Physique des Particules
ME Ministère de l'Education
MESR Ministère de l'Enseignement Supérieur et de la Recherche
MRNT Ministère délégué à la Recherche et aux Nouvelles Technologies
MRT Ministère de la Recherche et de la Technologie
ONERA Office National d'Etudes et de Recherches Aérospatiales, Chatillon
RENATER Réseau National de Télécommunication pour la Technologie, l'Enseignement et la Recherche
RNTL Réseau National des Technologies Logicielles

Contacts and Addresses
Prof. Victor Alessandrini
IDRIS
Boite Postale 167
F-91403 Orsay CEDEX
email: [email protected]

3.15 Georgia

Statistics
Population: 4.9 million
GDP/capita: € 2,800

National academic network
GRENA, the Georgian Research and Educational Networking Association, was founded in 1999 as a non-profit organisation with the aim to create a network infrastructure for Georgian research and educational institutions, libraries and non-profit organisations. The GRENA network now connects more than 300 institutions. Four major cities have a 2 Mbps link to Tbilisi. The 2003 budget is € 0.45 million.
http://www.grena.ge/english

3.16 Germany

Statistics
Population: 82.4 million
GDP/capita: € 23,900

Policy
Policy on supercomputing is made both at federal and state level and is generally based on recommendations issued by the Wissenschaftsrat. Its scientific committee has 32 members, 24 of which represent the German academia. They are appointed on proposal of the German Research Council (DFG). The Wissenschaftsrat established in 2001 a co-ordinating committee for the acquisition and the use of high-performance computers. A DFG committee (KfR) also addresses the provision of (super)computers to universities and prepares recommendations for the Wissenschaftsrat.
As a rule, 50% of the investment in academic supercomputers is provided by the federal government and 50% by the state where the university is located.
The Wissenschaftsrat recommended in 1995, on the issue of supercomputing capacities for German science and research, the gradual establishment of two up to four HPC centres, each providing facilities either nation-wide or to a number of co-operating states. There are now three supercomputer centres of this new type.
- HLRS, which operates supercomputers owned by hww, a joint venture of the Universities of Stuttgart, Heidelberg and Karlsruhe and the State of Baden-Württemberg (together 50% of the shares), T-Systems (40%) and Porsche (10%).
- LRZ in Munich, with funding from the State of Bavaria and federal funding.
- HLRN (a co-operation of RRZN and ZIB) as HPC centre for the Northern States Berlin, Bremen, Hamburg, Mecklenburg-Vorpommern, Niedersachsen and Schleswig-Holstein. HLRN's IBM pSeries 690/768 system is distributed: one half of the system is located at ZIB in Berlin, the other half at RRZN in Hannover. A 2.4 Gbps link couples both parts over a distance of 300 km. The € 20 million system has been financed by the participating states (together 50%) and the federal government (50%).
Most of the other supercomputing centres are regional (State-based) or specialised (physics, mathematics, weather etc.). A supercomputer centre that provides nationwide services to the academic community is NIC (a co-operation of FZJ and DESY).
The Wissenschaftsrat proposed in May 2002 the renewal of the HLRS resources with a 15 Tflop/s system in 2004. The budget is € 58 million. In May 2003 a 40 Tflop/s system in 2005 for LRZ was proposed by the Wissenschaftsrat, with a budget of € 40 million.
http://www.wissenschaftsrat.de
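The HLRN configuration is unusual in that the two halves of one production system stand 300 km apart, so the physics of the coupling link matters. The Python sketch below estimates the one-way propagation delay and the bandwidth-delay product of the 2.4 Gbps link; the speed of light in fibre (about 200,000 km/s) is a standard approximation, not a figure from this report.

    # Rough characteristics of the Berlin-Hannover coupling of the HLRN system.
    distance_km = 300          # separation of the two halves (from the report)
    link_gbps = 2.4            # coupling link speed (from the report)
    light_in_fibre = 200_000.0 # km/s, standard approximation (~2/3 c)

    one_way_ms = distance_km / light_in_fibre * 1000
    rtt_ms = 2 * one_way_ms
    # Bandwidth-delay product: data "in flight" needed to keep the link full.
    bdp_megabytes = link_gbps * 1e9 * (rtt_ms / 1000) / 8 / 1e6

    print(f"one-way delay ~{one_way_ms:.1f} ms, RTT ~{rtt_ms:.1f} ms")
    print(f"bandwidth-delay product ~{bdp_megabytes:.1f} MB")
    # ~1.5 ms one way and ~0.9 MB in flight: latency across the link is
    # orders of magnitude above that of a machine-room interconnect.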

Supercomputing facilities for the academia
AWI: Cray T3E/134
BAW: SGI Origin 3000/256
DESY (Zeuthen): 8 Quadrics APEmille/128
DKFZ: HP SuperDome/96, IBM pSeries 690/64, IBM SP3/48
DKRZ: NEC SX-6/192
DLR (Brunswick): NEC SX-5E/16
DLR (Köln): IBM SP2/68
DLR (Wessling): IBM SP2/68
DWD: IBM SP Power3/1280, Cray T3E-1200/812
FhG/EMI: Beowulf cluster (256 Xeon/2000)
FhG/ITWM: Beowulf cluster (128 Xeon/2400)
FHI: HP N4000/80
FzK: IBM SP Power3/64, Beowulf cluster (144 Xeon/2660 + 130 Xeon/2200 + 190 PIII/1260 + 32 PIII/1000)
GFZ: HP V2500/16
GWDG: IBM pSeries 690/96, IBM SP Power3/144
LRZ: Hitachi SR8000-F1/168, Fujitsu VPP700/52, IBM pSeries 690/8, Beowulf cluster (68 Itanium2/1300 + 117 P4/(1500-3060) + 68 PIII/(500-866))
MPG/MPI (Berlin): IBM SP Power3/48, HP N4000/80
MPG/IPP: Cray T3E/812, IBM pSeries 690/768
NIC: IBM pSeries 690/192, Cray T3E-1200/540
PIK: IBM pSeries 655/240, IBM SP Power3/200
University of Aachen: cluster of 4×Sun Fire 15K/72 + 16×Sun Fire 6800/24
University of Bochum: Fujitsu hpcLine cluster (128 Athlon/1800+)
University of Bonn: Beowulf cluster (144 PII/400)
University of Braunschweig: Compaq AlphaServer/36, Cray T3E-900/28
University of Chemnitz: Beowulf cluster (528 PIII/800)
University of Darmstadt: IBM pSeries 690/96
University of Dresden: SGI Origin 3800/128, SGI Origin 3800/64
University of Duisburg: HP V2250/16, HP V2250/12
University of Düsseldorf: SGI Origin 2000/38
University of Erlangen: Beowulf cluster (150 Xeon/2600)
University of Essen: IBM SP2/35
University of Halle-Wittenberg: Sun Fire 6800/24
University of Hamburg-Harburg: HP SuperDome/64
University of Hannover: IBM pSeries 690/384, Sun 10000/56
University of Heidelberg: Beowulf cluster (512 Athlon/1400), Beowulf cluster (12 PIII/933, 52 PIII/800, 12 PIII/733)
University of Jena: HP SuperDome/64
University of Karlsruhe: IBM SP Power3/256, Fujitsu VPP5000/8
University of Kiel: NEC SX-5/16
University of Köln: Sun Fire 15K/72, Sun Fire 6800 cluster/36
University of Leipzig: HP V2250/32
University of Magdeburg: HP SuperDome/64, Beowulf cluster (148 PIII/800 + 12 Athlon/1200)
University of Mainz: HP V2500/32
University of Marburg: IBM SP Power3/16
University of Münster: Beowulf cluster (16 Athlon/1327 + 50 PIII/800)
University of Paderborn: Fujitsu hpcLine cluster (192 PIII/850)
University of Rostock: Sun Fire 6800+3800/32
University of Stuttgart (HLRS/hww): Cray T3E-900/540, NEC SX-5e/32, NEC SX-4/40, Hitachi SR8000/16, HP cluster (16 IA-64/900), Cray SV1/20, IBM pSeries 690/16, Fujitsu hpcLine cluster (130 Xeon/2000 + 64 Xeon/2400 + 2 IA-64/800), Beowulf cluster (20 PIII/1000 + 48 Xeon/2400)
University of Stuttgart (ITB): Beowulf cluster (256 AMD/1533)
University of Tübingen: Beowulf cluster (196 PIII/650)
University of Weimar: SGI Altix 3700/40
University of Wuppertal: Compaq AlphaCluster/128
ZIB: IBM pSeries 690/384, Cray T3E-900/544

The figure shows the total performance of German HPC systems and the peak performance of the #1 German system during the last 5 years.
[Figure: Germany – total and #1 peak performance, 1999-2003, 0.1 to 100 Tflop/s, logarithmic scale]

National academic network
Data communication in the German research sector is promoted by the DFN Association – Verein zur Förderung eines Deutschen Forschungsnetzes. The DFN Association has more than 750 members – academic and private research organisations and industry.
http://www.dfn.de
G-WIN (Gigabit-Wissenschaftsnetz) is the German high-speed research network. Its backbone has 10 level-1 entry nodes and 19 regional nodes. The backbone connections are 2.5 Gbps or 10 Gbps. Organisations can access G-WIN at up to 2.5 Gbps. International connectivity includes 2×622 Mbps links to the USA and 2.5 Gbps to GÉANT.
The 2003 budget is € 40 million. The running costs of the network are financed by the users. BMBF finances the further development of G-WIN with € 44 million over a five-year period.

Acquisition and upgrade plans
DWD: upgrade to IBM SP3/1980 in 4Q03.
HLRS: in Stuttgart a 15 Tflop/s system should be operational in 2004, and in Karlsruhe a 3 Tflop/s system.
LRZ: installation of a 40 Tflop/s system in 2005.
NIC: upgrade to IBM pSeries 690/1152 at the beginning of 2004.
University of Aachen: upgrade of the SunFire cluster in the second half of 2003 to a 4 Tflop/s system.
University of Wuppertal: proposal submitted for a 3-4 Tflop/s cluster with 1024 processors.

National GRIDs
Germany was one of the first European countries to engage in major national grid initiatives. In 1997 the leading German supercomputer centres started the UNICORE project to provide easy-to-use secure remote access to high-end machines. The UNICORE software development was funded by BMBF. The follow-up project UNICORE Plus ended in 2002. The UNICORE software is now used by the large German computer centres and in several European projects like EUROGRID.
More than 40 groups of German nuclear and particle physicists have chosen FzK as the site for the regional grid computing centre GridKa. GridKa provides a powerful grid infrastructure.

List of abbreviations
AWI Alfred-Wegener Institut, Bremerhaven (Institute for Polar and Marine Research)
BAW Bundesanstalt für Wasserbau, Hamburg
BMBF Federal Ministry for Education, Science, Research and Technology
DESY Deutsches Elektronen-Synchrotron
DFG Deutsche Forschungsgemeinschaft
DFN Deutsches Forschungsnetz
DKFZ Deutsches Krebsforschungszentrum, Heidelberg
DKRZ Deutsches Klimarechenzentrum, Hamburg
DLR Deutsche Forschungsanstalt für Luft- und Raumfahrt
DWD Deutscher Wetterdienst, Offenbach
EMI Ernst-Mach-Institut (FhG), Freiburg
FhG Fraunhofer Gesellschaft
FHI Fritz Haber Institute, Berlin
FZJ Forschungszentrum Jülich
FzK Forschungszentrum Karlsruhe
GFZ GeoForschungsZentrum, Potsdam
GWDG Gesellschaft für wissenschaftliche Datenverarbeitung, Göttingen
G-WIN Gigabit-Wissenschaftsnetz
HLRN Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen
HLRS Höchstleistungsrechenzentrum Stuttgart
HWW Höchstleistungsrechner für Wissenschaft und Wirtschaft, Stuttgart
IPP Institut für Plasmaphysik, Garching
ITB Institute for Technical Biochemistry
ITWM Institut für Techno- und Wirtschaftsmathematik (FhG), Kaiserslautern
KfR Kommission für Rechenanlagen
LRZ Leibniz Rechenzentrum der Bayerischen Akademie der Wissenschaften, Munich
MPG Max Planck Gesellschaft
NIC John von Neumann Institute for Computing, Jülich
PIK Potsdam Institute for Climate Impact Research
RRZN Regionales Rechenzentrum für Niedersachsen, Hannover
ZIB Konrad-Zuse-Zentrum für Informationstechnik, Berlin

Contacts and Addresses
Prof. Dr. H.W. Meuer
Direktor
Rechenzentrum der Universität Mannheim
L15, 16 Postfach 103462
D-68131 Mannheim
email: [email protected]

3.17 Greece

Statistics
Population: 10.7 million
GDP/capita: € 17,100

Supercomputing facilities for the academia
University of Athens: HP V2600/48
Academic supercomputing in Europe 17 mary and secondary educational institutions. GRNET is List of abbreviations being upgrading to a network based on DWDM tech- BUTE Budapest University of Technology and nology, GRNET2. Economics GRNET is funded by the Ministry of Development. The ELTE Eötvös Loránd University, Budapest budget for 2003 is € 8.7 million. NIIFI Office for National Information and Infra- http://www.grnet.gr structure Development SZTAKI Computer and Automation Research Institu- National GRID tion, Hungarian Academy of Sciences The Local Office of the Information Society – Ministry of Economy and Finance established in December 2002 Contacts and Addresses the HellasGrid task force. Its task is to define the basic Peter Kacsuk strategy and guidelines for the Greek national, regional MTA SZTAKI and international grid activities. PO Box 63 http://www.hellasgrid.gr/ (in Greek) H-1518 Budapest email: [email protected] 3.18 Hungary 3.19 Iceland Statistics Population: 10.0 million Statistics GDP/capita: € 12,000 Population: 281,000 GDP/capita: € 22,500 Policy The Hungarian Academy of Sciences, the Ministry of Supercomputing facilities for the academia Culture and Education, the National Committee of Icelandic Energy Authority and Institute for Meteoro- Technological Development and the Hungarian Scien- logical Research: Beowulf cluster (24 PIII/866 + 24 tific Research have established a National Information PIII/1260) Infrastructure Development (NIIFI) Programme. This National academic network programme finances mainly the Hungarian Academic The Icelandic University Research Network RHnet links and Research Network, HUNGARNET. The 2003 the Icelandic universities and research institutions. Its budget for this network is € 8.7 million. backbone operates at 1 Gbps. RHnet has a 45 Mbps link to NORDUnet. Supercomputing facilities for the academia The 2003 budget of RHnet is € 0.38 million and it is BUTE: Compaq AlphaServer/16 mainly provided by its users. ELTE: Compaq AlphaServer/16 http://www.rhnet.is/english MTA SZTAKI: Beowulf cluster (56 PIII/500) NIIFI: Sun HPC10000/64 3.20 Ireland National academic network HUNGARNET uses the national HBONE network with a backbone of 2.5 Gbps to connect higher education in- Statistics stitutes, R&D organis ations and libraries throughout Population: 3.9 million the whole country at speeds of 34 Mbps up to 1 Gbps. GDP/capita: € 27,400 International connectivity is 2.5 Gbps to GÉANT. Policy http://www.hungarnet.hu The Higher Education Authority (HEA) is the govern- National GRIDs mental agency established for the funding of the Irish In July 2003 a national grid infrastructure called Clus- universities. HEA provides ~65% of the funding of the terGrid became available to the Hungarian research national research network HEAnet which in 2003 has a community. The clusters that compose the grid are in- budget of € 13 million. The users pay ~30%. stalled at 12 institutes and amount a total of 580 nodes. Supercomputing facilities for the academia These systems are used for their regular purposes dur- NUI at Galway: SGI Origin 3800/40 ing daytime and are offered to ClusterGrid during the University Cork: Beowulf cluster (100 PIII/1400) nights and weekends. http://www.clustergrid.iif.hu National academic network HEAnet – the Irish research network – provides net- work services to the Irish universities, other HEA or- ganisations, national research organisations and edu- cational institutions.

Academic supercomputing in Europe 18 The HEAnet backbone operates at 155 Mbps and 2.5 Gbps. International academic connectivity includes 2.5 Italy Gbps to GÉANT, 155 Mbps to JANET and 2´155 Mbps to Abilene. 10 http://web.heanet.ie/

total 1 National Grids peak Tflop/s A GRID-Ireland project started in 1999. Grid resources were provided by Trinity College Dublin, University 0,1 College Cork, NUI Galway and Queens University Bel- 1999 2000 2001 2002 2003 fast. List of abbreviations National academic network NUI National University of Ireland GARR is the Italian network for research and education to which all Italian universities and main R&D centres 3.21 Italy are connected. GARR has a 2.5 Gbps backbone. Inter- national connectivity includes 2.5 Gbps to GÉANT and Statistics 622 Mbps to the USA. Population: 58.0 million The 2003 budget for the GARR network is € 40 million. GDP/capita: € 22,500 MIUR finances ~40%, the users the rest. http://www.garr.it/ Policy The organisations involved in making the HPC policy National GRIDs are the Ministry for Education, University and Re- A € 30 million project that started in 2000 has resulted search (MIUR), the National Research Council (CNR), in the INFN LHC grid, an infrastructure primarily de- the Agency for New Technologies, Energy and the En- vised for LHC calculations. vironment (ENEA) and the National Institute for Nu- MIUR has provided € 3 million for the development of clear Physics (INFN). SPACI (Southern Partnership for Advanced Computa- MIUR finances several supercomputing centres and tional Infrastructures) which aims to create a grid infra- the backbone of the high-performance research net- structure in Southern Italy. The principal nodes are work. MIUR has established a National Strategic Pro- Lecce, Napels and Cosenza. gram (FIRB program) for Information Society. The major Italian research institutions, universities and ENEA has invested in massively parallel computing HPC centres, as well as industrial partners, participate platforms of the Italian Quadrics technology. INFN, in in IG-BIGEST (Italian Grid for eBusiness, eIndustry the frame of its particle physics research program, has eGovernment, eScience & Technology). This initiative started up the APEmille project. aims to co-ordinate grid deployment in Italy, creating The university consortia, which operate supercomputer an Italian Research Area (IRA), and its integration with centres are: CINECA in Bologna, which provides a na- similar ones in Europe and worldwide. tional service, CILEA in Milan, principally in support to List of abbreviations Lombard universities, and CASPUR in Rome, which CASPUR Consortium for Supercomputer Applications serves the universities of Rome and of South Italy. 
in University and Research, Rome Supercomputing facilities for the academia CILEA Consorzio Interuniversitario Lombardo per CASPUR (Rome): IBM SP Power3/128 l'Elaborazione Automatica (Consortium of 7 CILEA: HP SuperDome/64, HP V2500/20, 2´HP N4000/8 universities in Milan, Bergamo, Brescia and CINECA: IBM pSeries 690/512, Compaq Al- Pavia), Milan phaServer/128, IBM SP Power3/128, SGI Origin CINECA Centro di Calculo Interuniversitario dell’Italia 3000/128, IBM cluster (128 PIII/1133) Nord-Orientale (Consortium of universities ENEA (Casaccia): Compaq cluster/80 in Bologna, Firenze, Padova, Venezia, INFN (Rome): Alenia APE/2048 Modena, Parma, Ferrara, Trieste, Ancona, University of Pisa: Beowulf cluster (26 Athlon/1800 + Catania, Siena, Triento and Udine), Bologna 28 Athlon/1200) CNR Consiglio Nazionale delle Ricerche ENEA Ente per le Nuove Tecnologie, l'Energia e The figure below shows the peak performance of the #1 l'Ambiente (Institute for New Technologies, Italian system and the total performance of Italian HPC Energy and Environment), Rome systems during the last 5 years. INFN National Institute for Nuclear Physics, Rome MIUR Ministry for Education, University and Re- search

Academic supercomputing in Europe 19 Contacts and Addresses 3.24 Luxembourg Giovanni Erbacci Via Magnanelli 6/3 Statistics 40031 Casalecchio di Reno (Bologna) Population: 454,000 email: [email protected] GDP/capita: € 39,600

3.22 Latvia National academic network The network RESTENA (Réseau Téléinformatique de l'Education Nationale) connects the educational insti- Statistics tutions and research centres in Luxembourg. The back- Population: 2.3 million bone speed is 1 Gbps. RESTENA has a 155 Mbps link GDP/capita: € 7,500 to GÉANT. Supercomputing facilities for the academia The 2003 budget for RESTENA is € 3 million. It is University of Latvia: Beowulf cluster (10 PIII/1260) jointly provided by the government and six research in- stitutions. National academic network http://www.restena.lu/ Latvia has two research networks. LANET (Latvian Academic Network) is a network for 3.25 Malta education and science which is only available in Riga. The Department of Information Technology of the Uni- versity of Latvia manages LANET. The backbone is 155 Statistics Mbps. Population: 0.4 million LATNET provides data communication services to re- GDP/capita: € 15,300 search and education institutes in Latvia. The star- National academic network shaped backbone operates at 1 Gbps and large sites The national academic network is operated by the typically access at 2 Mbps. LATNET has a 34 Mbps Computing Services Centre of the University of Malta. link to GÉANT. The 2003 budget is € 1.8 million. It is The network consists of the university’s campus net- mainly financed through charging the users. work and some links to outlying centres. The network http://www.lanet.lv/ has a 20 Mbps connection to GÉANT. http://www.latnet.lv 3.26 Moldova 3.23 Lithuania Statistics Statistics Population: 4.4 million Population: 3.6 million GDP/capita: € 2,300 GDP/capita: € 7,600 National academic network Supercomputing facilities for the academia RENAM, the Research and Networking Association of Military Academy of Lithuania: Beowulf cluster (4 Moldova, is a non-profit organisation founded in 1999 AMD/1200 + 10 PIII/800 + 16 PIII/733 + 2 PII/450) to establish and develop a scientific and educational National academic network network in Moldova. RENAM is supported by the Min- LITNET, the national academic and research network istry of Science and Education of Moldova, the Acad- organisation, manages the national network (also called emy of Science of Moldova and the universities and re- LITNET). Its main nodes in Vilnius and Kaunas are ceived for the network infrastructure funding from the connected at 155 Mbps links. The universities, acad- NATO Science Programme. The 2003 budget is € 0.15 emies and research institutes access LITNET at speeds million. of 1, 2 or 4 Mbps. International connectivity includes a RENAM connects 4 universities, 25 scientific insti- 155 Mbps link to GÉANT. tutes, 11 colleges, 6 libraries and 4 public organisations LITNET is funded by the Ministry of Science and Edu- at speeds of 64 kbps up to 2 Mbps. International con- cation. The budget for 2003 is € 2.9 million. nectivity is through a 8 Mbps link to GÉANT (via http://www.litnet.lt/index_en.html RoEduNet, Romania). http://www.renam.md

3.27 The Netherlands

Statistics
Population: 16.1 million
GDP/capita: € 24,300

Policy
NCF – an independent foundation under the umbrella of the Netherlands Organization for Scientific Research (NWO) – is in charge of the national policy on academic supercomputing and co-ordinates and promotes the use of advanced computing facilities.
One of NCF's key policy items is to provide the academic community with access to top-of-the-line supercomputer resources. To achieve this goal NWO has since 1990 fully financed the acquisition and subsequent upgrades and replacements of a national academic supercomputer for scientific research. For the same purpose NCF also co-finances the innovation of the national academic research network to ensure high-speed connectivity of the Dutch universities.
Long-term funding for the installation, running costs and future upgrades or replacements of the national supercomputer facilities is provided by NWO. The basic funding level is € 5.6 million per year, not including additional and ad hoc funding for special facilities.
The larger non-academic research institutes and industry can have access to the national facilities, but this is only used incidentally.
NCF spends € 0.7 million per year on auxiliary policy, as matching funds for the purchase of systems at academic HPC centres.
NCF supports grid development and deployment in the Netherlands through investments in grid infrastructure.
The national policy on networking for the research community has been established by the SURF foundation – the higher education and research partnership organisation for network services and information and communications technology – whose members are the universities, schools for higher professional education, research institutes and national organisations for research and education. Network innovation is funded by the government on a project/programme basis. The users of the network finance its annual running costs. The budget for 2003 is € 33 million.
The GigaPort Next Generation (NG) Network proposal has been submitted to the Dutch government for funding. Part of this proposal is the creation of the new national academic network, SURFnet6. The total project costs of GigaPort NG Network amount to ~€ 118 million. The funding requested from the government amounts to ~€ 49 million. A decision is expected in November 2003.
In September 2002 SURF strengthened its collaboration with its UK counterpart JISC. They will develop an optical network between JISC, SURF and Internet2 to create a London – Amsterdam – Chicago optical testbed that will enable research into leading-edge networks and applications.
http://www.nwo.nl/ncf

Supercomputing facilities for the academia
CWI: Origin 2000/16
KNMI: Sun Fire 15K/44
NIKHEF: Beowulf cluster (4 PIII/933 + 128 AMD/2000), Beowulf cluster (100 PIII/800)
NLR: NEC SX-5/8
SARA: SGI Origin 3800/1024, SGI Altix 3700/416, IBM pSeries 690/128, Beowulf cluster (72 Xeon/3000), IBM cluster (8 IA64/733)
TUD: Beowulf cluster (64 PIII/1000)
UNITE (at SARA): SGI Origin 2000/128
University of Groningen: Cray SV1e/32, Beowulf cluster (128 P4/1700)
University of Leiden: Beowulf cluster (64 PIII/1000), Beowulf cluster/79
University of Utrecht: Beowulf cluster (64 PIII/1000)
UvA: Beowulf cluster (64 PIII/1000)
UvA (at SARA): Beowulf cluster/168
VU: Beowulf cluster (144 PIII/1000)

The SGI Origin 3800 + Altix 3700 at SARA is the national supercomputer facility. Many of the other systems in the list above are also accessible through NCF for use by the academic community.

[Figure: The Netherlands – total and peak performance in Tflop/s, 1999-2003]
The figure shows for the past 5 years the development of the peak performance of the #1 Dutch system and of the aggregate performance of Dutch HPC systems.

National academic network
SURFnet5 – the national academic network – links the networks of more than 250 organisations. The backbone is 20 Gbps. Most universities and research institutes link at 1 Gbps.
The worldwide connectivity of SURFnet5 includes a 2.5 Gbps link to GÉANT and 3×1 Gbps links to the USA.
The NetherLight optical infrastructure has been operational since January 2002. The radio astronomy institute ASTRON/JIVE has a dedicated DWDM connection (32×2.5 Gbps) into NetherLight. The international connectivity consists of a 10 Gbps λ to StarLight (Chicago, USA), a 10 Gbps λ to CERN and a 2.5 Gbps λ to CzechLight in Prague.
The network is operated by SURFnet BV – a private not-for-profit company owned by SURF. SURFnet's activities are restricted to higher education institutes, research institutions including industrial research, scientific libraries and academic hospitals.

Allocation of resources
Scientists from the academia who want to use any of the national facilities may submit a proposal to NCF. Proposals are discussed in the WGS – NCF's Advisory Committee – and are subject to peer review. On approval the applicant receives a grant in terms of resource units.

National Grids
The DutchGrid platform, established in 2000, aims to co-ordinate the deployment of grids in the Netherlands and to offer a forum for the exchange of experiences. Today, 12 research institutions participate in DutchGrid.
DAS-2 is a grid infrastructure consisting of 5 clusters, with a total of 200 processors, distributed over 5 universities (VU, UvA, Leiden, Delft, Utrecht).

Acquisition and upgrade plans
KNMI: expansion of the Sun Fire 15K to 84 processors in 2003.
NLR: NEC TX7/16 ordered.
SARA: upgrade of the IBM pSeries 690 to 204 processors in 2004.

List of abbreviations
CWI Centrum voor Wiskunde en Informatica, Amsterdam
KNMI Koninklijk Nederlands Meteorologisch Instituut, De Bilt (Royal Dutch Meteorological Institute)
NCF Stichting Nationale Computer Faciliteiten, Den Haag (NWO National Computer Facilities Foundation)
NIKHEF National Institute for Nuclear Physics and High Energy Physics, Amsterdam
NLR Nationaal Luchtvaart Laboratorium, Marknesse (National Aerospace Laboratory)
NWO Nederlandse Organisatie voor Wetenschappelijk Onderzoek, Den Haag (The Netherlands Organization for Scientific Research)
SARA Stichting Academisch Rekencentrum Amsterdam
SURF Samenwerkende Universitaire Rekenfaciliteiten, Utrecht
TUD Technische Universiteit Delft
UNITE Co-operation of the technical universities of Eindhoven and Enschede
UvA Universiteit van Amsterdam
VU Vrije Universiteit, Amsterdam
WGS Adviescommissie Wetenschappelijk Gebruik Supercomputers (NCF)

Contacts and Addresses
Dr. P.J.C. Aerts
(director of NCF)
NCF
P.O. Box 93575
2509 AN Den Haag
email: [email protected]

3.28 Norway

Statistics
Population: 4.6 million
GDP/capita: € 28,600

Policy
The policy on supercomputing was set during the period 1995-1998 by the Research Council of Norway (NFR). A Program for Supercomputing developed vector facilities in Trondheim, clusters in Oslo and MPP technology in Bergen.
NTNU leads the follow-up HPC project NOTUR for the period 2000-2003. NOTUR is funded by NFR (~50%) and by the partners in a consortium consisting of the other three universities in Norway, Statoil, SINTEF, DNMI and ViewTech ASA, a small visualisation software company. The project addresses specialised HPC facilities at NTNU, the University of Oslo, the University of Bergen and the University of Tromsø. DNMI uses the computing resources of the project for operational atmospheric and oceanographic forecasting and for research work.
Plans for the structure of Norwegian HPC in the period 2004-2008 will be made by a committee appointed by the Research Council of Norway and chaired by Professor Odd Gropen from the University of Tromsø.

Supercomputing facilities for the academia
NTNU: SGI 3800/512, SGI 3800/384, Cray T3E/96, Beowulf cluster (29 AMD/1466 + 22 AMD/1400)
University of Bergen: IBM pSeries 690/96, IBM cluster (64 PIII/1266)
University of Oslo: HP SuperDome/44
University of Tromsø: HP cluster (14 Itanium2/900 + 8 Itanium2/1000), HP SuperDome/32

The next figure shows, for the past 5 years, the development of the peak performance of the #1 Norwegian system and of the aggregate performance of Norwegian HPC systems.
[Figure: Norway – total and peak performance in Tflop/s, 1999-2003]

National academic network
UNINETT is the national network for research and education in Norway. The primary customers are the universities, regional high schools and research institutions. Its backbone is 2.5 Gbps and partially 155 Mbps. UNINETT's international connectivity is through 2×2.5 Gbps connections to NORDUnet.
UNINETT's 2003 budget is € 12 million. It is provided by the government (50%) and the users (50%).
http://www.uninett.no

Allocation of resources
The Program Committee for High Performance Computing of the Norwegian Research Council awards computing time on the NOTUR systems after a referee process of the applications.

Acquisition and upgrade plans
University of Oslo: upgrade of the HP SuperDome in autumn 2003 with 64 Itanium2/1500 processors.

List of abbreviations
DNMI The Norwegian Institute of Meteorology
NFR The Research Council of Norway
NTNU The Norwegian University of Science and Technology, Trondheim
SINTEF Stiftelsen for Industriell og Teknisk Forskning (Foundation for Industrial and Technical Research)

Contacts and Addresses
Professor Odd Gropen
Institutt for Kjemi
Universitetet i Tromsø
N-9037 Tromsø

Professor Petter Bjørstad
Institutt for Informatikk
Universitetet i Bergen
N-5020 Bergen
email: [email protected]

3.29 Poland

Statistics
Population: 38.6 million
GDP/capita: € 8,600

Policy
The State Committee for Scientific Research (KBN) determines the national research policy at the governmental level. KBN allocates funding from the Ministry of Finance to the different research organisations through the ministries under which these organisations fall.
Since its establishment in 1991 KBN has invested in the development of the Polish infrastructure for information technology. Part of these investments was used to install high-performance computers at academic and research institutes and to develop a nation-wide academic and research network.
KBN is also in charge of the policy regarding the national research network.

Supercomputing facilities for the academia
The main supercomputing facilities are located in supercomputer centres in Warsaw, Cracow, Poznan, Gdansk and Wroclaw.
CYFRONET, Cracow: SGI Origin 2800/128, Beowulf cluster (8 PIII/1000 + 46 Xeon/2400), Sun Fire 6800/24
IMGW: SGI Origin 3000/100
University of Gdansk: SGI Origin 2000/16, Beowulf cluster (128 PIII/700)
University of Poznan: SGI Origin 3800/64, Cray SV1/8, SGI Altix 3300/4, Cray T3E-900/8
University of Warsaw: Cray T3E/36, Cray SV1/16
University of Wroclaw: SGI Origin 2000/32

National academic network
The Polish scientific network today is called POL-34/622. It connects Metropolitan Area Networks in 12 cities through a 622 Mbps backbone and 8 other cities at 2, 34 or 155 Mbps. International connectivity is through a 2.5 Gbps link to GÉANT.
The network is operated by the PSNC. Its 2003 budget of € 3.6 million is co-financed by the government and its users on an 87%-13% basis.
http://www.man.poznan.pl/resources/index.html

National GRIDs
The PROGRESS (Polish Research on Grid Environment for Sun Servers) project is a collaboration of several Polish scientific institutions and Sun with the goal to create a grid environment based on a cluster of Sun computers. The participants are PSNC, Cyfronet and the Technical University of Lódz. Funding is provided by KBN and Sun. The project is planned to finish at the end of 2003.
The project SGI Grid for Virtual Laboratory Applications started at the end of 2002 and will last until November 2005. It will use a hardware platform based on a cluster of SGI systems. Funding is provided by KBN and SGI. The academic participants in the project are: IMGW, PSNC, Cyfronet, the Wroclaw Centre for Networking and Supercomputing, CI TASK (Gdansk) and the Technical University of Lódz.

List of abbreviations
IMGW Institute of Meteorology and Water Management, Warsaw
PSNC Poznan Supercomputing and Networking Center

Contacts and Addresses
Prof. Dr. Stefan Wegrzyn
(Chairman of the Committee on Computer Sciences of the Polish Academy of Sciences)
Institute of Theoretical and Applied Computer Science
ul. Baltycka 5
44-100 Gliwice

Dr. Maciej Stroinski
Technical Director PSNC
Ul. Noskowskiego 10
61-704 Poznan

3.30 Portugal

Statistics
Population: 10.1 million
GDP/capita: € 16,200

Policy
The Foundation for the Development of National Computation Capabilities (FCCN) aims to establish and operate national supercomputing facilities for the academic community. At this moment, however, there is no national facility operational.

Supercomputing facilities for the academia
LIP: Beowulf cluster (27 PIII/(500-866) + 1 P4/1400)

National academic network
RCTS2 is the Portuguese research network for the universities and research laboratories. The network has been established and is managed by FCCN.
The backbone operates at 34 Mbps. The international connection to GÉANT is 622 Mbps.
The 2003 budget is € 6 million; 60% of this budget is provided by the government.
http://www.fccn.pt/rcts2/

List of abbreviations
FCCN Fundação para o Desenvolvimento dos Meios Nacionais de Cálculo Científico (Foundation for the Development of National Computation Capabilities)
LIP Laboratory for research in the field of experimental High Energy Physics and associated Instrumentation

Contacts and Addresses
Pedro Veiga
FCCN
Avenida do Brasil 101
1700-066 LISBOA
email: [email protected]

3.31 Romania

Statistics
Population: 22.2 million
GDP/capita: € 6,700

Policy
A subprogramme of the National R&D Plan addresses the transition to an Information Society. This strategic objective of the Romanian government for the period 2001-2007 is one of the conditions for EU accession. One goal of this subprogramme is the development of academic networks.

National academic network
There are two academic networks in Romania.
The RNC (Romanian National Computer Network) connects 6 universities and over 100 other education institutes. The backbone capacity is 100 Mbps. RNC is operated by the National R&D Institute for Informatics. Funding comes from the government (50%) and from its users.
The RoEduNet network is co-ordinated by the Ministry of Education and Research. It provides connectivity to universities, high-level education institutes and research institutes, but also to high schools and elementary schools and to not-for-profit and governmental institutions (e.g. ministries, city councils, hospitals, etc.). Its backbone uses 34 Mbps and 8 Mbps leased lines to connect 7 network operating centres situated in the cities with large universities. A total of 40 PoPs are linked to these 7 NOCs with 2 Mbps links. The link to GÉANT is 155 Mbps.
The 2003 budget is € 2.6 million and is provided by the government.
http://www.rnc.ro/
http://www.roedunet.ro/

National GRID
The RoGRID initiative aims to establish a national-scale Romanian computational grid. RoGRID is co-ordinated by the National Institute for Research and Development in Informatics.

Contacts and Addresses
Prof. Sergiu Stelian Iliescu
(President National Agency for Communications and Informatics)
Bd. Libertatii 14
Bucharest

3.32 Russian Federation

Statistics
Population: 144.5 million
GDP/capita: € 8,400

Policy
In 1996 the Russian Academy of Sciences (RAS), the Ministry of Science and Technologies of the Russian Federation, the Ministry of Education of the Russian Federation and the Russian Foundation for Basic Research (RFBR) established the Joint Supercomputer Center (JSCC) in Moscow with the goal to provide HPC services to Russian academic researchers.
http://www.jscc.ru
The Russian Institute for Public Networks (RIPN) was founded in 1992 to develop computer communications in the research and education area. RIPN is responsible for the national network RBNet.
http://www.ripn.net

Supercomputing facilities for the academia
JINR: Beowulf cluster (40 PIII/1000 + 16 PIII/500)
JSCC: HP SuperDome/64, cluster (768 EV67/667), cluster (200 EV6/500), Beowulf cluster (16 Xeon/2400), Beowulf cluster (2 Athlon/1900 + 14 Athlon/1800), Beowulf cluster (32 PIII/550)
MSU: IBM pSeries 690/16
SSCC: cluster (24 EV67/833)

National academic network
Russia has two academic networks.
RBNet – the Russian Backbone Network – provides Internet access to the universities and the research community in Russia. RBNet links access nodes in St. Petersburg, Moscow, Rostov-na-Donu, Samara, Ekaterinburg, Novosibirsk, Irkutsk and Khabarovsk. Each of these nodes is itself the centre of a star-shaped regional network. The typical link speed is 2 Mbps. RBNet has since July 2003 a 155 Mbps link to GÉANT.
The 2003 budget is € 3.5 million. It is provided by the government (80%) and the users.
http://www.ripn.net/rbnet/en
FREEnet is a network for research, education and engineering in use by institutes of the Russian Academy of Sciences and some universities. FREEnet has a 100 Mbps link to RBNet.
http://www.free.net

National GRID
The Russian Data Intensive Grid is a collaboration of 7 research institutes in Moscow, Dubna, Protvino and Sankt Petersburg: SINP MSU, JINR, ITEP, IHEP, RRC "KI", KIAM RAS and PNPI.

List of abbreviations
JINR Joint Institute for Nuclear Research, Dubna
JSCC Joint Supercomputer Center, Moscow
MSU Moscow State University
SSCC Siberian Supercomputer Center, Novosibirsk

3.33 Slovakia

Statistics
Population: 5.4 million
GDP/capita: € 11,000

Supercomputing facilities for the academia
Slovak Academy of Sciences: Beowulf cluster (16 PIII/500)

National academic network
SANET is the national research and academic network. All universities and research communities are connected to SANET, as well as libraries, hospitals and other educational institutions. The SANET backbone is 1 Gbps. International connectivity is provided by a 2.5 Gbps link to GÉANT.
The Ministry of Education supports the SANET organisation and provides 80% of its budget, which in 2003 is € 2.9 million.
http://www.sanet.sk

3.34 Slovenia

Statistics
Population: 1.9 million
GDP/capita: € 16,200

National academic network
ARNES is the academic and research network of Slovenia. It was established in 1992 by the Ministry for Science and Technology (which mainly funds the costs of ARNES, ~€ 4.4 million in 2003) and the Ministry for Education and Sport.
The backbone of ARNES operates at 155 Mbps and connects (typically at 100 Mbps) the Slovenian universities, higher education institutes and government research organisations. Libraries and many secondary and primary schools are connected at 2 Mbps or less. International connectivity consists of a 310 Mbps link to GÉANT.
http://www.arnes.si

3.35 Spain

Statistics
Population: 40.2 million
GDP/capita: € 18,600

Policy
There is no specific national HPC policy. Some HPC and networking activities are embedded in the programmes of the National Plan for Research and Development, which is managed by the Scientific Research Council (CSIC).

Supercomputing facilities for the academia
CEPBA: IBM SP Power3/128, SGI Origin2000/64, Compaq AlphaServer/16, Compaq AlphaServer/12
CESCA: Compaq AlphaServer/32, HP GS1280/16, IBM SP2/44, HP V2500/16, HP N4000/8
CESGA: HP HPC320/32, Beowulf cluster (34 PIII/(550-1000))
CIEMAT: SGI Origin 3800/160, SGI Altix 3700/64
CTTC: Beowulf cluster (48 Athlon/800)
IFAC: Beowulf cluster (72 PIII/1260)
INEM: Sun HPC 10000/52
ITER: Beowulf cluster (288 Athlon/1650)
University of Valencia: SGI Origin2000/64
University of Sevilla: Beowulf cluster (31 P4/1700)

Acquisition and upgrade plans
CESGA: HP SuperDome/128 ordered.
UPV: SGI Altix 3000/48 ordered.

National academic network
RedIRIS is the National Research and Academic Network (organisation). It is financed through the national R&D plan, has a 2003 budget of € 16.7 million and provides services to 260 universities and research centres in Spain. RedIRIS is managed by CSIC.
The 2.5 Gbps backbone of RedIRIS2 connects, in a mesh topology, 18 nodes, one in each autonomous region of Spain. RedIRIS has a 2.5 Gbps link to GÉANT.
http://www.rediris.es/

National GRIDs
CESCA and CESGA have created a grid infrastructure by linking their HP HPC320 systems.
The PDGE project (Proyecto Data Grid España) is a co-operation of five Spanish HEP institutes (IFAE, IFCA, IFIC, CIEMAT and UAM) with the aim to establish a grid testbed as part of the EU-DataGrid project.
UPV is creating an interdepartmental grid that will integrate an SGI Altix 3000/48, a Beowulf cluster/128 and about 3000 computers at several UPV departments.

List of abbreviations
CEPBA Centro Europeo de Paralelismo de Barcelona
CESCA Centre de Supercomputació de Catalunya, Barcelona
CESGA Centro de Supercomputación de Galicia, Santiago de Compostela
CIEMAT Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, Madrid
CSIC Consejo Superior de Investigaciones Científicas (National Council for Scientific Research)
CTTC Centre Tecnològic de Transferència de Calor, Terrassa
IFAC Instituto de Física de Cantabria, University of Cantabria, Santander
ITER Instituto Tecnológico y de Energías Renovables, Santa Cruz de Tenerife
UPV Universidad Politécnica de Valencia

Contacts and Addresses
Dr. Antoni Oliva
President CESCA-CEPBA
Gran Capita, 2-4 (Edifici Nexus)
E-08034 Barcelona

3.36 Sweden

Statistics
Population: 8.9 million
GDP/capita: € 23,900

Policy
The Swedish Research Council (VR) funds, through the Swedish National Infrastructure for Computing (SNIC), six national HPC centres: HPC2N, LUNARC, NSC, PDC, UNICC and UPPMAX.

Supercomputing facilities for the academia
HPC2N: Beowulf cluster (240 Athlon/1667), IBM SP P2SC/68
KTH (including PDC): Beowulf cluster (180 Itanium2/900), IBM SP P2SC/146, IBM SP Power3/84, IBM SP Power3/40, Beowulf cluster (80 Athlon/1400 + 24 PIII/866 + 8 Athlon/900)
LUNARC: Beowulf cluster (128 P4/2530 + 65 Athlon/1600), SGI Origin 2000/116
NSC: SGI Origin 3800/128, Beowulf cluster (400 Xeon/2200), Beowulf cluster (33 Athlon/900), Beowulf cluster (16 Athlon/850), Beowulf cluster (16 Athlon/800), Beowulf cluster (25 Athlon/700)
UNICC: SGI Origin 2000/110
UPPMAX: Sun Fire 15K/48, Beowulf cluster (17 Athlon/1000)

[Figure: Sweden – total and peak performance in Tflop/s, 1999-2003]
The figure shows for the past 5 years the peak performance of the #1 Swedish system and the total performance of all Swedish HPC systems.

National academic network
SUNET – the Swedish University Computer Network – interconnects local and regional networks at Swedish universities. The backbone capacity is 10 Gbps. Major universities are typically connected at 100 Mbps or 1 Gbps. SUNET uses NORDUnet for international connectivity. The connection to NORDUnet is 2×2.4 Gbps.
SUNET is co-ordinated by a board with representatives mainly from universities. The Swedish universities finance SUNET. The budget in 2003 is € 17 million. SUNET is operated by KTH.
http://www.sunet.se

National GRIDs
At the end of 2002 the Knut and Alice Wallenberg foundation (KAW) awarded € 2.5 million to establish SweGrid – a Swedish computational and storage grid with several hundreds of processors and a few hundred TB of storage. The computational and storage resources of SweGrid are located at the six national HPC centres.
VR has allocated, through its IT Research Committee, additional funds to the SweGrid project for research on grid-related issues.
SNIC's Strategic Technical Advisory Committee (STAC) serves as steering committee for SweGrid.
SweGrid will be deployed during the summer of 2003 and will accept early users in the fall of 2003.

Allocation of resources
The Swedish National Allocations Committee (SNAC) regulates the use of national computational, grid, visualisation and data warehousing facilities through the evaluation of applications for such resources. It also stimulates the innovative use of these resources.
http://www.snac.vr.se

List of abbreviations
HPC2N High Performance Computer Centre North, Umeå
KTH Kungliga Tekniska Högskolan, Stockholm
LUNARC Centre for scientific and technical computing at Lund University
NSC National Supercomputer Centre, Linköping
PDC Parallel Computer Centre at KTH
SNAC Swedish National Allocations Committee
SNIC Swedish National Infrastructure for Computing
UNICC Unix Numeric Intensive Calculations at Chalmers, Chalmers University, Göteborg
UPPMAX Uppsala Multidisciplinary Center for Advanced Computational Science
VR Swedish Research Council

Contacts and Addresses
Prof. Börje Johansson
Uppsala Universitet
Fysiska institutionen
Ångströmlaboratoriet, Lägerhyddsvägen 1
S-751 21 UPPSALA

3.37 Switzerland

Statistics
Population: 7.3 million
GDP/capita: € 27,500

Policy
Three sites have large HPC systems for academic users: CSCS in Manno, EPFL in Lausanne and ETHZ in Zurich. CSCS was established in 1991 by the Swiss government as an independent national supercomputer centre but became part of the ETHZ organisation in 1996. CSCS's facilities, however, still serve the national R&D community, while the other two sites are mainly used for the local needs of the two federal institutes of technology.
The Swiss federal government provides the main funding for the three sites.

Supercomputing facilities for the academia
CSCS/SCSC: IBM Power4/256, NEC SX-5/10, HP N4000/24 cluster
EPFL: SGI Origin3000/128, HP/Compaq AlphaCluster SC45/100
ETHZ: HP SuperDome/48, Cray SV1-B cluster (16+8), Beowulf cluster (288 Athlon/1800), Beowulf cluster (384 PIII/500 + 96 PIII/650), Beowulf cluster (160 PIII/1000)
University of Geneva: IBM SP2/18

[Figure: Switzerland – total and peak performance in Tflop/s, 1999-2003]
The figure shows for the past 5 years the peak performance of the #1 Swiss system and the total performance of all Swiss HPC systems.

National academic network
SWITCH is the Swiss national network for research and higher education. Sites connected to SWITCH include the two federal institutes of technology, CSCS, the cantonal universities, the cantonal technical schools and industry research laboratories. Its backbone operates at 155 Mbps, but between the cities of Geneva, Lausanne, Bern, Basel and Zürich there is a capacity of 10 Gbps. The connection to GÉANT is 2.5 Gbps.
SWITCH is mainly funded by usage-proportional contributions from member universities (45%) and by selling services to organisations and industry (47%). The budget for 2003 is € 12.4 million. SWITCH is operated by the SWITCH foundation, created by the Swiss federal government and the 8 cantons accommodating a university.
http://www.switch.ch

Allocation of resources
Researchers from Swiss cantonal universities and ETH institutions who want to use CSCS's computing resources can submit an application twice a year. Large projects are reviewed scientifically by the CSCS Research Committee and evaluated technically by CSCS. Small projects are only technically evaluated. The allocated computer resources are free of charge for the researchers.

List of abbreviations
CSCS Centro Svizzero di Calcolo Scientifico / Swiss Center for Scientific Computing
EPFL Ecole Polytechnique Fédérale de Lausanne
ETHZ Eidgenössische Technische Hochschule Zürich

Contacts and Addresses
Dr. D. Maric
CSCS
Via Cantonale
CH-6928 Manno (TI)
email: [email protected]

3.38 Turkey

Statistics
Population: 68.1 million
GDP/capita: € 6,300

Supercomputing facilities for the academia
Bogaziçi University (Istanbul): Beowulf cluster (29 PIII/450), Beowulf cluster (17 Celeron/500)

National academic network
The Scientific and Technical Research Council of Turkey established the Turkish National Academic Network & Information Center (ULAKBIM) in 1996. One of the tasks of this centre is to manage the academic network UlakNet. The Turkish government provides the € 14 million funding for the network in 2003.
The 155 Mbps ATM backbone of UlakNet has Points of Presence in Ankara, Istanbul and Izmir. More than 140 Turkish universities, research organisations and governmental organisations are directly connected to the backbone. Larger organisations typically have a link in the range of 2 Mbps to 155 Mbps. The connection to GÉANT is 155 Mbps.
http://www.ulakbim.gov.tr/english/

3.39 United Kingdom

Statistics
Population: 60.1 million
GDP/capita: € 22,800

Policy
The Engineering and Physical Sciences Research Council (EPSRC) is the managing agent for the UK research councils' HPC programme on behalf of the Office of Science and Technology (OST). The High Performance Computing Strategy Committee (HSC) advises the Research Councils and OST on HPC strategy.
There are two national HPC services, which EPSRC has procured on behalf of all the Research Councils in its role as Managing Agent.
- The HPCx service, which became operational in December 2002, is provided by a consortium of the University of Edinburgh, the Daresbury Laboratory of CCLRC, where the system is located, and IBM. The initial system is an IBM pSeries 690/1280.
- The CSAR (Computer Services for Academic Research) service started in November 1998 and is currently planned to finish in June 2006. It is provided by the Computation for Science (CfS) consortium, comprising the Computer Sciences Corporation, the University of Manchester, where the system is located, and SGI. The main systems are a Cray T3E-1200/816, an SGI Origin 3800/512 and an SGI Altix 3700/256.
Besides the central HPC services provided for all the research councils, individual research councils, universities and research institutes run HPC services for their own research communities.
In November 1998 £26 million was invested for a 6-year period by the UK Research Councils in the Computing Services for Academic Research (CSAR).
In 2002 £53 million was invested for a 6-year period in the HPCx service for the whole UK research community. EPSRC will fund £48 million (£9 million of which comes from the UK e-Science programme), NERC £5 million, and BBSRC will contribute on a "pay as you go" basis.
On behalf of all the Research Councils, EPSRC is intending to procure the next-generation HPC academic service for 6 years, starting at the end of 2005. The exact nature of this service depends upon the results of a formal capture of requirements, which will take place from November 2003 to March 2004. The initial service capability sought would be a peak performance of 50 to 100 Tflop/s (at least a factor of 8 greater than the initial phase of the HPCx service), doubling to 100 to 200 Tflop/s after 2 years, and doubling again to 200 to 400 Tflop/s 2 years after that.
In 2000 the government allocated £120 million to an 'e-Science' programme. Part of this programme is an e-Science core programme with a budget of £35 million that includes grid development. In 2003 OST allocated new funding for the e-Science programme for the period until 2006, making the total OST investment in e-Science £213 million.
The e-Science programme is overseen by a Steering Committee chaired by Professor David Wallace. Professor Tony Hey is the Director of the e-Science core programme.
Information on the national HPC programme can be found at: http://www.epsrc.ac.uk/hpc/
The UK networking policy is determined by the Joint Information Systems Committee (JISC) – a strategic advisory committee working on behalf of the funding bodies for further and higher education in the UK. The UK Research Councils are represented in JISC. JISC promotes innovative use of information technology and funds, among other things, the network infrastructure. The 2003 budget is € 42 million.
In September 2002 JISC intensified its collaboration with its Dutch counterpart SURF to create a London – Amsterdam – Chicago optical network between JISC, SURF and Internet2 to enable research into leading-edge networks and applications. In May 2003 HEFCE (Higher Education Funding Council for England) allocated £6.5 million to this UKLight initiative. UKLight will participate in the creation of an international experimental testbed for optical networking that will include StarLight (USA), NetherLight (the Netherlands), CERN and NorthernLight.
http://www.jisc.ac.uk

Supercomputing facilities for the academia
The two national HPC services for the Research Councils are provided with the following systems:
CSAR: Cray T3E-1200/816, SGI Altix 3700/256, SGI Origin 3800/512, SGI Origin 2000/128, SGI cluster (32 IA64/677)
HPCx: IBM pSeries 690/1280

The other academic systems available are:
AWE: IBM SP Power3/1920
DL: Compaq AlphaServer/64
ECMWF: IBM pSeries 690/1920
HPCF: Sun Fire 15K/972, IBM SP Power3/168
Oxford University: Sun Fire 6800 cluster/84, Beowulf cluster (128 PIII/1260)
RAL: HP cluster (30), Beowulf cluster (160 Xeon/2670), Beowulf cluster (512 PIII/1400), Beowulf cluster (32 Athlon/1200)
Sanger Centre (Cambridge): Compaq AlphaCluster/400
UKMET: NEC SX-6/120, Cray T3E-900/876, Cray T3E-1200/636
University for Industry: Sun HPC10000/64
University of Belfast: HP cluster (50 Itanium2/1000)
University of Bristol: Beowulf cluster (160 PIII/1000)
University of Cambridge (COSMOS): SGI Altix 3700/128
University of Cardiff: Beowulf cluster (36 Xeon/2400, 108 Xeon/2200)
University of Edinburgh: IBM pSeries 690/16
University of Exeter: IBM SP Power3/68
University of Leicester: SGI Origin 3000/128
University of Liverpool: cluster (260 PIII/733)
University of Manchester: IBM SP Power3/144, Beowulf cluster (182 PIII/1170)
University of Southampton: Beowulf cluster/548 (290 PIII/1000, 32 P4/1500, 102 P4/1800, 10 IA64/833), Beowulf cluster/69

Seventeen higher education, science and technology projects are set to benefit from a high-performance system following the allocation of £16 million of joint funding under the Higher Education Funding Council for England's Joint Research Equipment Initiative.

The next figure shows for the last 5 years the peak performance of the #1 UK system and the total performance of UK HPC systems.
[Figure: United Kingdom – total and peak performance in Tflop/s, 1999-2003]

Networking in the UK
JANET, the UK's national network, is controlled via the JISC. The network is operated and developed by UKERNA, a company limited by guarantee. UKERNA is owned by its members, namely individuals in the UK higher education and research community, and is directed by a Board of Directors appointed by the major funding bodies and the membership. UKERNA is responsible for operational and development aspects of the networking programme.
The core of the JANET backbone operates at 10 Gbps. JANET connects over 700 organisations including all UK universities, colleges of higher and further education, research council establishments and other organisations that collaborate with this community.
JANET has a 2.5 Gbps connection to GÉANT and 2×2.5 Gbps links to the USA.
http://www.ja.net

Acquisition and upgrade plans
DL: the HPCx system will be upgraded to 11 Tflop/s in July 2004 and to 22 Tflop/s in November 2006.
ECMWF: IBM pSeries 690 upgrade to 3072 processors (25+ Tflop/s) in 4Q04.
UKMET: upgrade to NEC SX-6/240 in 2004.

National GRIDs
The UK e-Science programme has resulted in a large number of (national) grid projects. Only a few of them, those with an infrastructural component, are given below.
The Beowulf clusters project has realised four commodity-based parallel clusters (two at RAL and two at DL) acting as technology demonstrators and production facilities for e-Science projects.
JISC and CCLRC are funding a National Grid Service (NGS) comprising two data clusters and two compute clusters of significant size. The clusters are located at Manchester, Oxford, CCLRC and the White Rose Grid. The hardware will be installed by December 2003 and the National Grid Service will be available to users by early 2004.
A National e-Science Centre has been established in Edinburgh, managed jointly by Glasgow and Edinburgh Universities. Eight other regional centres have been established. The centres will jointly provide a national resource in computing, virtual reality technology, data storage and other key aspects of distributed research.
UK particle physicists and computing scientists collaborate in the GridPP project with the aim to build a grid infrastructure for HEP.

Allocation of resources
Each research council subscribes to the HPC programme according to its needs and awards resources to users through peer review.

List of abbreviations
AWE UK Atomic Weapons Establishment
BBSRC Biotechnology and Biological Sciences Research Council
CfS Computation for Science (consortium of SGI, CSC and Manchester Computing)
CCLRC Council for the Central Laboratory of the Research Councils
CSAR Computer Services for Academic Research
DL Daresbury Laboratory, CCLRC
DTI Department of Trade and Industry
EPCC Edinburgh Parallel Computing Centre
EPSRC Engineering and Physical Sciences Research Council
HPCF Cambridge-Cranfield High Performance Computing Facility
HSC High Performance Computing Strategy Committee
JISC Joint Information Systems Committee
MC Manchester Computing
NERC Natural Environment Research Council
RAL Rutherford Appleton Laboratory, CCLRC
UKERNA UK Education and Research Networking Association
UKMET UK Meteorological Office, Bracknell and Exeter

Contacts and Addresses
Hugh Pilcher-Clayton
Head of High-End Computing
EPSRC
Polaris House
North Star Avenue
Swindon
SN2 1ET
email: [email protected]

3.40 Yugoslavia (Serbia and Montenegro)

Statistics
Population: 10.7 million
GDP/capita: € 2,100

National academic network
AMREJ (Yugoslav Academic and Research Network) is financed by the Federal Secretariat for Development and Science, the Ministry of Science, Technologies and Development and the Ministry of Education and Sport of Serbia, and the Ministry of Science and Education of Montenegro. AMREJ is managed by a Board of Directors consisting of the directors of the university computing centres of all universities in Yugoslavia and is operated by the Belgrade University Computing Center. The budget in 2003 is € 1.5 million.
The backbone is star-shaped around Belgrade and has links to Novi Sad (1 Gbps), Nis (155 Mbps), the University of Montenegro (2 Mbps), Kragujevac (2 Mbps) and Krusevac (2 Mbps). More than 150 educational and research institutions are connected to AMREJ.
International connectivity is through a 2 Mbps link to GRNET.

4 Comparison of European countries

The figures in this chapter compare, on a number of metrics, the ten European countries that are leading in the field of academic supercomputing.

[Bar charts comparing the ten countries; on each metric the countries are shown in descending order:
Gflop/s (total): .de, .uk, .fr, .dk, .nl, .se, .it, .ch, .fi, .no
Systems (40 Gflop/s +): .de, .uk, .fr, .nl, .se, .it, .dk, .no, .ch, .fi
Gflop/s (peak): .uk, .fr, .de, .dk, .it, .fi, .nl, .se, .ch, .no
Gflop/s per system (average): .uk, .dk, .fi, .fr, .de, .ch, .it, .nl, .se, .no
Kflop/s per inhabitant: .dk, .se, .uk, .fi, .ch, .no, .de, .nl, .fr, .it
Processors: .de, .uk, .fr, .it, .nl, .se, .dk, .ch, .no, .fi
flop/s per euro (GDP): .dk, .se, .uk, .fi, .de, .ch, .no, .nl, .fr, .it
Gflop/s per processor (average): .dk, .fi, .se, .uk, .de, .ch, .fr, .nl, .no, .it
Network budget (millions of euro): .de, .uk, .it, .nl, .fr, .se, .ch, .no, .fi, .dk
Network budget (euro) per inhabitant: .no, .nl, .se, .ch, .fi, .dk, .uk, .it, .de, .fr]
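The normalised metrics in these charts are simple ratios of a country's aggregate performance to the population and GDP statistics quoted in chapter 3. A minimal sketch of the two per-capita metrics follows; the population and GDP/capita values are the Dutch figures from section 3.27, but the aggregate Gflop/s value is a made-up placeholder for illustration only, not a number taken from this report:

```python
# Normalised comparison metrics of the kind plotted above.

population = 16.1e6            # the Netherlands (section 3.27)
gdp_per_capita_eur = 24_300    # idem
total_gflops = 3_000           # hypothetical aggregate Gflop/s, illustration only

# "Kflop/s per inhabitant": convert Gflop/s to kflop/s (factor 10**6),
# then divide by the population.
kflops_per_inhabitant = total_gflops * 1e6 / population

# "flop/s per euro (GDP)": convert Gflop/s to flop/s (factor 10**9),
# then divide by the gross domestic product in euros.
flops_per_euro = total_gflops * 1e9 / (population * gdp_per_capita_eur)

print(f"{kflops_per_inhabitant:.0f} kflop/s per inhabitant")  # ~186
print(f"{flops_per_euro:.1f} flop/s per euro of GDP")         # ~7.7
```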

Appendix 1. Supercomputer systems mentioned in the overview

Vendor              Series or model   Processors (N)   Peak (Mflop/s)
Alenia              APE-100           8-2048           N*50
Compaq              AlphaServer SC    ≤512             N*2000
                    HPC320            ≤32              N*2000
Cray                T3E               32-2048          N*600
                    T3E-750           32-2048          N*750
                    T3E-900           32-2048          N*900
                    T3E-1200          32-2048          N*1200
                    SV1               8-32             N*2000
Fujitsu (Siemens)   VPP5000           7-222            N*9600
                    VPP700            8-256            N*2200
                    VPP700E           8-256            N*2400
Hitachi             SR8000 F1         4-512            N*12000
HP                  HPC320            ≤32              N*2000
                    V2250             16-64            N*960
                    V2500             16-64            N*1760
                    V2600             16-128           N*2270
                    SuperDome         ≤64/node         N*3000
IBM                 SP2               ≤512             N*266
                    SP Power3         8-2048           N*1500
                    pSeries 690       8-2048           N*5200
NEC                 SX-4              16-512           N*2000
                    SX-5              ≤512             N*8000
                    SX-5E             ≤512             N*4000
                    SX-6              1-1024           N*8000
                    TX7               -                N*
Quadrics            APEmille          8-2048           N*533
SGI                 Origin 2000       2-128            N*600
                    Origin 3000       ≤1024            N*1200
                    Altix 3000        4-hundreds       N*4000
SUN                 HPC 10000         16-64            N*800
                    Fire 6800         ≤24              N*2400
                    Fire 12K          ≤52              N*2400
                    Fire 15K          ≤106             N*2400
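The peak ratings above are linear products: a system's theoretical peak is its processor count N times the per-processor figure in the last column. A minimal sketch of that calculation; the per-processor values are copied from the table and the example system is the initial HPCx machine from section 3.39, while the dictionary itself is just an illustrative excerpt:

```python
# Theoretical peak of a system from Appendix 1: processor count N times
# the per-processor peak in Mflop/s (last column of the table above).

PER_PROCESSOR_MFLOPS = {
    "IBM pSeries 690": 5200,   # values copied from the table
    "Cray T3E-1200": 1200,
    "NEC SX-6": 8000,
}

def peak_tflops(model: str, n: int) -> float:
    """Return the theoretical peak in Tflop/s (1 Tflop/s = 10**6 Mflop/s)."""
    return n * PER_PROCESSOR_MFLOPS[model] / 1e6

# Example: the initial HPCx system, an IBM pSeries 690/1280 (section 3.39)
print(peak_tflops("IBM pSeries 690", 1280))  # -> 6.656
```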
