Concept and Implementation of CLUSTERIX: National Cluster of Linux Systems


Roman Wyrzykowski (1), Norbert Meyer (2), and Maciej Stroinski (2)

(1) Czestochowa University of Technology, Institute of Computer & Information Sciences, Dabrowskiego 73, 42-200 Czestochowa, Poland; [email protected]; http://icis.pcz.pl
(2) Poznan Supercomputing and Networking Center, Noskowskiego 10, 61-704 Poznan, Poland; {meyer, stroins}@man.poznan.pl; http://www.man.poznan.pl

Abstract. This paper presents the concept and implementation of the National Cluster of Linux Systems (CLUSTERIX) - a distributed PC-cluster (or metacluster) of a new generation, based on the Polish Optical Network PIONIER. Its implementation makes it possible to deploy a production Grid environment consisting of local PC-clusters with 64- and 32-bit Linux machines, located in independent centers across Poland. The management software, developed as Open Source, allows for dynamic changes in the metacluster configuration. The resulting system will be tested on a set of pilot distributed applications developed as part of the project. The project is carried out by 12 Polish supercomputing centers and metropolitan area networks.

1 Introduction

PC-clusters using Open Source software such as Linux are currently the most common and widely available parallel systems. At the same time, the capabilities of Gigabit/s wide-area networks are increasing rapidly, to the point where it becomes feasible, and indeed attractive, to think of a high-end integrated metacluster environment rather than a set of disjoint local clusters. Such metaclusters [3,17,18] can be viewed as key elements of the modern Grid infrastructure, used by scientists and engineers to solve computationally and data-demanding problems.

In Poland, we have access to all the crucial elements necessary to build a national Linux metacluster. The most important among them is the Polish Optical Network PIONIER [15,16]. It is an intelligent, multi-channel optical network using DWDM technology, with a bandwidth of n x (10, 40, ...) Gb/s, based on the IP protocol. On the transport layer this network provides allocation of dedicated resources for specified applications, Grids, and thematic networks.

2 Project Goals and Status

The main objective of the CLUSTERIX project [1] is to develop mechanisms and tools that allow the deployment of a production Grid environment whose backbone consists of dedicated local Linux clusters with 64-bit machines. The local clusters are located in geographically distant, independent centers connected by the Polish Optical Network PIONIER. It is assumed that, in principle, any Linux cluster may be attached to the backbone dynamically, as a so-called dynamic cluster. The result is a geographically distributed Linux cluster with a dynamically changing configuration, fully operational and integrated with services offered by other projects.

The project started in December 2003 and lasts 32 months. It is divided into two stages: (i) research and development, with an estimated duration of 20 months, and (ii) deployment. The project is carried out by 12 Polish supercomputing centers and metropolitan area networks affiliated with Polish universities, with Czestochowa University of Technology as the project coordinator.
It is important to note the phrase "production Grid": it means the development of a software/hardware infrastructure accessible for real computing, fully operational and integrated with services offered by other projects related to the PIONIER program [16], e.g., the National Computational Cluster based on the LSF batch system, the National Data Warehouse, and the virtual laboratory project. Delivering advanced and specialized services integrated into a single coherent system requires additional mechanisms not available in the existing pilot installations (see, e.g., the CrossGrid testbed [2]). These installations are commonly constrained by the assumption of a static infrastructure, in terms of the number of nodes and services provided as well as the number of users organized into virtual organizations. In CLUSTERIX, by contrast, we provide mechanisms and tools for the automated attachment of dynamic clusters; for example, non-dedicated clusters or labs may be attached to the backbone during the night or at weekends.

In the CLUSTERIX project, much emphasis is laid on the use of the IPv6 protocol [8] and its added functionality: enhanced reliability and QoS. Delivered to the application level, and used at least in the middleware, this functionality would allow for a better quality of services. No production IPv6-based Grid infrastructure exists at present, but given the duration of the project it may be assumed that the IPv6 standard will come into wide use. The developed tools will therefore support both IPv6 and IPv4 (a protocol-agnostic connection sketch is given in Section 3 below).

After the system is built, it will be tested on a set of pilot applications created as part of the project. An important goal of the project is also to support potential CLUSTERIX users in preparing their Grid applications, thus creating a group of people able to use the cluster in an optimal way once the research and deployment work is finished.

3 Pilot Installation

The CLUSTERIX project includes a pilot installation (Fig. 1) consisting of 12 local clusters located in independent centers across Poland. They are interconnected via dedicated 1 Gb/s channels provided by the PIONIER optical network.

Fig. 1. Pilot installation in the CLUSTERIX project (a map of the 12 sites across Poland, annotated with each site's IA-64 node count, RAM, disk capacity, and switch type)

The core of the testbed is equipped with 127 Intel Itanium2 nodes managed by the Linux OS (Debian distribution, kernel 2.6.x). A computational node includes two Itanium2 processors (1.3 GHz, 3 MB cache), 4 GB or 8 GB of RAM, a 73 or 146 GB SCSI HDD, and two network interfaces (Gigabit Ethernet, plus InfiniBand or Myrinet). This dual network interface allows two independent communication channels to be created, one dedicated to message exchange during computations and the other to NFS support.
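Before continuing with the installation's architecture, the dual-protocol requirement from Section 2 can be made concrete. The sketch below shows the standard POSIX idiom a middleware component can use to remain agnostic between IPv4 and IPv6: resolving the peer with getaddrinfo() and trying each returned address in turn. This is a minimal, generic sketch, not code from the CLUSTERIX tools; the host name is a placeholder, and port 2119 is merely the conventional Globus gatekeeper port.

```c
/* Minimal dual-stack client connect: works for IPv4 and IPv6 alike.
 * A sketch of the standard getaddrinfo() idiom, not CLUSTERIX code. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>

int connect_any(const char *host, const char *port)
{
    struct addrinfo hints, *res, *rp;
    int sock = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* accept both AF_INET and AF_INET6 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    /* Try each resolved address until one connects; the resolver
     * typically lists IPv6 addresses first when it prefers them. */
    for (rp = res; rp != NULL; rp = rp->ai_next) {
        sock = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
        if (sock == -1)
            continue;
        if (connect(sock, rp->ai_addr, rp->ai_addrlen) == 0)
            break;                    /* success */
        close(sock);
        sock = -1;
    }
    freeaddrinfo(res);
    return sock;                      /* -1 if every address failed */
}

int main(void)
{
    /* "access-node.example.org" is a placeholder, not a real host. */
    int s = connect_any("access-node.example.org", "2119");
    if (s >= 0) { printf("connected\n"); close(s); }
    return 0;
}
```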
Efficient access to the PIONIER backbone is provided through a Gigabit Ethernet L2/L3 coupling switch (see Fig. 2).

Fig. 2. Architecture of the CLUSTERIX infrastructure

Selected 32-bit machines are dedicated to managing the local clusters and the infrastructure as a whole. While users' tasks may execute only on computational nodes, each local cluster is equipped with an access node on which the Globus Toolkit [5] and the local batch system run. All machines inside a local cluster are protected by a firewall, which also serves as a router for the attachment of dynamic clusters. Access to the resources of the National Linux Cluster is allowed only from machines called entry points; physical users can hold accounts only on these dedicated nodes. It is assumed that end-user applications are submitted to the CLUSTERIX system through WWW portals.

An important element of the pilot installation is the Data Storage System. Before an application executes, its input data are fetched from storage elements and transferred to the access nodes; after execution, output data are returned from the access nodes to the storage elements (a generic stage-in sketch is given at the end of Section 4). The Data Storage System includes a distributed implementation of a data broker. Currently, each storage element is equipped with a 2 TB HDD.

4 Pilot Applications

The National Linux Cluster will be used to run HTC applications as well as large-scale distributed applications that require the parallel use of the resources of one or more local clusters (meta-applications). In the project, selected end-user applications are being developed for the experimental verification of the project assumptions and deliverables, as well as to achieve real application results. It is clear that applications, and their ability to use distributed resources efficiently, will ultimately decide the success of computational Grids.

Because of the hierarchical architecture of the CLUSTERIX infrastructure, adapting an application for efficient execution on the metacluster is not a trivial task. It requires parallelization on several levels corresponding to the metacluster architecture, taking into account heterogeneity in both the computing power of different nodes and the network performance between the various subsystems. Another problem is the variable availability of Grid components. In the CLUSTERIX project, the MPICH-G2 tool [10], based on the Globus Toolkit, is used as the Grid-enabled implementation of the MPI standard (a topology-aware sketch follows the application list below).

The list of pilot applications includes, among others:

– FEM modeling of castings solidification;
– modeling of transonic flows and the design of advanced tip devices;
– prediction of protein structures from a sequence of amino acids, and simulation of protein folding;
– investigation of properties of bio-molecular
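Since MPICH-G2 makes the underlying Grid topology visible to applications (through communicator attributes such as MPICHX_TOPOLOGY_DEPTHS and MPICHX_TOPOLOGY_COLORS), a meta-application can confine expensive wide-area traffic to a few processes. The sketch below illustrates the general two-level pattern with standard MPI calls: reduce within each local cluster first, then let only one leader per cluster communicate across the backbone. It is an illustrative sketch, not one of the pilot applications; for simplicity the per-site color is taken from the command line rather than queried from the MPICH-G2 topology attributes.

```c
/* Two-level reduction over a metacluster: reduce inside each local
 * cluster first, then across clusters. Illustrative sketch only. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, site_color, local_rank;
    double local_val, site_sum = 0.0, global_sum = 0.0;
    MPI_Comm site_comm, leaders_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Under MPICH-G2 the site a process belongs to can be derived from
     * the topology attributes; here we assume a per-site color is
     * supplied at startup (hypothetical convention for this sketch). */
    site_color = (argc > 1) ? atoi(argv[1]) : 0;

    /* One subcommunicator per local cluster: cheap intra-site links. */
    MPI_Comm_split(MPI_COMM_WORLD, site_color, rank, &site_comm);
    MPI_Comm_rank(site_comm, &local_rank);

    local_val = (double)rank;   /* stand-in for a real partial result */

    /* Step 1: reduce inside the local cluster (fast local network). */
    MPI_Reduce(&local_val, &site_sum, 1, MPI_DOUBLE, MPI_SUM, 0, site_comm);

    /* Step 2: only site leaders (local rank 0) talk over the WAN. */
    MPI_Comm_split(MPI_COMM_WORLD, local_rank == 0 ? 0 : MPI_UNDEFINED,
                   rank, &leaders_comm);
    if (leaders_comm != MPI_COMM_NULL) {
        MPI_Reduce(&site_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
                   leaders_comm);
        MPI_Comm_free(&leaders_comm);
    }

    if (rank == 0)   /* world rank 0 is also leader rank 0 */
        printf("global sum = %f\n", global_sum);

    MPI_Comm_free(&site_comm);
    MPI_Finalize();
    return 0;
}
```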
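Data staging (Section 3) relies in turn on Grid data-transfer tools. As a generic illustration only (the interface of the CLUSTERIX data broker itself is not shown), the helper below pulls an input file from a storage element to an access node by invoking globus-url-copy, the standard GridFTP client shipped with the Globus Toolkit. Host names and paths are placeholders.

```c
/* Generic stage-in helper: copy an input file from a storage element
 * to the local access node via GridFTP, by invoking the standard
 * Globus Toolkit utility globus-url-copy. Sketch only; host names
 * and paths are placeholders, not CLUSTERIX resources. */
#include <stdio.h>
#include <stdlib.h>

/* Returns 0 on success, nonzero on failure. */
static int stage_in(const char *src_url, const char *dst_path)
{
    char cmd[1024];
    /* file:// destination URLs require an absolute path. */
    snprintf(cmd, sizeof(cmd),
             "globus-url-copy %s file://%s", src_url, dst_path);
    return system(cmd);
}

int main(void)
{
    if (stage_in("gsiftp://storage.example.org/data/input.dat",
                 "/home/user/input.dat") != 0) {
        fprintf(stderr, "stage-in failed\n");
        return 1;
    }
    puts("input staged");
    return 0;
}
```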