RES Updates, Resources and Access / European HPC Ecosystem


Sergi Girona, RES Coordinator

RES: HPC Services for Spain
• The RES was created in 2006.
• It is coordinated by the Barcelona Supercomputing Center (BSC-CNS).
• It forms part of the Spanish "Map of Unique Scientific and Technical Infrastructures" (ICTS).

RES is made up of 12 institutions and 13 supercomputers: BSC (MareNostrum and MinoTauro), CESGA (FinisTerrae), CSUC (Pirineus and Canigo), UV (Tirant), UC (Altamira), UPM (Magerit), CénitS (Lusitania and SandyBridge), UMA (Picasso), IAC (La Palma), UZ (CaesarAugusta), SCAYLE (Caléndula) and UAM (Cibeles). (Chart: the machines plotted on a logarithmic TFlop/s scale, from 1 to 1000 TFlop/s.)

• Objective: to coordinate and manage high performance computing services in order to promote the progress of excellent science and innovation in Spain.
• The RES offers HPC services for non-profit, open R&D purposes.
• Since 2006, it has granted more than 1,000 million CPU hours to 2,473 research activities.
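The headline figure above implies an average grant of roughly 400,000 CPU hours per activity. A quick sanity check of that arithmetic (taking "1,000 million" as exactly 10^9, so the real average is slightly higher):

```python
# Average CPU hours per granted research activity, from the slide's figures.
total_cpu_hours = 1_000_000_000  # "more than 1,000 million CPU hours" since 2006
activities = 2_473

average = total_cpu_hours / activities
print(f"{average:,.0f} CPU hours per activity on average")  # 404,367
```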
Research areas: hours granted per area
• FI (Mathematics, physics and engineering): 30%
• AECT (Astronomy, space and earth sciences): 23%
• QCM (Chemistry and materials sciences): 28%
• BCV (Life and health sciences): 19%

RES supercomputers
• BSC (MareNostrum 4): 165,888 cores, 11,400 Tflops. Main processors: Intel Xeon Platinum 8160. Memory: 390 TB. Disk: 14 PB.
• UPM (Magerit II): 3,920 cores, 103 Tflops. Main processors: IBM Power7 3.3 GHz. Memory: 7,840 GB. Disk: 1,728 TB.
• UMA (Picasso): 4,016 cores, 84 Tflops. Main processors: Intel SandyBridge-EP E5-2670. Memory: 22,400 GB. Disk: 720 TB.
• UV (Tirant 3): 5,376 cores, 111.8 Tflops. Main processors: Intel SandyBridge-EP E5-2670. Memory: 10,752 GB. Disk: 14 + 10 TB.
• CSUC (Pirineus): 2,784 cores, 283.66 Tflops. Main processors: Intel Xeon Platinum 8160. Memory: 12,000 GB. Disk: 200 TB.
• CSUC (Canigo): 384 cores, 33.2 Tflops. Main processors: Intel Xeon Platinum 8160. Memory: 9,000 GB. Disk: 200 TB.
• CénitS (Lusitania 2): 800 cores, 33.2 Tflops. Main processors: Intel Xeon E5-2660v3, 2.6 GHz. Memory: 10 GB. Disk: 328 TB.
• CénitS (SandyBridge): 2,688 cores, 56 Tflops. Main processors: Intel SandyBridge Xeon. Memory: 5,376 GB. Disk: 328 TB.
• BSC (MinoTauro): 624 cores, 251 Tflops. Main processors: 39 × 2 Intel Xeon E5-2630 v3. Memory: 20 TB. Disk: 14 PB (shared with MareNostrum 4).
• CESGA (FinisTerrae 2): 7,712 cores, 328.3 Tflops. Main processors: Intel Xeon E5-2680v3. Memory: 40 TB. Disk: 960 TB.
• UC (Altamira 2+): 5,120 cores, 105 Tflops. Main processors: Intel SandyBridge. Memory: 15.4 TB. Disk: 2 PB.
• UZ (CaesarAugusta): 2,014 cores, 80.5 Tflops. Main processors: Intel E5-2680v3, 2.5 GHz. Memory: 5,400 GB. Disk: 219 TB.
• SCAYLE (Caléndula): 2,432 cores, 50.6 Tflops. Main processors: Intel SandyBridge Xeon. Memory: 4,864 GB. Disk: 600 TB.
• UAM (Cibeles): 368 cores, 14.1 Tflops. Main processors: Intel Xeon E5-2630 v3, 2.40 GHz. Memory: 896 GB. Disk: 80 TB.
• UAM (SandyBridge), coming soon: 2,688 cores, 56 Tflops. Main processors: Intel SandyBridge Xeon, 2.60 GHz. Memory: 5,376 GB. Disk: 80 TB.
• IAC (LaPalma): 4,032 cores, 83.85 Tflops. Main processors: Intel SandyBridge. Memory: 8,064 GB. Disk: 60 TB.

Resources granted: CPU hours
(Chart: requested vs. awarded hours (A + B) per year, 2006-2017, on a scale of 0 to 400,000 thousand hours; 140 million hours available.)

How to apply?
• RES resources are open to researchers and spin-offs:
  o Computing resources: CPU hours and storage.
  o Technical support: application analysis, porting of applications, search for the best algorithm… to improve performance and ensure the most effective use of HPC resources.
  o Free of cost at the point of usage.
• Three open competitive calls per year. Next deadline: January 2019.

Period | Deadline for applications | Starting date
P1 | January | 1st of March
P2 | May | 1st of July
P3 | September | 1st of November

How to apply? RES intranet: https://www.bsc.es/res-intranet
• Researchers present a proposal which includes the research project description, technical requirements and the research group's experience.
• Accepted proposals have access to RES supercomputers for 4 months.
• Granted time can be with priority (hours A) or without priority (hours B).

Proposal evaluation
Submit → formal evaluation → technical evaluation → Access Committee (scientific experts panel) → final report of accepted activities.

Activity length
• Accepted proposals have access to RES supercomputers for 4 months.
If your activity needs more time to be properly developed, you can ask for a continuation activity. Each activity, new or continuation, runs for 4 months and ends with a dissemination report.

Continuation activities
• The application form is simplified.
• They are preferably allocated to the same machine.
• In the evaluation, one reviewer is kept from the previous activity and the second reviewer changes.

RES Users' Committee
• CURES aims to provide advice and feedback to RES coordinators:
  o Promotes optimal use of high performance computing facilities.
  o Shares information about users' experiences.
  o Voices user concerns.
• You can contact CURES through the RES intranet.

Which is/could be the main impediment to applying for RES resources?
Ø I don't know how to write a strong application
Ø I'm not sure if I can apply
Ø Lack of HPC expertise in my research group
Ø Too much paperwork
Ø Not enough resources for my project

Tips to write a strong proposal
• Read carefully all the protocols, guides and FAQs at https://www.res.es/en/access-to-res
• Project description: highlight the importance of your project, not only from the scientific point of view but also its return to society.
  Ø Why does your project deserve the requested resources?
• Activity description: specify clearly why you need supercomputing resources. Describe the flowchart of the simulations as accurately as possible, and indicate that your group has the human resources to run and process the output of all the simulations you propose.
  Ø Why do you need to carry out the simulations on the selected machine?
  Ø Is the amount of computing resources requested adjusted to your needs and properly justified?
• Doubts about software or HPC resources: ask the support team ([email protected]).
  Ø Are your jobs adequate for parallel computing?

Who can apply?
• RES resources are aimed at open R+D+I activities:
  o Researchers from academia and public R&D institutions.
  o Spin-offs during their first 3 years from creation.
  o Collaboration projects between private companies and research groups from academia or public institutions.
  o Open to international applicants, though collaboration with researchers from Spanish institutions is recommended.

RES events: technical training
These workshops are organized by the RES nodes and aim to provide the knowledge and skills needed to use and manage the supercomputing facilities.
• Check the agenda on the RES website: https://www.res.es/en/events?event_type=technical_training
• PATC courses at BSC (PRACE Advanced Training Center): https://www.bsc.es/education/training/patc-courses

RES events: networking opportunities
Scientific seminars: the RES promotes scientific seminars which address supercomputing technology applications in specific scientific areas. These events are mainly organized by RES users and are open to the entire research community.
In 2017: 5 scientific seminars with more than 300 attendees. Example: "Next Generation Sequencing and Supercomputing: life as a couple", 27 September, CBMSO-UAM (Madrid). Agenda 2018: www.res.es/en/events

RES events: networking opportunities
RES Users' Meeting: 20 September 2018, Valencia. The agenda includes:
• Information about RES and the European HPC ecosystem
• Plenary session: Research Open Data
• Parallel scientific sessions
• Poster session
• Networking opportunities
• Evening social event
www.res.es/users-conference-2018

Funded by the EC, 2017-2021 (http://www.hpc-europa.eu/):
ü Mobility grants for researchers using HPC resources
ü Short stays to visit scientific hosts (3 weeks - 3 months)
ü Funds for travel and living allowance
ü Access to European HPC facilities
Next deadline: 20 September.

RES forms
In the RES we try to keep the administrative procedures short and simple for researchers:
• New activity application form: 10 pages on average.
• Continuation activity application form: simplified.
• Dissemination form: 3 pages on average.
  o Brief description of results (1-2 paragraphs)
  o Publications
  o Figures / pictures
  o Optional: patents, PhD students…
• Intermediate reports: 1-2 sentences ("Everything is ok").
• Resubmission of non-accepted activities: one click.

PRACE HPC Access
• Call for Proposals
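The proposal tips earlier in the deck ask applicants to justify that the requested resources match their needs and that their jobs are adequate for parallel computing. A minimal sketch of that kind of back-of-the-envelope justification; the job count, core counts, runtimes and the 95% parallel fraction are hypothetical numbers for illustration, not taken from these slides:

```python
# Hypothetical sizing exercise for an HPC proposal (illustrative numbers only).

def core_hours(jobs: int, cores_per_job: int, hours_per_job: float) -> float:
    """Total core hours requested for a batch of identical jobs."""
    return jobs * cores_per_job * hours_per_job

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: upper bound on speedup given a serial code fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# E.g. 200 simulations, each using 256 cores for 12 wall-clock hours.
request = core_hours(jobs=200, cores_per_job=256, hours_per_job=12)
print(f"Requested: {request:,.0f} core hours")  # Requested: 614,400 core hours

# Scaling check: if 95% of the code parallelizes, 256 cores are capped at
# ~18.6x speedup while 64 cores already reach ~15.4x, so very wide jobs
# need measured scaling data to be properly justified.
for cores in (64, 256):
    print(f"{cores} cores: at most {amdahl_speedup(0.95, cores):.1f}x speedup")
```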
Recommended publications
  • National HPC Task-Force Modeling and Organizational Guidelines
    FP7-INFRASTRUCTURES-2010-2, HP-SEE: High-Performance Computing Infrastructure for South East Europe's Research Communities. Deliverable 2.2: National HPC task-force modeling and organizational guidelines. Author(s): HP-SEE Consortium. Status/version: Final. Date: June 14, 2011. Distribution type: Public. Code: HPSEE-D2.2-e-2010-05-28. Abstract: Deliverable D2.2 defines the guidelines for setting up national HPC task forces and their organisational model. It identifies and addresses all topics that have to be considered, collects and analyzes experiences from several European and SEE countries, and uses them to provide guidelines to the countries of the region in setting up national HPC initiatives. Copyright by the HP-SEE Consortium. The HP-SEE Consortium consists of: GRNET (coordinating contractor, Greece), IPP-BAS (Bulgaria), IFIN-HH (Romania), TUBITAK ULAKBIM (Turkey), NIIFI (Hungary), IPB (Serbia), UPT (Albania), UOBL ETF (Bosnia-Herzegovina), UKIM (FYR of Macedonia), UOM (Montenegro), RENAM (Republic of Moldova), IIAP NAS RA (Armenia), GRENA (Georgia) and AZRENA (Azerbaijan). This document contains material which is the copyright of certain HP-SEE beneficiaries and the EC and may not be reproduced or copied without permission. The commercial use of any information contained in this document may require a license from the proprietor of that information. The beneficiaries do not warrant that the information contained in the report is capable of use, or that use of the information is free from risk, and accept no liability for loss or damage suffered by any person using this information.
  • Innovatives Supercomputing in Deutschland
    inSiDE • Vol. 6 No. 1 • Spring 2008: Innovatives Supercomputing in Deutschland. Editorial: Welcome to this first issue of inSiDE in 2008. German supercomputing is keeping the pace of the last year, with the Gauss Centre for Supercomputing taking the lead. The collaboration between the three national supercomputing centres (HLRS, LRZ and NIC) is making good progress. At the same time, European supercomputing has seen a boost. In this issue you will read about the most recent developments in the field in Germany. In January the European supercomputing infrastructure project PRACE was started and is making excellent progress, led by Prof. Achim Bachem from the Gauss Centre. In February the fastest civil supercomputer, JUGENE, was […] proposals for the development of software in the field of high performance computing. The projects within this software framework are expected to start in autumn 2008 and inSiDE will keep reporting on this. Again we have included a section on applications running on one of the three main German supercomputing systems. In this issue the reader will find six articles about the various fields of applications and how they exploit the various architectures made available. The section not only reflects the wide variety of simulation research in Germany but at the same time nicely shows how various architectures provide users with the best hardware technology for all of their problems. Contents. News: Automotive Simulation Centre Stuttgart (ASCS) founded (4); JUGENE officially unveiled (6); PRACE Project launched (7). Applications: Drag Reduction by Dimples? An Investigation Relying Upon HPC (8); Turbulent Convection in large-aspect-ratio Cells (14); Direct Numerical Simulation of Flame / Acoustic Interactions (18).
  • Recent Developments in Supercomputing
    John von Neumann Institute for Computing: Recent Developments in Supercomputing. Th. Lippert. Published in NIC Symposium 2008, G. Münster, D. Wolf, M. Kremer (Editors), John von Neumann Institute for Computing, Jülich, NIC Series, Vol. 39, ISBN 978-3-9810843-5-1, pp. 1-8, 2008. © 2008 by John von Neumann Institute for Computing. Permission to make digital or hard copies of portions of this work for personal or classroom use is granted provided that the copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise requires prior specific permission by the publisher mentioned above. http://www.fz-juelich.de/nic-series/volume39. Recent Developments in Supercomputing. Thomas Lippert, Jülich Supercomputing Centre, Forschungszentrum Jülich, 52425 Jülich, Germany. E-mail: [email protected]. Status and recent developments in the field of supercomputing on the European and German level as well as at the Forschungszentrum Jülich are presented. Encouraged by the ESFRI committee, the European PRACE Initiative is going to create a world-leading European tier-0 supercomputer infrastructure. In Germany, the BMBF formed the Gauss Centre for Supercomputing, the largest national association for supercomputing in Europe. The Gauss Centre is the German partner in PRACE. With its new Blue Gene/P system, the Jülich supercomputing centre has realized its vision of a dual system complex and is heading for Petaflop/s already in 2009. In the framework of the JuRoPA project, in cooperation with German and European industrial partners, the JSC will build a next-generation general purpose system with a very good price-performance ratio and energy efficiency.
  • Simulation for All (PDF)
    David Gugerli, Ricky Wichum: Simulation for All. The Politics of Supercomputing in Stuttgart. Translator: Giselle Weiss. Cover image: Installing the Cray-2 in the computing center on 7 October 1983 (Polaroid UASt). Cover design: Thea Sautter, Zürich. © 2021 Chronos Verlag, Zürich. ISBN 978-3-0340-1621-6. E-Book (PDF): DOI 10.33057/chronos.1621. German edition: ISBN 978-3-0340-1620-9. Contents: User's guide (7). The centrality issue, 1972-1987 (13): Gaining dominance (13); Planning crisis and a flood of proposals (15); Attempted resuscitation (17); Shaping policy and organizational structure (22); A diversity of machines (27); Shielding users from complexity (31); Communicating to the public (34). The performance gambit, 1988-1996 (41): Simulation for all (42); The cost of visualizing computing output (46); The false security of benchmarks (51); Autonomy through regional cooperation (58); Stuttgart's two-pronged solution (64). Network to the rescue, 1997-2005 (75): Feasibility study for a national high-performance computing network (77); Metacomputing, a transatlantic experiment (83); Does the university really need an HLRS? (89); Grid computing extends a lifeline (95). Users at work, 2006-2016 (101): Taking it easy (102); With Gauss to Europe (106); Virtual users, and users in virtual reality (109); Limits to growth (112). A history of reconfiguration (115). Acknowledgments (117). Notes (119). Bibliography (143). List of Figures (153). User's guide: The history of supercomputing in Stuttgart is fascinating. It is also complex. Relating it necessarily entails finding one's own way and deciding meaningful turning points.
  • Supercomputing Resources in BSC, RES and PRACE
    www.bsc.es. Supercomputing Resources in BSC, RES and PRACE. Sergi Girona, BSC-CNS. Barcelona, 23 September 2015. ICTS 2014: a step forward for the RES. Past RES members and resources: BSC-CNS (MareNostrum): processor: 6,112 8-core Intel SandyBridge EP E5-2670/1600 20M 2.6 GHz, plus 84 Xeon Phi 5110P; memory: 100.8 TB; disk: 2,000 TB; network: Infiniband FDR10. IAC (LaPalma II): processor: 1,024 IBM PowerPC 970 2.3 GHz; memory: 2 TB; disk: 14 + 10 TB; network: Myrinet. Universitat de València (Tirant II): processor: 2,048 IBM PowerPC 970 2.3 GHz; memory: 2 TB; disk: 56 + 40 TB; network: Myrinet. BSC-CNS (MinoTauro): processor: 256 NVIDIA M2090 GPUs, 256 Intel E5649 2.53 GHz 6-core; memory: 3 TB; network: Infiniband QDR. Gobierno de Islas Canarias - ITC (Atlante): processor: 336 PowerPC 970 2.3 GHz; memory: 672 GB; disk: 3 + 90 TB; network: Myrinet. BSC-CNS (Altix): processor: SMP, 128 cores; memory: 1.5 TB. UPM (Magerit II): processor: 3,920 (245 × 16) Power7 3.3 GHz; memory: 8,700 GB; disk: 190 TB; network: Infiniband QDR. Universidad de Málaga (Picasso): processors: 82 AMD Opteron 6176, 96 Intel E5-2670, 56 Intel E7-4870, 32 Nvidia Tesla M2075 GPUs; memory: 21 TB; disk: 600 TB Lustre + 260 TB; network: Infiniband. Universidad de Cantabria (Altamira II): processor: 316 Intel Xeon E5-2670 2.6 GHz; memory: 10 TB; disk: 14 TB; network: Infiniband. Universidad de Zaragoza (CaesaraugustaII): processor: 3,072 AMD Opteron 6272 2.1 GHz; memory: 12.5 TB; disk: 36 TB; network: Infiniband. New RES members and HPC resources: CESGA (FINIS TERRAE II): peak performance 256 Tflops. FCSCL (Caléndula): peak performance:
  • Curriculum Vitae
    CURRICULUM VITAE. Name: Mateo Valero. Position: Full Professor (since 1983). Phone: +34-93-4016979 / 6986. Fax: +34-93-4017055. E-mail: [email protected]. Affiliation: Universitat Politècnica de Catalunya (UPC), Computer Architecture Department. Postal address: Jordi Girona 1-3, Módulo D6, 08034 Barcelona, Spain. URL: www.ac.upc.es/homes/mateo. 1. Summary. Professor Mateo Valero, born in 1952 (Alfamén, Zaragoza), obtained his Telecommunication Engineering Degree from the Technical University of Madrid (UPM) in 1974 and his Ph.D. in Telecommunications from the Technical University of Catalonia (UPC) in 1980. He has been teaching at UPC since 1974; since 1983 he has been a full professor at the Computer Architecture Department. He has also been a visiting professor at ENSIMAG, Grenoble (France) and at the University of California, Los Angeles (UCLA). He has been Chair of the Computer Architecture Department (1983-84, 1986-87, 1989-90 and 2001-2005) and Dean of the Computer Engineering School (1984-85). His research is in the area of computer architecture, with special emphasis on high performance computers: processor organization, memory hierarchy, systolic array processors, interconnection networks, numerical algorithms, compilers and performance evaluation. Professor Valero has co-authored over 600 publications: over 450 in conferences and the rest in journals and book chapters. He was involved in the organization of more than 300 international conferences as General Chair (11), including ICS95, ISCA98, ICS99 and PACT01; Steering Committee member (85); Program Chair (26), including ISCA06, Micro05, ICS07 and PACT04; Program Committee member (200); Invited Speaker (70); and Session Chair (61). He has given over 400 talks at conferences, universities and companies.
  • Financing the Future of Supercomputing: How to Increase
    INNOVATION FINANCE ADVISORY STUDIES. Financing the future of supercomputing: how to increase investment in high performance computing in Europe. Prepared for: DG Research and Innovation and DG Connect, European Commission. By: Innovation Finance Advisory, European Investment Bank Advisory Services. Authors: Björn-Sören Gigler, Alberto Casorati and Arnold Verbeek. Supervisor: Shiva Dustdar. Contact: [email protected]. Consultancy support: Roland Berger and Fraunhofer SCAI. © European Investment Bank, 2018. All rights reserved. All questions on rights and licensing should be addressed to [email protected]. Foreword: "Disruptive technologies are key enablers for economic growth and competitiveness." The digital economy is developing rapidly worldwide. It is the single most important driver of innovation, competitiveness and growth. Digital innovations such as supercomputing are an essential driver of innovation and spur the adoption of digital innovations across multiple industries and small and medium-sized enterprises, fostering economic growth and competitiveness. Applying the power of supercomputing combined with artificial intelligence and the use of big data provides unprecedented opportunities for transforming businesses, public services and societies. High Performance Computers (HPC), also known as supercomputers, are making a difference in the everyday life of citizens by helping to address the critical societal challenges of our times, such as public health, climate change and natural disasters. For instance, the use of supercomputers can help researchers and entrepreneurs to solve complex issues, such as developing new treatments based on personalised medicine, or better predicting and managing the effects of natural disasters through the use of advanced computer simulations.
  • Academic Supercomputing in Europe (Sixth Edition, Fall 2003)
    Academic Supercomputing in Europe (Sixth edition, fall 2003). Rossend Llurba. The Hague/Den Haag, January 2004. Netherlands National Computing Facilities Foundation. Academic Supercomputing in Europe: Facilities & Policies, fall 2003. Seventh edition. Llurba, R. (Rossend). ISBN 90-70608-37-5. Copyright © 2004 by Stichting Nationale Computerfaciliteiten, Postbus 93575, 2509 AN Den Haag, The Netherlands. http://www.nwo.nl/ncf. All rights reserved. No part of this publication may be reproduced, in any form or by any means, without permission of the author. Contents: PREFACE (1). 1 EUROPEAN PROGRAMMES AND INITIATIVES (3): 1.1 European Union (3); 1.2 Supranational centres and co-operations (5). 2 PAN-EUROPEAN RESEARCH NETWORKING (7): 2.1 Network organisations (7); 2.2 Operational networks (7).
  • La Difusió D
    NOTICE. Consultation of this thesis is subject to acceptance of the following terms of use: its distribution through the TDX/TDR service (www.tesisenxarxa.net / www.tesisenred.net) has been authorized by the holders of its intellectual property rights solely for private uses within research and teaching activities. Reproduction for profit is not authorized, nor is its distribution or availability from a site other than the TDX/TDR service. Presenting its content in a window or frame outside TDX/TDR (framing) is not authorized. This reservation of rights covers both the presentation summary of the thesis and its contents. When using or citing parts of the thesis, the author's name must be indicated.
  • Sistema De Recuperación Automática De Un Supercomputador Con Arquitectura De Cluster
    Universidad Politécnica de Madrid, Facultad de Informática. Final-year project: Automatic recovery system for a supercomputer with cluster architecture. Author: Juan Morales del Olmo. Advisors: Pedro de Miguel Anasagasti, Óscar Cubo Medina. Madrid, September 2008. This document was typeset with LaTeX; design by Óscar Cubo Medina. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 licence. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-sa/2.5/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA. "Ideas don't last long. Something must be done with them." (Santiago Ramón y Cajal.) To everyone I have neglected during my degree. Synopsis: The continuous growth in the computing needs of the scientific community is driving the proliferation of supercomputing centres around the world. For some years the trend has been to use a cluster architecture to build these machines. The UPM itself operates one of these computers: Magerit, the second most powerful supercomputer in Spain, which is hosted at CeSViMa and reaches 16 TFLOPS. The compute nodes of such a system work exhaustively, almost without rest, so it is common for them to develop problems. Repairing nodes consumes a great deal of the CeSViMa administration team's time, and no tools exist to speed up this work. The goal of this project is to give Magerit a degree of autonomy so that it can recover its compute nodes automatically, without the intervention of the system administrators.
  • Investigation Report on Existing HPC Initiatives
    European Exascale Software Initiative: Investigation Report on Existing HPC Initiatives. Contract no.: EESI 261513. Instrument: CSA (Support and Collaborative Action). Thematic: Infrastructure. Due date of deliverable: 30.09.2010. Actual submission date: 29.09.2010. Start date of project: 1 June 2010. Duration: 18 months. Name of lead contractor for this deliverable: EPSRC. Authors: Xu Guo (EPCC), Damien Lecarpentier (CSC), Per Oster (CSC), Mark Parsons (EPCC), Lorna Smith (EPCC). Reviewers for this deliverable: Jean-Yves Berthou (EDF), Peter Michielse (NCF). Abstract: This deliverable investigates large-scale HPC projects and initiatives around the globe, with particular focus on Europe, Asia and the US, and trends concerning hardware, software, operation and support. Revision: J.Y. Berthou, 28/09/2010. Project co-funded by the European Commission within the Seventh Framework Programme (FP7/2007-2013). Dissemination level: PU (Public). Table of contents: 1. EXECUTIVE SUMMARY (6); 2. INTRODUCTION.
  • Potential Use of Supercomputing Resources in Europe and in Spain for CMS (Including Some Technical Points)
    Potential use of supercomputing resources in Europe and in Spain for CMS (including some technical points). Presented by Jesus Marco (IFCA, CSIC-UC, Santander, Spain) on behalf of the IFCA team, with a technical contribution from Jorge Gomes, LIP, Lisbon, Portugal. CMS First Open Resources Computing Workshop, CERN, 21 June 2016. Some background: The supercomputing node at the University of Cantabria, named ALTAMIRA, is hosted and operated by IFCA. It is not large (2,500 cores), but it is included in the Supercomputing Network in Spain, which has quite significant resources (more than 80K cores) that will be doubled over the next year. As these resources are granted on the basis of the scientific interest of the request, CMS teams in Spain could directly benefit from their use "for free". However, the request needs to show the need for HPC resources (multicore architecture in the CMS case) and the possibility to run in an "existing" framework (OS, username/password access, etc.). My presentation aims to cover these points and to ask for experience from other teams/experts on exploiting this possibility. The EU PRACE initiative joins many different supercomputers at different levels (Tier-0, Tier-1) and could also be explored. Evolution of the Spanish Supercomputing Network (RES): see https://www.bsc.es/marenostrum-support-services/res. Initially (2005) most of the resources were based on IBM Power + Myrinet. Since 2012 many of the centres have evolved to use Intel x86 + Infiniband. The first one was ALTAMIRA@UC: 2,500 Xeon E5 cores with FDR IB, connected to 1 PB on GPFS. Next: MareNostrum