A New UK Service for Academic Research
EPCC News: Issue 74, Autumn 2013
The newsletter of EPCC, the supercomputing centre at the University of Edinburgh

In this issue: Managing research data; HPC for business; Simulating soft materials; Energy efficiency in HPC; Intel Xeon Phi; and ARCHER, a new UK service for academic research. ARCHER is a 1.56 Petaflop Cray XC30 supercomputer that will provide the next UK national HPC service for academic research. Also in this issue: Dinosaurs!

From the Directors

Autumn marks the start of the new ARCHER service: a 1.56 Petaflop Cray XC30 supercomputer that will provide the next UK national HPC service for academic research.

We have been involved in running national HPC services for over 20 years and ARCHER will continue our tradition of supporting science in the UK and in Europe.

Our engagement with supercomputing takes many forms: from racing dinosaurs at science festivals, to helping researchers get more from HPC by improving algorithms and creating new software tools. EPCC staff design and deliver high-quality training, we equip scientists with the skills they need to produce ground-breaking computer simulations, and we undertake research into the software, tools and technologies that will make possible a new generation of exascale supercomputers, which will be many, many times more powerful than ARCHER. Big Data activities are also playing an increasingly important part in our academic and industry work.

We have always prided ourselves on the diversity of our activities. This issue of EPCC News showcases just a fraction of them. You can find out more about our work on our website and blog: www.epcc.ed.ac.uk/blog

Alison Kennedy & Mark Parsons
EPCC Executive Directors
[email protected]
[email protected]

Contents

PGAS programming: 7th International Conference
Profile: Meet the people at EPCC
New national HPC service: Introducing ARCHER
Big Data: Data preservation and infrastructure
HPC for industry: Making business better
Simulation: Better synthesised sounds; Improving soft matter design
Support for science: Advancing molecular dynamics
Future of HPC: A roadmap to the next generation
Numerical simulations: NAIS's new GPGPUs
Energy efficiency in HPC: More efficient parallel and cloud computing
Legacy code: Parallelising commercial codes
Exascale: European research projects
Intel Xeon Phi: Our first impressions
Training and education: MSc in HPC; research software; DiRAC; Summer of HPC
Outreach: Spreading the word about supercomputers
MSc in HPC: Study with us

EPCC has joined the OpenACC consortium. OpenACC is a directives-based parallel programming model, in the vein of OpenMP, designed to enable C, C++ and FORTRAN codes to effectively utilise accelerator technology such as GPGPUs. The consortium works on the OpenACC standard, OpenACC tools, and OpenACC benchmarks and example applications.

The consortium brings together hardware vendors (Cray and NVIDIA), software providers (CAPS, Allinea, PGI and Rogue Wave), and research establishments (including Georgia Tech, Oak Ridge and Sandia National Labs, and the Tokyo Institute of Technology).

EPCC has a strong engagement with OpenACC, including OpenACC compiler developers amongst our staff, and we have created a set of OpenACC benchmarks to enable users to evaluate OpenACC implementations on a range of different hardware. We are also porting a number of applications to GPGPUs using OpenACC, including CASTEP (DFT-based materials modelling code) and COSA (frequency domain CFD code).

Adrian Jackson
[email protected]
www.openacc-standard.org
www.castep.org
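To give a flavour of the directives-based model described above, here is a minimal OpenACC sketch in C (an illustrative example, not taken from EPCC's benchmark suite). A single pragma asks the compiler to offload the loop to an accelerator such as a GPGPU; a compiler without OpenACC support simply ignores the directive and the loop runs serially.

    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static float x[N], y[N];

        /* Initialise the input arrays on the host. */
        for (int i = 0; i < N; i++) {
            x[i] = 1.0f;
            y[i] = 2.0f;
        }

        /* Offload the loop to the accelerator: x is copied to the
           device, y is copied in and the result copied back out. */
        #pragma acc parallel loop copyin(x) copy(y)
        for (int i = 0; i < N; i++) {
            y[i] = 2.0f * x[i] + y[i];
        }

        printf("y[0] = %f\n", y[0]);
        return 0;
    }

With the PGI compiler, for example, such a code would typically be built with "pgcc -acc"; other vendors' compilers use their own flags to enable OpenACC.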
Contact us

www.epcc.ed.ac.uk
[email protected]
+44 (0)131 650 5030

EPCC is a supercomputing centre based at The University of Edinburgh, which is a charitable body registered in Scotland with registration number SC005336.

PGAS2013 in Edinburgh

The 7th International Conference on PGAS Programming Models visited Edinburgh on the 3rd and 4th of October, making its first ever appearance outside the United States!

The PGAS conference is the premier forum to present and discuss ideas and research developments in the area of PGAS models, languages, compilers, runtimes, applications and tools, PGAS architectures and hardware features.

The keynote talks were given by two highly regarded experts in the field: Dr Duncan Roweth, a senior principal engineer at Cray, who focussed his talk on hardware support for PGAS-type languages in current (and future) HPC systems; and Professor Mitsuhisa Sato from the University of Tsukuba in Japan, who took the opportunity to discuss how PGAS may play a role in the race to the exascale.

The conference, which attracted over 60 attendees from across the globe, had a varied programme of research papers, "hot" sessions where speakers introduced work in progress, as well as a poster reception.

More information, including links to the papers and proceedings, can be found on the conference website: www.pgas2013.org.uk

Michele Weiland
[email protected]

Staff profile

Applications Consultant Eilidh Troup talks about her work here at EPCC.

I work as an Applications Consultant on a project called SPRINT (Simple Parallel R INTerface), which allows users of the statistical language R to make use of multi-core and HPC machines without needing any parallel programming skills. We provide parallelised versions of standard R functions which can simply be slotted in to replace the usual R function and will then run on many processors behind the scenes.

I particularly enjoy working on SPRINT as it is mostly used by biologists. I studied genetics before I became a programmer, and love this opportunity to keep up to date with the latest biology technology.

Next Generation Sequencing can rapidly produce terabytes of data that must then be analysed, and this amount of data needs a lot of computational power to process; EPCC is well placed to work on this. Next Generation Sequencing can be used for measuring gene expression to diagnose and understand diseases, or to sequence genomes, for example to find out which microorganisms are present in a habitat.

I am also involved in EPCC's public outreach events and love the enthusiasm of children pretending to be part of a computer and working together to sort coloured balls or numbers. People are very interested in the science we support at EPCC, and the real hardware that makes a supercomputer is always popular too.

Eilidh Troup
[email protected]

ARCHER: On target for a bullseye

Autumn ushers in a new era for UK supercomputing with the start of the ARCHER (Advanced Research Computing High End Resource) service in Edinburgh.
ARCHER is the next national HPC service for academic research and comprises a number of components: accommodation provided by the University of Edinburgh; hardware from Cray; systems support from EPCC and Daresbury; and user support from EPCC.

In autumn 2011, the Minister for Science announced a new capital investment in e-infrastructure, which included £43m for ARCHER, the next national HPC facility for academic research. After a brief overlap, ARCHER will take over from HECToR as the UK's primary academic research supercomputer. HECToR has been in Edinburgh since 2007.

What is ARCHER?

The new Cray XC30 architecture is the latest development in Cray's long history of MPP architectures, which have been supporting fundamental global scientific research for over two decades. The Cray XC30 incorporates two major upgrades to the fundamental components of any MPP supercomputer: the introduction of Cray's newest network interconnect, Aries; and the use of Intel's Xeon series of multi-core processors. Each has enhanced capabilities over previous architectures.

Aries incorporates the novel dragonfly network topology that provides multi-tier all-to-all connectivity. This network allows all applications, even those that perform all-to-all style communications, the potential to scale.

The latest Intel Xeon Ivy Bridge processors used in ARCHER provide the next generation of computational muscle, with best-in-class floating-point performance, memory bandwidth and energy efficiency. Each ARCHER node comprises two 12-core 2.7 GHz Ivy Bridge processors and at least 64 GB of DDR3-1833 MHz main memory, and all compute nodes are interconnected via an Aries Network Interface Card. ARCHER has 3008 such nodes, i.e. 72,192 cores, in only 16 cabinets, providing a total of 1.56 Petaflops of theoretical peak performance.

Scratch storage is provided by 20 Cray Sonexion Scalable Storage Units, giving 4.4 PB of accessible space with a sustained read-write bandwidth of over 100 GB per second. ARCHER is also directly connected to the UK Research Data Facility, easing the transition of large data sets between high-performance scratch space and long-term archival storage, and between successive HPC services.

Updates included in the newest versions of the Cray Compilation Environment provide full support for generating highly optimised executables that fully exploit the Ivy Bridge processors.

At your service

The Service Provision function for ARCHER is provided by UoE HPCX Ltd. This includes responsibilities such as systems administration, helpdesk and website provision, and administrative support. The work is subcontracted to EPCC at the University of Edinburgh and to the STFC's Daresbury Laboratory.

Service Provision will be delivered by two sub-teams: the Operations and Systems Group, led by Mr Michael Brown, and the User Support and Liaison Team, led by Dr Alan Simpson.

Enabling a smooth transition for users from the HECToR to the ARCHER service is one of our key aims. For ARCHER, we will utilise SAFE (Service Administration from EPCC) for both the ARCHER Helpdesk and the Project Administration & Reporting functions.

The ARCHER website provided by EPCC contains supporting documentation for the service and will also showcase the research that uses the system. The configuration of the ARCHER service will evolve over time to stay in step with the needs of its users.
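As a quick arithmetic check of the headline figure quoted above (an illustrative calculation, not part of the original article): assuming each Ivy Bridge core can issue 8 double-precision floating-point operations per clock cycle using AVX, the theoretical peak is

    3008 nodes × 24 cores × 2.7 GHz × 8 FLOPs/cycle ≈ 1.56 × 10^15 FLOPs/s,

which matches the quoted 1.56 Petaflops.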