CERN openlab VI Project Agreement “Exploration of Google Technologies for HEP”

CERN openlab VI Project Agreement “Exploration of Google technologies for HEP”

between The European Organization for Nuclear Research (CERN) and Google Switzerland GmbH (Google)

CERN K-Number:
Agreement Start Date: 01/06/2019
Duration: 1 year

THE EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (“CERN”), an Intergovernmental Organization having its seat at Geneva, Switzerland, duly represented by Fabiola Gianotti, Director-General,

and

Google Switzerland GmbH (Google), Brandschenkestrasse 110, 8002 Zurich, Switzerland, duly represented by [LEGAL REPRESENTATIVE],

hereinafter each a “Party” and collectively the “Parties”,

CONSIDERING THAT:

The Parties have signed the CERN openlab VI Framework Agreement on November 1st, 2018 (“Framework Agreement”), which establishes the framework for collaboration between the Parties in CERN openlab phase VI (“openlab VI”) from 1 January 2018 until 31 December 2020 and sets out the principles for all collaborations under CERN openlab VI;

Google Switzerland GmbH (Google) is an industrial Member of openlab VI in accordance with the Framework Agreement;

Article 3 of the Framework Agreement establishes that all collaborations in CERN openlab VI shall be established as specific Projects on a bilateral or multilateral basis and in specific agreements (each a “Project Agreement”);

The Parties wish to collaborate in the “exploration of applications of Google products and technologies to High Energy Physics ICT problems related to the collection, storage and analysis of the data coming from the Experiments” under CERN openlab VI (hereinafter “Exploration of Google technologies for HEP”);

AGREE AS FOLLOWS:

Article 1: Purpose and scope

1. This Project Agreement establishes the collaboration of the Parties in Exploration of Google technologies for HEP (hereinafter the “Project”).
The detailed tasks and responsibilities of the Parties for this Project are described in the Statement of Work, contained in Annex 1 of this Project Agreement.

2. The Parties’ collaboration under this Project Agreement is subject to the terms and conditions set forth in the Framework Agreement, unless otherwise specified in this Project Agreement.

Article 2: Contributions

The Parties shall make the following contributions to the Project:

1. Google Switzerland GmbH (Google) contributions:
• Deployment of technical experts to the Project;
• Secondment of, or grants for recruiting, dedicated researchers (at the master’s and doctoral level) at CERN for the execution of the projects;
• Credits towards the use of cloud technologies.

Google Switzerland GmbH (Google) shall make the above contributions and issue the credit memos, as applicable, to CERN within ninety working days of receipt of the corresponding invoice from CERN.

2. CERN contributions:
• Deployment of technical experts to the Project;
• Supervision of personnel funded by Google Switzerland GmbH (Google);
• Deployment of interns from the CERN Summer Student 2019-2020 programme to work on activities related to the Project;
• Access to CERN’s technical environment (hardware, software) as required for the Project;
• Support for public relations and communications activities in accordance with Annex 1 of the Framework Agreement and based on Google Switzerland GmbH (Google)’s openlab VI membership status.
Article 3: Contact points

The contact persons for the Project shall be:

For Google Switzerland GmbH (Google): XXXXXXX

For CERN:
Federico Carminati, CERN openlab
Maria Girone, CERN openlab

For the individual use cases:
• Use case 1.1: Tim Bell, Ricardo Rocha (IT-CM)
• Use case 1.2: Felice Pantaleo, Andrea Bocci (CMS)
• Use case 2.1: Sofia Vallecorsa (CERN openlab)
• Use case 2.2: Jennifer Ngadiuba, Maurizio Pierini (CMS)
• Use case 2.3: Luca Canali (IT-DB), Viktor Khristenko (CERN openlab)
• Use case 2.4: Sofia Vallecorsa, Taghi Aliyev (CERN openlab), xxxxx (UNOSAT)
• Use case 2.5: LHCb
• Use case 3.1: Federico Carminati, Sofia Vallecorsa (CERN openlab)
• Use case 3.2: Federico Carminati, Sofia Vallecorsa (CERN openlab)

CERN, 1 Esplanade des Particules, CH-1211 Geneva 23, Switzerland

Article 4: Entry into force, duration and termination

1. This Project Agreement shall enter into force on the date of signature by the last Party to sign, with effect from June 1st, 2019. It shall remain in force for as long as necessary to give effect to the Parties’ respective rights and obligations under this Project Agreement.

2. Except in case of force majeure, each Party may only terminate this Project Agreement in the event that the other Party fails to honour any of its obligations thereunder.

Signed on …………
The European Organization for Nuclear Research (CERN)
……………………………… (Signature)
……………………………… (Full Name)
……………………………… (Email address)

Signed on …………
Google Switzerland GmbH (Google)
……………………………… (Signature)
……………………………… (Full Name)
……………………………… (Email address)

Annex 1: Project Statement of Work

OVERVIEW

Start date: June 1st, 2019
Duration: 12 months, with option for renewal upon mutual agreement
Members: CERN, ATLAS, Google
Total effort:
Total cash contributions:
Total in-kind contributions:

SUMMARY OF RESOURCE CONTRIBUTIONS

                                    FTEs   Facilities (1)   Hardware   Software   Other (Summer students, events)   Cash (FTE + travel, other, etc.)
CERN
Google Switzerland GmbH (Google)
Total (2)

(1) Office space, rack space, access to infrastructure or facilities, etc.
(2) Provide estimated values for facilities, hardware and software contributions.

PROJECT DESCRIPTION

The following Programme of Work is organized in three main areas, each with specific use cases of interest. Section 1 describes explorations in data center and computing infrastructure technologies (storage, workflow management, containers, hybrid infrastructure integration, etc.). Section 2 focuses on the application of machine learning/deep learning tools and algorithms. Finally, Section 3 deals with quantum computing explorations.

1. Data Center technologies

Kubernetes performance and scalability (use case 1.1)

Large computing environments require high levels of automation for both resource and workload management, as well as careful resource planning to meet the needs of daily operations and periodic spikes in resource requests while minimizing cost. Balancing the two can often be challenging. Looking forward to the upcoming challenges of the High Luminosity LHC and its significant increase in the amount of data to be processed, this use case will explore technologies with the potential to ease this task while improving resource usage.

In addition to the more traditional scale-out options using well-established network and compute provisioning methods, it will focus on the Kubernetes project and ecosystem, where Google and, more recently, CERN have been investing significantly alongside a much larger community. The main focus will be on extending the ongoing efforts to scale out the CERN data center to the Google Cloud Platform (GCP) (already done at scale for one-time demos such as KubeCon Barcelona 2019), exploring Kubernetes federation and service mesh technologies (Istio). The Batch use case can be taken as the primary use case to offer larger CPU capacity to CERN users.
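The scale-out idea described above — fill on-premises batch capacity first, then burst overflow jobs to a GCP node pool — can be sketched as a simple scheduling model. This is an illustrative sketch only; the pool names and capacities are hypothetical and not taken from the agreement.

```python
# Hypothetical sketch of the "burst to GCP" batch-scheduling idea:
# place each job on the on-premises Kubernetes pool while capacity
# remains, then overflow to a cloud node pool. All names and numbers
# are illustrative.

from dataclasses import dataclass


@dataclass
class Pool:
    name: str
    free_cores: int


def schedule(jobs, on_prem, cloud):
    """Assign each job (a core count) to on-prem first, then to cloud.

    Returns a list of (job_cores, pool_name) placements; jobs that fit
    in neither pool are marked "pending".
    """
    placements = []
    for cores in jobs:
        for pool in (on_prem, cloud):
            if pool.free_cores >= cores:
                pool.free_cores -= cores
                placements.append((cores, pool.name))
                break
        else:
            placements.append((cores, "pending"))
    return placements


if __name__ == "__main__":
    on_prem = Pool("cern-batch", free_cores=16)
    gcp = Pool("gcp-node-pool", free_cores=64)
    for cores, pool in schedule([8, 8, 8, 8, 100], on_prem, gcp):
        print(f"{cores:>3} cores -> {pool}")
```

In a real deployment the same first-fit-with-overflow policy would be expressed through Kubernetes scheduling primitives (node selectors, priorities, cluster autoscaling against a cloud node pool) rather than application code; the sketch only captures the cost-relevant behaviour of bursting.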
Another interesting area of collaboration is accelerators: how we can offer GPUs (currently available in very limited quantity at CERN) and TPUs transparently to our users, building on top of the same Kubernetes layer. The final result should include a detailed report on the technologies used, as well as cost models and efficiency in the areas of compute, networking and storage. The current CERN capacity is on the order of 300,000 hardware hyper-threads (cores) and hundreds of petabytes of storage. For a successful evaluation of this model’s viability, we expect to have a significant fraction of these resources available in GCP, at least for a short period of time.

Additional areas of interest include research on serverless models to simplify the LHC computing workflows. The existing workflows launched by data arriving from the detectors are a good match for pipelines triggered by the storage systems, with each step executed as an independent function. Technologies like Knative (the backend of Google’s Cloud Run service offering) should allow us to again build on the same Kubernetes layer, allowing experiment users to focus more on their computing tasks and less on resource management. It should also give Google a large use case deployed on this new cloud service offering. The end result is expected to be a prototype of one of the experiments’ data reconstruction relying on a single pipeline backed by GCS/S3 and Knative.

Milestones:
• M1 (6 months, ~1 FTE, hardware and software): Working CERN Batch system prototype running on Kubernetes, with resources both at CERN and in the Google Cloud Platform (GCP).
• M2 (6 months, ~0.5 FTE): Report on cost models and service efficiency for resources running on GCP.
• M3 (6 months, ~1 FTE, software): Experiment data reconstruction deployed using a serverless pipeline.

Composable data centers for efficient HEP computing workflows (use case 1.2)

This use case deals, in a novel way, with event processing, pushing back the frontiers of technologies that can find application in many other areas of science and technology. At the High Luminosity LHC and FCC, the higher proton-proton interaction rate, pileup and event processing rate present an unprecedented challenge to real-time and offline event reconstruction, requiring processing power orders of magnitude larger than today. This exceeds by far the
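The serverless pipeline idea described for use case 1.1 — each reconstruction step as an independent function, triggered Knative-style by a file landing in object storage — can be illustrated with a small simulation. The function names, bucket prefixes and payloads below are hypothetical, chosen only to show the event-driven chaining.

```python
# Illustrative simulation (all names hypothetical) of a storage-triggered
# serverless pipeline: an object arriving under a bucket prefix triggers
# the function bound to that prefix, whose output lands under the next
# prefix and triggers the next stage, Knative-style.

def calibrate(raw):
    # First stage: pretend-calibrate the raw detector payload.
    return {"calibrated": raw["event"]}


def reconstruct(cal):
    # Second stage: pretend-reconstruct tracks from calibrated data.
    return {"tracks": len(cal["calibrated"])}


# Map "bucket prefix" -> function triggered when an object lands there.
TRIGGERS = {"raw/": calibrate, "calibrated/": reconstruct}
NEXT_PREFIX = {"raw/": "calibrated/", "calibrated/": "reco/"}


def on_object_created(prefix, payload, store):
    """Simulate a storage notification dispatching to its bound function."""
    result = TRIGGERS[prefix](payload)
    nxt = NEXT_PREFIX[prefix]
    store[nxt] = result
    if nxt in TRIGGERS:  # the write itself triggers the next stage
        on_object_created(nxt, result, store)
    return store


if __name__ == "__main__":
    store = on_object_created("raw/", {"event": "abcde"}, {})
    print(store)  # "reco/" holds the final reconstruction output
```

In the actual prototype the dispatch would be done by the storage system’s notifications and Knative eventing rather than a recursive call, but the structure is the same: no stage manages resources or knows about the others beyond the bucket layout.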