Particle Tracking and Identification on Multi- and Many-Core Hardware

PARTICLE TRACKING AND IDENTIFICATION ON MULTI- AND MANY-CORE HARDWARE PLATFORMS

by Plácido Fernández Declara

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science and Technology

Universidad Carlos III de Madrid

Advisors: José Daniel García Sánchez and Niko Neufeld

September 2020

This thesis is distributed under the license "Creative Commons Attribution – Non Commercial – Non Derivatives".

To my parents. A mis padres.

We have seen that computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty. — Donald E. Knuth [87]

ACKNOWLEDGMENTS

Since I finished my master's degree and started working in the aerospace industry with GMV, I have been interested in pursuing a PhD, but I did not find the right challenge or topic I was looking for. One day I received a call from my professor from the Computer Science and Engineering degree, and director of the master's degree I would later study in Leganés. He let me know about an open position to do a PhD in High Performance Computing at CERN, in Switzerland. It was just what I was looking for: an incredible computing challenge, working at the Large Hadron Collider. I applied for it and, some interviews later, I got the job.

I have to thank first and foremost my parents. It is thanks to them that I was able to pursue any dream and challenge I ever imagined, and I am ever thankful for their love and support. To my love and friend Afri, who helps me overcome any situation and makes any adventure an amazing journey to enjoy life.

I would like to thank my friends in Madrid for endless hours of explorations, discussions and joy in the good and bad times. To my childhood friends Javi and Josemari for every minute in the holm oak woods and the surrounding neighbourhoods. To Miguel, Guille, Centeno and all the group for adventure, climbing and nights in the mountains. To all the people that supported me in GMV... To Jordi, Albano, Patricio, Puma, Aven, Romo, Oso and Abel for many, many hours of rehearsal, all the concerts around Spain and the experiences. To Cristian, who took care of me in the river and plains. To Jorge, with whom we supported each other to improve and finish our studies. To Auro and Álex, who showed us around and accompanied us in discovering the Alps, the lakes and the area. I am eternally grateful to my family, who have always been supportive and encouraged me to achieve the career and life I wanted.

I am grateful to Niko and Omar for showing me the way in my doctoral studies. To Jose Daniel for the advice, guidance and conversations all along my career. To Daniel, who made the experience at CERN so much more enjoyable. To Luismi for the discussions at CERN and exploring the mountains together. To Dorothea, Flavio, Rainer and Roel for all the help to make Allen an amazing thing. Thank you to all the people from the LHCb experiment and CERN that I worked with, for showing me the way of physics, for the BBQs at Point 8, endless Factorio games and all the great discussions.

Thanks to my colleagues at Universidad Carlos III de Madrid in the ARCOS research group. Firstly, I would like to thank David del Río, Manuel Dolz and Javier Fernández for their help with the integration with GrPPI. Secondly, I would also like to thank Javier Garcia-Blas, who helped me with the experiments on GPU architectures.
Thanks to the Spanish Ministerio de Economía y Competitividad, through project grant TIN2016-79637-P "Towards Unification of HPC and Big Data Paradigms"; the EU project ICT 644235 "RePhrase: REfactoring Parallel Heterogeneous Resource-Aware Applications"; and the Madrid Regional Government, through grant S2018/TCS-4423, CABAHLA-CM (ConvergenciA Big dAta-Hpc: de Los sensores a las Aplicaciones).

PUBLISHED AND SUBMITTED CONTENT

Publications and research contributions from the author included as part of this thesis:

Plácido Fernández et al. «Parallelizing and Optimizing LHCb-Kalman for Intel Xeon Phi KNL Processors.» In: 2018 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP). IEEE, 2018, pp. 741–750. doi: 10.1109/PDP2018.2018.00121
• This publication is wholly included in this thesis in Chapter 6.
• The material from this source included in this thesis is not singled out with typographic means and references.

Placido Fernandez Declara et al. «A parallel-computing algorithm for high-energy physics particle tracking and decoding using GPU architectures.» In: IEEE Access 7 (2019), pp. 91612–91626. issn: 2169-3536. doi: 10.1109/ACCESS.2019.2927261
• This publication is wholly included in this thesis in Chapter 7.
• The material from this source included in this thesis is not singled out with typographic means and references.

Placido Fernandez Declara and J. Daniel Garcia. «Compass SPMD: a SPMD vectorized tracking algorithm.» 2020. Accepted for publication in the CHEP 2019 Proceedings, Adelaide, Australia.
• This publication is partly included in this thesis in Chapter 8.
• The material from this source included in this thesis is not singled out with typographic means and references.

OTHER RESEARCH MERITS

A list of publications and research contributions produced during the work of this thesis is presented here:

Publications:

Plácido Fernández et al. «Parallelizing and Optimizing LHCb-Kalman for Intel Xeon Phi KNL Processors.» In: 2018 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP). IEEE, 2018, pp. 741–750. doi: 10.1109/PDP2018.2018.00121

Placido Fernandez Declara et al. «A parallel-computing algorithm for high-energy physics particle tracking and decoding using GPU architectures.» In: IEEE Access 7 (2019), pp. 91612–91626. issn: 2169-3536. doi: 10.1109/ACCESS.2019.2927261

Placido Fernandez Declara and J. Daniel Garcia. «Compass SPMD: a SPMD vectorized tracking algorithm.» 2020. Accepted for publication in the CHEP 2019 Proceedings, Adelaide, Australia.

Roel Aaij et al. «Allen: A High-Level Trigger on GPUs for LHCb.» In: Computing and Software for Big Science 4.7 (2020). doi: 10.1007/s41781-020-00039-7

Posters:

Placido Fernandez Declara. «CompassUT: study of a GPU track reconstruction for LHCb upgrades.» 2019. url: https://cds.cern.ch/record/2665033. Poster presented at the Winter LHCC sessions, CERN, Switzerland.

Placido Fernandez Declara. «Compass SPMD: a SPMD vectorized tracking algorithm.» 2019. url: https://cds.cern.ch/record/2699802. Poster presented at CHEP 2019, Adelaide, Australia.

Talks at international conferences:

Placido Fernandez Declara. Fast Kalman Filtering: new approaches for the LHCb upgrade. Tech. rep. 2018. url: http://cds.cern.ch/record/2631784

Technical reports:

LHCb Collaboration. LHCb Upgrade GPU High Level Trigger Technical Design Report. Tech. rep. CERN-LHCC-2020-006, LHCB-TDR-021. Geneva: CERN, 2020. url: https://cds.cern.ch/record/2717938

I am also a co-author of a number of LHCb-related papers. The full list can be found at:

Placido Fernandez Declara. Google Scholar profile. https://scholar.google.com/citations?user=Ygkq_7YAAAAJ. 2020

ABSTRACT

Particle track reconstruction in high-energy physics is used for purposes that include simulation, particle analysis, and particle collision filtering and recording. The LHCb experiment at CERN is undergoing an upgrade in which improvements are being applied to the detectors, hardware and software. One of the main changes of this upgrade is the removal of the "hardware filter", which used custom electronics to filter particle collisions, and its replacement by a "software filter". This new filter is composed of commodity hardware, which must be able to process a data rate of 40 Tb per second in real time. Different architectures are being considered to achieve this goal, and the software that runs on them must be optimized and improved to reach the target data throughput. This software filter is used to reconstruct particle collisions, also known as events, including the trajectories of the resulting particles, which are later analyzed and used to help explain problems like the matter-antimatter asymmetry.

This thesis explores different opportunities with multi- and many-core architectures to improve the throughput processing of particle collisions, as well as the maintainability and improvement of the source code used for it.

The Kalman filter algorithm is widely used in high-energy physics for particle reconstruction, and the Intel Xeon Phi KNL processor offers a many-core x86 architecture that is well suited for parallel workloads. Performance and maintainability improvements are discussed, with optimizations targeted towards the Intel Xeon Phi processor.

GPU architectures are a good fit for high-energy physics workloads, as their highly parallel architecture can benefit throughput processing. A GPU framework for event filtering is discussed, in particular the optimizations and changes implemented in a tracking algorithm to deliver high throughput.

Finally, vectorization opportunities for CPUs are explored by using data-oriented algorithms and constructs that benefit the vector units found in x86 processors. An SPMD compiler that resembles programming for GPUs is used to improve the readability and performance of these algorithms.

RESUMEN

La reconstrucción de trayectorias en física de partículas se usa con distintos fines, entre los que se incluyen la simulación, el análisis y el filtrado y recogida de las colisiones entre protones. El experimento LHCb del CERN se encuentra en pleno proceso de actualización, en el que cambios y mejoras serán aplicados a los detectores, los procesadores y el software. Uno de los principales cambios incluye la sustitución del "filtro hardware", basado en circuitos integrados específicos para estas aplicaciones, por un "filtro software".
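The abstract refers to the Kalman filter used for track fitting on the different hardware platforms. As a purely illustrative aid, the sketch below shows the standard predict/update recursion for a toy straight-line track state (position and slope) in C++; it is not the LHCb or Allen implementation, and the structure names, step sizes and noise values are hypothetical.

```cpp
// Minimal sketch of a Kalman filter predict/update step for a toy
// straight-line track state (position x, slope tx). Illustrative only:
// not the LHCb/Allen code; all numerical values are hypothetical.
#include <cstdio>

struct State {
    double x, tx;     // position and slope
    double C[2][2];   // 2x2 covariance matrix
};

// Transport the state over a distance dz (F = [[1, dz], [0, 1]]),
// adding process noise (e.g. multiple scattering) to the slope term.
void predict(State& s, double dz, double noise2) {
    s.x += s.tx * dz;
    double c00 = s.C[0][0] + 2 * dz * s.C[0][1] + dz * dz * s.C[1][1];
    double c01 = s.C[0][1] + dz * s.C[1][1];
    double c11 = s.C[1][1] + noise2;
    s.C[0][0] = c00; s.C[0][1] = c01; s.C[1][0] = c01; s.C[1][1] = c11;
}

// Update the state with a measured hit position m of variance r2 (H = [1, 0]).
void update(State& s, double m, double r2) {
    double S  = s.C[0][0] + r2;   // innovation covariance
    double k0 = s.C[0][0] / S;    // Kalman gain (position component)
    double k1 = s.C[1][0] / S;    // Kalman gain (slope component)
    double res = m - s.x;         // residual
    s.x  += k0 * res;
    s.tx += k1 * res;
    double c00 = (1 - k0) * s.C[0][0];   // C = (I - K H) C
    double c01 = (1 - k0) * s.C[0][1];
    double c11 = s.C[1][1] - k1 * s.C[0][1];
    s.C[0][0] = c00; s.C[0][1] = c01; s.C[1][0] = c01; s.C[1][1] = c11;
}

int main() {
    State s{0.0, 0.0, {{1e4, 0.0}, {0.0, 1e2}}};  // weakly constrained initial state
    const double hits[] = {1.1, 2.0, 3.2, 3.9};   // hypothetical hit positions
    for (double m : hits) {
        predict(s, /*dz=*/1.0, /*noise2=*/1e-4);
        update(s, m, /*r2=*/0.01);
    }
    std::printf("fitted x=%.3f tx=%.3f\n", s.x, s.tx);
}
```

In the production software this recursion runs per hit and per track over millions of events per second, which is why the thesis focuses on how such per-track work is laid out and scheduled on Xeon Phi, GPU and SIMD hardware.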