CERN Computer Newsletter


Volume 44, issue 3, July–September 2009

Contents

Editorial
CINBAD keeps an eye on the CERN network 1
ETICS 2 offers guidance to software professionals 3

Announcements and news
CERN welcomes 13 Intel ISEF pre-college winners 4
Computer team advises reviewing your security now and frequently 5
EGEE-III project is on track for EGI transition 5

Grid news
Scientists demonstrate the role of CMS in computing Grid 6

Technical brief
Indico's new face goes live 7
CERN updates Wi-Fi network 9

Conference and event reports
Prague hosts CHEP conference 10
Workshop identifies steps to reap benefits from multicore and virtualization technologies 11
HEPiX event arrives in Sweden 12

Calendar 12

Editor: Natalie Pocock, CERN IT Department, 1211 Geneva 23, Switzerland. E-mail cnl.editor@cern.ch. Fax +41 (22) 766 8500. Web cerncourier.com/articles/cnl.
Advisory board: Frédéric Hemmer (head of IT Department), Alberto Pace (group leader, Data Management), Christine Sutton (CERN Courier editor), Tim Smith (group leader, User and Document Services).
Produced for CERN by IOP Publishing, Dirac House, Temple Back, Bristol BS1 6BE, UK. Tel +44 (0)117 929 7481. E-mail jo.nicholas@iop.org. Fax +44 (0)117 930 0733. Web iop.org.
Published by CERN IT Department. ©2009 CERN. The contents of this newsletter do not necessarily represent the views of CERN management.

CINBAD keeps an eye on the CERN network

The CINBAD (CERN Investigation of Network Behaviour and Anomaly Detection) project was launched in 2007 as a collaboration between CERN openlab, IT-CS and HP ProCurve Networking. The project's aim is to understand the behaviour of large computer networks in the context of high-performance computing and campus installations such as those at CERN. The goals are to detect traffic anomalies in such systems, perform trend analysis, automatically take counter measures and provide post-mortem analysis facilities.

CERN's network

CERN's campus network has more than 50 000 active user devices interconnected by 10 000 km of cables and fibres, with more than 2500 switches and routers. The potential 4.8 Tbps throughput within the network core and the 140 Gbps connectivity to external networks offer countless possibilities to different network applications. The bandwidth of modern networks is growing much faster than the performance of the latest processors. This fact, combined with the CERN-specific configuration and topology, makes network behaviour analysis a very challenging and daunting task.

CINBAD in a nutshell

The CINBAD project addresses many aspects associated with the CERN network. First, it provides facilities for a better understanding and improved maintenance of the CERN network infrastructure. This includes analysing various network statistics and trends, traffic flows and protocol distributions. Other factors that might have an impact on the current network status or influence its evolution are also studied, such as connectivity, bottleneck and performance issues.

When we have learnt and understood the network behaviour, CINBAD can help to identify various abnormalities and determine their causes. Because there are many factors that can be used to describe the network status, anomaly definition is also very domain specific: it includes network infrastructure misuse, violation of a local network security policy and device misconfiguration. In addition, the expected network behaviour never remains static, because it can vary with the time of day, the number of users connected and the network services deployed. As a consequence, anomalies are not easy to detect.

Network sniffing

To acquire knowledge about the network status and behaviour, CINBAD collects and analyses data from numerous sources. Alarms from different network monitoring systems, logs from network services such as the Domain Name System (DNS) and the Dynamic Host Configuration Protocol (DHCP), user feedback, etc – all of these constitute a solid base of information. A naive approach might be to look at all of the packets flying over the CERN network. However, if we did this we would need to analyse even more data than the LHC could generate: the LHC data are only a subset of the total data crossing these links.

CINBAD overcomes this issue by applying statistical analysis and using sFlow, a technology for monitoring high-speed switched networks that provides randomly sampled packets from the network traffic. The information that we collect is based on the traffic from around 1000 switches and routers and gives a representative sample of the CERN network traffic, with more than 3 terabytes of data per month.
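Because sFlow forwards only one packet out of every N seen on an interface, traffic volumes have to be estimated by scaling the sampled counts back up. The sketch below illustrates this idea in Python; the 1-in-2048 sampling rate and the sample values are purely illustrative assumptions, not CINBAD's actual configuration.

import math

def estimate_traffic(sampled_frame_bytes, sampling_rate):
    """Scale randomly sampled frame sizes up to an estimate of the total
    traffic on a link, with a rough relative error bound.

    sampled_frame_bytes : frame lengths (bytes) taken from sFlow samples
    sampling_rate       : N, meaning one packet in N was sampled
    """
    n = len(sampled_frame_bytes)
    if n == 0:
        return 0, 0, None
    # Each sample stands for roughly N packets, so totals scale by N.
    est_packets = n * sampling_rate
    est_bytes = sum(sampled_frame_bytes) * sampling_rate
    # For random 1-in-N sampling the relative error on the packet count
    # is about 1/sqrt(n), so a few thousand samples already give
    # estimates that are good to a few per cent.
    rel_error = 1.0 / math.sqrt(n)
    return est_packets, est_bytes, rel_error

# Example with made-up numbers: 4000 sampled frames at 1-in-2048 sampling.
samples = [64, 1500, 576, 1500] * 1000
pkts, nbytes, err = estimate_traffic(samples, 2048)
print(f"~{pkts} packets, ~{nbytes / 1e9:.2f} GB, +/- {err:.1%}")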
The multistage collection system was designed and implemented in consultation with experts from the LHC experiments and Oracle, to benefit from their data-analysis and storage experience. The system has now been up and running for more than a year (figure 1).

Network operation enhancements

The field of network monitoring and planning can greatly benefit from the CINBAD activities. We provide tools and data that simplify the operation and problem-diagnosing process. In addition, our statistics help in understanding the network evolution and design.

Fig. 1. The CINBAD sFlow data collector receives and processes the CERN network traffic. [Diagram: sFlow datagrams from the network devices pass through a collector and a redundant collector, then through level I processing, level I disk storage, level II raw storage and level II and level III processing of the unpacked and aggregated data into the CINBAD DB; a configurator sets the rules for analysis (live tcpdump, configuration via fingerprints) and supplies data for adjusting the sFlow configuration via SNMP.]

A very basic piece of information that is of interest for network operations is knowledge about the host's activity. CINBAD is able to provide detailed statistics about the traffic sent and received by a given host; this facilitates inference about the nature of the traffic on a given outlet/port and can thus identify the connected machine. This information could also be used to diagnose routing problems by looking at all of the packets outbound or inbound to a particular host.

CINBAD is also able to provide information about the traffic at CERN. The sampled data collected by the project are sufficient to obtain the switching/routing/transport protocol information as well as information about the application data. This provides valuable input for an understanding of the current network behaviour. Here the CINBAD team uses descriptive statistics. The potential set of metrics that we can provide to characterize the traffic at CERN is very extensive, and specific needs are currently being discussed. For example, we can enumerate protocol-type distributions, packet-size distributions, etc. Depending on the requirements, these statistics can be tailored even further.

A top-n list is another form of network summary that might be of interest. Such lists would allow the identification of the most popular application servers, either inside or outside CERN. Although this information might be available on each individual CERN server, CINBAD provides the possibility to collect these statistics for all servers of a given type, whether or not they are centrally managed by the IT Department. This information may be of value to both network engineers and application-server administrators.
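To give a concrete flavour of these descriptive statistics, the sketch below aggregates decoded packet samples into a protocol-type distribution and a top-n list of the busiest destination servers. The record layout (dictionaries with "protocol", "dst" and "bytes" fields) is an assumption made for the example, not the format CINBAD actually uses.

from collections import Counter

def traffic_summary(records, top_n=10):
    """Build a protocol-type distribution and a top-n list of destination
    hosts from decoded packet samples.

    records : iterable of dicts such as
              {"protocol": "tcp/443", "dst": "webserver-a", "bytes": 1500}
              (an assumed layout, for illustration only)
    """
    proto_bytes = Counter()
    dst_bytes = Counter()
    total = 0
    for rec in records:
        size = rec["bytes"]
        total += size
        proto_bytes[rec["protocol"]] += size
        dst_bytes[rec["dst"]] += size
    if total == 0:
        return {}, []
    # Protocol-type distribution as a fraction of the sampled volume.
    distribution = {proto: size / total for proto, size in proto_bytes.items()}
    # Top-n list of the most popular servers by sampled volume.
    return distribution, dst_bytes.most_common(top_n)

# Tiny illustrative input; real input would be millions of samples per day.
records = [
    {"protocol": "tcp/443", "dst": "webserver-a", "bytes": 1500},
    {"protocol": "udp/53",  "dst": "dns-1",       "bytes": 120},
    {"protocol": "tcp/443", "dst": "webserver-a", "bytes": 1500},
]
dist, top = traffic_summary(records, top_n=5)
print(dist)
print(top)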
These statistics can also be useful for network design and provisioning. The CINBAD project can provide valuable information about the nature of the traffic on the links. These statistics can also be used to detect the trunks with potential bottlenecks. This information […]

[…] (if no-one can get to it, no-one can harm it). Nowadays we cannot avoid communicating with others, and therefore we expose our machine to outside threats. Although CERN's centrally managed desktops have up-to-date anti-virus software and firewalls, this does not guarantee that our machines and data are shielded from attacks. These tools are usually designed to detect known patterns (signatures), and there are also other machines (unmanaged desktops, PDAs, etc) connected to the CERN network that might be less protected. Currently, detailed analysis is only performed at critical points on the network (the firewall and the gates between network domains). The CINBAD team has been investigating various data-analysis approaches that could overcome this limitation. These studies can be categorized into two main domains: statistical and signature-based analysis. The former depends on detecting deviations from normal network behaviour, while the latter uses existing problem signatures and matches them […]

[…] would not scale. A second approach is to build various network profiles by learning from the past. The selection of robust metrics that are resistant to data randomness plays an important role in characterizing the expected network behaviour. Once these normal profiles are well established, the statistical approach can detect new and unknown anomalies.

The CINBAD project combines the statistical approach with signature-based analysis to benefit from the synergy of the two techniques. While the latter provides the detection system with a fast and reliable detection rate, the former is used to detect the unknown anomalies and to produce new signatures. The CINBAD team constantly monitors both the campus and internet traffic using this method. This has already led to the identification of various anomalies, e.g. DNS abuse, p2p applications, rogue DHCP servers, worms, trojans, unauthorized wireless base stations, etc. Some of these findings have resulted in […]
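How the two detection techniques complement each other can be pictured with a small sketch: a list of known-bad signatures gives fast, reliable matches, while a robust per-hour baseline learned from past traffic (here the median and the median absolute deviation, chosen as an example of a metric resistant to data randomness) flags deviations that no signature covers yet. The signature list, record format and threshold below are assumptions made for the example, not CINBAD's actual rules.

import statistics

# Illustrative, made-up signature list: substrings of known-bad payloads.
SIGNATURES = {"known-worm-beacon", "rogue-dhcp-offer"}

def build_profile(history):
    """Learn a robust per-hour baseline from past traffic volumes.

    history : {hour_of_day: [byte counts seen at that hour in past weeks]}
    Returns {hour_of_day: (median, median absolute deviation)}.
    """
    profile = {}
    for hour, counts in history.items():
        med = statistics.median(counts)
        mad = statistics.median(abs(c - med) for c in counts)
        profile[hour] = (med, mad)
    return profile

def classify(sample_payloads, hour, observed_bytes, profile, k=6):
    """Combine signature matching with a statistical baseline check."""
    # Signature-based analysis: fast and reliable for known problems.
    for payload in sample_payloads:
        for sig in SIGNATURES:
            if sig in payload:
                return f"signature match: {sig}"
    # Statistical analysis: flag deviations from the learned profile;
    # such deviations can reveal unknown anomalies and seed new signatures.
    med, mad = profile[hour]
    if abs(observed_bytes - med) > k * max(mad, 1):
        return "statistical anomaly: traffic deviates from the normal profile"
    return "normal"

In practice the metrics, thresholds and signature formats would be tuned per service and per link, which is exactly the kind of domain-specific anomaly definition discussed above.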