CERNCOURIER

IN FOCUS COMPUTING

A retrospective of 60 years’ coverage of computing technology cerncourier.com 2019

COMPUTERS, WHY? 1972: CERN’s Lew Kowarski on how computers worked and why they were needed

COMPUTING IN HIGH ENERGY PHYSICS 1989: CHEP conference considered the data challenges of the LEP era

WEAVING A BETTER TOMORROW 1999: a snapshot of the web as it was beginning to take off

THE CERN OPENLAB 2003: Grid computing was at the heart of CERN’s new industry partnership


IN THIS ISSUE

FROM THE EDITOR

Welcome to this retrospective supplement devoted to computing – the fourth in a series celebrating CERN Courier’s 60th anniversary. The Courier grew up during the computing revolution. CERN’s first computer, the Mercury, which had less computational ability than an average pocket calculator has today, was installed in 1958, and early issues of the magazine carried helpful articles explaining how computers work and why they are important. As the reproduced articles in this issue show, it did not take long for computing to become a pillar of high-energy physics.

The field has inspired many advances, most famously the web and the Grid, and must now pursue the most advanced technology developed elsewhere to meet the demands of future experiments. In our special foreword, software specialist Graeme Stewart sets out the need for physicists to prepare for a deluge of data in a fast-evolving technological and commercial landscape.

All of this year’s special supplements, including those devoted to detector, vacuum and magnet technology, can be read at cerncourier.com under the section “In Focus”.

Matthew Chalmers

Big impact An early chip (p19).
Data shift CERN’s computing centre in 1998, with banks of computing arrays for the major experiments (p25).

SPECIAL FOREWORD 5
CERN computing guru Graeme Stewart describes the immense task of taming future data volumes in high-energy physics.

COMPUTERS AT CERN 9
1964: with CERN about to be equipped with the most powerful computer in Europe, the data handling division considered the challenges that it faced.

COMPUTERS, WHY? 13
1972: in the year Intel’s 8008 was launched and the compact disc invented, CERN’s Lew Kowarski explained why computers were here to stay.

WEAVING A BETTER TOMORROW 16
1999: at 10 years old, the web was only just beginning to fulfil its potential.

THE MICROPROCESSOR BOOM 19
1979: this prescient article described how the microprocessor would soon become a routine part of the high-energy physicist’s toolkit.

COMPUTING IN HIGH ENERGY PHYSICS 21
1989: the Computing in High Energy Physics conference weighed up the challenges of analysing data from LEP.

A MAJOR SHIFT IN OUTLOOK 25
2001: Ben Segal described how distributed boxes took over from CERN’s all-powerful IBM and Cray mainframe workhorses.

THE CERN OPENLAB 27
2003: Grid computing was the theme of this feature about a new model for partnership between CERN and industry.

CERN’S ULTIMATE ACT OF OPENNESS 30
2019: the seed that led CERN to relinquish ownership of the web in 1993 was planted when the laboratory formally came into being.

Editor Matthew Chalmers
Associate editor Mark Rayner
Production editor Ruth Leopold
Art direction Frédérique Swist
Advisory board Philippe Bloch, Roger Forty, Mike Lamont, Matthew McCullough, Peter Jenni, Christine Sutton, Claude Amsler

Produced for CERN by IOP Publishing Ltd, Temple Circus, Temple Way, Bristol BS1 6HG, UK. Tel +44 (0)117 929 7481.
Head of Media Jo Allen
Head of Media Business Development Ed Jost
Advertising sales Tom Houlden, Chris Thomas
Advertisement production Katie Graham
Marketing and circulation Laura Gillham
Advertising Tel +44 (0)117 930 1026 or +44 (0)117 930 1164; e-mail [email protected]

Published by CERN, 1211 Geneva 23, Switzerland. Tel +41 (0) 22 767 61 11. E-mail [email protected]
Printed by Warners (Midlands) plc, Bourne, Lincolnshire, UK
General distribution [email protected]
© 2019 CERN


FOREWORD

Adapting to exascale computing

CERN’s Graeme Stewart surveys six decades of computing milestones in high-energy physics and describes the immense challenges in taming the data volumes from future experiments in a rapidly changing technological landscape.

Ever-evolving The CERN data centre, photographed in 2016. (S Bennett/CERN)

Graeme Stewart is a software specialist in the CERN EP-SFT group. He is a member of the ATLAS experiment and coordinator of the High-Energy Physics Software Foundation.

It is impossible to envisage high-energy physics without its foundation of microprocessor technology, software and computing. Almost as soon as CERN was founded the first contract to provide a computer was signed, but it took manufacturer Ferranti more than two years to deliver “Mercury”, our first valve-based behemoth, in 1958. So early did this machine arrive that the venerable FORTRAN language had yet to be invented! A team of about 10 people was required for operations and the I/O system was already a bottleneck. It was not long before faster and more capable machines were available at the lab. By 1963, an IBM 7090 based on transistor technology was available with a FORTRAN compiler and tape storage. This machine could analyse 300,000 frames of spark-chamber data – a big early success. By the 1970s, computers were important enough that CERN hosted its first Computing and Data Handling School. It was clear that computers were here to stay.

By the time of the LEP era in the late 1980s, CERN hosted multiple large mainframes. Minicomputers, to be used by individuals or small teams, had become feasible. DEC VAX systems were a big step forward in power, reliability and usability, and their operating system, VMS, is still talked of warmly by older colleagues in the field. Even more economical machines, personal computers (PCs), were also reaching a threshold of having enough computing power to be useful to physicists. Moore’s law, which predicted the doubling of transistor densities every two years, was well established, and PCs were riding this technological wave. More transistors meant more capable computers, and every time transistors got smaller, clock speeds could be ramped up. It was a golden age where more advanced machines, running ever faster, gave us an exponential increase in computing power.

Key also to the computing revolution, alongside the hardware, was the growth of open-source software. The GNU project had produced many utilities that could be used by hackers and coders on which to base their own software. With the start of the Linux project to provide a kernel, humble PCs became increasingly capable machines for scientific computing. Around the same time, Tim Berners-Lee’s proposal for the World Wide Web, which began as a tool for connecting information for CERN scientists, started to take off. CERN realised the value in releasing the web as an open standard and in doing so enabled a success that today connects almost the entire planet.

LHC computing
This interconnected world was one of the cornerstones of the computing that was envisaged for the Large Hadron Collider (LHC). Mainframes were not enough, nor were local clusters. What the LHC needed was a worldwide system of interconnected computing systems: the Worldwide LHC Computing Grid (WLCG). Not only would information need to be transferred, but huge amounts of data and millions of computer jobs would need to be moved and executed, all with a reliability that would support the LHC’s physics programme. A large investment in brand new grid technologies was undertaken, and software engineers and physicists in the experiments had to develop, deploy and operate a new grid system utterly unlike anything that had gone before. Despite rapid progress in computing power, storage space and networking, it was extremely hard to make a reliable, working distributed system for particle physics out of these pieces. Yet we achieved this incredible task. During the past decade, thousands of physics results from the four LHC experiments, including the Higgs-boson discovery, were enabled by the billions of jobs executed and the petabytes of data shipped around the world.

The software that was developed to support the LHC is equally impressive. The community had made a wholesale migration from the LEP FORTRAN era to C++, and millions of lines of code were developed. Huge software efforts in every experiment produced frameworks that managed data taking and reconstruction of raw events to analysis data.




Close encounters A simulated HL-LHC collision event in an upgraded ATLAS detector, which has an average of around 200 collisions per particle bunch crossing. (ATLAS/CERN)

In simulation, the Geant4 toolkit enabled the experiments to begin data-taking at the LHC with a fantastic level of understanding of the extraordinarily complex detectors, enabling commissioning to take place at a remarkable rate. The common ROOT foundational libraries and analysis environment allowed physicists access to the billions of events that the LHC supplied and to extract the physics from them successfully at previously unheard-of scales.

Changes in the wider world
While physicists were busy preparing for the LHC, the web became a pervasive part of people’s lives. Internet superpowers like Google, Amazon and Facebook grew up as the LHC was being readied, and this changed the position of particle physics in the computing landscape. Where particle physics had once been a leading player in software and hardware, enjoying good terms and some generous discounts, we found ourselves increasingly dwarfed by these other players. Our data volumes, while the biggest in science, didn’t look so large next to Google; the processing power we needed, more than we had ever used before, was small beside Amazon; and our data centres, though growing, were easily outstripped by Facebook.

Technology, too, started to shift. Since around 2005, Moore’s law, while still largely holding, has no longer been accompanied by increases in CPU clock speeds. Programs that ran in a serial mode on a single CPU core therefore started to become constrained in their performance. Instead, performance gains would come from concurrent execution on multiple threads or from using vectorised maths, rather than from faster cores. Experiments adapted by executing more tasks in parallel – from simply running more jobs at the same time to adopting multi-process and multi-threaded processing models. This post hoc parallelism was often extremely difficult because the code and frameworks written for the LHC had assumed a serial execution model.

The barriers being discovered for CPUs also caused hardware engineers to rethink how to exploit CMOS technology for processors. The past decade has witnessed the rise of the graphics processing unit (GPU) as an alternative way to exploit transistors on silicon. GPUs run with a different execution model: much more of the silicon is devoted to floating-point calculations, and there are many more processing cores, but each core is smaller and less powerful than a CPU. To utilise such devices effectively, algorithms often have to be entirely rethought and data layouts have to be redesigned. Much of the convenient, but slow, abstraction power of C++ has to be given up in favour of more explicit code and simpler layouts. However, this rapid evolution poses other problems for the code long term. There is no single way to programme a GPU, and vendors’ toolkits are usually quite specific to their hardware.

All of this would be less important were it the case that the LHC experiments were standing still, but nothing could be further from the truth. For Run 3 of the LHC, scheduled to start in 2021, the ALICE and LHCb collaborations are installing new detectors and preparing to take massively more data than they did up to now. Hardware triggers are being dropped in favour of full software processing systems and continuous data processing. The high-luminosity upgrade of the LHC for Run 4, from 2026, will be accompanied by new detector systems for ATLAS and CMS, much higher trigger rates and greatly increased event complexity. All of this physics needs to be supported by a radical evolution of software and computing systems, and in a more challenging sociological and technological environment. The LHC will also not be the only scientific big player in the future. Facilities such as DUNE, FAIR, SKA and LSST will come online and have to handle as much, if not more, data than at CERN and in the WLCG. That is both a challenge and an opportunity to work with new scientific partners in the era of exascale science.

There is one solution that we know will not work: simply scaling up the money spent on software and computing. We will need to live with flat budgets, so if the event rate of an experiment increases by a factor of 10 then we have a budget per event that just shrank by the same amount! Recognising this, the HEP Software Foundation (HSF) was invited by the WLCG in 2016 to produce a roadmap for how to evolve software and computing in the 2020s – resulting in a community white paper supported by hundreds of experts in many institutions worldwide (CERN Courier April 2018 p38). In parallel, CERN openlab – a public–private partnership through which CERN collaborates with leading ICT companies and other research organisations – published a white paper setting out specific challenges that are ripe for tackling through collaborative R&D projects with leading commercial partners.

Facing the data onslaught
Since the white paper was published, the HSF and the LHC-experiment collaborations have worked hard to tackle the challenges it lays out. Understanding how event generators can be best configured to get good physics at minimum cost is a major focus, while efforts to get simulation speed-ups from classical fast techniques, as well as new machine-learning approaches, have intensified. Reconstruction algorithms have been reworked to take advantage of GPUs and accelerators, and are being seriously considered for Run 3 by CMS and LHCb (as ALICE makes even more use of GPUs since their successful deployment in Run 2). In the analysis domain, the core of ROOT is being reworked to be faster and also easier for analysts to work with. Much inspiration is taken from the Python ecosystem, using Jupyter notebooks and services like SWAN.

These developments are firmly rooted in the new distributed models of software development based on GitHub or GitLab, with worldwide development communities, hackathons and social coding. Open source is also vital, and all of the LHC experiments have now opened up their software. In the computing domain there is intense R&D into improving data management and access, and the ATLAS-developed Rucio data management system is being adopted by a wide range of other HEP experiments and many astronomy communities. Many of these developments got a shot in the arm from the IRIS–HEP project in the US; other European initiatives, such as IRIS in the UK and the IDT-UM German project, are helping, though much more remains to be done.

All this sets us on a good path for the future but, still, the problems remain significant, the implementation of solutions is difficult and the level of uncertainty is high. Looking back to the first computers at CERN and then imagining the same stretch of time into the future, predictions are next to impossible. Disruptive technology, like quantum computing, might even entirely revolutionise the field. However, if there is one thing that we can be sure of, it’s that the next decades of software and computing at CERN will very likely be as interesting and surprising as the ones already passed.

Delivering data Speedup of the ALICE TPC tracking on GPUs, normalised to a single CPU core, based on lead–lead collision data collected in 2015 (arXiv:1905.05515). The curves compare HLT GPU tracking vs HLT CPU tracking (AMD S9000 vs Xeon 2697, 2.7 GHz; NVIDIA GTX 1080 vs i7 6700K, 4.2 GHz) and HLT CPU tracking vs offline tracking (Xeon 2697, 2.7 GHz), as a function of the number of TPC clusters.



ADVERTISING FEATURE

When SCADA is the right solution

Throughout Cosylab’s work in developing supervisory control and data acquisition (SCADA) solutions, we have often noticed that our customers lack an understanding of the role of SCADA in a system when we are asked to provide a solution. This poses a risk when it comes to whether or not our customers will find our solution useful for their work. As it is in our best interests to provide a useful resolution to our customers, and since this is also what we strive for, we would like to share what we see as the role of SCADA. In cases when SCADA really is the right solution for the given problem, a number of questions must be asked in order to design a successful SCADA solution. Our ultimate aim is to bring clarity to those thinking about whether they need SCADA or not.

What is a SCADA system?
SCADA systems are used all over the world for supervisory control and data acquisition. Understanding a useful SCADA system, and how to make one, begins with an understanding of the role of SCADA in a system. The need for SCADA evolved over time, after agrarian and handicraft economies shifted rapidly to industrial and machine-manufacturing economies during the Industrial Revolution in the 18th century. Initially, machines were developed that could perform repeatable processes faster, with more consistency and with greater precision than people. Much of this was also about eliminating human error. While this first step in the Industrial Revolution replaced many people with machines for work that was previously done by hand, it opened up questions about whether it was possible to completely eliminate the human factor from the whole process, including the management of connecting all the different aspects of the manufacturing process, as well as the hands-on aspects.

For example, there was a time when mechanised processes were made up of multiple machines operated by many people, including at least one who would be the supervisor of the whole process. The process supervisor was there to make sure that the system worked accurately, and to ensure that this happened the machine operators had to report any problems that could hinder the operation to the supervisor. As the supervisor had an overview of the whole process, they could make the most appropriate decisions to keep it running. This is the purpose of SCADA. It centralises control, congregates and exposes data to other system components on higher levels, and attempts to minimise human interaction with process control.

Determining the right SCADA solution
Only after we have decided whether SCADA is the correct solution to a problem can we start asking ourselves what we want SCADA to do. This can be achieved through the following set of questions:

1. What does the system need to be supervised and acted upon (by SCADA) to ensure that the process will do what we want it to do?
   o What are the building blocks of the process that need supervision, and what exactly does supervision mean for every single building block?
   o What are the supervision tasks that are common to all the building blocks of the process?

2. What are the possible events in the system that would negatively impact what we want the system to do (e.g. machine failure, software bugs, human error), and which of these need to be handled by SCADA?
   o How can these events be handled to avoid negative impacts on the system?
   o Is it possible to detect these events?
   o How will SCADA receive notifications about these events?
   o Are there any preventative actions that we can take to stop these events from happening?

3. What supervisory information is required to enable improvements to the system?
   o Here is where different types of SCADA statistics come in, including data mining and key performance indicators.

4. Which parts of the work defined in the first three questions can be automated; which parts do we want to automate; and which parts require human intervention/supervision?
   o In the case of a fully automated process, you would need a graphical user interface with a single start button.

In conclusion
The key to a useful SCADA system starts with a conversation with the customer about what they need, and a clear conceptual design guided by the above questions. Only after this process has been completed does it make sense to choose a specific SCADA technology (for example, WinCC OA, EPICS or TANGO) that best fits the job.

Author Matevž Žugelj, Project Manager, WinCC OA Expert
Cosylab, Control System Laboratory
Tel +386 1 477 66 76 • Email [email protected] • Web www.cosylab.com

COMPUTERS AT CERN

With CERN about to be equipped with the most powerful computer in Europe, F Beck of the data handling division considered the problems that it faced.

The popular picture of a biologist shows him wearing a white coat, peering through a microscope. If it is at all possible to draw a popular picture of an experimental high-energy physicist, he will probably be sitting at his desk and looking at his output from the computer. For the computer is increasingly becoming the tool by which raw experimental results are made intelligible to the physicist.

By one of those apparent strokes of luck that occur so often in the history of science, electronic digital computers became available to scientists at just that stage in the development of fundamental physics when further progress would otherwise have been barred by the lack of means to perform large-scale calculations. Analogue computers, which had been in existence for a number of decades previously, have the disadvantage for this work of limited accuracy and the more serious drawback that a more complicated calculation needs a more complicated machine. What was needed was the equivalent of an organized team of people, all operating desk calculating machines, so that a large problem in computation could be completed and checked in a reasonable time. So much was this need felt that in England, for example, before digital computers became generally available, there was a commercial organization supplying just such a service of hand computation.

A corner of the IBM 7090 computer room at CERN. At the back are five of the magnetic-tape units, and in front of them the main control desk. In the right foreground is the on-line printer, which provides instructions to the operators and information on the progress of the calculations. Also in the picture (left to right): Richard Milan, Lucio Gourdiole and Eric Swoboda. (91.10.63 CERN/PI)

At CERN the requirement for computing facilities in the Theory Division was at first largely satisfied by employing a calculating prodigy, Willem Klein. Mr. Klein is one of those rare people who combine a prodigious memory with a love of numbers, and it was some time before computers in their ever-increasing development were able to catch up with him. He is still with us in the Theory Division, giving valuable help to those who need a quick check calculation. It is of interest to note, however, that he has now added a knowledge of computer programming to his armoury of weapons for problem solution!

The first computer calculations made at CERN were also done in the very earliest days of the Organization. Even before October 1958, when the Ferranti Mercury computer was installed, computer work had been sent out to an English Electric Deuce in Teddington, an IBM 704 in Paris and Mercury computers at Harwell and Manchester. Much of this work was concerned with orbit calculations for the proton synchrotron, then being built.

We have now reached a stage at which there is hardly a division at CERN not using its share of the available computer time. On each occasion that a new beam is set up from the accelerator, computer programmes perform the necessary calculations in particle optics as a matter of routine; beam parameters are kept in check by statistical methods; hundreds of thousands of photographs from track-chamber experiments are ‘digitized’ and have kinematic and statistical calculations performed on them; the new technique of sonic spark chambers, for the filmless detection of particle tracks, uses the computer more directly. In addition, more than 80 physicists and engineers use the computer on their own account, writing programmes to solve various computational problems that arise in their day-to-day work.

There are now two computers at CERN, the original Ferranti Mercury and the IBM 7090, a transistorized and more powerful replacement for its predecessor, the IBM 709. The 7090, in spite of its great speed (about 100 000 multiplications per second!), is rapidly becoming overloaded and is to be replaced towards the end of this year by a CDC 6600, which at present is the most powerful computing system available in the world. It is hoped that this new machine will satisfy the computing needs of CERN for upwards of five years. The Mercury computer is now being used more and more as an experimental machine and there is, for instance, a direct connexion to it at present from a spark-chamber experiment. Calculations are performed and results returned to the experimental area immediately, giving great flexibility.

Part of the Mercury computer at CERN, now being used for tests with ‘on-line’ experiments, in which the measuring equipment is connected more or less directly to the computer, so that the results can be worked out as the experiment proceeds. (2.10.63 CERN/PI)

Track-chamber photographs
Among the biggest users of computer time are the various devices for converting the information on bubble-chamber and spark-chamber photographs, usually on 35-mm film, into a form in which the tracks of the particles can be fitted with curves and the entire kinematics of an event subsequently worked out. To this end, from the earliest days of CERN, IEPs (instruments for the evaluation of photographs) [rumour once had it that IEP stood for ‘instrument for the elimination of physicists’!] have been built and put into use. These instruments enable accurately measured co-ordinates of points on a track, together with certain identifying information, to be recorded on punched paper tape. Their disadvantage is that measuring is done manually, requires skill and, even with the best operator, is slow and prone to errors. The paper tapes produced have to be processed by computer programmes.

The electron storage ring, or beam-stacking model, required the writing of a programme that followed the motion of individual batches of particles during their acceleration and stacking in the ring. A previous study by the same group resulted in a series of programmes to examine the behaviour and stability of a proposed fixed-field, alternating-gradient stacking device. Various aspects of the performance of the linac (the linear accelerator that feeds the PS) have been studied and improved using the computer, and the later stages of the design of the PS itself involved a detailed computer simulation of the beam in the ring, including the various transverse or ‘betatron’ oscillations to which it is subjected.

There also exists a series of particle-optics programmes used for the design and setting up of particle beams, particularly the ‘separated’ beams producing particles of only one kind. These are ‘production’ calculations, in the sense that the programme is run with new parameters every time there is a major change in beam layout in any of the experimental halls.

The theorist, too, relies on the computer to confront mathematical models with data. Having devised a formula based on a novel theory, the physicist computes theoretical curves for some function that can be compared with experimental results.

Computer language
At first, a major bar to the use of computers for small, but important, calculations was the difficulty of programming them in their own special ‘language’ to solve the specific problem in hand. What is the use of a machine that can perform a particular calculation in a minute, the would-be user asks, if it will take a month to provide the programme for the calculation?

The Data Handling Division, which actually has CERN’s central computers under its charge, contains a number of professional programmers, mostly mathematicians by training. Some of these are responsible for the ‘systems programmes’, that is, the Fortran compiler and its associated supervisor programme. Some have the job of disseminating programming knowledge, helping individual users of the computer and writing programmes for people in special cases. Others are semi-permanently attached to various divisions, working on particular experiments as members of the team. A small number of mathematicians are also engaged in what might be called ‘specialist computer research’, covering such things as list-programming languages and methods of translating from one programme language into another. Such work might be expected to yield long-term profits by giving increases in computing power and efficiency.

As at all large computing installations, computer programmers at CERN do not operate the machine themselves. Data and programmes are submitted through a ‘reception office’ and the results are eventually available in a ‘computer output office’, leaving the handling and organizing of the computer work-load, and the operating of the machine, to specialist reception staff and computer operators in the Data Handling Division.

What of the future of computers at CERN? In a field as new as this, predictions are even more dangerous than in others, but it is clear that the arrival of the new computer at the end of the year will cause a great change in the way computers are used. Having ten peripheral processors, each of which is effectively an independent computer, the machine may have many pieces of equipment for data input and output attached to it ‘on-line’. The old concept that a computer waiting for the arrival of data is standing idle, and that this is therefore wasteful and expensive, need no longer be true. With the new system, a computer that is waiting for new data for one problem is never idle, but continues with calculations on others. Every moment of its working day is gainfully employed on one or other of the many problems it is solving in parallel. Even so, there will still be a need for a number of smaller computer installations forming part of particular experiments. As M. G. N. Hine, CERN’s Directorate Member for Applied Physics, pointed out at a recent conference, even with the growth of such facilities, the amount of computing time available may one day dictate the amount of experimental physics research done at CERN, in much the same way as the amount of accelerator time available dictates it now.

This article was adapted from text in CERN Courier vol. 4, June 1964, pp73–77.
Hine, CERN’s Directorate Member for Applied outer tests with ‘on-line’ e a rl ie s t d a y s o f C E R N, I E P s (i n s t r u m e nt s for t h e e v a lu at io n As at all large computing installations, computer pro- Physics, pointed out at a recent conference, even with the rogrers experiments, in of photographs) [rumour once had it that IEP stood for Computer language grammers at CERN do not operate the machine them- growth of such facilities, the amount of computing time t CERN o which the measuring ‘i n s t r u m e nt for t h e e l i m i n at io n o f p h y s ic i s t s’!] h a v e b e e n At first, a major bar to the use of computers for small, but selves. Data and programmes are submitted through a available may one day dictate the amount of experimental not oerte equipment is b u i lt a n d p ut i nt o u s e. T h e s e i n s t r u m e nt s e n a ble a c c u r at e l y important, calculations was the difficulty of program- ‘reception offi ce’ and the results are eventually available physics research done at CERN, in much the same way as the hine connected more or m e a s u r e d c o-ord i n at e s o f p oi nt s o n a t r a c k, t o g e t h e r w it h ming them in their own special ‘language’ to solve the in a ‘computer output offi ce’, leaving the handling and the amount of accelerator time available dictates it now.  theseles less directly to the certain identifying information, to be recorded on punched specific problem in hand. What is the use of a machine computer, so that the paper tape. Their disadvantage is that measuring is done t h at c a n p e r for m a p a r t ic u l a r c a lc u l at io n i n a m i nut e, t h e results can be m a nu a l l y, r e q u i r e s s k i l l a n d, e v e n w it h t h e b e s t o p e r at or, would-be-user asks, if it will take a month to provide the worked out as the i s s lo w a n d pr o n e t o e r r or s. T h e p a p e r t a p e s pr o d uc e d h a v e programme for the calculation? 
Given a hand calculating 300kW All Solid-State RF Amplifier accelerator run to be copied on to a magnetic tape, checking for various machine, a pencil and paper, and a quiet room, I can do it progresses. Gildaz possible er rors on t he w ay, a nd t he mag net ic t ape is t hen myself in three weeks! This valid argument limited the Model : CA500BW2-8085RP Auffret is at the further processed to provide in turn geometric, kinematic use of computers to two kinds of calculation: those too R&K Company Limited R&K provides the world’s most reliable high performance solid-state RF Power Amplifiers. control console, with and stat ist ical results. long to be performed by hand, and those that had to be AMPLIFIER DESIGN Mrs. Peggy Minor It was recognized at an early stage, both in Europe and carried out so often that the original effort of producing (left) and Mrs. Ursula i n t h e Un it e d St at e s, t h at for e x p e r i m e nt s d e m a n d i n g t h e the programme was justified. ・Consists of four (4) rack-mounted 75kW SSAs. Franceschi in the d ig it i z at ion of ver y la rge numb ers of pic t ures, for e x a m- T h i s s it u at io n w a s r e c t i fie d b y t h e u s e o f ‘pr o g r a m ming ・Able to deliver 300kW via 4-way power combiner. background. ple t h o s e r e q u i r i n g h i g h s t at i s t ic a l a c c u r a c y, s o m e m or e- languages’, which make it possible to express one’s prob- ・96-way radial combiner for final power combining automatic picture-reading equipment would be needed. lem in a form closely resembling that of mathematics. of 75kW SSA. T wo suc h dev ices are now com ing into use at CERN. One, Such languages, if defined rigorously enough, express ・Long-term & highly reliable operation achieved the Hough-Powell device (known as HPD), developed jointly the problem unambiguously, and they can be translated by protecting each final-stage transistor with built-in circulator at its output. 
by CERN, Brookhaven, Berkeley and the Rutherford Lab- automatically (by the computer) into the instructions for 96-way Radial Combiner Output Power Linearity ■Output Pulse oratory, is an electro-mechanical machine of high pre- a particular computer. Two such languages have been 86 Frequency = 500MHz, Po = 300kW 85 X:50μs / DIV Y:200mV / DIV 84 FWD LIMIT 83 cision, which still requires a few pilot measurements to used at CERN: Mercury Autocode, and Fortran. The use 82 81 80 79 be made manually, on a measuring table named ‘Milady’, of Mercury Autocode has recently been discontinued, but 78 77 76 when used for bubble-chamber pictures. It has already u nt i l a s h or t t i m e a g o m a n y p h y s ic i s t s u s e d b o t h l a n g u a g e s 75 74 73 72

been used in one experiment, for the direct processing of with great success to express their computational prob- Output Power (dBm) 71 Frequency 70 69 499MHz 68 500MHz 200 000 spark-chamber photographs (for which the pilot lems in a form directly comprehensible by a computer. 67 501MHz -15 -10 -5 0 5 10 m e a s u r e m e nt s a r e u n n e c e s s a r y). T h e o t h e r d e v ic e i s c a l le d Courses in the Fortran language, given both in English Input Power (dBm)

■Pulse Rise ■Pulse Fall ‘Luciole’, a faster, purely electronic machine, although of a n d F r e n c h, a r e h e ld r e g u l a rl y, a n d u s u a l l y l a s t a b o ut t h r e e Frequency = 500MHz, Po = 300kW Frequency = 500MHz, Po = 300kW X:200ns / DIV Y:200mV / DIV X:50ns / DIV Y:200mV / DIV lo w e r pr e c i s io n, s p e c i a l l y d e v e lo p e d at C E R N for d i g it i z i n g w e e k s. Suc h ‘c o m pi le r l a n g u a g e s’, a s t h e y a r e c a l le d, u s e d SPECIFICATION spark-chamber photographs. to be considered a rather inferior method for program- Frequency Range : 500MHz±1MHz W it h t h e s o n ic s p a r k c h a m b e r, t h e p o s it io n o f t h e s p a r k ming computers, as the translations obtained from them Output Power : 320kW @Peak, 300kW (min.) b e t w e e n e a c h p a i r o f pl at e s i s d e d uc e d f r o m t h e t i m e i nt e r- often used the machine at a low efficiency, but it is now *Each rack-mounted amplifier section individually delivers 75kW. vals between its occurrence and the detection of the sound recognized that the advantage of writing programmes (75kW/rack ×4 outputs) Pulse Width : 1μs~300μs (max.) b y e a c h o f fo u r m ic r o p h o n e s. A r r a y s o f s uc h d e v ic e s c a n b e in a language that can be translated mechanically for a Duty : 0~3% (max.) connected directly to the computer, thus dispensing with nu m b e r o f d i ffe r e nt c o m p ut e r s f a r o ut w e i g h s a s m a l l los R&K Company Limited Operation Mode : Class AB t he t a k i ng, de ve lopi ng a nd e x a m i nat ion of photog r aph s. in programme efficiency. 
www.rk-microwave.com AC Supply Input : AC200V±10%/3φ, 50/60Hz The study of dynamics of particles in magnetic and In the Theory Division, computer programmes are often 721-1 Maeda, Fuji-City, Shizuoka, 416-8577 Japan Size : (W)2240mm×(D)1300mm×(H)1510mm electric fields gives rise to another important family of written by individual theorists to check various mathe- Tel:+81-545-31-2600 Fax:+81-545-31-1600 Email:[email protected]
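The sonic spark-chamber readout described above amounts to acoustic trilateration: since the spark instant is known from the trigger, each microphone's arrival time gives a range, and the spark position follows from intersecting circles. A minimal modern sketch — the microphone layout, sound speed and least-squares solver here are illustrative assumptions, not the 1964 electronics:

```python
import math

SOUND_SPEED = 343.0  # m/s in air; illustrative only — the chamber gas differs

def locate_spark(mics, times, v=SOUND_SPEED):
    """Estimate the 2-D spark position from sound arrival times at known
    microphone positions, taking the spark (trigger) instant as t = 0.
    Each microphone i constrains the spark to a circle of radius v*t_i;
    subtracting the circle equations pairwise gives a linear system."""
    ranges = [v * t for t in times]
    (x0, y0), r0 = mics[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(mics[1:], ranges[1:]):
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append((xi**2 - x0**2) + (yi**2 - y0**2) - (ri**2 - r0**2))
    # Least squares via the 2x2 normal equations (fine at this size)
    s11 = sum(ax * ax for ax, _ in A)
    s12 = sum(ax * ay for ax, ay in A)
    s22 = sum(ay * ay for _, ay in A)
    t1 = sum(ax * bi for (ax, _), bi in zip(A, b))
    t2 = sum(ay * bi for (_, ay), bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Four microphones at the corners of a 1 m x 1 m plate (invented layout)
mics = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
spark = (0.3, 0.7)
times = [math.dist(spark, m) / SOUND_SPEED for m in mics]
print(locate_spark(mics, times))  # ≈ (0.3, 0.7)
```

Subtracting the circle equations pairwise removes the quadratic terms, which is why a plain linear solve suffices.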

DECEMBER 2019
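The parallel-working scheme the 1964 article anticipates — the machine turning to other problems while one waits for its data — is what later came to be called multiprogramming. A toy back-of-envelope sketch (all job figures invented) of why overlapping waits with computation pays:

```python
# Each (invented) job needs some compute time and some time waiting for
# its input data; figures in milliseconds.
jobs = [(30, 50), (40, 20), (25, 45)]  # (compute_ms, io_wait_ms)

# Old scheme: the computer stands idle through every wait.
serial = sum(compute + wait for compute, wait in jobs)

# Multiprogrammed scheme: while one job waits, the machine computes on
# another, so the elapsed time is bounded below by the total compute
# (or by the longest single job, if one job dominates).
total_compute = sum(compute for compute, _ in jobs)
longest_job = max(compute + wait for compute, wait in jobs)
overlapped = max(total_compute, longest_job)

print(serial, overlapped)  # → 210 95
```

The idealisation assumes every wait can be covered by some other job's computation; real schedulers only approach this bound.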


COMPUTERS, WHY?

Kicking off a special issue on computing in March 1972 – the year Intel's 8008 processor was launched and the compact disc invented – CERN's Lew Kowarski explained why computers were here to stay.

CERN is the favourite showpiece of international co-operation in advanced scientific research. The public at large is, by now, quite used to the paradox of CERN's outstandingly large-scale electromagnetic machines (accelerators) being needed to investigate outstandingly small-scale physical phenomena. A visitor finds it natural that this, largest-in-Europe, centre of particle research should possess the largest, most complex and costly accelerating apparatus. But when told that CERN is also the home of the biggest European collection of computers, the layman may wonder: why is it precisely in this branch of knowledge that there is so much to compute? Some sciences such as meteorology and demography appear to rely quite naturally on enormously vast sets of numerical data, on their collection and manipulation. But high energy physics, not so long ago, was chiefly concerned with its zoo of 'strange particles' which were hunted and photographed like so many rare animals. This kind of preoccupation seems hardly consistent with the need for the most powerful 'number crunchers'.

Perplexities of this sort may arise if we pay too much attention to the (still quite recent) beginnings of the modern computer and to its very name. Electronic digital computers did originate in direct descent from mechanical arithmetic calculators; yet their main function today is far more significant and universal than that suggested by the word 'computer'. The French term 'ordinateur' or the Italian 'elaboratore' are better suited to the present situation and this requires some explanation.

What is a computer?
When, some forty years ago, the first attempts were made to replace number-bearing cogwheels and electro-mechanical relays by electronic circuits, it was quickly noticed that, not only were the numbers easier to handle if expressed in binary notation (as strings of zeros and ones) but also that the familiar arithmetical operations could be presented as combinations of two-way (yes or no) logical alternatives. It took some time to realize that a machine capable of accepting an array of binary-coded numbers, together with binary-coded instructions of what to do with them (stored program) and of producing a similarly coded result, would also be ready to take in any kind of coded information, to process it through a prescribed chain of logical operations and to produce a structured set of yes-or-no conclusions.

Today a digital computer is no longer a machine primarily intended for performing numerical calculations; it is more often used for non-numerical operations such as sorting, matching, retrieval, construction of patterns and making decisions which it can implement even without any human intervention if it is directly connected to a correspondingly structured set of open-or-closed switches.

Automatic 'black boxes' capable of producing a limited choice of responses to a limited variety of input (for example, vending machines or dial telephones) were known before; their discriminating and logical capabilities had to be embodied in their rigid internal hardware. In comparison, the computer can be seen as a universally versatile black box, whose hardware responds to any sequence of coded instructions. The complication and the ingenuity are then largely transferred into the writing of the program.

The new black box became virtually as versatile as the human brain; at the same time it offered the advantages of enormously greater speed, freedom from error and the ability to handle, in a single operation, any desired

[Photo, CERN 181.2.72: CERN's new central computer, a CDC 7600, was flown into Geneva airport mid-February and can be seen being unloaded from the plane and being wheeled across the tarmac.]

[This article was adapted from text in a special issue of CERN Courier devoted to computing, vol. 12, March 1972, pp59–61.]



volume of incoming data and of ramified logical chains of instructions. In this latter respect the limit appears to be set only by the size (and therefore the cost) of the computer, i.e. by the total number of circuit elements which are brought together and interconnected in the same computing assembly.

High energy physics as a privileged user
We are only beginning to discover and explore the new ways of acquiring scientific knowledge which have been opened by the advent of computers. During the first two decades of this exploration, that is since 1950, particle physics happened to be the most richly endowed domain of basic research. Secure in their ability to pay, high energy physicists were not slow to recognize those features of their science which put them in the forefront among the potential users of the computing hardware and software in all of their numerical and non-numerical capabilities. The three most relevant characteristics are as follows:

1. Remoteness from the human scale of natural phenomena
Each individual 'event' involving sub-nuclear particles takes place on a scale of space, time and momentum so infinitesimal that it can be made perceptible to human senses only through a lengthy and distorting chain of larger-scale physical processes serving as amplifiers. The raw data supplied by a high energy experiment have hardly any direct physical meaning; they have to be sorted out, interpreted and re-calculated before the experimenter can see whether they make any sense at all – and this means that the processing has to be performed, if possible, in the 'real time' of the experiment in progress, or at any rate at a speed only a computer can supply.

2. The rate and mode of production of physical data
Accelerating and detecting equipment is very costly and often unique; there is a considerable pressure from the user community and from the governments who invest in this equipment that it should not be allowed to stand idle. As a result, events are produced at a rate far surpassing the ability of any human team to observe them on the spot. They have to be recorded (often with help from a computer) and the records have to be scanned and sifted – a task which, nowadays, is usually left to computers because of its sheer volume. In this way, experiments in which a prolonged 'run' produces a sequence of mostly trivial events, with relatively few significant ones mixed in at random, become possible without wasting the valuable time of competent human examiners.

3. High statistics experiments
As the high energy physics community became used to computer-aided processing of events, it became possible to perform experiments whose physical meaning resided in a whole population of events, rather than in each taken singly. In this case the need grew from an awareness of having the means to satisfy the need; a similar evolution may yet occur in other sciences (e.g. those dealing with the environment), following the currents of public attention and possibly de-throning our physics from pre-eminence in scientific computation.

Modes of application
In order to stress here our main point, which is the versatility of the modern computer and the diversity of its applications in a single branch of physical research, we shall classify all the ways in which the 'universal black box' can be put to use in CERN's current work into eight 'modes of application' (roughly corresponding to the list of 'methodologies' adopted in 1968 by the U.S. Association for Computing Machinery):

1. Numerical mathematics
This mode is the classical domain of the 'computer used as a computer' either for purely arithmetic purposes or for more sophisticated tasks such as the calculation of less common functions or the numerical solution of differential and integral equations. Such uses are frequent in practically every phase of high energy physics work, from accelerator design to theoretical physics, including such contributions to experimentation as the kinematic analysis of particle tracks and statistical deductions from a multitude of observed events.

2. Data processing
Counting and measuring devices used for the detection of particles produce a flow of data which have to be recorded, sorted and otherwise handled according to appropriate procedures. Between the stage of the impact of a fast-moving particle on a sensing device and that of a numerical result available for a mathematical computation, data processing may be a complex operation requiring its own hardware, software and sometimes a separate computer.

3. Symbolic calculations
Elementary logical operations which underlie the computers' basic capabilities are applicable to all sorts of operands such as those occurring in algebra, calculus, graph theory, etc. High-level computer languages such as LISP are becoming available to tackle this category of problems which, at CERN, is encountered mostly in theoretical physics but, in the future, may become relevant in many other domains such as apparatus design, analysis of track configurations, etc.

4. Computer graphics
Computers may be made to present their output in a pictorial form, usually on a cathode-ray screen. Graphic output is particularly suitable for quick communication with a human observer and intervener. Main applications at present are the study of mathematical functions for the purposes of theoretical physics, the design of beam handling systems and Cherenkov optics and statistical analysis of experimental results.

5. Simulation
Mathematical models expressing 'real world' situations may be presented in a computer-accessible form, comprising the initial data and a set of equations and rules which the modelled system is supposed to follow in its evolution. Such 'computer experiments' are valuable for advance testing of experimental set-ups and in many theoretical problems. Situations involving statistical distributions may require, for their computer simulation, the use of computer-generated random numbers during the calculation. This kind of simulation, known as the Monte-Carlo method, is widely used at CERN.

6. File management and retrieval
As a big organization, CERN has its share of necessary paper-work including administration (personnel, payroll, budgets, etc.), documentation (library and publications) and the storage of experimental records and results. Filing and retrieval of information tend nowadays to be computerized in practically every field of organized human activity; at CERN, these pedestrian applications add up to a non-negligible fraction of the total amount of computer use.

7. Pattern recognition
Mainly of importance in spark-chamber and bubble-chamber experiments – the reconstruction of physically coherent and meaningful tracks out of computed coordinates and track elements is performed by the computer according to programmed rules.

8. Process control
Computers can be made to follow any flow of material objects through a processing system by means of sensing devices which, at any moment, supply information on what is happening within the system and what is emerging from it. Instant analysis of this information by the computer may produce a 'recommendation of an adjustment' (such as closing a valve, modifying an applied voltage, etc.) which the computer itself may be able to implement. Automation of this kind is valuable when the response must be very quick and the logical chain between the input and the output is too complicated to be entrusted to any rigidly constructed automatic device. At CERN the material flow to be controlled is usually that of charged particles (in accelerators and beam transport systems) but the same approach is applicable in many domains of engineering, such as vacuum and cryogenics.

Centralization versus autonomy
The numerous computers available at CERN are of a great variety of sizes and degrees of autonomy, which reflects the diversity of their uses. No user likes to share his computer with any other user; yet some of his problems may require a computing system so large and costly, that he cannot expect it to be reserved for his exclusive benefit nor to be kept idle when he does not need it. The biggest computers available at CERN must perforce belong to a central service, accessible to the Laboratory as a whole. In recent years, the main equipment of this service has consisted of a CDC 6600 and CDC 6500. The recent arrival of a 7600 (coupled with a 6400) will multiply the centrally available computing power by a factor of about five.

For many applications, much smaller units of computing power are quite adequate. CERN possesses some 80 other computers of various sizes, some of them for use in situations where autonomy is essential (for example, as an integral part of an experimental set-up using electronic techniques or for process control in accelerating systems). In some applications there is need for practically continuous access to a smaller computer together with intermittent access to a larger one. A data-conveying link between the two may then become necessary.

Conclusion
The foregoing remarks are meant to give some idea of how the essential nature of the digital computer and that of high energy physics have blended to produce the present prominence of CERN as a centre of computational physics. The detailed questions of 'how' and 'what for' are treated in the other articles of this [special March 1972 issue of CERN Courier devoted to computing] concretely enough to show the way for similar developments in other branches of science. In this respect, as in many others, CERN's pioneering influence may transcend the Organization's basic function as a centre of research in high energy physics.

[Photo, CERN 374.2.72: Installation of the 7600 well advanced in the large hall of the new computer centre. Top left, the shape of the main-frame can be picked out. Then comes the 6400 computer, the large core memory which will be shared by the 6400 and 6500 (Extended Core Store) and, nearest to the camera, a number of controllers which connect peripheral equipment to the 6400. On the right and in the foreground are the operator displays.]
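The Monte-Carlo method named under 'Simulation' is unchanged in principle today: feed computer-generated random numbers into a model and read statistics off the output. The sketch below is the textbook illustration (estimating π), not a CERN program, and shows only the bare mechanism:

```python
import random

def estimate_pi(n_samples, seed=12345):
    """Monte-Carlo estimate of pi: throw random points into the unit
    square and count the fraction landing inside the quarter circle,
    whose area is pi/4."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # ≈ 3.14
```

The statistical error falls only as 1/√N, which is one reason such calculations consumed so much of the machine time described in these articles.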



WEAVING A BETTER TOMORROW: THE FUTURE OF THE WEB

The Web was 10 years old and only just beginning to fulfil its potential, reported CERN's James Gillies from the eighth World Wide Web conference in Toronto in 1999.

[This article was adapted from text in CERN Courier vol. 39, September 1999, pp26–28.]

"It was a weird conference," said Ethernet inventor and self-styled technology pundit, Bob Metcalfe, summing up the eighth World Wide Web conference (WWW8). "Imagine," he continued, "sitting there listening to a senior executive of IBM wearing a tee-shirt and a beard." Appearances were not deceptive as Big Blue's vice president for Internet Technology, John Patrick, captured the spirit of the conference. "Power to the people," he said, would be the driving force behind the computing industry as we enter the new millennium. For if one thing is abundantly clear, it's that the political geography of information technology has been turned on its head by personal computing and the World Wide Web. "Stand aside, besuited corporate executives", came the message. Make way for the altruistic geeks: the future belongs to them.

It's rare to find such an optimistic bunch of people. The pony-tail count may have been way above average and the word "cool" still cool, but WWW8 delegates have their hearts in the right place. They are the ones who have made the Web, motivated only by the fun of playing with computers and the belief that the Web can make the world a better place. Some were concerned at Microsoft's conference sponsorship. There was grumbling that the delegates' pack included complementary Microsoft CD-ROMs (for Windows only). "Next year," one delegate was overheard to say, while tucking into a spring roll and sipping Chardonnay at the evening reception (courtesy of Bill Gates), "Microsoft will have bought the World Wide Web." However, his fears were not universally held. There is just too much grass-roots stuff going on out there for one company, however powerful, to take over completely.

[Photo: Technical support at the World Wide Web conferences is traditionally supplied by volunteers. The WWW8 crew is seen here in front of the Web history wall, assembled during the conference to mark the Web's 10th anniversary. Volunteers pay their own way to get to the conference for a week's hard work, just for the pleasure of being there.]

Information revolution
It may seem from the outside that the information revolution has arrived, but in John Patrick's view, "we're right at the beginning". The Web's inventor, Tim Berners-Lee, doesn't even go that far. The Web we're going to see emerging over the coming decade, he believes, is none other than the one he had running almost 10 years ago on an obscure computer called a NeXT cube at CERN. "Ask him about control-shift-N," said one delegate, referring to the combination of key strokes that instructed that early browser/editor to create a new document linked to the one you were already in. That simple manoeuvre encapsulates Berners-Lee's vision of what the Web should be, "a common space in which we could all interact", a medium in which we'd all be creators, not just consumers. Expediency prevented that reality from coming sooner as Berners-Lee and his team at CERN concentrated on providing Web services to the particle physics community, leaving the stage free for the entrance of Mosaic, a browser with no editing capacity, in 1994.

Even when the passive Web took off, Berners-Lee did not abandon his dream. To most users of the Web the choice of browsers comes down to two: Netscape Navigator and Internet Explorer. However, there's actually a lot more choice available. Many of the early browsers can still be found, and there are new companies turning out more. The Web consortium (W3C) has produced a browser/editor, called Amaya, that allows the kind of interactive Web use that Berners-Lee envisaged from the start. If you want to see what the Web was meant to be, open Navigator or Explorer for the last time, go to "http://www.w3.org" and click on "Amaya browser/editor".

Improving the Web
Content that the Web is finally catching up with his original vision, Berners-Lee is now devoting his energies to improving it. The Web's biggest problem is caused by its success. There's so much information out there that it's often hard to find what you want. The answer, according to Berners-Lee, is what he calls the semantic Web. The kind of information on the Web today is understandable to humans but not to computers. If, for example, Berners-Lee wanted to buy a yellow car in Massachusetts and his neighbour wanted to sell a primrose automobile in Boston, how would his search engine know that what he wanted was right on his doorstep? If a current W3C project is successful, some kind of logical schema will tell the search engine that primrose is just a kind of yellow and that automobiles and cars are in fact the same thing.

Reminding delegates that there's nothing new under the Sun was IBM's John Patrick who spelled out his vision of how the Internet is poised to change our lives. Top of his list of next big things was instant messaging, which is just around the corner. Curiously familiar to anyone who used BITNET or DECNET in the 1980s, instant messaging is a sort of halfway house between e-mail and the telephone. Patrick demonstrated IBM's version by typing in "How is the weather in Heidelberg" to a colleague in Germany. Out boomed the mechanical words "Wie ist das Wetter in Heidelberg", followed, presumably after the Heidelberger had typed "Es ist kalt und regnerisch", by "It is cold and rainy". That's fine if all you want to do is discuss the weather, but IBM's translation software might have problems with more complex topics. Nevertheless, it served to show what's coming.

E-business
Symbiotic video came next on Patrick's list. That's clickable television to you and me. A coffee advertisement took us to a Web site where, you've guessed it, you could order coffee to be delivered to your door. This is an example of where e-business might be taking us and, as anyone who's looked at an IBM advert recently knows, e-business is IBM's next really big thing. Defined on their Web pages as "the transformation of key business processes through the use of Internet technologies", e-business, says Patrick, will force a new character onto the keyboard. (I don't have one, so to see what he means you'll have to look at IBM's Web site yourself, "http://www.ibm.com/") What it boils down to is businesses maximizing their potential through computers and the Web with the help, of course, of IBM.

Education is already benefiting from the Web. LEGO's high-tech programmable Mindstorms invites young engineers to submit their best designs and programs to a Web site ("http://www.legomindstorms.com/"). Mindstorms impressed Patrick so much that he felt inspired to submit his own, but was distressed to find that the "date of birth" choice only went back to 1970, so that's what he clicked. The Mindstorms design that most impressed him was posted by someone who had clicked on 1992.

Can the Internet handle all of these new big things? Yes, believes Patrick. Bandwidth is booming, and the much-touted address-space problem – simply running out of new addresses – will soon be a thing of the past. The next version of the Internet protocols will bring enough addresses for every proton, according to Patrick. "That ought to do it."

There were no surprises from Greg Papadopoulos, Chief Technology Officer at Sun Microsystems. He looks forward to the day when computers will not need to rely on complicated protocols to talk to each other and to peripheral devices. Instead, there will be just one simple protocol and it will be used for sending "objects" – executable programmes – around the Web. Coming from the company that filled our Web pages with Java applets, what else could he be expected to say? The example he gave was printing




from a Web phone, a device still far from most people's everyday reality but familiar to WWW8 conference-goers.

"The Web's biggest problem is caused by its success. There's so much information out there that it's often hard to find what you want."

Patents

Conference co-chair Murray Maloney began the closing plenary session by presenting the Yuri Rabinsky award to Richard Stallman. Rabinsky was a pioneer of the Open Software movement and Stallman, founder of the Free Software Foundation and author of the GNU operating system, an appropriate recipient – and Microsoft an unlikely sponsor of the award. This was an irony not lost on Stallman, who graciously accepted the award while urging vigilance against those who would patent everything. In the US, software can be patented and, in Tim Berners-Lee's opinion, the bar for what's patentable is far too low. As a consequence, a substantial part of W3C's energies are tied up in fighting patent applications covering things that the consortium believes should be standards. Stallman urges Europe not to succumb to American pressure to adopt software patents.

To sum up, Bob Metcalfe singled out the semantic Web of Tim Berners-Lee, whom he referred to as The Duke of URL (pronounced "Earl"), as the principal subject of the conference. A pundit's role is to stick his neck out, and Metcalfe is famous for doing that. Four years ago, at WWW4, he predicted that the coming 12 months would see an Internet "gigalapse" – a single network outage that would cost a billion man-hours. So confident was he that he promised to eat his words if there had not been one. Two years later, at WWW6, he took a copy of his column and ate it for all to see. The year's biggest outage had been estimated at a tenth of a gigalapse. Among his predictions at WWW8 was, again, the gigalapse, but this time with no promises attached. Metcalfe also believes that microcharging is just around the corner. He surveyed his readers to find out how much they'd be prepared to pay to read his column. … cents came the reply but, with half a million readers, he's quite happy about that.

He had bad news for both Microsoft and Stallman, predicting that the former has peaked but the latter's Open Source will still never catch Mr Gates. The Internet stock bubble, he predicted, will burst on 8 November 1999. How could he be so bold? He was recently invited to a meeting of venture capitalists and asked his opinion on this. After carefully explaining that he knew nothing of stock markets, he told them 8 November and was amused to see them all writing it down. Y2K, he predicted, will be a non-event. Why? he asked himself. Are computers reliable? Yes, he replied. And anyway, he went on, 31 December is a Friday, so we'll have the whole weekend to sort things out. So, on that note, delegates were able to leave the conference looking forward to a restful last night of the millennium, whatever year they happen to believe that might occur.

THE MICROPROCESSOR BOOM

The early 1980s would see the microprocessor become a routine part of the high-energy physics toolkit, predicted the Courier in the summer of 1979.

Enlargement of a microprocessor (in real life just a few millimetres across). Units like these could soon make a big impact in data processing for high energy physics. (Image: Motorola)

During the past few years, electronic circuitry techniques have been developed which enable complex logic units to be produced as tiny elements or 'chips'. These units are now mass-produced, and are available relatively cheaply to anyone building data processing equipment. Just a few years ago, the first complete processing unit on a single chip was produced. Now micro logic elements can be combined together to provide micro data processing systems whose capabilities in certain respects can rival those of more conventional computers. Commercially-available microprocessors are used widely in many fields.

Where an application requires special capabilities, it is preferable to take the individual micro logic units and wire them together on a printed circuit board to provide a tailor-made processing unit. If there is sufficient demand for the perfected design, the printed circuit board stage subsequently can be dispensed with and the processor can be mass-produced by large-scale integration (LSI) techniques as a single microprocessor. With these processing units, there generally is a trade-off between speed and flexibility, the ultimate in speed being a hard-wired unit which is only capable of doing one thing. Flexibility can be achieved through programmable logic, but this affects the overall speed.

Programming micros is difficult, but one way of side-stepping these problems would be to design a unit which emulates a subset of an accessible mainframe computer. With such an emulator, programs could be developed on the main computer, and transferred to the micro after they have reached the required level of reliability. This could result in substantial savings in program development time. In addition, restricting the design to a subset of the mainframe architecture results in a dramatic reduction in cost.

High energy physics, which has already amply demonstrated its voracious appetite for computer power, could also soon cash in on this microcomputer boom and produce its own 'brand' of custom-built microprocessors. According to Paolo Zanella, Head of CERN's Data Handling Division, now is the time to explore in depth the uses of microprocessors in high energy physics experiments. If initial projects now under way prove to be successful, the early 1980s could see microprocessors come into their own.

One of the biggest data processing tasks in any physics experiment is to sift through the collected signals from the various detecting units to reject spurious information and separate out events of interest. Therefore, to increase the richness of the collected data, triggering techniques are used to activate the data collection system of an experiment only when certain criteria are met. Even with the help of this 'hardwired' selection, a large proportion of the accumulated data has to be thrown away, often after laborious calculations. With experiments reaching for higher energies where many more particles are produced, and at the same time searching for rarer types of interaction, physicists continually require more and more computing power.

Up till now, this demand has had to be met by bringing in more and bigger computers, both on-line at the experiments and off-line at Laboratory computer centres. With the advent of microprocessors, a solution to this problem could be in sight. Micros could be incorporated into experimental set-ups to carry out a second level of data selection after the initial hardwired triggering – an example of the so-called 'distributed processing' approach, where computing power is placed as far upstream as possible in the data handling process. In this way the demand on the downstream central computer would be reduced, and the richness of the data sample increased.

The micros would filter the readout in the few microseconds before the data is transferred to the experimental data collection system. Zanella is convinced that this could significantly improve the quality of the data and reduce the subsequent off-line processing effort to eliminate bad triggers.

As well as being used in the data collection system, micros would also be useful for control and monitoring functions. The use of off-the-shelf microcomputers in accelerator control systems, for example, is already relatively widespread. Some limited applications outside the control area are already being made in experiments, a notable example being the CERN/Copenhagen/Lund/Rutherford experiment now being assembled at the CERN Intersecting Storage Rings.

Such projects are now being tackled at several Laboratories. At CERN three projects are under way in the Data Handling Division. Two of these are programmable emulators (one being based on the IBM 370/168 and the other on the Digital Equipment PDP-11), while the third is a very fast microprogrammable unit called ESOP.

High energy physics still has a lot to learn about microprocessor applications, and there is some way to go before their feasibility is demonstrated and practical problems, such as programming, are overcome. However, this year could see some of these initial projects come to fruition, and the early 1980s could live up to Zanella's expectations as the time when the microprocessor becomes a routine part of the high energy physicists' toolkit.

This article was adapted from text in CERN Courier vol. 19, July/August 1979, pp192–193.
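The two-level selection the article describes — a fast hardwired trigger followed by a more selective software cut running in a microprocessor, upstream of the central computer — can be sketched as follows. This is a minimal illustration only: the event fields, thresholds and rates are invented, not taken from the article.

```python
import random

# Toy model of two-level data selection: a fast "hardwired" first-level
# trigger, then a microprocessor-style second-level software filter
# applied before data reach the downstream central computer.
# All thresholds and event fields are invented for illustration.

def hardwired_trigger(event):
    """Crude first-level cut, standing in for fast trigger electronics."""
    return event["energy"] > 5.0

def micro_filter(event):
    """More selective second-level cut, standing in for the micro."""
    return event["tracks"] >= 2

random.seed(1)
events = [{"energy": random.uniform(0.0, 10.0),
           "tracks": random.randint(0, 4)} for _ in range(10000)]

after_level1 = [e for e in events if hardwired_trigger(e)]
after_level2 = [e for e in after_level1 if micro_filter(e)]

# The central computer now sees only the doubly filtered, "richer" sample.
print(len(events), "->", len(after_level1), "->", len(after_level2))
```

Each stage only discards events, so the sample shrinks monotonically — which is the whole point of placing computing power as far upstream as possible.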



COMPUTING IN HIGH ENERGY PHYSICS

The 1989 Computing In High Energy Physics conference weighed up the challenges of analysing LEP and other data, reported Sarah Smith and Robin Devenish.

Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The 260 delegates came mainly from Europe, the US, Japan and the USSR. Accommodation and meals in the unique surroundings of New College provided a special atmosphere, with many delegates being amused at the idea of a 500-year-old college still meriting the adjective 'new'. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions.

In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'. The complexity and volume of data generated in particle physics experiments is the reason why the associated computing problems are of interest to computer science. These ideas were covered by David Williams (CERN) and Louis Hertzberger (Amsterdam) in their keynote addresses.

David Williams (CERN) sets the scene at the Oxford Conference on Computing in High Energy Physics.

The task facing the experiments preparing to embark on CERN's new LEP electron–positron collider is enormous by any standards, but a lot of thought has gone into their preparation. Getting enough computer power is no longer thought to be a problem, but the issue of storage of the seven Terabytes of data per experiment per year makes computer managers nervous even with the recent installation of IBM 3480 cartridge systems.

With the high interaction rates already achieved at the CERN and Fermilab proton–antiproton colliders, and orders of magnitude more to come at proposed proton colliders, there are exciting areas where particle physics and computer science could profitably collaborate. A key area is pattern recognition and parallel processing for triggering. With 'smart' detector electronics this ultimately will produce summary information already reduced and fully reconstructed for events of interest.

Is the age of the large central processing facility based on mainframes past? In a provocative talk Richard Mount (Caltech) claimed that the best performance/cost solution was to use powerful workstations based on reduced instruction set computer (RISC) technology and special purpose computer-servers connected to a modest mainframe, giving maybe a saving of a factor of five over a conventional computer centre.

Networks bind physics collaborations together, but they are often complicated to use and there is seldom enough capacity. This was well demonstrated by the continuous use of the conference electronic mail service provided by Olivetti-Lee Data. The global village nature of the community was evident when the news broke of the first Z particle at Stanford's new SLC linear collider. The problems and potential of networks were explored by François Fluckiger (CERN), Harvey Newman (Caltech) and James Hutton (RARE). Both Fluckiger and Hutton explained that the problems are both technical and political, but progress is being made and users should be patient. Network managers have to provide the best service at the lowest cost. HEPNET is one of the richest structures in networking, and while OSI seems constantly to remain just around the corner, a more flexible and user-friendly system will emerge eventually.

This article was adapted from text in CERN Courier vol. 29, July/August 1989, pp4–8.
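As a back-of-envelope illustration of why seven Terabytes per experiment per year made computer managers nervous: assuming a native IBM 3480 cartridge holds roughly 200 MB (an assumed capacity for illustration, not a figure from the article), the cartridge count works out as follows.

```python
# Back-of-envelope for the LEP storage worry: how many tape cartridges
# would seven Terabytes per experiment per year fill? The 200 MB native
# 3480 cartridge capacity is an illustrative assumption.
TB = 1e12
MB = 1e6

data_per_experiment_per_year = 7 * TB
cartridge_capacity = 200 * MB

cartridges = data_per_experiment_per_year / cartridge_capacity
print(f"{cartridges:.0f} cartridges per experiment per year")  # 35000
```

Tens of thousands of cartridges per experiment per year, before compression — enough to make any computer manager nervous.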




The importance of the interface between graphics and relational databases was also emphasized in the talk by Brian Read (RAL) on the special problems faced by scientists using database packages, illustrated by comparing the information content of atmospheric ozone concentration from satellite measurements in tabular and graphical form – 'one picture is worth a thousand numbers'.

The insatiable number-crunching appetite of both experimental and theoretical particle physicists has led to many new computer architectures being explored. Many of them exploit the powerful new (and relatively cheap) RISC chips on the market. Vector supercomputers are very appropriate for calculations like lattice gauge theories, but it has yet to be demonstrated that they will have a big impact on 'standard' codes. An indication of the improvements to be expected – perhaps a factor of five – came in the talk by Bill Martin (Michigan) on the vectorization of reactor physics codes. Perhaps more relevant to experimental particle physics are the processor 'farms', now well into the second generation. Paul Mackenzie (Fermilab) and Bill McCall (Oxford) showed how important it is to match the architecture to the natural parallelism of a problem and how devices like the transputer enable this to be done. On a more speculative note, Bruce Denby (Fermilab) showed the potential of neural networks for pattern recognition. They may also provide the ability to enable computers to learn. Such futuristic possibilities were surveyed in an evening session by Phil Treleavan (London). Research into this form of computing could reveal more about the brain as well as help with the new computing needs of future physics.

Harvey Newman (Caltech) is not patient. He protested that physics could be jeopardized by not moving to high speed networks fast enough. Megabit per second capacity is needed now and Gigabit rates in ten years. Imagination should not be constrained by present technology. This was underlined in presentations by W. Runge (Freiburg) and R. Ruehle (Stuttgart) of the work going on in Germany on high bandwidth networks. Ruehle concentrated on the Stuttgart system providing graphics access through 140 Mbps links to local supercomputers. His talk was illustrated by slides and a video showing what can be done with a local workstation online to 2 Crays and a Convex simultaneously! He also showed the importance of graphics in conceptualization as well as testing theory against experiment, comparing a computer simulation of air flow over the sunroof of a Porsche with a film of actual performance.

Computer graphics for particle physics attracted a lot of interest. David Myers (CERN) discussed the conflicting requirements of software portability versus performance and summarized with 'Myers' Law of Graphics' – 'you can't have your performance and port it'. Rene Brun (CERN) gave a concise presentation of PAW (Physics Analysis Workstation). Demonstrations of PAW and other graphics packages such as event displays were available at both the Apollo and DEC exhibits during the conference week. Other exhibitors included Sun, Meiko, Caplin and IBM. IBM demonstrated the power of relational database technology using the full online Oxford English Dictionary.

Hands-on experience at the DEC demonstration.

Interactive data analysis on workstations is well established and can be applied to all stages of program development and design. Richard Mount likened interactive graphics to the 'oscilloscope' of software development and analysis. Establishing a good data structure is essential if flexible and easily maintainable code is to be written. Paulo Palazzi (CERN) showed how interactive graphics would enhance the already considerable power of the entity-relation model as realized in ADAMO. His presentation tied in very well with a fascinating account by David Nagel (Apple) of work going on at Cupertino to extend the well-known Macintosh human/computer interface, with tables accessed by the mouse, data highlighted and the corresponding graphical output shown in an adjacent window.

Software engineering became a heated talking point as the conference progressed. Two approaches were suggested: Carlo Mazza, head of the Data Processing Division of the European Space Operations Centre, argued for vigorous management in software design, requiring discipline and professionalism, while Richard Bornat ('SASD – All Bubbles and No Code') advocated an artistic approach, likening programming to architectural design rather than production engineering. A. Putzer (Heidelberg) replied that experimenters who have used software engineering tools such as SASD would use them again.

Software crisis

Paul Kunz (SLAC) gave a thoughtful critique of the so-called 'software crisis', arguing that code does not scale with the size of the detector or collaboration. Most detectors are modular and so is the code associated with them. With proper management and quality control, good code can and will be written. The conclusion is that both inspiration and discipline go hand in hand.

A closely related issue is that of verifiable code – being able to prove that the code will do what is intended before any executable version is written. The subject has not yet had much impact on the physics community and was tackled by Tony Hoare and Bernard Sufrin (Oxford) at a pedagogical level. Sufrin, adopting a missionary approach, showed how useful a mathematical model of a simple text editor could be. Hoare demonstrated predicate calculus applied to the design of a tricky circuit.

The importance of UNIX

With powerful new workstations appearing almost daily and with novel architecture in vogue, a machine-independent operating system is obviously attractive. The main contender is UNIX. Although nominally machine independent, UNIX comes in many implementations and its style is very much that of the early 70s, with cryptic command acronyms – few concessions to user-friendliness! However, it does have many powerful features and physicists will have to come to terms with it to exploit modern hardware.

Dietrich Wiegandt (CERN) summarized the development of UNIX and its important features – notably the directory tree structure and the 'shells' of command levels. An ensuing panel discussion chaired by Walter Hoogland (NIKHEF) included physicists using UNIX and representatives from DEC, IBM and Sun. Both DEC and IBM support the development of UNIX systems. David McKenzie of IBM believed that the operating system should be transparent and looked forward to the day when operating systems are 'as boring as a mains wall plug'. (This was pointed out to be rather an unfortunate example, since plugs are not standard!)

Two flavours of UNIX are available – Open Software Foundation version one (OSF1), and UNIX International release four (SVR4) developed at Berkeley. The two implementations overlap considerably and a standard version will emerge through user pressure. Panel member W. Van Leeuwen (NIKHEF) announced that a particle physics UNIX group had been set up (contact HEPNIX at CERNVM).

Less high technology was apparent at the conference dinner in the candle-lit New College dining hall. Guest speaker Geoff Manning, former high energy physicist, one-time Director of the UK Rutherford Appleton Laboratory and now Chairman of Active Memory Technology, believed that physicists have a lot to learn from advances in computer science but that fruitful collaboration with the industry is possible. Replying for the physicists, Tom Nash (Fermilab) thanked the industry for their continuing interest and support through joint projects and looked forward to a collaboration close enough for industry members to work physics shifts and for physicists to share in profits!

Summarizing the meeting, Rudy Bock (CERN) highlighted novel architectures as the major area where significant advances have been made and will continue to be made for the next few years. Standards are also important, provided they are used intelligently and not just as a straitjacket to impede progress. His dreams for the future included neural networks and the wider use of formal methods in software design. Some familiar topics of the past, including code and memory management and the choice of …, could be 'put to rest'.




A MAJOR SHIFT IN OUTLOOK

In the summer of 2001, computer specialist Ben Segal described how distributed UNIX boxes took over from CERN's all-powerful IBM and Cray mainframe workhorses. (This article was adapted from text in CERN Courier vol. 41, July/August 2001, pp19–20.)

The changing landscape at CERN's Computer Centre: 1988, with the Cray supercomputer (in blue and yellow) in the background.

I don't remember exactly who first proposed running physics batch jobs on a UNIX workstation, rather than on the big IBM or Cray mainframes that were doing that kind of production work in 1989 at CERN. The workstation in question was to be an Apollo DN10000, the hottest thing in town, with reduced instruction set (RISC) CPUs of a formidable five CERN Units each (a CERN Unit was defined as one IBM 370/168, equivalent to four VAX 11-780s) and costing around SwFr 150 000 for a 4-CPU box.

It must have been the combined idea of Les Robertson, Eric McIntosh, Frederic Hemmer, Jean-Philippe Baud, myself and perhaps some others who were working at that time around the biggest UNIX machine that had ever crossed the threshold of the Computer Centre – a Cray XMP-48, running UNICOS.

At any rate, when we spoke to the Apollo salespeople about our idea, they liked it so much that they lent us the biggest box they had, a DN10040 with four CPUs plus a staggering 64 Mb of memory and 4 Gb of disk space. Then, to round it off, they offered to hire a person of our choice for three years to work on the project at CERN.

In January 1990 the machine was installed and our new "hireling", Erik Jagel, an Apollo expert after his time managing the Apollo farm for the …, coined the name "HOPE" for the new project. (Hewlett-Packard had bought Apollo and OPAL had expressed interest, so it was to be the "HP OPAL Physics Environment".)

We asked where we could find the space to install HOPE in the Computer Centre. We just needed a table with the DN10040 underneath and an Ethernet connection to the Cray, to give us access to the tape data. The reply was: "Oh, there's room in the middle" – where the recently obsolete round tape units had been – so that was where HOPE went, looking quite lost in the huge computer room, with the IBM complex on one side and the Cray supercomputer on the other.

Soon the HOPE cycles were starting to flow. The machine was surprisingly reliable, and porting the big physics FORTRAN programs was easier than we had expected. After around six months, the system was generating 25 per cent of all CPU cycles in the centre. Management began to notice the results when we included HOPE's accounting files in the weekly report we made that plotted such things in easy-to-read histograms.

Ten years on: CERN's Computer Centre in 1998, with banks of SHIFT-distributed computing arrays for major experiments.

We were encouraged by this success and went to work on a proposal to extend HOPE. The idea was to build a scalable version from interchangeable components: CPU servers, disk servers and tape servers, all connected by a fast network and software to create a distributed mainframe. "Commodity" became the keyword – we would use the cheapest building-blocks available from the manufacturers that gave the best price performance for each function.

On how large a scale could we build such a system and what would it cost? We asked around, and received help from some colleagues who treated it as a design study. A simulation was done of the workflow through such a system, bandwidth requirements were estimated for the fast network "backplane" that was needed to connect everything, prices were calculated, essential software was sketched out and the manpower required for development

DECEMBER 2019 25



Software development would be a challenge. Fortunately, some of us had been working with Cray at CERN, adding some facilities to UNIX that were vital for mainframe computing: a proper batch scheduler and a tape-drive reservation system, for example. These could be reused quite easily.

Other new functions would include a distributed "stager" and a "disk-pool manager". These would allow the pre-assembly of each job's tape data (read from drives on tape servers) into efficiently-managed disk pools that would be located on disk servers, ready to be accessed by the jobs in the CPU servers. Also new would be the "RFIO", a remote file input–output package that would offer a unified and optimized data-transfer service between all of the servers via the backplane. It looked like Sun's network file system, but was much more efficient.

SHIFT in focus
Finally, a suitable name was coined, again by Erik Jagel: "SHIFT", for "Scalable Heterogeneous Integrated FaciliTy", suggesting the paradigm shift that was taking place in large-scale computing: away from mainframes and towards a distributed low-cost approach.

The "SHIFT" proposal report was finished in July 1990. It had 10 names on it, including the colleagues from several groups that had offered their ideas and worked on the document. "Were 10 people working on this?" and "How many Cray resources were being used and/or counted?" came the stern reply. In response, we pointed out that most of the 10 people had contributed small fractions of their time, and that the Cray had been used simply as a convenient tape server. It was the only UNIX machine in the Computer Centre with access to the standard tape drives, all of which were physically connected to the IBM mainframe at that time.

Closer to home, the idea fell on more fertile ground, and we were told that if we could persuade at least one of the four LEP experiments to invest in our idea, we could have matching support from the Division. The search began. We spoke to ALEPH, but they replied, "No, thank you, we're quite happy with our all-VAX VMS approach." L3 replied, "No thanks, we have all the computing power we need." DELPHI replied, "Sorry, we've no time to look at this as we're trying to get our basic system running." Only OPAL took a serious look. They had already been our partner in HOPE and also had a new collaborator from Indiana with some cash to invest and some small computer system interface (SCSI) disks for a planned storage enhancement to their existing VMS-based system. They would give us these contributions until March 1991, the next LEP start-up – on the condition that everything was working by then, or we'd have to return their money and disks. It was September 1990, and there was a lot of work to do.

Our modular approach and use of the UNIX, C language, TCP/IP and SCSI standards were the keys to the very short timescale we achieved. The design studies had included technical evaluations of various workstation and networking products. By September, code development could begin and orders for hardware went out. The first tests on site with SGI Power Series servers connected via UltraNet took place at the end of December 1990. A full production environment was in place by March 1991, the date set by OPAL.

And then we hit a problem. The disk server system began crashing repeatedly with unexplained errors. Our design evaluations had led us to choose a "high-tech" approach: the use of symmetric multiprocessor machines from Silicon Graphics for both CPU and disk servers, connected by the sophisticated "UltraNet" Gigabit network backplane. One supporting argument had been that if the UltraNet failed or could not be made to work in time, then we could put all the CPUs and disks together in one cabinet and ride out the OPAL storm. We hadn't expected any problems in the more conventional area of the SCSI disk system. Our disks were mounted in trays inside the disk server, connected via high-performance SCSI channels. It looked standard, but we had the latest models of everything. Like a performance car, it was a marvel of precision but impossible to keep in tune. We tried everything, but still it went on crashing and we finally had to ask SGI to send an engineer. He found the problem: inside our disk trays was an extra metre of flat cable which had not been taken into account in our system configuration. We had exceeded the strict limit of 6 m for single-ended SCSI, and in fact it was our own fault. Rather than charging us penalties and putting the blame where it belonged, SGI lent us two extra CPUs to help us to make up the lost computing time for OPAL and ensure the success of the test period!

At the end of November 1991, a satisfied OPAL doubled its investment in CPU and disk capacity for SHIFT. At the same time, 16 of the latest HP 9000/720 machines, each worth 10 CERN Units of CPU, arrived to form the first Central Simulation Facility or "Snake Farm". The stage was set for the exit of the big tidy mainframes at CERN, and the beginning of the much less elegant but evolving scene we see today on the floor of the CERN Computer Centre. SHIFT became the basis of LEP-era computing and its successor systems are set to perform even more demanding tasks for the LHC, scaled this time to the size of a worldwide grid.

THE CERN OPENLAB: A NOVEL TESTBED FOR THE GRID

Grid computing was the computer buzzword of the decade, wrote CERN's François Grey in this 2003 feature about a new model for partnership between CERN and industry.

(This article was adapted from text in CERN Courier vol. 43, October 2003, pp31–34.)

PC power: CERN's Computer Centre in 2001 has wall-to-wall PCs.

Sverre Jarp, chief technology officer of the CERN openlab, with equipment from Enterasys Networks.

Grid computing is the computer buzzword of the decade. Not since the World Wide Web was developed at CERN more than 10 years ago has a new networking technology held so much promise for both science and society. The philosophy of the Grid is to provide vast amounts of computer power at the click of a mouse, by linking geographically distributed computers and developing "middleware" to run the computers as though they were an integrated resource. Whereas the Web gives access to distributed information, the Grid does the same for distributed processing power and storage capacity.

There are many varieties of Grid technology. In the commercial arena, Grids that harness the combined power of many workstations within a single organization are already common. But CERN's objective is altogether more ambitious: to store petabytes of data from the Large Hadron Collider (LHC) experiments in a distributed fashion and make the data easily accessible to thousands of scientists around the world. This requires much more than just spare PC capacity – a network of major computer centres around the world must provide their resources in a seamless way.

CERN and a range of academic partners have launched several major projects in order to achieve this objective. In the European arena, CERN is leading the European DataGrid (EDG) project, which addresses the needs of several scientific communities, including high-energy particle physics. The EDG has already developed the middleware necessary to run a Grid testbed involving more than 20 sites. CERN is also leading a follow-on project funded by the European Union, EGEE (Enabling Grids for E-Science in Europe), which aims to provide a reliable Grid service to European science. Last year, the LHC Computing Grid (LCG) project was launched by CERN and partners to deploy a global Grid dedicated to LHC needs, drawing on the experience of the EDG and other international efforts. This project has started running a global Grid, called LCG-1.

Enter the openlab
The CERN openlab for DataGrid applications fits into CERN's portfolio of Grid activities by addressing a key issue, namely the impact on the LCG of cutting-edge IT technologies that are currently emerging from industry. Peering into the technological crystal ball in this way can only be done in close collaboration with leading industrial partners. The benefits are mutual: through generous sponsorship of state-of-the-art equipment from the partners, CERN acquires early access to valuable technology that is still several years from the commodity computing market the LCG will be based on. In return, CERN provides demanding data challenges, which push these new technologies to their limits – this is the "lab" part of the openlab. CERN also provides a neutral environment for integrating solutions from different partners, to test their interoperability. This is a vital role in an age where open standards (the "open" part of openlab) are increasingly guiding the development of the IT industry.

The CERN openlab for DataGrid applications was launched in 2001 by Manuel Delfino, then the IT Division leader at CERN. After a hiatus, during which the IT industry was rocked by the telecoms crash, the partnership took off in September 2002, when HP joined founding members Intel and Enterasys Networks, and integration of technologies from all three led to the CERN opencluster project. At present, the CERN opencluster consists of 32 Linux-based HP rack-mounted servers, each equipped with two 1 GHz Intel Itanium 2 processors. Itanium uses 64-bit processor technology, which is anticipated to displace today's 32-bit technology over the next few years.




IBM joins CERN openlab to tackle the petabyte challenge

The LHC will generate more than 10 petabytes of data per year, the equivalent of a stack of CD-ROMs 20 km high. There is no obvious way to extend conventional data-storage technology to this scale, so new solutions must be considered. IBM was therefore keen to join the CERN openlab in April 2003, in order to establish a research collaboration aimed at creating a massive data-management system built on Grid computing, which will use innovative storage virtualization and file-management technology.

IBM has been a strong supporter of Grid computing, from its sponsorship of the first Global Grid Forum in Amsterdam in 2001 to its participation in the European DataGrid project. The company sees Grid computing as an important technological realization of the vision of "computing on demand", and expects that as Grid computing moves from exclusive use in the scientific and technical world into commercial applications, it will indeed be the foundation for the first wave of e-business on demand.

The technology that IBM brings to the CERN openlab partnership is called Storage Tank. Conceived in IBM Research, the new technology is designed to provide scalable, high-performance and highly available management of huge amounts of data using a single file namespace, regardless of where or on what operating system the data reside. (Recently, IBM announced that the commercial version will be named IBM TotalStorage SAN File System.) IBM and CERN will work together to extend Storage Tank's capabilities so it can manage the LHC data and provide access to it from any location worldwide.

Brian E Carpenter, IBM Systems Group, and Jai Menon, IBM Research

Rainer Többicke of CERN with the IBM-sponsored 28 terabyte storage system.

As part of the agreement with the CERN openlab partners, this cluster is planned to double in size during 2003, and double again in 2004, making it an extremely high-performance computing engine. In April this year, IBM joined the CERN openlab, contributing advanced storage technology that will be combined with the CERN opencluster (see box above). For high-speed data transfer challenges, Intel has delivered 10 Gbps Ethernet Network Interface Cards (NICs), which have been installed on the HP computers, and Enterasys Networks has delivered three switches equipped to operate at 10 gigabits per second (Gbps), with additional port capacity for 1 Gbps.

One of the major challenges of the CERN opencluster project is to take maximum advantage of the partners' 10 Gbps technology. In April, a first series of tests was conducted between two of the nodes in the cluster, which were directly connected (via a "back-to-back" connection) through 10 Gbps Ethernet NICs. The transfer reached a data rate of 755 megabytes per second (MB/s), a record, and double the maximum rate obtained with 32-bit processors. The transfer took place over a 10 km fibre and used very big frames (16 kB) in a single stream, as well as the regular suite of Linux Kernel protocols (TCP/IP). The best results through the Enterasys switches were obtained when aggregating the 1 Gbps bi-directional traffic involving 10 nodes in each group. The peak traffic between the switches was then measured to be 8.2 Gbps. The next stages of this data challenge will include evaluating the next version of the Intel processors.

Data challenge
In May, CERN announced the successful completion of a major data challenge aimed at pushing the limits of data storage to tape. This involved, in a critical way, several components of the CERN opencluster. Using 45 newly installed StorageTek tape drives, capable of writing to tape at 30 MB/s, storage-to-tape rates of 1.1 GB/s were achieved for periods of several hours, with peaks of 1.2 GB/s – roughly equivalent to storing a whole movie on DVD every four seconds. The average sustained over a three-day period was 920 MB/s. Previous best results by other research labs were typically less than 850 MB/s. The significance of this result, and the purpose of the data challenge, was to show that CERN's IT Division is on track to cope with the enormous data rates expected from the LHC. One experiment alone, ALICE, is expected to produce data at rates of 1.25 GB/s.

In order to simulate the LHC data acquisition procedure, an equivalent stream of artificial data was generated using 40 computer servers. These data were stored temporarily on 60 disk servers, which included the CERN opencluster servers, before being transferred to the tape servers. A key contributing factor to the success of the data challenge was a high-performance switched network from Enterasys Networks with 10 Gbps Ethernet capability, which routed the data from PC to disk and from disk to tape.

Over the next few months, the CERN opencluster will be linked to the EDG testbed to see how these new technologies perform in a Grid environment. The results will be closely monitored by the LCG project to determine the potential impact of the technologies involved. Already at this stage, however, much has been learned that has implications for the LCG. For example, thanks to the preinstalled management cards in each node of the cluster, automation has been developed to allow remote system restart and remote power control. This development confirmed the notion that – for a modest hardware investment – large clusters can be controlled with no operator present. This is highly relevant to the LCG, which will need to deploy such automation on a large scale.

Several major physics software packages have been successfully ported and tested on the 64-bit environment of the CERN opencluster, in collaboration with the groups responsible for maintaining the various packages. Benchmarking of the physics packages has begun and the first results are promising. For example, PROOF (Parallel ROOT Facility) is a version of the popular CERN-developed software ROOT for data analysis, which is being developed for interactive analysis of very large ROOT data files on a cluster of computers. The CERN opencluster has shown that the amount of data that can be handled by PROOF scales linearly with cluster size – on one cluster node it takes 325 s to analyse a certain amount of data, and only 12 s when all 32 nodes are used.

An open dialogue
While many of the benefits of the CERN openlab for the industrial partners stem from such data challenges, there is also a strong emphasis in openlab's mission on the opportunities that this novel partnership provides for enhanced communication and cross-fertilization between CERN and the partners, and between the partners themselves. Top engineers from the partner companies collaborate closely with the CERN openlab team in CERN's IT Division, so that the inevitable technical challenges that arise when dealing with new technologies are dealt with rapidly and efficiently. Furthermore, as part of their sponsorship, HP is funding two CERN fellows to work on the CERN opencluster. The CERN openlab team also organizes thematic workshops on specific topics of interest, bringing together leading technical experts from the partner companies, as well as public "First Tuesday" events on general technology issues related to the openlab agenda, which attract hundreds of participants from the industrial and investor communities. A CERN openlab student programme has also been created, bringing together teams of students from different European universities to work on applications of Grid technology. And the CERN openlab is actively supporting the establishment of a Grid café for the CERN Microcosm exhibition, a Web café for the general public with a focus on Grid technologies, including a dedicated website that will link to instructive Grid demos.

Efforts are ongoing in the CERN openlab to evaluate other possible areas of technological collaboration with current or future partners. The concept is certainly proving popular, with other major IT companies expressing an interest in joining. This could occur by using complementary technologies to provide added functionality and performance to the existing opencluster. Or it could involve launching new projects that deal with other aspects of Grid technology relevant to the LCG, such as Grid security and mobile access to the Grid.

In conclusion, the CERN openlab puts a new twist on an activity – collaboration with leading IT companies – that has been going on at CERN for decades. Whereas traditionally such collaboration was bilateral and focused on "here-and-now" solutions, the CERN openlab brings a multilateral long-term perspective into play. This may be a useful prototype for future industrial partnerships in other high-tech areas, where CERN and a range of partners can spread their risks and increase their potential for success by working on long-term development projects together.
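The figures reported for the tape data challenge, the PROOF benchmark and the IBM sidebar lend themselves to quick cross-checks. The short Python sketch below redoes the arithmetic using only numbers quoted in the text, plus one assumption of mine: that a CD-ROM of the era holds about 650 MB and is about 1.2 mm thick.

```python
# Back-of-envelope cross-checks of figures quoted in the article.

# Tape data challenge: 45 StorageTek drives, each writing at 30 MB/s.
theoretical_peak = 45 * 30            # 1350 MB/s aggregate upper bound
sustained = 1100                      # reported sustained rate, MB/s
print(f"drive utilisation: {sustained / theoretical_peak:.0%}")   # 81%

# "A whole movie on DVD every four seconds": a 4.7 GB single-layer
# DVD at 1.1 GB/s takes about 4.3 s, matching the rough claim.
print(f"seconds per DVD: {4700 / sustained:.1f}")                 # 4.3

# PROOF benchmark: 325 s on one node versus 12 s on 32 nodes.
speedup = 325 / 12
print(f"PROOF speedup: {speedup:.1f}x ({speedup / 32:.0%} parallel efficiency)")

# IBM box: 10 PB per year as a stack of CD-ROMs
# (assumed ~650 MB capacity and ~1.2 mm thickness per disc).
discs = 10e15 / 650e6
print(f"CD stack height: {discs * 1.2e-3 / 1000:.0f} km")         # ~18 km
```

The numbers hang together: the sustained tape rate sits at roughly 80% of the drives' aggregate limit, the PROOF run is close to (though short of) perfectly linear scaling, and the CD-ROM stack comes out near the quoted 20 km.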



CERN'S ULTIMATE ACT OF OPENNESS

The seed that led CERN to relinquish ownership of the web in 1993 was planted when the Organization formally came into being, wrote Maarten Wilbers and Jonathan Drakeford of CERN's legal service on the web's 30th anniversary.

At a mere 30 years old, the World Wide Web already ranks as one of humankind's most disruptive inventions. Developed at CERN in the early 1990s, it has touched practically every facet of life, impacting industry, penetrating our personal lives and transforming the way we transact. At the same time, the web is shrinking continents and erasing borders, bringing with it an array of benefits and challenges as humanity adjusts to this new technology.

This reality is apparent to all. What is less well known, but deserves recognition, is the legal dimension of the web's history. On 30 April 1993, CERN released a memo (part of which is pictured right) that placed into the public domain all of the web's underlying software: the basic client, basic server and library of common code. The document was addressed "To whom it may concern" – which would suggest the authors were not entirely sure who the target audience was. Yet, with hindsight, this line can equally be interpreted as an unintended address to humanity at large. The legal implication was that CERN relinquished all intellectual property rights in this software. It was a deliberate decision, the intention being that a no-strings-attached release of the software would "further compatibility, common practices, and standards in networking and computer supported collaboration" – arguably modest ambitions for what turned out to be such a seismic technological step. To understand what seeded this development you need to go back to the 1950s, at a time when "software" would have been better understood as referring to clothing rather than computing.

European project
CERN was born out of the wreckage of World War II, playing a role, on the one hand, as a mechanism for reconciliation between former belligerents, while, on the other, offering European nuclear physicists the opportunity to conduct their research locally. The hope was that this would stem the "brain drain" to the US, from a Europe still recovering from the devastating effects of war.

In 1953, CERN's future Member States agreed on the text of the organisation's founding Convention, defining its mission as providing "for collaboration among European States in nuclear research of a pure scientific and fundamental character". With the public acutely aware of the role that destructive nuclear technology had played during the war, the Convention additionally stipulated that CERN was to have "no concern with work for military requirements" and that the results of its work were to be "published or otherwise made generally available".

In the early years of CERN's existence, the openness resulting from this requirement for transparency was essentially delivered through traditional channels, in particular through publication in scientific journals. Over time, this became the cultural norm at CERN, permeating all aspects of its work both internally and with its collaborating partners and society at large. CERN's release of the WWW software into the public domain, arguably in itself a consequence of the openness requirement of the Convention, could be seen as a precursor to today's web-based tools that represent further manifestations of CERN's openness: the SCOAP3 publishing model, open-source software and hardware, and open data.

Perhaps the best measure for how ingrained openness is in CERN's ethos as a laboratory is to ask the question: "if CERN would have known then what it knows now about the impact of the World Wide Web, would it still have made the web software available, just as it did in 1993?" We would like to suggest that, yes, our culture of openness would provoke the same response now as it did then, though no doubt a modern, open-source licensing regime would be applied. This, in turn, can be viewed as testament and credit to the wisdom of CERN's founders, and to the CERN Convention, which remains the cornerstone of our work to this day.

(This article was adapted from text in CERN Courier vol. 59, March/April 2019, pp39–40.)
