COMENIUS UNIVERSITY, BRATISLAVA, SLOVAKIA
Faculty of Mathematics, Physics and Informatics

The ALICE Silicon Detector System

Svetozár Kapusta
Under the supervision of Peter Chochula

CERN 2009

I declare that I have elaborated this thesis independently, citing all information sources that I used.

CERN, Svetozár Kapusta

“The observer, when he seems to himself to be observing a stone, is really, if physics is to be believed, observing the effects of the stone upon himself.”

Bertrand Russell

For my parents

Contents

List of Abbreviations

1 Introduction
1.1 The Quark-Gluon Plasma

2 The Large Hadron Collider and its Experiments
2.1 The Large Hadron Collider
2.2 Accelerator physics
2.3 The Large Hadron Collider design parameters
2.4 The LHC Experiments
2.4.1 The ATLAS Detector
2.4.2 The CMS Detector
2.4.3 The ALICE Detector
2.4.4 The LHCb Detector
2.4.5 The TOTEM Detector
2.4.6 The LHCf Detector
2.5 The LHC Startup and current status

3 A Large Ion Collider Experiment (ALICE)
3.1 The ALICE Central Barrel
3.1.1 Inner Tracking System
3.1.2 Time Projection Chamber
3.1.3 ACORDE
3.1.4 Time Of Flight detector
3.1.5 High Momentum Particle Identification Detector
3.1.6 Transition Radiation Detector
3.1.7 Photon Spectrometer
3.1.8 Electromagnetic Calorimeter
3.2 Forward Detectors
3.3 The Forward Muon Spectrometer
3.4 Online Systems
3.4.1 Experimental Control System
3.4.2 Trigger
3.4.3 Data Acquisition
3.4.4 High-Level Trigger
3.4.5 Detector Control System

4 The ALICE Silicon Pixel Detector
4.1 SPD Layout
4.2 The ALICE1LHCb Chip

5 The Silicon Pixel Detector Testbeams and Commissioning
5.1 Assembly testbeam
5.1.1 Experimental Setup
5.1.2 X-Y table
5.1.3 Results
5.2 Ladder testbeam
5.2.1 Experimental setup
5.2.2 Results
5.3 High multiplicity testbeam
5.3.1 Experimental Setup
5.3.2 PCI-2002
5.3.3 Results
5.4 Joint ITS testbeam
5.4.1 Experimental Setup
5.4.2 Data Acquisition and Detector Control System
5.4.3 Trigger
5.4.4 Results and Offline Analysis
5.5 Simulation of the Silicon Pixel Detector
5.6 SPD Commissioning and Cosmic runs in 2007-2009
5.6.1 Calibration and alignment

6 The ALICE Detector Control System
6.1 Introduction to Control Systems
6.1.1 JCOP
6.2 PVSS
6.3 ALICE DCS
6.4 System layout
6.4.1 Field layer
6.4.2 Control layer
6.4.3 Supervisory layer
6.5 The Finite State Machines
6.6 Partitioning
6.7 The User Interface
6.8 The JCOP FRAMEWORK
6.9 Data flow
6.9.1 The Synchronization Data Flow in ALICE DCS
6.9.2 The Controls Data Flow in ALICE DCS
6.10 ALICE DCS database
6.10.1 Configuration database
6.10.2 Archival Database
6.10.3 PVSS Archiving
6.10.4 PVSS-Oracle Archiving
6.10.5 PVSS-Oracle Archiving Performance
6.10.6 ALICE DCS database operation and maintenance
6.11 AMANDA
6.12 System commissioning and first operation experience

7 Conclusions

Bibliography
List of Figures
List of Tables
List of Publications

Acknowledgments

Thesis title (in Slovak): Kremíkové pixlové detektory pre experiment ALICE (Silicon Pixel Detectors for the ALICE Experiment)
Author: Svetozár Kapusta
Supervisor: Peter Chochula
Key words: testbeam, SPD, ALICE, DCS

The Large Hadron Collider (LHC) is currently approaching its restart at the European Centre for Nuclear Research (CERN). The LHC was started for the first time on 10 September 2008. The startup was a huge success: within an hour of injecting the beams into the LHC, the first beam was steered around the entire accelerator in one direction, and later the same day in the opposite direction. The initial euphoria was, however, interrupted by an electrical fault on 19 September, during the ramping of the nominal current in a dipole circuit in Sector 34 from 7 kA to 9.3 kA (corresponding to a beam energy of 5.5 TeV). The incident caused mechanical damage, a helium leak, and a considerable delay of the collisions expected at the LHC.

The ALICE experiment will exploit collisions of accelerated particles produced at the LHC to study the behaviour of strongly interacting matter at extreme densities and high temperatures. The Silicon Pixel Detector (SPD) forms the two innermost layers of the Inner Tracking System (ITS) of the ALICE detector. The two cylindrical layers are located 3.9 cm and 7.6 cm from the interaction point (IP). One of the main tasks of the SPD is to provide the most precise possible position of the electrically charged particles that traverse it. This information is particularly important for the analysis of heavy quarks decaying via the weak interaction, because the typical signature of these decays is a secondary vertex displaced by only a few hundred micrometres from the primary vertex. In heavy-ion (lead) collisions the particle density can reach up to 80 tracks per cm² in the inner SPD layer. The SPD achieves a position resolution of about ≈12 μm in the rϕ direction and about ≈70 μm in the z direction. The channel occupancy of the SPD is expected to range from 0.4% to 1.5%, which makes the SPD an excellent detector for measuring the charged-particle multiplicity in the pseudorapidity region |η| < 2. Another unique feature of the SPD is that, by combining all of its reconstructed tracklets, a rough estimate of the primary vertex position can be obtained. One of the major challenges is the tight constraint on the material from which the SPD is built (<1% of a radiation length per layer), imposed in order to disturb the traversing particles as little as possible. The silicon sensor and its readout chip together have a total thickness of only 350 μm, and the signal connections between them and the control and readout electronics are made of aluminium.

In this dissertation I present my contribution to the ALICE SPD project and summarize the design, construction and testing phases of the ALICE SPD project. My contribution to the ALICE DCS project is also presented.

In recent years the ALICE SPD collaboration has carried out four tests with beams of accelerated particles. The main goal of these tests was to verify the functionality of the chips and the associated electronics, the silicon sensors and the readout electronics, as well as of the online systems - DAQ, Trigger and DCS together with their software - and of the offline systems. Prototypes of the pixel readout chips and silicon sensors were tested under various conditions (detection-threshold scans, different inclinations with respect to the beam, measurement of the detection efficiency as a function of the bias voltage, etc.). In addition to thick pixel readout chips and silicon sensors, thin assemblies were tested, as well as 5-chip detectors (so-called ladders) as designed for the ALICE experiment. During and especially after the testbeams I developed software to verify the quality of the collected data, to merge two data streams from two different planes and types of SPD detector, to find and optionally remove pixels with excessive noise, to correlate spatial coordinates from different SPD planes, and to perform a complex offline analysis of the recorded data, including SPD hit maps, integrated SPD hit maps, event-by-event analysis, determination of the detection efficiency, particle multiplicity, cluster sizes, etc. A prototype of the final readout electronics with two final 5-chip detectors was also tested, verifying the functionality of the software of the online systems DAQ, Trigger and DCS and of the offline system.

The configuration, readout and control of the SPD is performed by the Detector Control System (DCS). As a member of the ALICE Controls Coordination (ACC) team I had the opportunity to take part in the design, development, commissioning and operation of this system. I took responsibility for the database systems and developed a mechanism for the reliable configuration of the front-end electronics (FERO). The SPD served as a working example for the other detector teams, which adopted this mechanism. I developed and implemented the archival of monitored data and participated in the creation of the mechanism for exchanging data between the DCS and the offline system. The DCS and the data-handling tools are also described in this thesis.

Many of the results presented in this thesis were presented at several conferences and published in renowned journals. I took part in the commissioning of the DCS and of the SPD and in the collection of SPD and DCS cosmic-ray data in the ALICE experiment. Roughly ≈100 k successfully collected events contributed substantially to the study of the spatial alignment not only of the SPD but of the whole Inner Tracking System (ITS).

Title: The Silicon Pixel Detector for the ALICE Experiment at CERN
Author: Svetozár Kapusta
Supervisor: Peter Chochula
Key words: testbeam, SPD, ALICE, DCS

The Large Hadron Collider (LHC) is again reaching its startup phase at the European Organization for Nuclear Research (CERN). The LHC started its operation on the 10th of September, 2008 with huge success, managing to send the first beam around the entire ring in less than an hour after the first injection in one direction, and later that day in the opposite direction. Unfortunately, on the 19th of September, an accident occurred during the 5.5 TeV magnet commissioning in Sector 34, which will significantly delay the operation of the LHC.

The ALICE experiment will exploit the collisions of accelerated ions produced at the LHC to study strongly interacting matter at extreme densities and high temperatures. The ALICE Silicon Pixel Detector (SPD) represents the two innermost layers of the ALICE Inner Tracking System (ITS), located at radii of 3.9 cm and 7.6 cm from the Interaction Point (IP). One of the main tasks of the SPD is to provide precise tracking information. This information is fundamental for the study of weak decays of heavy flavor particles, since the corresponding signature is a secondary vertex separated from the primary vertex by only a few hundred micrometers. The track density could be as high as 80 tracks per cm² in the innermost SPD layer as a consequence of a heavy-ion collision. The SPD will provide a spatial resolution of around ≈12 μm in the rϕ direction and ≈70 μm in the z direction. The expected occupancy of the SPD ranges from 0.4% to 1.5%, which makes it an excellent charged-particle multiplicity detector in the pseudorapidity region |η| < 2. Furthermore, by combining all possible hits in the SPD, one can get a rough estimate of the position of the primary interaction. One of the challenges is the tight material budget constraint (<1% of a radiation length per layer), imposed in order to limit the scattering of the traversing particles. The silicon sensor and its readout chip have a total thickness of only 350 μm, and the signal lines from the front-end to the on-detector electronics are made of aluminum.

In this thesis I present my involvement in the ALICE SPD project and summarize the design, construction, and testing phases of the ALICE SPD. My involvement in the ALICE DCS project is also presented.

During the past years the ALICE SPD collaboration has carried out four testbeams. The primary objective of these testbeams was the validation of the pixel ASICs, the sensors, the read-out electronics and the online systems - Data Acquisition System (DAQ), Trigger (TRG) and Detector Control System (DCS) - with their software, as well as of the offline code. The pixel chip and sensor prototypes were studied under different conditions (threshold scans, different inclination angles with respect to the beam, bias voltage scans, etc.). Tests of thick and also thin single-chip assemblies and chip ladders, as designed to be used in the ALICE experiment, were also performed. During and after the testbeams I developed software to verify the data quality, to merge two data streams coming from different planes with different formats, to find and eventually remove noisy pixels offline, to correlate the spatial information from different planes, and to run a complex offline analysis of the testbeam data, including hit maps, integrated hit maps, event-by-event analysis, efficiency, multiplicity, cluster size, etc. The prototype full read-out chain with two ladders, the DAQ, Trigger and DCS online systems with their software, and also the offline code were tested and validated during the testbeams.

Configuration, readout and control of the SPD is performed via the Detector Control System (DCS). As a member of the ALICE Controls Coordination (ACC) team, I had the opportunity to participate in the design, development, commissioning and operation of this system. I took responsibility for the database systems and developed mechanisms for configuring the Front-end Electronics (FERO). The SPD has been used as a working example for other detector groups which adopted this approach. I developed and implemented a mechanism for conditions data archival and participated in the creation of the data exchange mechanism between the DCS and ALICE offline. The DCS as well as the data handling tools are described in the thesis.

List of Abbreviations

ADC Analog to Digital Converter
ALICE A Large Ion Collider Experiment
ACC ALICE Control Coordination
ACORDE ALICE Cosmic Ray Detector
ALTRO ALICE TPC Readout
AMANDA A Manager for DCS Archives
ANSI American National Standards Institute
API Application Programming Interface
ASCII American Standard Code for Information Interchange
ASIC Application-Specific Integrated Circuit
ATLAS A Toroidal LHC Apparatus
BE Back-end
BLOB Binary Large Object
CAN Controller Area Network
CASTOR CERN Advanced STORage manager
CERN Organisation (originally Conseil) Européenne pour la Recherche Nucléaire
CDS CERN Document Server
CLI Command Line Interface
CMOS Complementary Metal-Oxide-Semiconductor
CMS Compact Muon Solenoid
COTS Commercial Off The Shelf
CPU Central Processing Unit
CR Counting Room
CSC Cathode Strip Chamber
CTP Central Trigger Processor
DAC Digital to Analog Converter
DAQ Data Acquisition
DB Database
DCS Detector Control System
DDL Detector Data Link
DIP Data Interchange Protocol
DIM Distributed Information Management
DELPHI Detector with Lepton Photon and Hadron Identification

DP Data Point
DPE Data Point Element
DPT Data Point Type
DT Drift Tubes
EMCAL Electromagnetic Calorimeter
ECS Experiment Control System
FC Fibre Channel
FE Front-End
FED Front-End Device
FERO Front-End and Readout Electronics
FSM Finite State Machines
FIFO First In First Out
FK Foreign Key
FMD Forward Multiplicity Detector
FXS File Exchange Server
GOL Gigabit Optical Link
GDC Global Data Collector
GEANT Geometry And Tracking
GEM Gas Electron Multiplier
HBA Host Bus Adapter
HCAL Hadronic Calorimeter
HEP High Energy Physics
HLT High Level Trigger
HMPID High Momentum Particle Identification Detector
HV High Voltage
I/O Input Output
IP Interaction Point
IP Internet Protocol
ITS Inner Tracking System
JCOP Joint Controls Project
JTAG Joint Test Action Group
LAN Local Area Network
LDC Local Data Concentrator
LEP Large Electron Positron Collider
LHC Large Hadron Collider
LHCb Large Hadron Collider Beauty Experiment
LHCf Large Hadron Collider Forward Experiment
LTU Local Trigger Units
LUN Logical Unit Number
LV Low Voltage
Mbps Megabits per Second
MCM Multi Chip Module
MRPC Multi-gap Resistive Plate Chambers
MWPC Multi Wire Proportional Chamber
NFS Network File System
OCCI Oracle C++ Call Interface
OCDB Offline Conditions Database
ODBC Open Database Connectivity
OLE Object Linking and Embedding
ON Operator Node
OPC OLE for Process Control
OS Operating System
PC Personal Computer
PCI Peripheral Component Interconnect
PID Particle Identification
PIT Pixel Trigger System
PHOS Photon Spectrometer
PK Primary Key
PLC Programmable Logic Controller
PL/SQL Procedural Language/Structured Query Language
PMD Photon Multiplicity Detector
PVSS Prozessvisualisierungs- und Steuerungssystem
QCD Quantum Chromodynamics
QED Quantum Electrodynamics
QGP Quark-Gluon Plasma
RAC Real Application Cluster
RAID Redundant Array of Inexpensive Disks
RAM Random Access Memory
RDB Relational Database
RF Radio Frequency
RICH Ring Imaging Čerenkov Detector
RHIC Relativistic Heavy Ion Collider
RMAN Recovery Manager
RMS Root Mean Square
RORC Read Out Receiver Cards
RPC Resistive Parallel Plate Chambers
SAN Storage Area Network
SCADA Supervisory Control And Data Acquisition
SDD Silicon Drift Detector
SLAC Stanford Linear Accelerator Center
SPD Silicon Pixel Detector
SQL Structured Query Language
SSD Silicon Strip Detector
SM Standard Model
SUSY Supersymmetry
TOF Time Of Flight
TOTEM Total Elastic and diffractive cross section Measurement
TPC Time Projection Chamber
TRD Transition Radiation Detector
TRG Trigger
UI User Interface
WN Worker Node
WAN Wide Area Network
ZDC Zero Degree Calorimeter

Chapter 1

Introduction

Humankind's curiosity about the basic questions of the origin of matter and the Universe dates back to man's beginnings. Current High Energy Physics (HEP) continues the search for an answer to these and other questions, such as what the matter and energy content of the Universe is, why matter is so much more abundant than anti-matter, and many more. Nowadays we know that the Universe originated in a singularity, called the Big Bang, which occurred about ≈13.7 billion years ago with high energy density and a temperature set by the Planck scale, T≈1.22×10^19 GeV. Ever since, the Universe has been expanding and therefore cooling. The Universe underwent a series of phase transitions during its early expansion - roughly 10 μs after the Big Bang a phase transition occurred in which all the matter of the Universe was converted from a plasma state made of colored states, quarks and gluons - a Quark-Gluon Plasma (QGP) - to a phase consisting of color-singlet hadrons.

The Standard Model (SM) is a well established theory which explains experimental phenomena witnessed in the laboratory and is also able to predict new ones. The SM is a collection of theories which are joined together in an attempt to build a single mathematical framework able to describe fundamental particle physics. Since classical physics is a macroscopic limit of quantum physics, the SM actually represents our attempt at a single equation, a theory of particle physics.

Almost all of the new subatomic particles discovered during the last century in cosmic-ray, fixed-target and collider experiments were not present in ordinary matter and were highly unstable. All particles are categorized into leptons (spin-1/2 point-like particles, like the electron), hadrons (composite, non-point-like particles, like the proton) and gauge bosons (force carriers, like the massless photon). Hadrons presented the largest mystery, as it seemed that the number of particles and resonances, each with different properties, had no limit. The particles were grouped together and empirically assigned quantum numbers describing their behavior, without any underlying explanation. The SM brought order and explanation, and it also reveals the beauty of elementary particle physics.

The SM is the union of Quantum Electrodynamics (QED), Quantum Chromodynamics (QCD) and the theory of the weak interaction. QED describes the interaction of electrically charged particles with photons. QCD describes the interaction of particles carrying a color charge. The weak interaction describes flavor dynamics - the interactions of quarks and leptons with the W± and Z0 gauge bosons. In the SM the fundamental building blocks of matter are point-like particles with spin 1/2, namely six quarks and six leptons (see Table 1.1), together with their corresponding antiparticles. They are categorized into 3 generations. The members of all generations can, in a simplified view, be regarded as cousins, since the members of the 2nd and 3rd generations share most of their quantum numbers with the corresponding members of the 1st generation; the only difference is their higher mass. These particles interact by the exchange of force-mediating particles known as gauge bosons. There exist four fundamental forces in the Universe, namely the strong, electromagnetic, weak and gravitational forces. All of them have mediating particles that facilitate the corresponding interaction. For the strong force it is the gluon, the electromagnetic interaction is mediated by the photon, the weak force is caused by the exchange of W± and Z0 bosons, and it is assumed that the gravitational force is mediated by gravitons. Gravitons have not been observed experimentally yet. Leptons interact via the weak force. Charged leptons interact via the weak force and via the electromagnetic force. Quarks interact via the weak force, the electromagnetic force and also via the strong force, because quarks have a property called color. In QCD the color charge is roughly the equivalent of the electromagnetic charge in QED. The color can take one of three possible values, usually referred to as red, green, and blue. Quarks are confined in the form of hadrons that are colorless (referred to as white). Hadrons can be either baryons or mesons. Baryons consist of three quarks or three anti-quarks (for example the proton is uud). Mesons consist of a quark-antiquark pair (for example the π+ is an up quark and a down anti-quark). The strong force between the colored quarks is mediated by the eight gluon bosons. However, the gluons themselves carry color, from which it follows that gluons interact among themselves. The interaction of gluons gives rise to a remarkable feature of QCD called asymptotic freedom, namely that the interaction force between quarks weakens as they get closer to each other. Asymptotic freedom has a spectacular consequence: above a certain critical temperature and density the quarks and gluons are freed from their hadrons and create a deconfined phase of matter also known as the QGP.

        | 1st Generation          | 2nd Generation       | 3rd Generation
Quarks  | u (up)                  | c (charm)            | t (top)
        | d (down)                | s (strange)          | b (bottom)
Leptons | νe (electron neutrino)  | νμ (muon neutrino)   | ντ (tau neutrino)
        | e (electron)            | μ (muon)             | τ (tau)

Table 1.1: An overview of fundamental fermions.
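To make the quark-content bookkeeping above concrete, here is a small illustrative sketch (my own, not from the thesis) that recovers the electric charges of the proton, the neutron and the π+ from the standard quark charges (+2/3 e for up-type quarks, −1/3 e for down-type quarks, with the sign flipped for antiquarks):

```python
from fractions import Fraction

# Electric charges of the quarks, in units of the elementary charge e.
QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3),
                "c": Fraction(2, 3), "b": Fraction(-1, 3), "t": Fraction(2, 3)}

def hadron_charge(quarks):
    """Sum constituent-quark charges; a leading '~' marks an antiquark."""
    total = Fraction(0)
    for q in quarks:
        charge = QUARK_CHARGE[q.lstrip("~")]
        total += -charge if q.startswith("~") else charge
    return total

print(hadron_charge(["u", "u", "d"]))   # proton  (uud)     -> 1
print(hadron_charge(["d", "d", "u"]))   # neutron (udd)     -> 0
print(hadron_charge(["u", "~d"]))       # pi+     (u d-bar) -> 1
```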

Although the SM is the best theory we currently have, it is not a definitive theory, as it suffers from several limitations. It does not account for the gravitational force. The origin of particle masses is also not resolved - whenever the mass of a particle is needed, it has to be determined experimentally. In total there are 19 free parameters in the SM which must be determined experimentally. Furthermore, the SM cannot explain some cosmological phenomena, such as the origin of Dark Matter and why an excess of matter over anti-matter is observed in the Universe. The Large Hadron Collider (LHC), currently again reaching its startup phase at the European Organization for Particle Physics (CERN), together with the LHC experiments, will shed light on the answers to these questions and hopefully discover new physics too. The LHC, which will be capable of accelerating particles to an energy never achieved before in a particle collider, is a great milestone, not only for high-energy physics, but for all humankind.

1.1 The Quark-Gluon Plasma

We may know what the basic building blocks of the universe are, but we still have a long way to go until we can understand and describe the complex properties of matter and its various manifestations - nuclear matter is only one of the possible manifestations. As mentioned above, at very high densities and temperatures the nucleons are expected to melt into their constituents and to form a plasma consisting of quarks and gluons, creating another possible manifestation of matter - the Quark-Gluon Plasma (QGP). Other phases might exist in the interior of neutron stars. The present experimental and theoretical knowledge about the different phases of strongly interacting matter can be summarized in a generic phase diagram (see Fig. 1.1). Nuclear matter exists in different phases as a function of temperature and density. In highly compressed cold nuclear matter, which may exist in the interior of neutron stars, the baryons lose their identity and dissolve into quarks and gluons. At higher temperatures and small net baryon density a phase transition from ordinary hadronic matter to quark-gluon matter takes place. It is not possible to perturbatively calculate physics quantities in QCD due to the large QCD coupling constant in the limit of low energy and large distances. The only known way to solve the equations of QCD in the region of strong coupling is to discretize the Euclidean space-time onto a lattice - so-called Lattice QCD. Solving QCD in lattice calculations, at vanishing or finite net-baryon density, predicts a cross-over transition from the deconfined thermalized partonic matter to hadronic matter at a critical temperature Tc ≈ 160-180 MeV [1]. A similar value has been derived as the limiting temperature for hadrons when experimentally investigating hadronic matter [2].
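To put the critical temperature into more familiar units, a short back-of-the-envelope conversion (my own sketch, not part of the original analysis) using the Boltzmann constant k_B ≈ 8.617×10⁻¹¹ MeV/K:

```python
# Convert the lattice-QCD critical temperature T_c ~ 160-180 MeV to kelvin.
K_B_MEV_PER_K = 8.617e-11  # Boltzmann constant, MeV per kelvin

for t_c_mev in (160.0, 170.0, 180.0):
    print(f"T_c = {t_c_mev:.0f} MeV  ~  {t_c_mev / K_B_MEV_PER_K:.1e} K")
# All three come out near 2e12 K, roughly five orders of magnitude hotter
# than the core of the Sun.
```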

When two heavy nuclei collide at ultra-relativistic energies, a bulk system with enormous density, pressure and temperature is created. This should free the quarks and the gluons of the bulk system into a small volume of Quark-Gluon Plasma. In this state quarks are no longer confined in hadrons as they are in normal nuclear matter, but can interact freely with a large number of other quarks. The created Quark-Gluon Plasma is expected to live only for a very short time of about ≈10−23 s. Heavy-ion physics is focused not only on the search for the Quark-Gluon Plasma, but also on the study and understanding of how collective phenomena and macroscopic properties emerge from the microscopic laws of elementary particle physics.
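One way to see the order of magnitude of that lifetime (my own sketch, assuming a fireball roughly the size of the colliding nuclei, ~10 fm across) is to compare it with the light-crossing time of the reaction volume:

```python
# Order-of-magnitude check: the quoted QGP lifetime of ~1e-23 s is comparable
# to the time light needs to cross a fireball a few femtometres in diameter.
C = 2.998e8                # speed of light, m/s
FIREBALL_DIAMETER = 1e-14  # assumed ~10 fm, in metres

print(f"light-crossing time: {FIREBALL_DIAMETER / C:.1e} s")  # ~3e-23 s
```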

Figure 1.1: The phase diagram of nuclear matter, summarizing the present understanding of the structure of nuclear matter at different densities and temperatures. The lines illustrate the results achieved by the different ultra-relativistic collider experiments. From [3].

The existence and the properties of the QGP can provide a better understanding of QCD and of confinement, and information about the transition from the hadronic state to the QGP. Due to its inner pressure, the QGP expands and cools down until a critical temperature is reached, where hadronization starts. The QGP will also provide information about the restoration of chiral symmetry, which is the symmetry between right- and left-handed particles. At high temperature T and vanishing chemical potential μB (baryon-number density), the qualitative aspects of the transition to the QGP are controlled by the chiral symmetry of the QCD Lagrangian. This symmetry is an exact global symmetry only in the limit of vanishing quark masses. The heavy quarks (charm, bottom, top), however, are too heavy to play any role in the thermodynamics in the vicinity of the phase transition. That is why the properties of 3-flavor QCD are of interest. In the massless limit, 3-flavor QCD undergoes a first-order phase transition. However, we know that quarks are not massless in nature. In particular, the strange quark, whose mass is of the order of the phase-transition temperature, plays a key role in determining the nature of the transition at vanishing chemical potential. It is still unclear whether the transition shows discontinuities for realistic values of the up, down and strange quark masses, or whether it is only a rapid cross-over. Lattice QCD calculations suggest that the crossover is rather rapid and takes place in a narrow temperature interval around 170 MeV.

Due to the short lifetime of the QGP, this phase of matter cannot be observed directly. Various probes and observables have to be combined in order to get a reliable proof of the formation of a QGP and to measure its properties. Some of these probes and observables come directly from the QGP, do not interact strongly and are therefore not affected by the plasma - for example photons or leptons. Direct information about the QGP can be obtained by studying these observables. A different type of probe does interact strongly and is altered or weakened in the QGP; its analysis relies mainly on a comparison of these observables to reference measurements taken during p-p or p-ion runs. The most common probes and observables are:

• Direct Photons - Photons are produced during different stages of the creation of the QGP. Photons interact only electromagnetically and therefore have a mean free path much larger than the size of the reaction volume. Photons provide a direct probe of the initial stages of the collision, since there are no final-state interactions, as there are with hadrons. Photons created in the initial hard parton scattering are called prompt photons. They can reach energies of up to several hundred GeV. Prompt photons are followed by thermal photons with energies of up to a few GeV, created in the QGP and hadron-gas phases. An increase in the thermal photons is expected from a QGP. The low production rates of direct photons and the immense background from hadronic decays make the detection of these photons rather difficult.

• Dileptons - lepton-antilepton pairs that are created throughout the evolution of the system. Dileptons are an important tool for measuring the temperature and the dynamical properties of the matter produced in the heavy-ion collision, as they also offer a direct measurement of the QGP.

• Jet Quenching - comes into effect when the propagation of partons through a hot and dense medium modifies their transverse momentum due to the induced radiative energy loss. This implies the suppression of high-pT particles and can therefore be studied by measuring the momentum spectra. When a hard collision producing two jets occurs near the surface of the interacting region, jet quenching might lead to the weakening or complete absorption of one of the jets. This can be studied with the azimuthal back-to-back correlation of the jets. Jet quenching can also be verified by comparing the high-pT spectra from heavy-ion and p-p collisions (see the worked definition after this list).

• J/Ψ Suppression - the J/Ψ is a bound state of a charm quark and a charm antiquark (cc̄). Due to Debye screening, J/Ψ production will be suppressed in a QGP. A comparison with the data from p-p collisions can prove the presence of the QGP.

• Strangeness Enhancement - particles containing ss̄ pairs will be favored in spite of the large strange-quark mass, since the creation of uū and dd̄ pairs will be blocked by the Pauli exclusion principle. A comparison with the reference data from p-p collisions will prove this effect of the QGP.
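The "comparison with p-p reference data" invoked in the jet-quenching and strangeness items is usually quantified with the nuclear modification factor, which the text does not spell out; a standard definition (added here as a sketch) is:

```latex
% Nuclear modification factor: yield in nucleus-nucleus collisions divided by
% the p-p yield scaled with the average number of binary nucleon-nucleon
% collisions <N_coll>. R_AA ~ 1 means no medium effect; R_AA < 1 at high p_T
% is the jet-quenching signal.
\begin{equation}
  R_{AA}(p_T) =
    \frac{\mathrm{d}^2 N_{AA} / \mathrm{d}p_T \, \mathrm{d}\eta}
         {\langle N_{\mathrm{coll}} \rangle \, \mathrm{d}^2 N_{pp} / \mathrm{d}p_T \, \mathrm{d}\eta}
\end{equation}
```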

Several experiments were carried out at the Super Proton Synchrotron (SPS) accelerator at CERN using 160 GeV/A lead-ion projectiles against a fixed lead (208Pb) target. About 1500-2000 charged particles were created in each of these collision events. At the LHC a charged-particle multiplicity of up to 50 000 is expected, of which several thousand particles are expected in the central rapidity region - the region of interest for QGP physics. Therefore the detector system must have an excellent spatial resolution to be able to separate the particle tracks and to record electronically the track (and the ionization strength) of each traversing charged particle. In addition, the flight time of the particles has to be measured. This allows identifying and determining the momentum of all charged particles produced in a Pb-Pb head-on collision. It is also possible to identify neutral strange particles by their secondary decay into charged particles.

Chapter 2

The Large Hadron Collider and its Experiments

The aim of this chapter is to give an overview of the Large Hadron Collider and its current status. LHC experiments are briefly described as well.

2.1 The Large Hadron Collider

The Large Hadron Collider (LHC) started its operation on September 10th, 2008 and is the biggest man-made accelerator on the planet. The 27 km long tunnel housing the machine is located at a depth of 50 to 175 m underneath the border area of France and Switzerland close to Geneva at CERN, the European Council for Nuclear Research. It is a proton synchrotron consisting of a double-ring vacuum vessel, superconducting dipole and triplet focusing magnets, radio-frequency accelerating cavities and cryogenic cooling. The LHC replaces the Large Electron-Positron collider (LEP), which was particularly successful in confirming the Standard Model between 1989 and 2000 with a maximum center-of-mass energy of 209 GeV. First discussions that led to the LHC project started in 1984 and the project was finally approved by the CERN Council in 1994. Protons will be accelerated to energies of up to 7 TeV per beam and lead ions up to 574 TeV per beam (2.76 TeV per nucleon), therefore providing collisions at √s = 14 TeV for protons and √s = 1 148 TeV (√sNN = 5.5 TeV per nucleon pair) for lead ions, making the LHC, together with its experiments, the biggest sub-nuclear microscope in the world. It will explore the energy regime found in the Universe 10−12 seconds after the Big Bang, when its temperature was of the order of 10^16 Kelvin.
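Two quick consistency checks on these numbers (my own sketch, using standard constants; not taken from the thesis): the relativistic gamma factor of a 7 TeV proton beam, and the thermal energy corresponding to the quoted 10^16 K:

```python
M_PROTON_GEV = 0.9383     # proton rest mass, GeV
K_B_EV_PER_K = 8.617e-5   # Boltzmann constant, eV/K

# Gamma factor of a 7 TeV proton beam:
gamma = 7000.0 / M_PROTON_GEV
print(f"gamma at 7 TeV: {gamma:.0f}")      # ~7460, close to the 7461 in Table 2.1

# Thermal energy at the quoted early-Universe temperature of 1e16 K:
kT_tev = K_B_EV_PER_K * 1e16 / 1e12
print(f"k_B * 1e16 K = {kT_tev:.2f} TeV")  # ~0.9 TeV, i.e. the TeV scale probed by the LHC
```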

2.2 Accelerator physics

Figure 2.1: The LHC and its area, courtesy of the CERN photography service. On the left, an aerial view towards the Geneva area and the Alps. On the right, an underground schematic showing the SPS, the LHC and the four main LHC experiments.

Charged particles can be accelerated by exerting an electric field on them. Low-loss resonant microwave cavities are usually used to obtain the high field energies required. Accelerators can be either linear - when accelerating, the particles make a single pass through many of these cavities (e.g. SLAC) - or circular - the particle trajectories are bent using magnetic fields to facilitate many passes through the same cavities (a synchrotron, like the LHC). In both cases additional cavities and higher field strengths are required to increase the energy. The two most important parameters of particle accelerators are energy and luminosity. The center-of-mass energy, √s, of the collisions determines the energy available to create new particles from the vacuum. The luminosity is a measure of the particle flux; the rate of an interaction is proportional to the luminosity. For a particle accelerator experiment, the luminosity is defined by:

L = \frac{f n N^2}{A}    (2.1)

where n represents the number of bunches in both beams with N particles per bunch, f is the revolution frequency, and A is the cross-sectional area of the beams that overlap completely. The frequency of interactions (or in general of a given process i) can be calculated from the corresponding cross-section σi and the luminosity:

\frac{dN_i}{dt} = L \sigma_i    (2.2)

or in the integral form during time t:

N_i(t) = \int_0^t L \sigma_i \, dt    (2.3)

The cross-section of a given process is a measure of the probability of an interaction between two colliding particles and it is a function of the center-of-mass energy. Measuring the event rate measures the cross-section. Cross-sections have the dimension of area and are typically measured in barns (1 b = 10−28 m²). Integrated luminosities are typically measured in inverse barns, b−1, and are a measure of the sample size collected. Over the lifetime of the ALICE experiment some 10 fb−1 of data are expected to be collected. In the case of a hadronic mode, k, in a detector with a real detection efficiency, εk, where hadronisation through an intermediate state j occurs, equation 2.3 becomes:

S_k(t) = \int_0^t L \sigma_i f_{ij} BR_{jk} \varepsilon_k \, dt    (2.4)

where Sk is the number of observed signal decays of type k, fij is the fraction of Ni which hadronise to state j, and BRjk, known as the Branching Ratio (BR), is the probability that a state j decays to the signal k. Equation 2.4 implies that to gather more signal of a desired type either higher luminosity or a longer observation time is needed. Generally, it is more useful to increase the luminosity by some factor, rather than increasing the observation time by the same factor. Incorporating beam parameters to represent reality, equation 2.1 is expressed (for a Gaussian beam) as:

L = \frac{N^2 n_b f \gamma}{4 \pi \varepsilon_n \beta^*} F    (2.5)

where N is the number of particles per bunch, nb the number of bunches per beam, f the revolution frequency, γ the relativistic gamma factor, εn the normalized transverse beam emittance, β* the value of the beta function β at the collision point, and F the geometric luminosity reduction factor due to the crossing angle at the IP. These parameters describe the beam in transverse coordinates. The emittance is a conserved quantity of the beam, provided no damping is employed. A larger emittance indicates more particles with high transverse energy, also known as a high beam 'temperature'. A smaller emittance indicates a more parallel beam of more highly contained particles - a low beam temperature. The beta function β is a function of position: a larger β describes a beam whose particles are spaced far apart, but traveling more parallel; a small β describes a beam diverging from, or converging onto, a point. To increase the luminosity one can do any of the following:

1. Increase the beam currents (by increasing f, n, N)
2. Decrease β*, using powerful focusing magnets near the Interaction Point (IP)
3. Decrease ε, through damping or stochastic cooling

All these techniques are employed together in the LHC, which is at the cutting edge of current accelerator technology. The LHC design parameters are given in the next section.

2.3 The Large Hadron Collider design parameters

The main purpose of the LHC [4] is the search for the Higgs boson and SUSY particles (ATLAS and CMS), the study of the quark-gluon plasma in Pb-Pb collisions (ALICE), and the study of CP violation and B-physics, mainly in the frame of the LHCb experiment. Possible effects of physics beyond the Standard Model as well as measurements of the total cross section, elastic scattering and diffractive processes (TOTEM) are also of interest.

There are two major experiments at the LHC demanding the highest possible luminosity. These are ATLAS and CMS, aiming at a peak luminosity of L = 1.0 × 10^34 cm−2 s−1. In addition to these high-luminosity experiments there are two other experiments that require low luminosity (for the proton operation of the LHC). One of them is LHCb, with a peak luminosity of L = 2.0 × 10^32 cm−2 s−1, and the second is TOTEM (L = 2 × 10^29 cm−2 s−1 with 156 bunches). The peak luminosity required by the dedicated ion experiment ALICE is L = 1.0 × 10^27 cm−2 s−1 for nominal Pb-Pb ion operation with 592 bunches.

In order to provide more than one hadronic event per beam crossing, the design luminosity of the LHC for protons has been set to L = 1.0 × 10^34 cm−2 s−1, which corresponds to 2 808 bunches, each containing 1.15 × 10^11 protons, a transverse beam size of 16 μm r.m.s., a bunch length of 7.5 cm r.m.s. and a total crossing angle of 320 μrad at the interaction points. However, the LHC will deliver a significantly lower luminosity to the ALICE experiment during proton collisions (about 3 × 10^30 cm−2 s−1) by means of defocusing or displacing the beams. For heavy-ion collisions, the luminosity will be L = 1.0 × 10^27 cm−2 s−1, corresponding to 592 bunches, each containing 7 × 10^7 lead ions, while the transverse beam sizes will be similar to those of the proton beams.

The high beam intensities exclude the use of anti-proton beams and of one common vacuum and magnet system for both circulating beams, since the achievable production rates for anti-protons are at present too low. Colliding two beams of the same particles requires that opposite magnet dipole fields are exerted on the opposite beams. The LHC is therefore designed as a proton-proton collider with separate magnetic fields and vacuum chambers in the main arcs, and with common sections only at the insertion regions where the experimental detectors are located. A novel two-in-one magnet construction allows both beam pipes to be housed in a single yoke and cryostat, significantly saving space and costs (see Fig. 2.2). The two beams share an approximately 130 m long common beam pipe along the Interaction Regions (IR). Together with the large number of bunches (2 808 for each proton beam) and a nominal bunch spacing of 25 ns, the long common beam pipe implies 34 parasitic collision points for each experimental insertion region. Dedicated crossing-angle orbit bumps separate the two LHC beams left and right from the central interaction point in order to avoid collisions at these parasitic collision points.
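A rough consistency check of the "34 parasitic collision points" figure (my own sketch): counter-rotating bunches spaced 25 ns apart meet every half bunch spacing along the ~130 m common beam pipe:

```python
C = 2.998e8                   # speed of light, m/s
bunch_spacing_m = 25e-9 * C   # ~7.5 m between consecutive bunches
common_pipe_m = 130.0

encounters = common_pipe_m / (bunch_spacing_m / 2.0)
print(f"parasitic encounters per insertion region: ~{encounters:.0f}")
# ~35, close to the 34 quoted above (the exact count depends on the geometry).
```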

Figure 2.2: The LHC dipoles, courtesy of the CERN photography service. On the left, a detail of the beam lines inside a superconducting dipole magnet. On the right, the dipole magnets installed in the LHC tunnel, also showing the two beam pipes in the front.

The LHC consists of 8 sectors, each containing bending dipole magnets and focusing quadrupole magnets which keep the particles centered on the design orbit. The LHC also employs various beam-maintenance systems. Radio-frequency cavities, which produce a field of 5.5 MV m−1, focus the particles into longitudinal bunches, accelerate them and compensate for energy losses due to synchrotron radiation. Collimation systems 'clean' the beam by removing particles that are too far from their bunch (the so-called beam halo) or are too fast or too slow. The cleaning prevents particles from being lost in an uncontrolled fashion within the accelerator. A beam dumping system can dump the beam in case unstable beam conditions are detected. The peak beam energy in a storage ring depends on the integrated dipole field along the storage ring circumference. Aiming at peak beam energies of up to 7 TeV inside the existing LEP tunnel, a peak dipole field of 8.33 T is necessary. To achieve such high magnetic fields and at the same time avoid excessive resistive losses, the dipole magnets must be superconducting. The LHC contains a total of 1 232 dipole magnets, each 14.3 m long, powered by a maximum current of 11.7 kA and cooled down to 1.9 K¹ using superfluid helium. To cool down the LHC total cold mass of 37 000 tons, a total helium inventory of 96 000 kg is available, which makes the LHC the world's biggest cryogenic system.
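A back-of-the-envelope check (my own sketch, using the standard magnetic-rigidity relation p [GeV/c] ≈ 0.3 B [T] ρ [m]) that 1 232 dipoles of 14.3 m at 8.33 T are indeed enough to bend a 7 TeV proton beam around the ring:

```python
import math

p_gev = 7000.0   # beam momentum, GeV/c
B_t = 8.33       # dipole field, T

rho = p_gev / (0.3 * B_t)                  # required bending radius, ~2800 m
bending_length_needed = 2 * math.pi * rho  # ~17.6 km of dipole field
bending_length_installed = 1232 * 14.3     # ~17.6 km installed

print(f"required bending length:  {bending_length_needed / 1000:.1f} km")
print(f"installed dipole length:  {bending_length_installed / 1000:.1f} km")
```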

A summary of the nominal values of chosen accelerator parameters is presented in Table 2.1.

In Figure 2.3 an overview of the whole CERN accelerator complex is shown and the locations of the individual detectors using the LHC beam are indicated. Acceleration of protons up to 50 MeV starts in the linear accelerator (LINAC2). The next accelerating stage is composed of two rings: the so-called Proton Synchrotron Booster (PSB), which can boost the particles up to 1.4 GeV, and the 628 m circumference Proton Synchrotron (PS), in which the proton energy reaches 26 GeV before extraction. During acceleration in the PS, the bunch pattern and spacing needed for the LHC are generated by splitting the low-energy bunches. The final link in the injector chain for the LHC is the 7 km Super Proton Synchrotron (SPS), accelerating protons from the PS up to 450 GeV. Several injections into the LHC are needed until all bunches of both beams are filled - the acceleration cycle takes about 20 s and creates a train of bunches with a total kinetic energy of more than 2 MJ. This represents approximately 8% of the beam bunches needed to fill the LHC ring, hence the whole acceleration cycle is repeated 12 times per ring.
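A quick sanity check of the ~2 MJ kinetic energy per injected train (my own sketch, taking one injection to be roughly 8% of the 2 808 bunches at the SPS extraction energy of 450 GeV):

```python
GEV_TO_J = 1.602e-10

bunches_per_injection = 0.08 * 2808        # ~225 bunches per SPS train
protons = bunches_per_injection * 1.15e11  # protons per train
energy_j = protons * 450 * GEV_TO_J        # 450 GeV per proton

print(f"energy per injected train: ~{energy_j / 1e6:.1f} MJ")
# ~1.9 MJ, in line with the ~2 MJ figure quoted above.
```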

Lead ions (208Pb27+) are accelerated in the linear accelerator LINAC3 to 4.2 MeV per nucleon. They are then stripped by a carbon foil and the charge state 208Pb54+ is selected in a filter line. The selected ions are further accelerated in the Low Energy Ion Ring (LEIR) to an energy of 72 MeV per nucleon. The ions are then transferred to the PS, where they are further accelerated to 5.9 GeV per nucleon and sent through another foil which fully strips them to 208Pb82+. The SPS accelerates the fully stripped ions to 177 GeV per nucleon and injects them into the LHC, where they are further accelerated to the maximal energy of 2.76 TeV per nucleon.

¹ That is 0.8 K lower than the background temperature of the Universe.

LHC Parameter                              | Proton mode          | Pb ion mode
Injection energy                           | 450 GeV              | 36.9 TeV
Maximum beam energy                        | 7.0 TeV              | 574 TeV
Stored energy per beam                     | 362 MJ               | 3.81 MJ
Time between collisions                    | 25 ns                | 125 ns
Bunch spacing                              | 7.5 m                | 40.5 m
Bunches per beam                           | 2808                 | 592
Bunch length (r.m.s.)                      | 7.5 cm               | 7.94 cm
Particles per bunch                        | 1.15×10^11           | 7×10^7
Nominal luminosity                         | 10^34 cm−2 s−1       | 1.95×10^27 cm−2 s−1
Total cross-section (nucleon-nucleon)      | 100 mb               | 514 000 mb
Energy loss per turn per nucleon           | 6.7 keV              | 1.12 MeV
Synchrotron radiation power per ring       | 3.6 kW               | 83.9 W
Relativistic gamma factor γ                | 7461                 | 2963.5
Normalized transverse beam emittance εn    | 3.75 μm              | 1.5 μm
Circulating beam current                   | 0.582 A              | 6.12 mA
Geometric luminosity reduction factor F    | 0.836                | 1
Dipole magnet temperature                  | 1.9 K
Magnetic field strength                    | 8.33 T
Circumference                              | 26 659 m
Beta function at IP1 and IP5               | 0.55 m
Beta function at IP2                       | 0.5 m
Beta function at IP8                       | 1–50 m

Table 2.1: Relevant LHC beam parameters for the peak luminosity and proton operation (data taken from [4]).

Figure 2.3: The CERN accelerator complex, showing LHC and also non-LHC experiments. The picture is not to scale and is adapted from the public CERN Document Server [5]. The acceleration path for the protons and the Pb ions is marked in red and blue, respectively.

2.4 The LHC Experiments

Six experiments in total will harness the estimated 2.4 × 10^9 proton-proton collisions per second, corresponding to about 2 × 10^11 particles produced per second at the LHC at nominal luminosity. The LHC and its experiments are described in detail in [6]. ALICE [7], described in Chap. 3, is a dedicated heavy-ion experiment designed to study strongly-interacting matter. ATLAS [8], described in Sec. 2.4.1, and CMS [9], described in Sec. 2.4.2, are complementary experiments designed to cover 4π steradians, which will cover the widest possible range of physics at the LHC, especially the Higgs boson and physics beyond the Standard Model. LHCb [10], described in Sec. 2.4.4, will investigate the physics of b-quarks, examining CP-symmetry violation and rare decays. LHCf [11], described in Sec. 2.4.6, is situated close to the ATLAS experiment and measures forward particles created during LHC collisions to provide further understanding of high-energy cosmic rays. TOTEM [12], described in Sec. 2.4.5, is situated close to the CMS experiment and measures the total cross-section, elastic scattering, and diffractive processes.

2.4.1 The ATLAS Detector

ATLAS (A Toroidal LHC ApparatuS) [8] is a general-purpose detector which will exploit the high luminosities delivered by the LHC in order to search for the Higgs boson(s), the most popular mechanism for electroweak symmetry breaking in the Standard Model. It will also search for physics beyond the Standard Model; for example, it will search for the new heavy particles postulated by supersymmetric extensions (SUSY) of the Standard Model, and it will also look for evidence of extra dimensions. Precision measurements of the W boson and top quark masses will be covered as well. ATLAS employs three sets of magnets that provide up to 8 Tm of bending power for excellent momentum resolution: one inner superconducting solenoidal 2 T magnet around the inner detector cavity, one outer superconducting toroidal air-core magnet providing a magnetic field of up to 4.1 T, and two endcap toroids. The inner detector consists of a large silicon system of pixel and strip detectors and a gas-based transition radiation straw tracker. The electromagnetic calorimeters use liquid argon for their measurements. Some liquid argon is also used for hadronic measurements in the end-caps of the detector. In the central part of the detector an iron and scintillator system provides hadronic calorimetry. The muon tracking chambers and trigger chambers are gas-based detectors and surround the calorimetry, with four wheels on either side of the detector, the largest of which is over 22 m in diameter. The ATLAS detector is shown on the left in Figure 2.4.

Figure 2.4: The ATLAS and CMS detectors. The picture is adapted from the public CERN Document Server [5].

2.4.2 The CMS Detector

The Compact Muon Solenoid (CMS) [9] is the second general-purpose experiment. Like ATLAS, the CMS physics programme is dedicated to the study of the electroweak symmetry breaking mechanism through the possible discovery of one or more Higgs bosons, a detailed study of Standard Model physics, and a study of possible phenomena beyond the Standard Model as well. CMS is designed to cover the full physics potential of the LHC. In order to achieve this, a precise measurement of muons, electrons, photons and jets over a wide energy range will be performed. Unlike the ATLAS experiment, the CMS experiment uses only one magnetic system, consisting of a single superconducting solenoid which generates a magnetic field of 4 T. The main tracking system is the Inner Tracker, consisting of 10 layers of silicon strip and pixel detectors in the high-occupancy region close to the Interaction Point, with a total surface of more than 200 m². The next layer, the Electromagnetic Calorimeter (ECAL), is built of 80 000 scintillating lead tungstate crystals. The ECAL is followed by the Hadronic Calorimeter (HCAL), which consists of scintillator layers sandwiched with layers of brass or steel. A design speciality of CMS is the placement of the ECAL inside the magnet, which allows optimized detection of one of the main possible Higgs decay channels, H → γγ. Outside of the solenoid, there is an iron-core muon spectrometer placed in the return field of the powerful solenoid, consisting of Drift Tubes (DT), Cathode Strip Chambers (CSC) and Resistive Parallel Plate Chambers (RPC). In order to achieve high-precision trajectory measurements, the Resistive Parallel Plate Chambers are placed in both the Barrel and the End Caps, the Drift Tubes are placed in the central barrel, while the Cathode Strip Chambers are mounted in the End Caps. The CMS detector is shown on the right in Figure 2.4.

2.4.3 The ALICE Detector

The ALICE experiment is dedicated to heavy-ion physics. The ALICE detector is optimized to identify, study and characterize strongly-interacting matter, especially the Quark-Gluon Plasma and the associated phase transition created at LHC energy densities. ALICE also plays a special role in the context of the LHC experiments in p+p collisions, since its sensitivity at very low transverse momentum pT and its excellent particle identification allow for measurements that are not possible for the other LHC experiments but contribute to the understanding of other LHC experimental results. ALICE (see Fig. 2.5, right) is described in more detail in Chapter 3.

2.4.4 The LHCb Detector

The LHCb experiment [10] is dedicated to the study of CP violation in the B-meson system and other rare phenomena using b-quarks produced at the LHC. Because the proton is not an elementary particle, almost all observed collisions in which heavy quarks are produced are essentially gluon-gluon fusion. Simulations of these processes show that B-hadrons formed by both b and anti-b quarks are predominantly produced in the same forward cone. This particular polar-angle distribution leads to the specific design of the LHCb detector. Unlike the other three main LHC experiments, the LHCb detector is not a central detector, but rather a forward single-arm spectrometer. This geometry has the advantage of simplifying the mechanical design and allowing more precise detection of correlated b anti-b production; however, the specific topology of the events under study leads to a high particle density in the narrow angular acceptance of the detector, which necessitates the use of radiation-hard components close to the Interaction Point (IP) and the use of a high-performance trigger. To help the trigger decision, the LHCb luminosity is decreased to about 2.0 × 10^32 cm−2 s−1 in order to have roughly one interesting event per bunch crossing. The primary and secondary vertices have to be measured as accurately as possible in order to identify B-mesons and to determine their lifetime precisely. For this purpose LHCb employs a silicon detector called the Vertex Locator (VeLo). The momenta of charged particles will be reconstructed by the magnetic spectrometer, which is composed of a dipole magnet and tracking detectors. Particle identification is based on RICH detectors. The LHCb detector also uses electromagnetic and hadronic calorimeters together with muon chambers for the measurement of energy. The overall schematic of the LHCb detector is shown in Figure 2.5. It is about 20 m long and 10 m wide. The polar-angle coverage around the direction of the proton beams ranges from 10 mrad to 300 (250) mrad in the bending (non-bending) plane.

Figure 2.5: Side view of the LHCb detector on the left. The ALICE detector on the right. The picture is adapted from the public CERN Document Server [5].

2.4.5 The TOTEM Detector

The Total Elastic and diffractive cross section Measurement (TOTEM) experiment [12] is a far-forward experiment operating near the CMS experiment (see Fig. 2.6). TOTEM will study forward particles in order to focus on physics that is not accessible to the general-purpose experiments. TOTEM will measure the total cross-section in LHC proton collisions using a luminosity-independent method, as well as soft diffraction and elastic proton scattering in the range of polar angle, Φ, and momentum, p, given by 10−3 GeV² c−2 < (pΦ)² < 10 GeV² c−2. TOTEM, in collaboration with CMS, will also measure hard diffraction, central exclusive particle production, physics at low Bjorken x, γγ and γp physics, particle and energy flow in the forward direction, and leading particles. Each TOTEM section has two near tracking detectors, T1 and T2, and three far detectors, RP1, RP2 and RP3, on each side of the beam line. The T1 detector is a cathode-strip chamber located within the CMS endcap; the T2 detector uses Gas Electron Multipliers (GEM). The T-detectors detect inelastically scattered particles with pseudorapidities up to η = 5 and η = 7 for T1 and T2, respectively. The three far detectors RP1, RP2 and RP3 are silicon microstrip detectors encapsulated in vacuum and cooled down to -15 °C, and are called Roman Pots. They are placed between 147 m and 220 m down the beamline in order to measure the total elastic cross-section. The Roman Pots are located in the shadow of the LHC collimators to reduce background and are slightly offset from the beam line due to the gradual curvature of the ring. The total size of TOTEM is 440 m long, 5 m high and 5 m wide, and it weighs 20 tonnes.

Figure 2.6: A schematic view of the placement of the TOTEM detectors [12]. The near detectors are placed inside the CMS cavern. One set of far detectors, RP1, RP2 and RP3, is also shown.

2.4.6 The LHCf Detector

The Large Hadron Collider forward (LHCf) experiment [11], like the TOTEM detector, uses forward particles created inside the LHC. The LHCf uses them as a source to simulate cosmic rays in laboratory conditions, as the main goal of the LHCf experiment is to interpret and calibrate data from large-scale cosmic-ray experiments, like the Pierre Auger Observatory, by studying how collisions inside the LHC cause cascades of particles similar to those that cosmic rays create when they bombard the Earth's atmosphere. The LHCf experiment is located close to the ATLAS cavern and consists of two detectors, each 30 cm long, 80 cm high and 10 cm wide, weighing 40 kg.

2.5 The LHC Startup and current status

The LHC started its operation on the 10th of September 2008 with tremendous success. The first beam was sent successfully around the entire ring within one hour after the first injections. The LHC was able to pass another beam around the entire ring in the opposite direction within the same day. A picture of the beam monitor of the first beam that passed through the LHC ring is shown in Fig. 2.7. A picture of the beam profile monitor is shown in Fig. 2.8, left. Every line represents one bunch pass. Already on the 12th of September the RF system successfully captured the beam and a stable circulating beam was present (Fig. 2.8, right).

≈ e LHC commissioning was stopped for 7 days a"er a transformer failure in Point 8. e LHC was almost ready for collisions at s = 900GeV on the 19th of September. However, in the Sector 34, whi was the last sector commissioned without a beam, dipole magnet currents were increasing from 7 kA to 9.3 kA (corresponding to beam energy of 5.5 TeV) and an electrical fault occurred resulting in a magnet quen, meanical damage and release of helium from the magnet cold mass [14]. To repair the damage (see Fig. 2.9) the whole sector had to be warmed up, and the other sectors had to be eed for the same flaw in order to avoid this accident next time the LHC starts. is delayed the LHC operation at least until November 2009. e current plan is that [15] the LHC will run for the first part of the 2009-2010 run at 3.5 TeV per beam, with the possibility that the energy might rise later in the run. 3.5 TeV was selected since it allows the LHC operators to gain experience running the maine safely while opening up a new discoveryregionfortheexperiments.edevelopmentsthathaveallowedthisaregoodprogress 18 2. THE LARGE HADRON COLLIDER AND ITS EXPERIMENTS

Figure 2.7: A picture of the beam monitor of the first beam that passed through the LHC ring [13].

Figure 2.8: A picture of the beam profile monitor taken on the 10th of September (left) and on the 12th of September (right). Every line represents one bunch pass. One can see on the left monitor that with every pass the beam became more and more spatially dispersed. On the right, the RF has successfully captured the beam and a stable circulating beam was present [13].

One of the latest tests looked at the resistance of the copper stabilizer that surrounds the superconducting cable and carries the current away in case of a quench. Many copper splices showing anomalously high resistance have already been repaired, and the tests on the final two sectors revealed no more outliers, which means that no more repairs are necessary for safe running this year and next.

The procedure for the 2009 start-up will be to inject and capture beams in each direction, take collision data for a few shifts at the injection energy, and then commission the ramp to higher energy. The first high-energy data should be collected a few weeks after the first beam of 2009 is injected. The LHC will run at 3.5 TeV per beam until a significant data sample has been collected and the operations team has gained experience in running the machine. Thereafter, with the benefit of that experience, the energy will go up towards 5 TeV per beam. The LHC will run with lead ions for the first time at the end of 2010. After that, the LHC will shut down and work on moving the machine towards 7 TeV per beam will be started.

Figure 2.9: A picture of the welding of the dipole electrical busbar splice interconnects during the repair works in sector 3-4 [13].

Chapter 3

A Large Ion Collider Experiment (ALICE)

The aim of this chapter is to present an overview of the ALICE detector with its subdetectors. The online systems are presented as well.

A Large Ion Collider Experiment [7] (ALICE) is a general-purpose detector dedicated to heavy-ion collisions produced at the Large Hadron Collider (LHC) in Geneva. ALICE is designed to study the physics of strongly interacting matter at extreme energy densities by analyzing the collisions of lead nuclei at √s = 5.5 TeV per nucleon, using the hadrons, electrons, muons and photons produced in the collisions as probes. The study of the production of beauty and charm hadrons is of particular interest in order to probe the formation of deconfined matter. Charm and beauty detection requires an excellent secondary vertexing capability coping with the high-multiplicity environment of nucleus-nucleus collisions. Charged multiplicities of up to 8 000 tracks per unit of rapidity in Pb-Pb collisions have been predicted¹. The detector system dedicated to this task is the ITS [17]. The ALICE experiment will use not only the heavy-ion but also the proton beams from the LHC accelerator. ALICE has to offer excellent particle identification (PID) over a large momentum range, since the momenta of the particles produced in lead collisions will be rather low, as opposed to proton collisions at the LHC with relatively low charged-track multiplicities and high momenta. This implies the requirements of a low material budget, a rather low magnetic field and precision tracking capabilities over a large momentum range (100 MeV/c < p < 100 GeV/c), thus accessing physics topics from soft physics to jet physics and high-pT particle production. Figure 3.1 shows the event rates and cross-sections as a function of √s for some of the key channels. At LHC conditions the event rates for cc should increase by ≈10, bb by ≈100, and jet rates by many orders of magnitude with respect to RHIC.

The ALICE collaboration is a multi-national team spread over 30 countries with more than 1 000 members. After almost 15 years in the making, ALICE is becoming a reality, currently undergoing extensive commissioning and cosmic-ray data taking. ALICE is designed to address a very rich expected physics programme and is also powerful and versatile enough to explore the unknown [18].

¹ The expected multiplicity might be lower, dNch/dη = 1 500 - 4 000, as indicated by RHIC results [16].


Figure 3.1: Cross-sections and event rates for several key channels (pp and Pb-Pb) versus √s, from RHIC to LHC energies. From [18]

The ALICE detector consists of a central barrel (|η| < 0.9) contained in a magnetic field² of 0.5 T and is optimized for the detection of photons, electrons and hadrons. A very high granularity detector was chosen, which is, however, limited in readout speed compared to the other main LHC experiments. The ALICE detector employs a muon spectrometer at forward rapidities as well as additional forward and trigger detectors. Figure 3.2 shows a 3D computer view of the ALICE detector and its sub-detectors. I will introduce the various ALICE subdetectors in the following sections.

3.1 The ALICE Central Barrel

The detectors are located inside the L3 solenoid and cover roughly -0.9 ≤ η ≤ 0.9 in pseudorapidity. The following detectors, covering the full azimuth of 2π, will be traversed by a particle traveling radially outwards: the Inner Tracking System (ITS), the Time Projection Chamber (TPC), the Transition Radiation Detector (TRD), and the Time Of Flight (TOF). Their tasks are tracking and particle identification. The following detectors are also located in the central barrel, but they do not cover the full azimuth: the High Momentum Particle Identification Detector (HMPID), the

² The magnet is called the L3 magnet since it is a heritage from the L3 experiment at LEP.

Figure 3.2: 3D computer view of the ALICE detector and its sub-detectors. From [18]

Photon Spectrometer (PHOS), the Electromagnetic Calorimeter (EMCAL) and the ALICE Cosmic Ray Detector (ACORDE).

3.1.1 Inner Tracking System

The ITS consists of six barrel layers of silicon detectors which provide high-resolution spatial tracking at radii from 3.9 cm to 43 cm from the IP (Interaction Point). The main task of the ITS is the reconstruction of the primary vertex and of the secondary vertices of heavy-quark decays (B and D mesons) and hyperons with a resolution better than 100 μm in the transverse direction. Through the measurement of the specific energy loss the ITS also contributes to particle identification. It consists of three subdetectors: the Silicon Pixel Detector (SPD), the Silicon Drift Detector (SDD) and the Silicon Strip Detector (SSD). The geometrical dimensions and the technology used in the various layers of the ITS are summarized in table 3.1.

The Silicon Pixel Detector (SPD) constitutes the two innermost layers and is based on hybrid silicon pixels, which consist of silicon detector diodes. The SPD active elements are small pixels on the face of a silicon sensor. The first layer has an extended pseudorapidity coverage (|η| < 1.98) to provide continuous coverage for the measurement of the charged-particle multiplicity together with the Forward Multiplicity Detector (FMD). The SPD is presented in more detail in Chapter 4.

Layer  Type   r (cm)  ±z (cm)  Area (m²)  Channels
1      Pixel  3.9     14.1     0.07       3 276 800
2      Pixel  7.6     14.1     0.14       6 553 600
3      Drift  15.0    22.2     0.42       43 008
4      Drift  23.9    29.7     0.89       90 112
5      Strip  38.0    43.1     2.20       1 148 928
6      Strip  43.0    48.9     2.80       1 459 200

Table 3.1: Dimensions of the ITS detectors (active areas). From [16]

The other layers of the ITS, the SDD and SSD, have less granularity than the SPD. They provide further tracking points and charged-particle multiplicity measurements. The ITS can resolve decays of short-lived particles and determine the point of decay thanks to its fine granularity and proximity to the IP. The ITS can also be used to discard background tracks (for example from cosmic rays, scattering in materials, etc.) by eliminating tracks that do not originate close to the IP.

3.1.2 Time Projection Chamber

The ALICE Time Projection Chamber (TPC) is the world's largest TPC, with a diameter of 5.6 m, a length of 5 m and an active volume of 88 m³. It is called the 'heart' of ALICE due to its location and, mainly, its capability to track densities which even exceed dN/dy = 6000. The identification of particles is achieved by measuring the energy loss of particles ionizing the TPC gas. The electrons drift ≈88 μs from the high-voltage central electrode membrane to the readout chambers. This sets the maximum trigger rate for ALICE to ≈10 kHz, since the TPC is the slowest subdetector. Remarkable features of the TPC include the lightweight construction material, giving a total radiation length for perpendicular tracks of only 3% X0; a novel drift gas mixture consisting of 86% Ne, 9.5% CO2 and 4.5% N2 (Ne was chosen for its low radiation length and N2 for improved quenching, while still allowing a drift velocity of 2.8 cm/μs); and a novel Front-End Readout (FERO) electronics called the ALTRO (ALice Tpc ReadOut) chip. The ALTRO provides digital shaping, tail cancellation, baseline restoration and zero suppression for the 570 000 TPC readout channels, and it became a common solution for analogue signal processing not only for the TPC but also for other detectors with similar requirements.
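As a rough cross-check of the quoted drift time (a back-of-the-envelope estimate, assuming the electrons drift over half of the 5 m detector length at the quoted drift velocity):

    t_drift ≈ L_drift / v_drift = 250 cm / (2.8 cm/μs) ≈ 89 μs,

consistent with the ≈88 μs quoted above.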

3.1.3 ACORDE

The main task of the ALICE Cosmic Ray Detector (ACORDE) is to provide a cosmic-ray trigger. It consists of a large array of plastic scintillators. The array is located on top of the L3 magnet, spread over its three upper sides.

3.1.4 Time Of Flight detector

The Time Of Flight (TOF) detector measures the time it takes for a particle to traverse from the IP to the outer rim of the central barrel inside the L3 magnetic field. The arrival time is measured in ten narrow (250 μm) gaps, achieving a timing resolution of σ ≈ 50 ps with a detection efficiency higher than 99%. The high electric field is uniform across the sensitive gas volume. The ionization induced by a charged particle instantly starts an avalanche process that produces an induced signal at the anode of the detector. These Multi-gap Resistive Plate Chambers (MRPCs) have pad readout to cope with the high particle multiplicity, with 160 000 channels in total.

3.1.5 High Momentum Particle Identification Detector

As the name suggests, the main task of the High Momentum Particle Identification Detector (HMPID) is particle identification in the 1-6 GeV/c range (π/K separation up to 3 GeV/c, K/p separation up to 5 GeV/c). The HMPID is a ring-imaging, proximity-focused Cherenkov detector covering in total 11 m² of detection area located at ≈4.5 m from the IP. Cherenkov radiation is emitted by charged particles which exceed the velocity of light in the traversed medium. The seven modules use C6F14 as the liquid radiator and Multi-Wire Proportional Chambers (MWPCs) with pad readout employing CsI photocathodes.

3.1.6 Transition Radiation Detector

The main task of the Transition Radiation Detector (TRD) is to identify electrons with momenta above 1 GeV/c. In the TRD, transition radiation photons are radiated by light charged particles passing through a polyethylene fibre mat. Each particle, if traveling radially, traverses six radiator/detector units providing a pion/electron discrimination of ≈10² in a high-multiplicity environment. The TRD also provides a fast trigger for charged particles with high transverse momentum (above 3 GeV/c).

3.1.7 Photon Spectrometer

The Photon Spectrometer (PHOS) is an electromagnetic calorimeter using PbWO4 (lead tungstate) crystals as scintillators. It will measure γ, π0 and η up to pT ≈ 10 GeV/c to provide direct measurements of the QGP initial temperature and to study jets and the signatures of chiral symmetry restoration as well. Three units, each consisting of ≈4000 crystals, are already installed, and another two units are foreseen to be installed.

3.1.8 Electromagnetic Calorimeter

The Electromagnetic Calorimeter (EMCAL) is currently under construction, as it was approved only in December 2007. The 11 scintillator/lead modules will be placed in such a way as to point towards the IP and will cover Δη = 1.4 (roughly the same as the TPC and TRD) and 110° in azimuth. The EMCAL trigger capabilities will allow ALICE jet physics studies of jets with 100 MeV/c < pT < 200 GeV/c.

3.2 Forward Detectors

Several small subdetector systems are located at various positions at small angles to provide information on multiplicity, primary vertices, triggers and other global event characteristics:

The T0 is a pair of very fast, high-resolution timing detectors. Each T0 array includes 12 channels of quartz Cherenkov radiators glued onto photomultiplier tubes. The T0 provides event triggering and tagging with ≈40 ps resolution. A coincidence between the two sides of the T0, the T0-A and T0-C, will serve as an L0 trigger and also as a signal to wake up the electronics of other detectors, such as the TRD.

The V0 is a pair of tiled scintillator disks on either side of the IP. It provides beam-gas background rejection, luminosity information and triggering as a function of centrality, especially for pp collisions, for which the T0 acceptance is not large enough to provide an L0 trigger at high efficiency.

The Forward Multiplicity Detector (FMD) consists of 3 planes of silicon pad detectors. The FMD provides multiplicity information over -3.4 < η < -1.7 and 1.7 < η < 5.0.

The Zero Degree Calorimeter (ZDC) consists of 4 detectors located at ≈116 m from the IP on both sides. They detect spectator protons and neutrons and thus provide a complementary trigger on the centrality of a collision.

The Photon Multiplicity Detector (PMD) provides information on photon production by tracking before and after a 3X0 Pb converter. It also determines the elliptic flow, the event reaction plane and the transverse energy of neutral particles. It employs a gas chamber technology based on 6 mm hexagonal cells.

3.3 The Forward Muon Spectrometer

The main task of the Forward Muon Spectrometer is to identify dileptons, especially muons, as the products of heavy-quark meson decays. It consists of several tracking planes which cover the region -4.0 < η < -2.5. Using an absorber, the background from decay muons is minimized by absorbing the hadrons in the muon acceptance. The absorber intercepts hadrons at ≈1 m from the IP. The first tracking plane is located inside the L3 magnet and the second one just behind the magnet, in order to measure where the particles left the solenoid field. The third tracking plane is inside the large warm dipole, in order to determine the bending angle of the charged-particle trajectories. The fourth and fifth planes are just behind the dipole, in front of the ≈1 m thick iron wall which further filters the muons. The pad chambers of these tracking planes provide a space resolution of σ ≈ 60 μm. The last two tracking planes are the muon trigger chambers, implemented as resistive plate chambers. They provide particle identification by measuring the time of flight of the particles.

3.4 Online Systems

3.4.1 Experimental Control System

The Experimental Control System (ECS) is the top layer of the ALICE online systems. Its main task is to manage the ALICE online and offline systems and to provide the user interface.

3.4.2 Trigger

The ALICE low-level trigger is a hardware trigger named the Central Trigger Processor (CTP). The main task of the CTP is to select events with different features at rates which can be scaled down to fulfill the physics requirements and also meet the restrictions imposed by the bandwidth of the DAQ and the High-Level Trigger (HLT). To achieve this, the CTP combines inputs from various subdetectors which are capable of providing a trigger signal. A subdetector trigger signal can be the result of a single hit in the subdetector, but also of a fast algorithm over many hits or observables (e.g. the SPD Fast-OR trigger). The trigger system features a hierarchy of three ALICE global trigger levels, called Level 0 (L0), Level 1 (L1) and Level 2 (L2), differentiated by their arrival latency. The L0 trigger is issued earliest, with a latency of 1.2 μs. The L1 reaches the subdetectors 6.5 μs after the collision. Only upon receipt of the L2, with 100 μs latency, is the slowest subdetector, the TPC, read out. The L0 and L1 triggers are sent synchronously with the LHC clock at fixed times after the bunch crossing, while the L2 is sent asynchronously. Only the fastest subdetectors contribute to the L0 decision, namely the TOF, T0 and V0. The TOF, PHOS and TRD contribute to the L1. In addition, a pre-trigger is defined and issued in less than 900 ns after the bunch crossing in order to wake up the TRD electronics, which is in a low-power sleep mode for the majority of the time.
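As a minimal illustration of the latency hierarchy described above, the following sketch (hypothetical names and structure, not the actual CTP implementation) simply tabulates the three global trigger levels, the TRD pre-trigger and their nominal latencies:

#include <cstdio>

// Hypothetical summary table of the ALICE global trigger levels and their
// nominal arrival latencies after the bunch crossing, as quoted in the text.
struct TriggerLevel {
    const char* name;
    double latencyMicroSec;  // arrival time after the bunch crossing
    bool synchronous;        // sent synchronously with the LHC clock?
};

int main() {
    const TriggerLevel levels[] = {
        {"Pre-trigger (TRD wake-up)", 0.9,   true},   // issued in < 900 ns
        {"L0",                        1.2,   true},
        {"L1",                        6.5,   true},
        {"L2 (TPC readout)",          100.0, false},
    };
    for (const auto& level : levels)
        std::printf("%-28s %7.1f us  %s\n", level.name, level.latencyMicroSec,
                    level.synchronous ? "synchronous" : "asynchronous");
    return 0;
}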

3.4.3 Data Acquisition

The main task of the Data Acquisition (DAQ) system is to build complete events by concatenating the event fragments after gathering the data from the front-end electronics of the subdetectors in parallel. Other tasks include data buffering and the transfer of built events to permanent storage. The event data produced by the subdetectors is transferred via the Detector Data Links (DDLs) to Local Data Concentrators (LDCs), where it is combined into sub-events and transferred to Global Data Collectors (GDCs). The GDCs then compile the sub-events received from the LDCs into complete events. These are then shipped to the CERN computing centre and stored on tape through a disk buffer by CASTOR [19]. The DAQ provides software packages for monitoring the data quality and the DAQ system performance. The DAQ currently has a bandwidth of 500 MBytes/s, which will be extended to 1.25 GBytes/s, in line with the constraints imposed by the technology, storage and computing necessary to analyze the data offline.
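The event-building step can be illustrated with the following sketch (the types and the function are hypothetical simplifications, not the DATE software): sub-events arriving from the LDCs are grouped by a common event identifier, and only events with a fragment from every LDC are kept as complete events.

#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical sub-event fragment as delivered by one LDC.
struct SubEvent {
    std::uint32_t eventId;            // common identifier assigned by the trigger
    int ldcId;                        // which LDC produced this fragment
    std::vector<std::uint8_t> payload;
};

using FullEvent = std::vector<SubEvent>;

// Group sub-events by event identifier and keep only complete events,
// i.e. those for which every one of the nLdcs LDCs contributed a fragment.
std::map<std::uint32_t, FullEvent> buildEvents(const std::vector<SubEvent>& subEvents,
                                               std::size_t nLdcs) {
    std::map<std::uint32_t, FullEvent> assembled;
    for (const SubEvent& s : subEvents)
        assembled[s.eventId].push_back(s);
    for (auto it = assembled.begin(); it != assembled.end();) {
        if (it->second.size() == nLdcs) ++it;      // complete event, keep it
        else it = assembled.erase(it);             // incomplete, drop it
    }
    return assembled;
}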

3.4.4 High-Level Trigger

The High-Level Trigger (HLT) is a software trigger, thus allowing for the implementation of more complex trigger logic than in the CTP. The HLT receives a copy of the data read out from the subdetectors, which is processed online and reconstructed in real time using the ALICE offline software. In this way the HLT sharpens the trigger decisions and allows for data compression and pre-processing. The data is then returned to the data acquisition chain and stored for subsequent offline analysis. Currently the HLT uses a farm of about 1 000 processors. It is, however, scalable to a farm of about 20 000 processors to meet the possible needs of the ALICE research programme.

3.4.5 Detector Control System

The Detector Control System (DCS) controls, monitors, archives and generates alarms for more than 10 million detector channels and more than 1000 environmental sensors. The DCS is introduced in more detail in Chapter 6.

Chapter 4

The ALICE Silicon Pixel Detector

The aim of this chapter is to introduce the ALICE Silicon Pixel Detector and its components.

Figure 4.1: A drawing of the ALICE Inner Tracking System. From [16]

The ITS [17] consists of 2 layers of silicon pixel detectors, 2 layers of silicon drift detectors and 2 layers of silicon strip detectors (see Fig. 4.1). The ALICE SPD provides high-granularity tracking information close to the interaction region and will thus play an important role in the overall physics performance of ALICE. The ALICE Silicon Pixel Detector (SPD) forms the two innermost layers of the ALICE Inner Tracking System (ITS), at radii of 3.9 cm and 7.6 cm, respectively. The SPD consists of 1200 pixel readout ASICs developed in a commercial 0.25 μm CMOS process. Each chip contains 8192 readout cells, leading to 9.83 million readout channels in the whole SPD. The sensors are matrices of p+n diodes produced on 200 μm thick silicon.
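As a quick arithmetic check of the channel count quoted above:

    1200 chips × 8192 cells per chip = 9 830 400 ≈ 9.83 million readout channels.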


4.1 SPD Layout

The SPD consists of two barrel layers at radii of 3.9 cm and 7.6 cm, respectively. The pseudorapidity coverage is |η| ≤ 1.9 for the inner layer and |η| ≤ 0.9 for the outer layer. The two layers will be built out of 120 half-staves (see figure 4.2). Figure 4.2 also shows a schematic drawing of a half-stave. Each half-stave contains two ladders, each consisting of a silicon pixel sensor (70.7 × 16.8 mm²) bump-bonded to 5 pixel chips. The staves on the outer layer are mounted in a turbine configuration, while those on the inner layer are staggered. Two half-staves are combined to cover the whole length of the SPD barrel (28.6 cm). The total number of staves in the SPD is 60 (20 in the inner layer, 40 in the outer), with in total 1,200 read-out chips and 9.83 million read-out channels.

Figure 4.2: A CAD drawing of the SPD and an artistic view of one sector and one half stave. From [20].

Figure 4.3: Image of one SPD sector: the 8 outer half-staves in the middle and the power connections on the sides. From [20].

The p+n sensors are produced on 200 μm silicon wafers with a simple guard ring structure. The estimated fluence integrated over 10 years is of the order of a few 10¹² neutrons/cm² in the innermost layer [7].

Figure 4.4: Picture of the SPD with the 0.8 mm thick beryllium beam pipe, as currently installed in Point 2. From [5].

Figure 4.5: A drawing of one SPD sector with the numbering convention. From [21].

The radiation damage at this level is expected to be very low, therefore a standard p+n design was chosen for the pixel sensors [22]. In order to reduce the multiple scattering of particles with low transverse momenta, which degrades the tracking precision, the overall material budget has to be kept to a minimum. The staves are mounted on light-weight carbon fibre sectors, each sector supporting two full staves on the inner layer and four full staves on the outer layer. An aluminium-polyimide multilayer bus, glued on top of the ladders, provides the signal and power connections for the readout chips. The thickness of the bus is 240 μm. The readout chip wafers are thinned down to 150 μm after bump deposition. The cooling tubes are embedded in the sector profile. The overall material budget per layer is estimated to be ≈1% of a radiation length (see table 4.1).

SPD Element                            Thickness [μm]   %X0
Al Bus:
  Kapton                               60               0.021
  Al power                             100              0.112
  Al signals [50% of total surface]    17.5             0.020
  Glue Epoxy                           70               0.016
  SMD components                       16.4             0.173
  Total bus                                             0.341
Other Components:
  Pixel chip                           150              0.160
  Sensor                               200              0.214
  Bump bonds Sn 60%+Pb 40%             0.18+0.12        0.004
  Grounding foil Kapton/Al             50+10            0.029
  Glue Epoxy/thermal grease            200              0.049
  Carbon fiber                         200              0.106
  Total components                                      0.561
Total bus and components                                0.903

Table 4.1: The material budget of one SPD layer. Table data taken from [20]

The interconnections from the chips to the bus are made using ultrasonic wire bonding with wires of 25 μm diameter. The picture in figure 4.10 shows the corner of an ALICE ladder mounted on a prototype bus. In this version the bus was mounted underneath the chips to facilitate testing. In the final version the bus is mounted as indicated in figure 4.9 [23].

The readout chip signals are carried by the bus to the Multi Chip Module (MCM) at the end of the half-stave (see Figure 4.11). The MCM houses the analog PILOT, the digital PILOT, the Gigabit Optical Link driver (GOL) and the optical module containing two PIN photodiodes and a laser diode (see Figure 4.12). The analog PILOT provides the reference biases for the pixel chips and an ADC to monitor currents and voltages. All the signals to/from the counting room are transmitted on optical fibers.

Figure 4.6: An artistic view of one "exploded" half stave. From [20].

Figure 4.7: Picture of one sensor with 5 bonded chips on the bottom. From [20].

Figure 4.8: An artistic view of a chip bonded to a sensor on the right and a picture of the Sn-Pb bump bond. From [20].

The digital PILOT handles the incoming clock, trigger and configuration data and provides timing, control and readout for each half-stave. The readout data are serialized by the GOL in a G-link compatible format and sent out at 800 Mb/s. A detailed description of the on-detector PILOT system (OPS) can be found in [24].

Figure 4.9: An artistic view of the bus with a sensor and a chip (left) and a picture of the bus bonding pads. From [20].

One Router card with three Link Receiver mezzanine cards (the LinkRx card performs zero suppression and data encoding in the experiment counting room) serves a half-sector (6 half-staves) and has optical links to the experiment DAQ and trigger systems (see Fig. 4.13). A specific feature of the SPD is that it will provide a multiplicity signal, the Fast-OR, contributing to the L0 (lowest latency) trigger.

4.2 The ALICE1LHCb Chip

The SPD readout ASIC is named ALICE1LHCb, as it has been designed with dual-mode features that also allow its use in the HPD for the LHCb RICH. It is a mixed-signal chip in a 0.25 μm commercial CMOS process. The radiation-tolerant design includes enclosed-layout transistors and guard rings [26].

Figure 4.10: Picture of the wire bonds between the MCM and the bus (right) and between the chips and the bus (left). From [16]

Figure 4.11: Picture of the Multi Chip Module mounted on the half-stave. The MCM consists of the Analog Pilot (left), Digital Pilot (middle), GOL (right) and the optical cables (far right). From [20].


Figure 4.12: A closer look at the Multi Chip Module and the three chips: the Analog Pilot (left), the Digital Pilot (middle), and the GOL (right). From [25]

Figure 4.13: A block diagram of the SPD electronics. From [25]

Figure 4.14: A detailed block diagram of the data readout electronics. From [16]

Figure 4.15: Picture of the router with three link receiver (LRx) cards. From [25]

The ALICE1LHCb pixel chip contains 8192 pixel cells, each with a size of 50 μm (rφ) × 425 μm (z). The cells are arranged in a matrix of 32 columns and 256 rows. The active area of the chip is 12.8 × 13.6 mm², while the full size of the chip is 13.5 × 15.8 mm². The chip is clocked at 10 MHz for the ALICE experiment and contains about 13 million transistors. Each pixel cell consists of an analog part, with a differential preamplifier, two shaper stages and a discriminator, and a digital part. A test pulse can be applied via a test capacitor to the input of each pixel cell. The digital part consists of a synchronizer followed by two delay units, each of which can delay a hit by up to 512 clock cycles. If a strobe signal from an outside trigger arrives in coincidence with the output of a delay unit, a logic one is stored in a 4-event FIFO buffer. On readout of the chip, the pending events in the pixel FIFOs are loaded into a 256-bit shift register in every column and transferred serially as 32 parallel data streams. Each pixel cell also contains one bit to mask the cell, one bit to enable the test pulse and three bits to fine-adjust the threshold. A detailed description of the design of the ALICE1LHCb pixel chip can be found in [27] and in [28].
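To make the column-wise readout described above more concrete, the sketch below (purely illustrative; names and types are mine, not the chip documentation or HDL) models the pixel matrix as 32 columns of 256-bit shift registers that are shifted out in parallel, one bit per column per clock cycle:

#include <array>
#include <bitset>
#include <cstddef>

constexpr std::size_t kRows = 256;   // cells per column (one 256-bit shift register)
constexpr std::size_t kCols = 32;    // columns, read out as 32 parallel streams

using Column = std::bitset<kRows>;
using Matrix = std::array<Column, kCols>;   // binary hit pattern of one stored event

// Bit carried by each of the 32 parallel output streams on a given clock cycle.
std::array<bool, kCols> shiftOut(const Matrix& matrix, std::size_t clockCycle) {
    std::array<bool, kCols> streams{};
    for (std::size_t c = 0; c < kCols; ++c)
        streams[c] = matrix[c][clockCycle];   // row 'clockCycle' of column c
    return streams;
}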

Figure 4.16: A picture of the ALICE1LHCb chip (left) containing 8192 pixel cells. On the right, a block diagram of the electronics in one pixel cell. From [29]

Measurements using the test pulse have indicated a minimum threshold of about 1000 electrons with an rms of about 200 electrons [30]. The mean measured noise is about 110 electrons. These measurements were carried out without individual threshold adjustment. A preliminary conversion factor of 66 electrons/mV was obtained from 55Fe-source measurements.
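The quoted conversion factor can be used to translate threshold settings between mV and electrons; a minimal sketch (the helper name is mine, the 66 e-/mV factor is the preliminary calibration quoted above):

// Convert a discriminator threshold from mV to electrons using the
// preliminary calibration of 66 electrons/mV from the 55Fe measurements.
constexpr double kElectronsPerMilliVolt = 66.0;

constexpr double thresholdInElectrons(double thresholdMilliVolt) {
    return thresholdMilliVolt * kElectronsPerMilliVolt;
}

// Example: a 30 mV threshold corresponds to roughly 1980 electrons,
// consistent with the ~2000 e- quoted for the class I selection below.
static_assert(thresholdInElectrons(30.0) > 1900.0 && thresholdInElectrons(30.0) < 2100.0,
              "30 mV should correspond to about 2000 electrons");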

A wafer probing test system was developed in order to select chips that will be used for bump bonding [31]. The system is based on the modular test system developed within the ALICE SPD team. All tests are carried out on a PA200 wafer prober from Karl Suss. Probe cards to contact the chips via tungsten needles were designed at CERN and produced at CERPROBE [32].

A complete test procedure was developed, including current consumption measurements, a test of all DACs, a JTAG test, a measurement of the minimum threshold and a threshold scan. According to the results of these tests, chips are classified into three classes. Chips used for bump bonding (class I) have to have a mean measured threshold over all pixels of less than 30 mV (≈2000 electrons), less than 1% of defective pixels and a total current consumption of less than 620 mA. Each wafer contains 86 ALICE1LHCb pixel chips. In the left drawing of Figure 4.17, class I chips are shown in green, class II in orange and class III in red. On the right, the histogram shows the mean measured threshold of all class I chips on one wafer. The distribution peaks at 18.1 mV with an rms of 1.4 mV. No chip had to be excluded from the class I group due to a mean threshold exceeding 30 mV. Further discussions can be found in [33], [34], [35], [31], [36], [37] and [38].
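The class I selection described above can be summarized by a small sketch (the cut values follow the text; the structure, names and the lumping of everything else into class II are simplifying assumptions of mine, since the exact class II/III boundary is not given here):

struct ChipTestResult {
    double meanThresholdMilliVolt;   // mean threshold of all pixels
    double defectivePixelFraction;   // fraction of defective pixels (0..1)
    double totalCurrentMilliAmp;     // total current consumption
};

enum class ChipClass { I, II, III };

// Apply the class I cuts quoted in the text; everything failing them is
// lumped into class II here for lack of a stated class II/III boundary.
ChipClass classify(const ChipTestResult& r) {
    const bool isClassI = r.meanThresholdMilliVolt < 30.0     // ~2000 electrons
                       && r.defectivePixelFraction < 0.01     // < 1% defective pixels
                       && r.totalCurrentMilliAmp  < 620.0;    // < 620 mA
    return isClassI ? ChipClass::I : ChipClass::II;
}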

Figure 4.17: A wafer contains 86 ALICE1LHCb pixel chips (left). Class I chips are shown in green. The histogram on the right shows the mean measured threshold of all class I chips from one wafer. From [39]

Chapter 5

The Silicon Pixel Detector Testbeams and Commissioning

The aim of this chapter is to summarize the testbeam phase and the cosmic-ray commissioning phase of the Silicon Pixel Detector.

During the past few years the ALICE SPD collaboration has carried out four testbeams. The primary objective of these testbeams was the validation of the pixel Application-Specific Integrated Circuits (ASICs), the sensors and the read-out electronics, and also of the DAQ, trigger and DCS online systems with their software. The experimental setup was not optimized for precision measurements of the position resolution, but an accurate determination of the intrinsic spatial precision could be performed anyway [40].

The main achievements of the assembly testbeam included: several thick assemblies (300 μm sensors) were tested with a particle beam for the first time, and the first studies of chip efficiency, thresholds and timing were performed [39].

The main achievements of the ladder testbeam included: several thick assemblies (300 μm sensor) and also one thin assembly with the final designed sensor thickness (200 μm) were tested; the first prototype full ladder with 5 chips was tested and the first spatial resolution results were obtained [39], [41].

The main achievements of the heavy-ion testbeam were: the assemblies were tested in a higher-multiplicity environment and in a heavy-ion beam for the first time; the SPD spatial resolution results were refined; and a full SPD half-stave with the prototype of the final readout electronics was tested [41], [42]. The main achievements of the fourth testbeam, which was a common testbeam of all three silicon ALICE ITS subdetectors, were: the full read-out chain was tested using the final components; the ALICE DAQ and trigger systems were used for the first time for more than one sub-detector system; and testbeam data was used for code validation of the ALICE software framework AliRoot [42], [43].

In the following sections the individual testbeams, their experimental setups and their results are presented in more detail.


5.1 Assembly testbeam

The first testbeam was performed in the H4 beam line in the NA57 area at the CERN SPS accelerator, using a 150 GeV/c beam with 10⁵-10⁶ particles per spill. The beam focus was roughly ≈10 × 5 mm². The coincidence signal from four scintillators was used as the trigger. One was placed behind the setup, another one was upstream, 10 meters in front of two crossed scintillators which selected a beam spot of 2.5 × 2.5 mm² (see Figure 5.1).

5.1.1 Experimental Setup

Figure 5.1: A drawing of the testbeam setup. Crossed scintillators in the front followed by 3 planes of chips, the middle one mounted on the X-Y table. From [30]

During the first period of a few days, two assemblies were put into the beam to perform the first timing, threshold and efficiency studies. During the second period 5 assemblies were tested and a pixel telescope was created out of 3 stages of ALICE pixel detectors aligned along the beam axis. The first and the last stage were used as reference planes for tracking (see Figure 5.1). The middle tracking plane was mounted on an X-Y table. As mentioned earlier, the crossed-scintillator coincidence was used as the trigger.

Figure 5.2: A drawing of the testbeam setup indicating the readout and monitoring electronics controlled by 2 computers (left). A photo of the testbeam setup (right). From [44], [30].

5.1.2 X-Y table

The X-Y table was used in all ALICE SPD testbeams. The X-Y table can move the device (for example the pixel carrier) mounted on it in the horizontal direction (≈10 cm in total) and in the vertical direction (≈21 cm in total), and can change the angle of the mounted device (from 0 to 45 degrees, where 0 means that the device is perpendicular to the beam). The X-Y table was manufactured by the Czech Academy of Sciences in 1991. Its control components were no longer functional and were replaced by the MID7602-7604 Stepper Motor Controller from National Instruments [45]. The position of the table was read via the original MITUTOYO digital micrometers [46], connected via the original table controller to the parallel port of the PC. I created control software (see figure 5.3) in LabView [47] to move the table precisely and remotely (including compensation for dead screw rotation: although the stepper motors were moving, the table did not change its position when the direction of the movement changed, which resulted in position errors of up to ≈1 mm), to monitor whether the position had reached the physical limits, and to read the current position, all via cables ≈30 meters long.
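The dead-screw-rotation (backlash) compensation mentioned above can be sketched as follows (the motor interface and the backlash value are hypothetical; the actual control software was written in LabView):

// When the direction of travel reverses, a lead screw with play does not move
// the table until the play is taken up, so extra steps equal to the measured
// backlash are added on every direction change.
class Axis {
public:
    explicit Axis(double backlashSteps) : backlash_(backlashSteps) {}

    // Request an absolute target position in motor steps.
    void moveTo(double target) {
        double delta = target - commanded_;
        if (delta != 0.0 && lastDirection_ != 0 && sign(delta) != lastDirection_)
            delta += sign(delta) * backlash_;   // compensate the reversal
        driveMotor(delta);                      // relative move sent to the controller
        commanded_ = target;
        if (delta != 0.0) lastDirection_ = sign(delta);
    }

private:
    static int sign(double x) { return (x > 0.0) - (x < 0.0); }
    void driveMotor(double /*steps*/) { /* hardware call, omitted in this sketch */ }

    double backlash_;
    double commanded_ = 0.0;
    int lastDirection_ = 0;
};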

Figure 5.3: The software I developed for easy and reliable control of the X-Y table. Picture taken from [48]

5.1.3 Results

Figure 5.4 shows the beam profile, with the hitmap on the left, the number of accumulated hits along the longer pixel cell axis in the center, and the number of accumulated hits along the shorter pixel cell axis on the right. Figure 5.5 shows the efficiency dependence on the DAC threshold setting (left) and on the bias voltage (right), determined online via the scintillators. The efficiency dependence had the expected behavior. VTH indicates the internal DAC setting for the global threshold of the chip. VTH = 220 corresponds to a threshold of about 1000 electrons and VTH = 185 corresponds to about 4200 electrons. The depletion voltage is ≈21 V.

A wide plateau in efficiency above 99% is observed for both threshold settings above the depletion voltage. The individual pixel thresholds were not adjusted for these measurements. In Figure 5.6 the efficiency dependence on the external strobe signal delay is presented on the left. On the right is a drawing for easier understanding: the external strobe signal delay was changed in 5 ns steps and the online efficiency was calculated. The ALICE1LHCb clock frequency is 10 MHz (100 ns period) and the strobe signal duration was set to 120 ns, which is why the full-efficiency plateau is 20 ns wide. Figure 5.7 shows, on the left, the cluster size dependence on the incident particle angle for different DAC threshold settings. The incident particle angle was changed in steps of 5 degrees using the X-Y table for 3 different thresholds, and the average cluster size was calculated. The testbeam results on cluster size are in good agreement with earlier measurements (RD19, LHC1 test chip, shown on the right) and confirmed the minimum operable threshold of 800 e−. Figure 5.8 shows the software used for the Data Acquisition (DAQ) on the left, the online beam spot in the center and the first track ever reconstructed in an SPD testbeam on the right.

Figure 5.4: The beam profile. Hitmap on the left. In the center the beam profile in z (425 μm pixels): ≈7 pixels = 3 mm. On the right the beam profile in x (50 μm pixels): ≈50 pixels = 2.5 mm. From [30].

Figure 5.5: The efficiency dependence on the threshold (left) and on the bias voltage (right), determined online via scintillators. From [30].

Figure 5.6: The strobe delay scan. From [30].

Figure 5.7: The cluster size dependence on the incident particle angle (left) and earlier measurements and predictions (right). From [30].

Figure 5.8: DAQ software (left), the beam profile (center) and the first reconstructed track (right). From [49], [29].

5.2 Ladder testbeam

The second ALICE SPD testbeam was also carried out in the H4 beam line at the CERN SPS, using 350 GeV/c protons [39], [41]. As in the first testbeam, the setup consisted of 3 stages of ALICE pixel detectors aligned along the beam axis. The first and last stage were used as reference planes for tracking. What was different from the first testbeam was that now each reference plane consisted of two single-chip assemblies mounted one behind the other. The distance between these two single assemblies along the beam axis was ≈2 cm. Both singles were read out together via one DAQ chain, similar to the readout scheme of a bus. Therefore each of the reference planes was referred to as a mini-bus. The two reference planes, each equipped with one mini-bus, provided 4 space points for tracking. The reference planes consisted of 32768 bump-bonded pixel cells. The central plane was again mounted on the X-Y table, which also allowed the plane to be rotated with respect to the beam axis.

5.2.1 Experimental setup

Figure 5.9: Picture of the ladder testbeam setup. Crossed scintillators in the front followed by 5 planes of chips, the middle one mounted on the X-Y table (left), and a mini-bus (right). From [44]

The testbeam setup (see Fig. 5.9) was basically the same as in the second period of the first testbeam; however, this time the first and the last reference planes consisted of 2 single assemblies each, creating 5 planes in total. The trigger was provided by the coincidence signal from 4 scintillators. Two small scintillators (2 × 20 mm², 2 mm thick) were mounted orthogonally directly in front of the first reference plane (see Fig. 5.9, left), selecting a beam spot of about 2 × 2 mm². One large scintillator (5 × 5 cm², 5 mm thick) was mounted ≈10 m upstream of the setup. A 1 × 1 cm² scintillator (5 mm thick) was placed directly behind the second reference plane.

Two single assemblies were tested in the center position of the setup. One assembly consisted of a 300 μm thick detector bump-bonded to a 750 μm thick chip (AMS76), and the other single assembly consisted of a 200 μm detector (the sensor thickness designed for the experiment) on a 750 μm thick chip (VTT49).

5.2.2 Results

For both assemblies timing scans, threshold scans, bias scans and measurements at different angles were carried out. The measured online efficiencies were 99.6% (VTT49) and 98.8% (AMS76). The online efficiency was determined using the information from the scintillator trigger. The precision of this measurement is of the order of 1-2%.

In the initial phase the data collected from the 3 Pilot readout cards suffered several corruptions from pilot buffer overflows, missing data headers and/or data trailers, and pilot card desynchronizations, which resulted in data written with event shifts. As a result, the hits read out by one pilot card and the corresponding hits read out by another pilot card were not in the same event, but in the following one. The need arose for fast and reliable data quality checking software to analyze the data right after the run, in order to produce as little corrupted data as possible. To meet these requirements I created the software whose user interface is shown in Figure 5.10. Using this software, data corruptions could be detected and reacted to within a few seconds after the run had finished. Runs containing event shifts can be found by plotting correlations between the different tracking planes. Correlation plots are obtained by plotting row and column hits from one plane against another. An example is shown in the bottom right part of Figure 5.10. I added the correlation functionality to the software during the testbeam, and this software was successfully used to analyze the data offline and also in the third testbeam.
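The idea behind the correlation check can be sketched as follows (a simplified formulation of mine, not the original software): for particles crossing two parallel planes, the hit positions in the two planes are strongly correlated event by event, and an event shift in one data stream destroys this correlation, so a low correlation coefficient over a run flags a shifted file.

#include <cmath>
#include <cstddef>
#include <vector>

// Pearson correlation coefficient of two equally long series.
double pearson(const std::vector<double>& a, const std::vector<double>& b) {
    const std::size_t n = a.size();
    double meanA = 0.0, meanB = 0.0;
    for (std::size_t i = 0; i < n; ++i) { meanA += a[i]; meanB += b[i]; }
    meanA /= n; meanB /= n;
    double cov = 0.0, varA = 0.0, varB = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        cov  += (a[i] - meanA) * (b[i] - meanB);
        varA += (a[i] - meanA) * (a[i] - meanA);
        varB += (b[i] - meanB) * (b[i] - meanB);
    }
    return cov / std::sqrt(varA * varB);
}

// rowsPlaneA[i] / rowsPlaneB[i]: mean hit row of event i in the two planes.
// The 0.5 cut is an arbitrary illustrative value, not a calibrated one.
bool likelyEventShift(const std::vector<double>& rowsPlaneA,
                      const std::vector<double>& rowsPlaneB) {
    return pearson(rowsPlaneA, rowsPlaneB) < 0.5;
}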

Figure 5.10: The software I created for data quality checks and extended for offline data analysis, featuring integrated hit maps, event-by-event analysis, correlation plots, etc.

One ALICE ladder (VTT2-2001) mounted on a prototype bus was also tested in the center position of the setup. Figure 5.12 shows a reconstructed track in the ladder using the tracking information from the pixel reference planes. The online efficiency of the ladder was determined to be better than 99%.

Figure 5.11: The tested two-ladder prototype mounted on the pixel extender card. A similar pixel extender card was used in the testbeam with a 5-chip ladder mounted. From [29]

Figure 5.12: The first reconstructed track from the second testbeam. [39]

The same set of measurements as carried out for the single assemblies was repeated for each chip on the ladder. Additional threshold scans were taken by positioning the beam spot between two chips. In the region between two chips the pixel cells are elongated (625 μm instead of 425 μm) to fully cover the gap between chips. No difference in online efficiency was observed for the inter-chip regions compared to the center of one chip.

Figure 5.13 shows the online efficiency as a function of the bias voltage, measured at three different thresholds on chip 3 of the ladder. VTH indicates the internal DAC setting for the global threshold of the chip. VTH = 220 corresponds to a threshold of about 1000 electrons and VTH = 185 corresponds to about 4200 electrons. The depletion voltage is 21 V. A wide plateau in efficiency above 99% is observed for all three threshold settings above the depletion voltage. The individual pixel thresholds were not adjusted for these measurements.

Figure 5.13: The efficiency dependence on the bias voltage for a single assembly (left) and a ladder (right) [29].

Figure 5.14 shows the online efficiency as a function of the global threshold for a single assembly, for a single assembly tilted by 30 degrees w.r.t. the beam axis, for one chip of the ladder and for the average of all 5 chips of the ladder. A wide plateau in efficiency above 99% is observed for threshold settings above 185 for both single assemblies and ladders.

Figure 5.14: The efficiency dependence on the threshold for a single assembly (left) and a ladder (right) [29].

Figure 5.15 shows the online efficiency as a function of the delay of the external strobe signal for a single assembly at two different threshold settings and for the ladder. There are no differences between the two plots, which confirms the good design and assembly of the prototype ladder.

Figure 5.15: The efficiency dependence on the delay of the external strobe signal for a single assembly (left) and a ladder (right) [29].

First spatial resolution studies were performed after the testbeam [41]. The resolution algorithm was constructed as follows (for illustration see Figure 5.16, left): first, perform tracking with planes 0, 1, 3 and 4 only, requiring hits in all 4 tracking planes; then project the found track into plane 2, which was the plane under study; finally, calculate the residuals. The expected resolution in the case of binary readout with hits of cluster size one is σx = 425 μm/√12 ≈ 122.7 μm and σy = 50 μm/√12 ≈ 14.4 μm. In reality σy will be smaller due to charge sharing, while charge sharing is less likely in the longer pixel direction. The achieved spatial resolution was σx = 88.5 μm and σy = 12.8 μm. The histogram on the right of Figure 5.16 shows the resolution in the y direction (the shorter pixel side).
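The residual method described above can be sketched in a few lines (simplified to one coordinate and a straight-line fit; the geometry handling and data types are my own assumptions, not the original analysis code):

#include <cmath>
#include <vector>

struct Hit { double z; double y; };   // plane position along the beam, measured coordinate

// Least-squares fit of y = a + b*z through the reference hits (planes 0, 1, 3, 4).
void fitLine(const std::vector<Hit>& ref, double& a, double& b) {
    double sz = 0.0, sy = 0.0, szz = 0.0, szy = 0.0;
    const double n = static_cast<double>(ref.size());
    for (const Hit& h : ref) { sz += h.z; sy += h.y; szz += h.z * h.z; szy += h.z * h.y; }
    b = (n * szy - sz * sy) / (n * szz - sz * sz);
    a = (sy - b * sz) / n;
}

// Residual in the plane under study (plane 2): measured minus projected position.
double residual(const std::vector<Hit>& referenceHits, const Hit& testPlaneHit) {
    double a = 0.0, b = 0.0;
    fitLine(referenceHits, a, b);
    return testPlaneHit.y - (a + b * testPlaneHit.z);
}

// Expected binary resolution for a pitch p: sigma = p / sqrt(12),
// e.g. 50 um / sqrt(12) ~ 14.4 um for the short pixel side.
double binaryResolution(double pitch) { return pitch / std::sqrt(12.0); }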

Figure 5.16: The spatial resolution algorithm (left) and the results for the short pixel side (right). From [50].

5.3 High multiplicity testbeam

Like the previous testbeams, the third one was carried out by the ALICE SPD collaboration at the H4 beamline at the CERN SPS [41]. The motivation of this testbeam was to study the performance of the SPD under different conditions (threshold and inclination angle), the tracking of high-energy particles in a high-multiplicity environment, and also to test for the first time the full final readout chain prototype (MCM) connected to two ladders (Pixel Chip Interface, PCI-2002, see picture 5.19). Two beam types were used: during the first days a proton beam with a momentum of 120 GeV/c, and, after the initial scans and debugging were finished, a fully stripped In beam with an energy of 158 GeV/A extracted onto a 4 mm thick Pb-Sn target to create the high-multiplicity environment.

5.3.1 Experimental Setup

As in the previous testbeam setups, the scintillators, the target and all 5 pixel chip module planes, including the PCI-2002 with their electronics, were mounted in the beam area on a granite-metal experimental table, which could be moved on metal rails and was later aligned and fixed with respect to the beam pipe. As in the previous testbeams, for position and angle scans the middle plane (numbered 2) was mounted on a remotely controlled X-Y table. Also in this testbeam, the idea was to use four reference planes (0, 1, 3 and 4) to reconstruct the tracks offline and then to project the found tracks into the middle plane (2). The two front reference tracking planes (0 and 1) and also the back ones (3 and 4) were connected in parallel to each other to form a doublet and were read out with one Pilot Module. The two axes of each doublet were perpendicular to each other, as can be seen in Figure 5.18. The two doublets in the crossed geometry improved the resolution also for the direction with the larger pixel cell width. With this configuration, a good tracking precision was obtained (≈10 μm in both the x and y directions). For the p+ beam all planes were put directly into the beam without a target. The aim of these measurements was to compare with the results obtained in testbeam 2, when the plane under study had a 200 μm thick sensor. As mentioned earlier, to study the tracking of the SPD assemblies in a higher-multiplicity environment, a Pb-Sn target was placed in front of the SPD planes during the heavy-ion runs. As indicated in Figure 5.17, the planes were aligned off the beam axis. To improve correlations and tracking between the different planes they had to be adjusted several times. The average multiplicity was around 8 tracks per event. An additional plane, numbered 5 (PCI-2002), was introduced at the end of the reference planes and contained a full prototype readout chain (MCM) of two ladders. The reference planes carried single-chip assemblies with either 300 or 200 μm thick sensors. The plane under study had a 300 μm thick sensor. A Multi-Wire Proportional Chamber (MWPC), which provided the information for beam steering, was placed behind the setup.

A 2 cm × 2 cm scintillator positioned centrally in the beam, in coincidence with the quartz counter, provided the trigger signal during the proton runs. During the heavy-ion runs the trigger signal was generated by the quartz counter positioned in the beam in front of the target, in coincidence with the scintillator, which registered secondaries coming from the target (see Figure 5.17).

Figure 5.17: A picture (left) and a drawing (right) of the experimental setup for the heavy-ion beam runs. From [50], [51].

Figure 5.18: A drawing of the crossed geometry for the proton beam with the offline definitions. From [52]

5.3.2 PCI-2002

A full ladder was tested during the ladder testbeam. However, this testbeam was the first time a full half-stave containing 2 ladders and the read-out chain (PCI-2002) was tested in a particle beam. Due to a different card design it was not possible to plug the PCI-2002 into the main crate, so it had to use its own VME crate. Since the PCI-2002 data created by the ladders was read out via the MCM and Pilot by the dedicated PCI-2002 readout software and written in a different data format than the data read out from all the other planes, I wrote software during one over-night shift to merge the different data streams. The merged data had to look as if there were a fourth Pilot card, so that no software used for offline data analysis would have to be modified. The next day, track correlations between the PCI-2002 and the other planes were successfully observed. Figure 5.19 shows a photo of the PCI-2002 and figure 5.20 shows the hit maps visualized by the dedicated PCI-2002 data-taking software.
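The stream merging can be illustrated with the following sketch (the data layout and names are hypothetical; the real converter matched the actual testbeam data format): hits from the separately written PCI-2002 stream are appended to the per-event records of the Pilot-card stream, so that they appear as if they came from an additional readout card.

#include <cstdint>
#include <map>
#include <vector>

struct Hit { int plane; int row; int col; };

using EventId = std::uint32_t;
using EventRecord = std::vector<Hit>;   // all hits of one triggered event

std::map<EventId, EventRecord> merge(const std::map<EventId, EventRecord>& pilotStream,
                                     const std::map<EventId, EventRecord>& pci2002Stream) {
    std::map<EventId, EventRecord> merged = pilotStream;
    for (const auto& [id, hits] : pci2002Stream) {
        EventRecord& record = merged[id];
        // The PCI-2002 hits keep their own plane number (5), so in the merged
        // format they look like data from just another readout card.
        record.insert(record.end(), hits.begin(), hits.end());
    }
    return merged;
}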

Figure 5.19: Picture of the PCI-2002 comprising the two ladders with their proper MCM. From [44]

5.3.3 Results

The analysis can also be found in [53], [54], [55] and [42]. The data from different planes were event-shifted during the proton beam runs due to buffer problems in the pilot modules; this meant that hits from different planes were not written in the same triggered event [53], [42]. The runs containing the event shifts can be found by plotting correlations between the different tracking planes. The data can be cured offline if the event shift occurred between two runs and then used for tracking and resolution studies.

Figure 5.20: The online monitoring software showing the hit maps of each of the 10 chips of the PCI-2002. From [44]

However, if the event shift happened during a run, which represents some ≈5.5% of all data, the files could not be corrected and were excluded from further analysis. The event shifts were evaluated for all files and made publicly available to the SPD offline community. Correlation plots can be obtained by plotting row and column hits from one plane against another. An example is shown in Figure 5.21. Correlation plots were also often used during the testbeam, for the heavy-ion runs, to adjust the placement of the modules.

During the proton beam runs, the wide beam setting was used on the first day, while on the second day the beam was focused to an ellipse with one diameter of around 5 mm and the other of around 3 mm. For the focused beam, the alignment of the planes was performed using the center of the beam profile [56], [42].

The upper left corner of Figure 5.22 shows the testbeam data file metadata, such as the start-of-run and end-of-run date and time, the data format version, the total number of collected events, the type of end of run, etc. In the upper part the total hits in the 3 testbeam planes are shown, each pane showing one pixel detector. The middle part contains an important input, an adjustable noise level threshold, which determines the occupancy above which pixels are considered noisy. After clicking "Start searching for Noisy Pixels" the found noisy pixels are shown in the middle part, for the 3 planes and all chips. The bottom part enables the user to select and visualize a single event.
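The noisy-pixel search itself reduces to an occupancy cut, sketched below (my own simplified formulation, not the original code): a pixel is flagged as noisy when it fires in more than a chosen fraction of the triggered events, e.g. the 0.4% used in Figure 5.22.

#include <cstddef>
#include <map>
#include <tuple>
#include <vector>

using PixelId = std::tuple<int, int, int, int>;   // plane, chip, row, column
using Event = std::vector<PixelId>;               // pixels hit in one triggered event

// Flag pixels whose occupancy (fraction of events in which they fired)
// exceeds the adjustable noise level threshold.
std::vector<PixelId> findNoisyPixels(const std::vector<Event>& events,
                                     double noiseLevelThreshold = 0.004) {
    std::map<PixelId, std::size_t> hitCount;
    for (const Event& ev : events)
        for (const PixelId& px : ev)
            ++hitCount[px];

    std::vector<PixelId> noisy;
    const double nEvents = static_cast<double>(events.size());
    for (const auto& [px, count] : hitCount)
        if (static_cast<double>(count) / nEvents > noiseLevelThreshold)
            noisy.push_back(px);
    return noisy;
}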

The left side of Figure 5.23 shows, as usual, the testbeam data file metadata, the program status, etc.

Figure 5.21: The correlation between planes 2 and 0 in the focused proton beam runs is clearly visible as a straight line.

Figure 5.22: The software I developed to find and visualize noisy pixels. In this picture I chose 0.4% as the noise level threshold to visualize all noisy pixels. The beam profile and the noisy pixels can be clearly seen in the upper part, while the middle part shows the found noisy pixels. The bottom part enables the user to select and visualize a single event.

Figure 5.23: The software I developed to find and permanently remove the noisy pixels from the data files, which considerably speeds up and simplifies the later testbeam data analysis.

In the central part, the found noisy pixels are listed for the 3 planes with their chips. The user can add or remove noisy pixels here. Clicking "START" will create a new data file from the selected testbeam data file while permanently removing the noisy pixels from the data, which considerably speeds up and simplifies the later testbeam data analysis.

Figure 5.24 shows the software I developed to analyze the testbeam data offline. As usual, the testbeam data file metadata, the program status, etc. are on the left. The central part is divided into 3 parts, one per plane, each visualizing the occupancy and computing the efficiency, the total number of hits, singles, clusters and empty triggers for a given chip in the plane. The histograms show the distribution of hits per event, the multiplicity per event and the cluster sizes for a given chip in the plane.

The prototype in the test plane was studied under different conditions (threshold scan, different inclination angles w.r.t. the beam and a bias voltage scan). Clusters of hit pixels correlated with a track were then used to estimate the combined detector/reconstruction efficiency, which was found to be > 99% over a wide range of threshold values, including the normal working point, see figure 5.25.

The intrinsic precision as a function of various parameters has been calculated using an iterative method [43], [57]. The intrinsic precision at the normal working point is found to be (11.1 ± 0.2) μm in the rφ direction. The dependence of the intrinsic precision on threshold and angle of incidence is shown in figure 5.26. Discussions of SPD testbeams 2 and 3 can also be found in [57], [41] and [31].

Figure 5.24: The software I developed to thoroughly analyze the testbeam data offline.

Figure 5.25: The reconstruction efficiency as a function of threshold for 200 μm thick sensors. A DAC threshold setting of 214 is equivalent to approximately 2000 e−. The normal working point is around DAC = 200. [40]

Figure 5.26: The intrinsic precision (200 μm sensor) as a function of threshold for tracks at normal incidence (left) and as a function of angle for two different thresholds (right) [40].

5.4 Joint ITS testbeam

To test the prototypes of the ALICE Inner Tracking System integrated with the ALICE online systems (DAQ, DCS and trigger), a combined ITS testbeam was performed in the H4 line (north hall) at the CERN SPS, mainly with a positive beam (55% π+, 40% p, 5% K+ at production) and with three days of a negative beam (≈100% π−, with a fraction of K− at production), with particle momenta of 120 GeV/c and a momentum spread of at most 1.5% (depending on the collimator settings) [42]. The ALICE DAQ and trigger systems were used for the first time for more than one sub-detector system. As the previous testbeams had been very successful in terms of testing the pixel ASICs, the readout electronics, and the tracking and resolution studies, this testbeam concentrated more on testing the online and offline software [58]. The testbeam data was used for code validation of the ALICE software framework AliRoot [59].

5.4.1 Experimental Setup

The setup included two detector modules of each of the three silicon ITS technologies used in ALICE: pixels (SPD), drift (SDD) and strips (SSD), placed in the same order as in the final experiment (see Figures 5.27 and 5.28).

Figure 5.27: The geometrical setup used during the joint ITS testbeam. From [56].

A"er module installation in the testbeam and the first beam steering , the SPD planes were meanically aligned to show the first beam spots. During the first few days no data could be written due to initial problems with the DAQ. However, the online so"ware was working properly, and SPD stand alone runs could be performed anyway, allowing for runs with beam, delay scans 58 5. THE SILICON PIXEL DETECTOR TESTBEAMS AND COMMISSIONING

Figure 5.28: Picture of the experimental setup of the joint ITS testbeam. From [60].

and noise runs. Due to a problem with the multi-event buffer in the SPD Link receivers, which was solved later in the testbeam, all data had to be taken in single event mode in the initial phase.

The SPD used two half-staves in the final configuration, as they are installed in the ALICE experiment (see Figure 5.29, left). Each half-stave consists of two ladders read out via an MCM. One SPD Router with two link-receivers merged the data and sent it through the Detector Data Link (DDL).

The SDD had two production detectors fully equipped with four front-end electronics cards (PASCAL) and event buffer chips (AMBRA). The low voltage power was supplied by an Arem PRO unit. The readout was performed by the CARLOSrx chip, which together with the trigger unit TTCrm was embedded in the CARLOS box. The SDD readout chain was tested in a 16-hour run without errors in the data (in single buffer mode).

The SSD detector consisted of two short ladders with 2 modules each. All parts of the SSD system were according to the final design: the readout was performed by 2 end caps connected to the readout system via the prototype patch panel, and a VME-based unit called FEROM served as the interface to the DAQ, the trigger and the DCS. While the channel noise of the SSDs was within an acceptable range, the common mode noise was too large. This influenced the zero-suppression and was studied and understood only after the testbeam.

Figure 5.29: Picture of the SPD (left), SSD (center) and SDD (right) in the joint ITS testbeam. From [60].

5.4.2 Data Acquisition and Detector Control System

Initially, all data in the joint testbeam was gathered in standalone mode for the individual detectors in order to simplify the debugging. The detectors were controlled independently, with independent triggers, and data was taken without event building. Later, the data for all three subdetectors (or any combination of two subdetectors) was taken with a common trigger in the common runs. The event building was based on the orbit counter and bunch crossing event identification. When the trigger signal came, each subdetector sent the data via the Detector Data Link (DDL) to its DAQ computer, named Local Data Concentrator (LDC). Every LDC has one or more PCI boards called ReadOut Receiver Cards (RORC) which receive the data. The function of the RORC is to perform concurrent and autonomous DMA transfers into the memory of the PC with minimal software supervision. Two different versions of the RORC were used in this testbeam: the D-RORC (D standing for DAQ) for the SPD and the pRORC (p meaning PCI) for the SDD and SSD. The function of the LDC is to transform the fragmented data from the different RORCs into sub-events and to send them through fast Ethernet to another computer named the Global Data Collector (GDC). The GDC performs the event building and stores the data on local disks. Initially, all recorded data was stored on a local disk, but later it was transferred to the permanent storage system (CASTOR) at CERN. The DAQ run control ran four control processes permanently on an independent control computer, one for each subdetector and one for the global DAQ. The same version of DATE v5.0 (Data Acquisition and Test Environment) was used to perform the data acquisition for all the subdetectors. DATE was controlled by the Experiment Control System (ECS) software, both also running on the GDC. The purpose of the ECS is to synchronize the online systems, to start and stop the global run (for all subdetectors or a partition in which only some subdetectors participate) and to take control of a subdetector or give the control back. The event building was based on information taken from the Common Data Header. The ECS and DATE run only on Linux platforms. The SPD modules were controlled and monitored by software written in LabView and PVSS (see section 6.2) running on an independent Windows PC. Independent data taking could be performed this way too. Another computer was used by the Detector Control System (DCS), which controlled the power supplies which provided the

low and high voltage to the SPD modules. A block diagram of the independent SPD control and monitoring system and the SPD trigger is shown in Figure 5.30.
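The event building described above pairs sub-events by the orbit counter and bunch-crossing identifier taken from the Common Data Header. The following C++ sketch illustrates only that matching step; the structure and field names (SubEvent, orbit, bunchCrossing) are illustrative assumptions and not the actual DATE data formats.

#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Hypothetical sub-event fragment sent by one LDC; the real Common Data
// Header carries much more information. Only the matching keys are kept.
struct SubEvent {
    std::uint32_t orbit;          // orbit counter
    std::uint16_t bunchCrossing;  // bunch-crossing number
    int           detectorId;     // SPD, SDD or SSD fragment
    std::vector<std::uint8_t> payload;
};

// Group sub-events sharing the same (orbit, bunch crossing) pair into full
// events - conceptually what the GDC event builder does.
std::map<std::pair<std::uint32_t, std::uint16_t>, std::vector<SubEvent>>
buildEvents(const std::vector<SubEvent>& fragments)
{
    std::map<std::pair<std::uint32_t, std::uint16_t>, std::vector<SubEvent>> events;
    for (const SubEvent& f : fragments)
        events[{f.orbit, f.bunchCrossing}].push_back(f);
    return events;
}

Sub-events that end up in the same map entry form one built event; fragments left unmatched point to a synchronization problem such as the multi-event buffer issue mentioned later.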

Figure 5.30: Schematic layout of the DAQ (left) and of all online systems (right) in the joint ITS testbeam. From [60].

5.4.3 Trigger

One of the many motivations of the joint testbeam was also to test the implementation of the final trigger system into the DAQ process. The coincidence between two crossed scintillators (1 × 1 cm2) located upstream and one downstream scintillator (2 × 2 cm2) was used as the default trigger. Some of the data was taken with a steel-copper target placed just before the first SPD plane. In this configuration, the downstream scintillator was replaced by a 20 × 20 cm2 one in order to cover the diffracted tracks, and the trigger was provided by the FastOR signal from the SPD planes (instead of the upstream scintillators) in coincidence with the signal from the downstream scintillator. The signals from the scintillators were passed to the Local Trigger Units (LTUs), where trigger signals based on the coincidence were created. Three LTUs (one unit per subdetector) were placed in the local trigger crate (see Figure 5.31, right). A simplified block diagram of the LTU is shown in Figure 5.31 on the left. The LTU software allowed switching from a common mode (triggering the readout of all three subdetectors) to a standalone mode without any re-cabling. Activating the BUSY2 signal in one of the LTUs would make this LTU the master and the others slaves.

5.4.4 Results and Offline Analysis

AliRoot stands for the ALICE Off-line framework for simulation, reconstruction and analysis. It uses the ROOT system as a foundation on which the framework and all its applications are built.

Figure 5.31: On the left, a simplified block diagram of the LTU is shown. On the right, a picture of the trigger crate with the three LTUs. From [60].

The framework is based on Object Oriented programming and is written in C++. The testbeam analysis was done with the AliRoot framework, implementing the three different stand-alone codes (as used by the SPD, SDD and SSD in previous testbeams). A successful test of the AliRoot offline software was performed for the whole ITS. The framework was used during the testbeam to check the data quality and the correlations between the different planes and different detectors [61]. In Figure 5.32 the correlation between the SPD and the SDD planes is visible on the right, while on the left the correlation is no longer present, revealing a multi-event buffer problem of the SDD. To convert the raw data written by DATE to digits, new classes had to be added to the AliRoot framework. These classes also provide, for example, the algorithms for clustering and the removal of noisy channels. Figure 5.33 illustrates the removal of the noisy pixels in the SPD.
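As a rough illustration of the digit conversion with noisy-channel removal, the sketch below masks a list of known noisy pixels while converting raw hits to digits. The types and names (RawHit, PixelDigit, rawToDigits) are hypothetical and are not the actual AliRoot classes.

#include <set>
#include <tuple>
#include <vector>

// Hypothetical raw hit and digit structures; the real AliRoot classes
// carry considerably more information (module numbers, timestamps, ...).
struct RawHit     { int chip; int row; int col; };
struct PixelDigit { int chip; int row; int col; };

// Convert raw hits to digits, dropping pixels flagged as noisy.
std::vector<PixelDigit> rawToDigits(const std::vector<RawHit>& raw,
                                    const std::set<std::tuple<int, int, int>>& noisy)
{
    std::vector<PixelDigit> digits;
    for (const RawHit& h : raw) {
        if (noisy.count({h.chip, h.row, h.col}))
            continue;                       // masked noisy channel
        digits.push_back({h.chip, h.row, h.col});
    }
    return digits;
}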

Figure 5.32: No correlation due to the multi-event buffer problem of the SDD (left) and a visible correlation between the SPD and SDD planes (right). From [61].

5.5 Simulation of the Silicon Pixel Detector

A simulation in GEANT3 of the cluster sizes and shapes was done and compared with the cluster sizes and shapes observed in the testbeams [40]. The energy deposited by GEANT was

Figure 5.33: The noisy pixel removal with the new AliRoot classes. The left histogram shows the beam spot before the noisy pixel removal, while the histogram on the right shows no more signs of noisy pixels. From [61].

transformed into the number of electron-hole pairs (3.6 eV per (e,h) pair) and then compared with a threshold. Threshold fluctuations and noise are taken into account pixel by pixel. Charge sharing was simulated assuming Gaussian diffusion of the e/h pairs. In each GEANT step, the diffusion variance was evaluated as σdiff = k·√ldr, where ldr is the drift path and k = √(2D/vdr). D is the hole diffusion coefficient, D = 11 cm2/s [62], and vdr = μ·E, where the hole mobility is μ = 450 cm2/Vs [62] and E is the electric field (which depends on the bias voltage). This model gave a qualitatively good description of the testbeam data (see figure 5.34) [40].
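As a numerical illustration of the diffusion model, the short program below evaluates σdiff = k·√ldr with k = √(2D/vdr) and vdr = μ·E for a 200 μm sensor. The 50 V bias voltage used here is only an assumed example value, and the code is my own sketch, not part of the simulation software.

#include <cmath>
#include <cstdio>

int main() {
    const double D     = 11.0;    // hole diffusion coefficient [cm^2/s]
    const double mu    = 450.0;   // hole mobility [cm^2/(V s)]
    const double d     = 0.02;    // sensor thickness: 200 um expressed in cm
    const double Vbias = 50.0;    // assumed bias voltage [V], example only

    const double E   = Vbias / d;             // electric field [V/cm]
    const double vdr = mu * E;                // drift velocity [cm/s]
    const double k   = std::sqrt(2.0 * D / vdr);

    // Diffusion spread for a charge drifting through the full thickness.
    const double ldr   = d;                   // drift path [cm]
    const double sigma = k * std::sqrt(ldr);  // [cm]

    std::printf("sigma_diff = %.2f um\n", sigma * 1.0e4);
    return 0;
}

With these example numbers the diffusion spread evaluates to roughly 6 μm, i.e. a fraction of the pixel size, which is consistent with the small cluster sizes at normal incidence.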

Figure 5.34: The distribution of different cluster types for the simulation (histogram) and for the data from the ladder testbeam (stars), at normal incidence (tilt = 0°). The definition of the four most common cluster types is shown. The definition of the remaining cluster types is in [57]. From [40].

5.6 SPD Commissioning and Cosmic runs in 2007-2009

In December 2007 the SPD commissioning at Point 2 in the ALICE underground cavern started [63]. The SPD was initially configured with configuration files produced during the half-stave characterization. Later on, the data was migrated to configuration tables stored in the ALICE online database. In this phase the matrix response uniformity was verified with the internal pulser to find noisy and unresponsive pixels. The operating temperature of each half-stave was measured. Fine tuning of the DAC settings was conducted in order to obtain the best compromise between performance and current dissipation. The calibration data were analyzed with online and offline tools of the ALICE framework.

106 out of the 120 half-staves (88.3 %) were powered on during this commissioning. The remaining 14 half-staves could not be operated due to a lower cooling system efficiency. Also, a few half-staves were found to have faulty connections and were repaired. The total fraction of unusable pixel cells in the included half-staves is 5 × 10−3 % and is well within the specification of less than 1 %. However, this number takes into account only the dead pixels found during the half-stave characterization and the noisy pixels which were masked. That is why the total number of unusable pixels represents a lower limit, since the count of dead pixels could not be verified due to the lack of enough cosmic ray tracks.

The installation and commissioning of the ALICE subdetectors continued in the year 2008. Two global cosmic runs were organized in late 2007 and in early 2008 with most of the detectors present at Point 2 in the ALICE detector. Most of the time was dedicated to detector commissioning, subdetector DAQ and DCS systems integration, and debugging to fix issues (for example, the SPD suffered from a busy problem when entering the global run which was not present in the standalone runs). The first global run took place from the 10th of December 2007 until the 21st of December 2007. The second global run took place from the 4th of February 2008 until the 9th of March 2008. From the 5th of May 2008, the third global run was organized. The intention was to transition smoothly from cosmics data taking into data taking with beam collisions. Thus, the global run included 24/7 operation over a few months and calibration data taking for the various subdetectors. The first LHC injection tests started in June 2008. Beams were injected into the LHC before ALICE in the anticlockwise direction and were dumped before they reached ALICE. On the 12th of June 2008, the SPD was the first detector of the LHC experiments to observe the first particles created in the LHC (see Fig. 5.35). The SPD also gave fast feedback to those responsible for the LHC during the tests that followed (e.g. the multiplicity observed in each beam condition, such as beam dumped, beam circulating with and without beam monitoring screens). The first beams were circulating in the LHC on the 11th of September. That day the ITS, triggered by the SPD PIT, recorded the first interaction between the proton beam and one SPD module. Figures 5.36 and 5.35 show the most remarkable pictures. They prove the SPD readiness for physics and its great contribution to the ALICE experiment since the beginning.

The data taking continued until October 20th with calibration triggers, even after the serious incident [14] of September 19th. This unforeseen extended shutdown period is used for maintenance, upgrades, the installation of more subdetector modules (PHOS, TRD and EMCAL) and further global cosmic runs including 24/7 operation. More details on the detector commissioning can be found in [65].

Figure 5.35: A visualization of reconstructed hits in the first event with particles generated in the LHC ever seen by an LHC experiment. The muons generated far away from the interaction point by the beam dump traveled parallel to the beam axis, making more than 10 cm long tracks in the SPD. From [63].

Figure 5.36: A visualization of the reconstructed hits of the first event during the circulating beams on the 11th of September 2008 in the ITS, triggered by the SPD, showing the LHC beam interacting with the detector materials. The event has been reconstructed with the final vertexing algorithm. From [64].

5.6.1 Calibration and alignment

A series of calibration runs was performed to tune the SPD configuration and to reduce the current consumption without lowering the physics performance. The SPD participated with the other subdetectors in the data taking of the second ALICE global cosmic run, which took place in February 2008 [63]. The pixel trigger system (PIT) [66] was installed in May 2008. Its commissioning required the fine-tuning of the DACs responsible for the behavior of the SPD fast-OR electronics. A manual procedure to tune the 4 fast-OR DACs of each chip was used to build the basis of a fully automatic procedure. 923 out of 1200 (77 %) chips are included in the fast-OR logic input (out of the 88 % available half-staves).

The PIT has a particular importance for the SPD commissioning and for the alignment of the ITS and the TPC, the central barrel tracking detectors. There are a few predefined PIT logic algorithms, for example multiplicity and cosmic triggers. The algorithm which was used to collect cosmic events with tracks in the SPD is named top-bottom-outer-layer. In this logic algorithm, a trigger is generated if there are at least two hits in the outer layer, of which one is located in the top half-barrel and one in the bottom half-barrel. This trigger condition is safe against noise, however it loses most of the horizontal tracks. Currently, the possibility of an algorithm that also includes horizontal cosmic rays is being discussed (see Fig. 5.37).
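The top-bottom-outer-layer condition can be expressed compactly as in the C++ sketch below. The hit representation is a simplifying assumption of mine; the real PIT logic is implemented in the trigger hardware, not in offline software.

#include <vector>

// Simplified description of a fast-OR hit on the SPD outer layer.
struct OuterLayerHit {
    bool topHalfBarrel;   // true if the chip sits in the top half-barrel
};

// 2008 cosmic trigger condition (top-bottom-outer-layer): fire only if the
// outer layer has at least one hit in the top AND one in the bottom half-barrel.
bool topBottomOuterLayerTrigger(const std::vector<OuterLayerHit>& hits)
{
    bool top = false, bottom = false;
    for (const OuterLayerHit& h : hits) {
        if (h.topHalfBarrel) top = true;
        else                 bottom = true;
    }
    return top && bottom;
}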

Figure 5.37: On the left: a visualization of the trigger algorithm (top-bottom-outer-layer) used to collect cosmic events with tracks in the SPD in 2008. A trigger is generated if there are at least two hits in the outer layer, of which one is located in the top half-barrel and one in the bottom half-barrel. This trigger condition is safe against noise, however it loses most of the horizontal tracks. On the right: the proposal of an algorithm for 2009 to also include horizontal cosmic rays (at least two hits in the outer layer, not on the same stave), currently under discussion. From [67].

More than 100 000 reconstructed cosmic tracks were collected by the ITS with the PIT trigger provided by the SPD since May 2008 [68]. The trigger rate varied between 0.08 Hz and 0.18 Hz, depending on the actual number of pixel chips participating as the fast-OR input of the PIT trigger logic. These values are in agreement with the rate from the L3 measurements. Out of the ≈100 k cosmic tracks, ≈45 000 had 4 clusters in the SPD (one cluster in each half-barrel) and ≈55 000 had 3 clusters in the SPD. These tracks were used to align the SPD.

The SPD, as well as some other subdetectors, is composed of several modules. These modules are not positioned in space exactly as designed. This is due to the limited precision during manufacturing and mounting, and due to deformations caused by other components. During a survey, the subdetector module positions are marked at well defined coordinates. Then the exact positions are calculated from digital images of the setup taken from various angles. A precision of 1 mm is achieved in the ALICE cavern; a slightly better precision is achieved in the labs during the assembly of a subdetector.

The purpose of the subdetector alignment is to determine as precisely as possible the corrections to the subdetector module coordinates in space with respect to the ideal geometry. The corrections found are then applied to the ideal geometry implemented in the ALICE software framework in order to represent the real geometry of the installed subdetector modules.

The smallest SPD module to be aligned is the ladder. There are 6 alignment parameters (3 spatial coordinates, 3 rotations) to be determined for each ladder. There are 240 ladders in total, which results in 1440 alignment parameters for the whole SPD. The required precision is below 10 μm in the transverse plane. To achieve this precision, the survey mentioned above is not sufficient on its own. After the survey, two additional independent methods, based on the minimization of track-to-measured-point residuals, are used to determine these parameters. The first method uses the Millepede approach [69], in which a global fit to all residuals is performed, extracting all the misalignment and track parameters simultaneously. The second method performs a (local) minimization for each single module and accounts for correlations between modules by iterating the procedure until convergence is reached. The same cosmic ray tracks allow the quality of the alignment to be measured, since each cosmic ray track is reconstructed twice: once in the upper half-barrel and once in the lower half-barrel. For the reconstruction software, both tracks appear to originate from the 'center' of the detector at y = 0. The left picture in Fig. 5.38 illustrates this idea for the SPD. The pictures in the center and on the right show cosmic ray tracks used for the alignment of the whole ITS. The track parameters of these two tracks can be compared, especially the track-to-track distance Δxy at y = 0. The Δxy distributions for cosmic ray tracks before and after alignment are shown in Fig. 5.39. The dotted line represents simulated data with the ideal geometry. As shown, the results from the Millepede realignment give a spread of ≈52 μm in the cosmic data. This compares to the ≈43 μm obtained from a simulation with a perfectly aligned detector geometry. This result, confirmed by other independent checks, indicates a residual misalignment lower than 10 μm at the ladder level. The residual misalignment has less effect in the z-direction, since the expected spatial resolution there is in any case about ≈100 μm. Another possibility is to compare the positions of clusters in the parts of the SPD where the sensitive areas of the same layer overlap. This yields a spatial resolution of the clusters in the rφ direction of about ≈14 μm, compared to ≈11 μm in simulations with the ideal geometry. A residual misalignment for clusters of about ≈8 μm can be concluded. The obtained resolution is about ≈25 % higher than the theoretically achievable value [70]. Roughly ≈83 % of the SPD is aligned. Missing are the ladders for which little data was recorded due to the fast-OR trigger algorithm, which excluded most of the horizontal tracks.
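The alignment-quality figure of merit described above is the distance Δxy, evaluated at y = 0, between the two independently reconstructed halves of the same cosmic track. A minimal sketch of that comparison, assuming a straight-line parameterization of each track half in the transverse plane (a simplification of the real track model), is:

#include <cmath>

// Straight-line track half in the transverse plane, written as x(y) = x0 + slope * y.
// Real ITS tracks carry full track parameters; this is a simplified stand-in.
struct TransverseTrack {
    double x0;     // x at y = 0 [cm]
    double slope;  // dx/dy
};

// Track-to-track distance at y = 0 between the upper and lower half-barrel
// reconstructions of the same cosmic ray; its spread measures the alignment quality.
double deltaXYAtYZero(const TransverseTrack& upper, const TransverseTrack& lower)
{
    return std::fabs(upper.x0 - lower.x0);
}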

The SPD is ready for the first collisions. Further optimizations are in progress to achieve maximum performance and ≈100 % coverage.

Figure 5.38: On the left, the idea of measuring the quality of the alignment is illustrated for the SPD. Each cosmic ray track is reconstructed twice: once in the upper half-barrel, once in the lower half-barrel. For the reconstruction software, both tracks appear to originate from the 'center' of the detector at y = 0. The pictures in the center and on the right show cosmic ray tracks used for the alignment of the whole ITS. From [71].

Figure 5.39: On the left: an integrated visualization of all the reconstructed clusters in the ITS from the events taken during the cosmic run and used for the alignment [71]. As mentioned earlier, the dramatically reduced occurrence of clusters around y = ±5 cm is due to the fast-OR trigger algorithm, which excluded most of the horizontal tracks. On the right: the residuals for the SPD alignment using the Millepede software with cosmic ray tracks. The figure shows the track-to-track distance of the same cosmic ray track reconstructed twice, once in the upper and once in the lower half-barrel of the SPD. The distribution before (blue solid) and after (black solid) alignment is shown, as well as the distribution from simulated data with the ideal geometry (red dashed). The inset in the top right shows a zoom of the central region. The simulated data is scaled to the same maximum value as the aligned distribution. From [70].

Chapter 6

The ALICE Detector Control System

The aim of this chapter is to give an introduction to the complex domain of the controls systems used in the LHC era and to show my involvement in it.

6.1 Introduction to Control Systems

There has been immense technological progress from the LEP to the LHC era. Therefore the Detector Control Systems (DCS) of the present LHC experiments had to be re-engineered. In the seventies, the typical tasks of monitoring and controlling industrial and scientific systems required custom design of instruments and control methods. The development of digital processors triggered the use of computers to monitor and control systems from a central point. Several smart sensors implementing digital control started to emerge in the eighties, which implied the need to integrate the various types of digital instrumentation into field networks. To meet this need, fieldbus standards were developed to standardize the control of smart sensors. The Supervisory Control And Data Acquisition (SCADA) systems started to be developed during the nineties, allowing for fully distributed control using the IP protocol over Ethernet as the means of communication. Due to the lack of standardization in many areas, the development and maintenance of controls systems over the lifetime of the experiments in the LEP era was in most cases inefficient, since high cost and plenty of time and manpower were required. Due to the technical infrastructure available, several different programming languages, custom hardware and protocols were used at that time [72]. The design and engineering of the LHC experiments started in the middle of the nineties. Thanks to the experience gained during the LEP era, a decision was taken for the design and engineering of the controls systems of the LHC experiments to use and rely as much as possible on so-called commercial off-the-shelf (COTS) components (e.g. SCADA products, fieldbuses, PLCs1, etc.), while retaining a certain degree of freedom through the implementation of an integrated engineering platform suited to the specific requirements of each experiment. This integrated engineering platform was later implemented within the context of the Joint COntrols Project (JCOP) [73].

1Programmable Logic Controller


6.1.1 JCOP

Various groups in charge of the controls systems for the LHC experiments usually use the same or similar equipment and often require very similar functionality. To address the common issues of these groups, the JCOP working group was created at CERN at the end of 1997. The aim of JCOP is to reduce the duplication of effort and to simplify the integration by developing and supporting control systems centrally. JCOP has adopted many commercial products which have been successfully used in existing high-energy physics laboratories, for example SCADA tools, fieldbuses (fieldbuses are an ideal solution in a geographically dispersed and harsh environment such as the large caverns of the LHC experiments), PLCs (PLCs are an excellent solution for performing autonomous and safe local process control), etc. As the standard solution for the exchange of information between the DCS and external systems, such as the CERN Technical Services and the LHC machine, JCOP adopted the Data Interchange Protocol (DIP). The DIP protocol is based on the Distributed Information Management (DIM) protocol successfully used in the DELPHI experiment [74] and is used for exchanging information between heterogeneous systems running on different platforms.

One of the major tasks performed by JCOP was the difficult choice of a common supervisory and control software to be used by all LHC experiments. Between 1997 and 1998, an evaluation of the widely used open-source Experimental Physics and Industrial Control System (EPICS) was performed at CERN. EPICS is a collection of three main aspects [75]:

• An architecture for building scalable control systems

• A collection of code and documentation comprising a software toolkit

• A collaboration of major scientific laboratories and industry.

The evaluation stated that, while EPICS had certain strengths, it would not be appropriate for experiments as complex as those of the LHC. This led to a decision by the CERN controls board to sponsor a detailed survey of the SCADA market [76]. In 1999 this survey concluded that, to construct the supervisory layer of their control systems, the four LHC experiments would together choose the commercial SCADA tool called PVSS, which stands for Prozeßvisualisierungs- und Steuerungssystem (described further in section 6.2). After selecting the SCADA system to be used, a common software framework based on the selected SCADA system, named the JCOP FRAMEWORK, was created. The main aim of the Framework is to deliver an integrated set of common guidelines, software components and tools which can be used by the developers of the control systems of all the LHC experiments to build their part of the DCS applications (e.g. interfaces to power supplies, configuration tools, etc.). Thus, the overall effort required to build and to maintain the experiment controls systems is reduced. The JCOP FRAMEWORK is one of the key projects of JCOP and reflects the excellent collaboration between the LHC experiments' controls groups and the CERN-IT controls division2. The JCOP FRAMEWORK was originally influenced by the Software Engineering Standard PSS-05 [77], whose development started at the European

2Recently moved to the Engineering Department and renamed to EN-ICE-SCD

Space Agency (ESA) in 1984. The PSS-05 guides provide an easy to understand set of guidelines covering all aspects of a software development project.

The DCS of each experiment is an integration of multiple subdetector DCS developments, which are, however, all different from one another. The advantage of having adopted the common development guidelines, procedures and tools mentioned above is a common global DCS architecture, shown in Figure 6.1. The architecture is divided into two layers:

• A back-end (BE) system running on PCs and servers (supervision layer)

• A front-end (FE) system composed of several commercial and custom devices (process and field management layers)

Figure 6.1: Schematic view of a typical controls system in the LHC era, showing the supervision layer (SCADA, FSM, configuration and archival databases, storage) connected over WAN/LAN, the process management layer (PLC/UNICOS and VME front-ends accessed via OPC and DIM) and the field management layer (fieldbuses, nodes, sensors and devices of the experimental equipment), together with links to other systems (LHC, safety, ...). Picture adopted from [78]

The common DCS requirements for all LHC experiments, taking into account the above mentioned architecture and the presently available technologies, are:

• Distribution and Parallelism: The acquisition and monitoring of the data has to be done in parallel and distributed over several machines due to the large number of I/O channels and devices.

• Hierarchical control: The data gathered by the different machines has to be summarized in order to present a simplified but coherent view to the users.

• Decentralized decision making: Since a centralized decision engine would be a bottleneck, each sub-system should be capable of taking local decisions.

• Partitioning: Due to the large number of different sub-systems involved and the various operation modes, the capability of operating parts of the system independently and concurrently is essential.

• Full automation: In order to prevent human mistakes and also to speed up standard procedures, the standard operation modes and error recovery procedures should be fully automated to the maximum possible extent.

• Intuitive user interfaces: Since the shifters and the operators are usually not control system experts, it is essential that the user interfaces provide a uniform and coherent view of the system, are self-explanatory and are as easy to use as possible.

Common solutions include systems and tools for both the front-end and the back-end layers of the DCS to fulfill these requirements. There is a very wide variety of these components, depending on the particular application.

6.2 PVSS

The acronym PVSS stands for Prozeßvisualisierungs- und Steuerungssystem, which means process visualization and control system in German. PVSS is a commercial SCADA system created by ETM [79], a company of the Siemens group. PVSS is a versatile application since it provides a flexible, distributed and open architecture which allows customizations to be added for a particular application area; it is therefore used not only at CERN, but also in a variety of domains, such as the supervision and monitoring of traffic, tunnels, metros, sewers, oil transfer, etc. On top of the basic SCADA functions, PVSS provides a set of standard interfaces to hardware and software and also an Application Programming Interface (API) (see 6.11) which allows integration with other applications. PVSS has the following strengths that make it interesting in the HEP domain [80]:

• It can run in a distributed manner, with any of its managers running on separate machines

• It is possible to integrate distributed systems

• It has multi-platform support (Linux and Windows)

• It is device oriented with a flexible data point concept

• It has advanced scripting capabilities

• It has a flexible API allowing access to all features of PVSS from an external application

In a simple view, PVSS is used to connect to hardware or software devices, gather the data produced by the devices and use it for device supervision, i.e. to initialize, configure, operate and monitor the device behavior. To achieve this, PVSS features the following main components and tools:

• Drivers - which provide the connection between the supervised hardware or software devices and PVSS

• A run-time database - where the data coming from the devices and other sources (e.g. scripts) is stored, and can be used for visualization, processing, etc.

• Archiving (see 6.10.3) - Data in the run-time database can be archived into files or into a relational database and retrieved later by user interfaces or other processes.

• Alarm Generation and Handling - Alarms can be generated by defining conditions which apply to data arriving in PVSS. The alarms are then stored in an alarm database and can be selectively displayed by an Alarm display; they can also be filtered, summarized, etc.

• A Graphical Editor (called GEDI) - which allows users to design and implement their own user interfaces (UIs).

• A Scripting Language - which allows users to interact with PVSS and with the data stored in the database, either via a user interface or via a background process. PVSS scripts are called control (CTRL) scripts and provide many SCADA-specific functions.

• A graphical parameterization tool (called PARA) - which allows users to: define the structure of the database, define which data should be archived, and define which data, if any, coming from a device should generate alarms.

PVSS has a highly distributed architecture. The modular design of a full PVSS application (usually referred to as a PVSS project) is shown in Fig. 6.2. It consists of separate functional modules (OS processes), each carrying out a specific task. These modules are called managers in the PVSS nomenclature. The managers communicate with each other via a PVSS-specific protocol over TCP/IP. Managers subscribe to data with the Event Manager, and updates are then sent only on change.

The Event Manager (EV) is the heart of the system: it is responsible for all internal communication. It receives data from the Drivers (D), sends it to the DataBase Manager (DB) to be stored in the run-time database, and ensures the distribution of the data to all managers which have subscribed to it. It also maintains the so-called process image in its memory: the current value of all the data.

The DataBase Manager (DB) provides the interface to the run-time database (a 3rd party product called the RAIMA database).

The Drivers (D) provide the interface between the supervised hardware or software devices and PVSS. Drivers can be configured to only send data to the Event Manager when a significant change is measured. Therefore, during stable conditions, when the process variables are not changing, and provided the system is correctly configured, there is essentially no data traffic. Common

drivers that are provided with PVSS are OPC, PROFIBUS, CANBUS, MODBUS TCP/IP and APPLICOM. A DIM driver is provided as part of the JCOP FRAMEWORK.

Control Managers (CTRL) provide for any data processing as background processes, by running a scripting language. This scripting language is like ANSI C with extensions. It is a high-level, procedure-based, advanced language that supports multi-threading. The code is interpreted, thus it does not need compiling. User functions that are repeatedly used can be stored in PVSS libraries for later use by panels and scripts.

Archive Managers are used to archive data into OS files for later retrieval and viewing. A PVSS project can have, and usually has (see 6.10.3), more than one Archive Manager. A user can configure which data is stored by which manager.

The Relational Database Archive Manager (RDB) is used to archive data into a relational database for later retrieval and viewing. A PVSS project can have only one RDB manager. The archiving in PVSS can be performed either via the Archive Managers or via the RDB Manager, not both at the same time (see 6.10.4).

The API Manager (API) allows users to write their own self-contained PVSS managers in C++ using the PVSS API, which is implemented as a set of C++ libraries. This is the most powerful way to customize and add extra functionality to PVSS.

The ASCII Manager (ASCII) allows exporting and importing the configuration of a PVSS project to and from an ASCII file.

User Interface Managers (UI) provide the interface to the user. They can get device data from the database, send data to the database to be forwarded to the devices, and they can request to keep an open connection to the database and be informed (for example to update the screen) when new data comes from a device. In the UI, values can be displayed, commands issued and alerts tracked in the dedicated alarm panel. The UI can also be run in development mode, for example as the graphical editor (GEDI), as the database editor named graphical parametrization tool (PARA) or as the general user interface of the application (Native Vision on Windows and Qt on Linux).

Any PVSS project runs one DataBase Manager and one Event Manager, and can run, and usually does run, several instances of a given manager, for example Drivers, User Interfaces, etc. As mentioned earlier, PVSS managers can run on both Windows and Linux. Due to their modularity and their communication capabilities, for a given PVSS project they can all run on the same PC or they can be distributed across different PCs, including a mixed Windows and Linux environment. In case the managers of a given PVSS project run distributed across more than one machine, the system is referred to as a PVSS Scattered System (see Fig. 6.3).

PVSS is even capable of handling very large applications, in which one PVSS system is not enough. To achieve this, a PVSS Distributed System, a confederation of communicating PVSS systems, is used. Figure 6.4 shows a distributed PVSS system that is built by adding a Distribution Manager (Dist) to each PVSS project and connecting them together. Hundreds of systems can be connected in this way [81].

Figure 6.2: A drawing of a typical PVSS project. Not all managers are shown. Picture adopted from [80]

Figure 6.3: A drawing of a PVSS project consisting of various managers (User Interface, Database, Event, Control, Archive, Driver, DIM, API, AMANDA, ASCII and Distribution Managers) running on a single PC (left) or running scattered on two PCs.

Figure 6.4: A drawing of a PVSS Distributed System of several PVSS projects communicating via their Distribution Managers.

In the scope of this thesis we need to introduce one more PVSS concept, the concept of Data Points: the device data in the PVSS database is represented as Data Points (DPs) of a pre-defined Data Point Type (DPT). Devices are modelled using these DPTs. DPTs are similar to classes in object-oriented terminology. The DPT describes the data structure of a certain device type, and a DP is an instance of such a device type; thus the data point type is a sort of template. DPs are similar to objects instantiated from a class in object-oriented terminology. The structure of a DPT is defined by the user (or a company) and can be as complex and as hierarchical as required. The elements which constitute a DPT are called Data Point Elements (DPEs) and are user-specific too. A user can first define a Data Point Type and then create the Data Points of that type which will hold the data of each device, by using the PARA tool or by writing a control script and executing it via the CTRL manager.
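To make the analogy with object-oriented terminology concrete, the following C++ fragment mirrors the idea: the struct plays the role of a Data Point Type and its instances play the role of Data Points, with the members standing in for Data Point Elements. This is only an analogy; real DPTs and DPs are created inside PVSS with the PARA tool or with CTRL scripts, not in C++, and the element names here are invented.

// "Data Point Type": describes the structure of one class of device,
// e.g. a high-voltage channel, with its Data Point Elements as members.
struct HVChannelType {
    double vSet;   // requested voltage    (DPE)
    double vMon;   // monitored voltage    (DPE)
    double iMon;   // monitored current    (DPE)
    bool   on;     // channel switched on  (DPE)
};

int main() {
    // "Data Points": one instance per physical channel, each holding
    // the current values of that device.
    HVChannelType hvChannel000 {100.0, 99.8, 0.2e-6, true};
    HVChannelType hvChannel001 {100.0,  0.0, 0.0,    false};
    (void)hvChannel000; (void)hvChannel001;
    return 0;
}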

Figure 6.5: A picture of the PVSS software console. This one serves the PVSS project which simulates all Data Point Elements to be transferred to the Offline.

6.3 ALICE DCS

The ALICE Detector Control System is built in close collaboration with detector groups from the participating institutes and with controls and technical services groups at CERN. The work of about ≈100 contributors, the developers of the various parts of the DCS of the 18 subdetectors, is coordinated by a small and compact central team at CERN. The DCS has been designed to assure a high running efficiency by reducing the downtime of any subsystem to a minimum. The ALICE DCS is in charge of the configuration, control and monitoring of more than 100 subsystems, consisting of a total number of ≈100 000 channels. It also tries to maximize the number of readout channels operational at any time, and it measures and stores all parameters necessary for efficient analysis of the physics data. All controls tasks are performed by a large distributed system based on PVSS. Great emphasis was put on providing software abstraction layers which hide the complexity and variety of the implemented hardware access technologies. Together with centralized data management based on a relational database, the implemented approach allows for a unified operation of the 18 ALICE subdetectors. The control and monitoring are provided in such a way that the whole experiment can be operated from a single workspace in the ALICE control room. As mentioned, the core of the controls system is a commercial Supervisory Control and Data Acquisition (SCADA) system named PVSS. It controls and monitors the detector devices, provides configuration data from the configuration database and archives acquired values in the archival database. It allows for data exchange with external services and systems through a standardized set of interfaces. The ALICE DCS hardware architecture is shown in Figure 6.6.

Figure 6.6: A schematic drawing of the ALICE DCS hardware architecture, from the field layer (fieldbus nodes, power supplies, VME crates, PLCs) through the control layer (Worker Nodes) to the supervisory layer (Operator Nodes, file servers, database servers and disk arrays), connected over the LAN (Ethernet) and interfacing external users, systems and services (LHC, electricity, safety, etc.). Figure adapted from [82].

6.4 System layout

The whole ALICE DCS can be regarded as a combination of two planes:

• The systems plane is responsible for the communication between the components, the

execution of the commands and for the data archival. The systems plane can be further logically subdivided into three layers: the field layer, the control layer and the supervisory layer.

• The operations plane is a logical layer built on top of the systems plane which provides a hierarchical representation of the ALICE DCS subsystems and assures the coherent execution of control commands.

6.4.1 Field layer

The field layer is represented by the devices which acquire data from the detectors and provide services to them. Each sub-detector system is logically divided into several sub-systems. Each sub-system covers different types of devices, such as Low Voltage (LV), High Voltage (HV), Front End and Readout Electronics (FERO), etc. There are in total about ≈150 sub-systems in ALICE, composed of about ≈1 200 network attached devices and ≈270 VME and power supply crates. A huge effort has been put into the standardization of the hardware used. The OPC protocol was chosen as the standard for device communication. The FERO sub-system is a special part of the ALICE DCS field layer, since its hardware architecture is detector specific and differs from one sub-detector to another. A dedicated access mechanism, called the Front End Device (FED), was developed [83] to standardize the operation of the individual detector FEROs. The FED provides access to the specific FERO hardware of each sub-detector through an abstraction layer which has a standardized interface. The communication between PVSS and the FED is done via DIM [74]. The concept of the FED is also used for devices which do not provide OPC access. The complexity and the number of the FERO channels is rather challenging for the controls system. There are more than 1 000 000 FERO channels to be controlled and monitored in the ALICE DCS. In order to achieve this task, there are around ≈800 single board computers mounted directly on the detectors. To limit the network traffic, sampled values are packed into groups before being sent for processing.
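The packing of sampled values into groups before they leave the single board computers can be pictured as a small buffer that flushes once it is full (or periodically). The sketch below shows only this buffering idea; the names and the flush policy are my assumptions and not the actual FED implementation.

#include <cstddef>
#include <functional>
#include <vector>

// One monitored front-end value together with its acquisition time.
struct Sample {
    int    channelId;
    double value;
    double timestamp;   // seconds since some epoch
};

// Collects samples and forwards them in groups to limit network traffic.
class SampleBatcher {
public:
    SampleBatcher(std::size_t batchSize,
                  std::function<void(const std::vector<Sample>&)> send)
        : batchSize_(batchSize), send_(std::move(send)) {}

    void add(const Sample& s) {
        buffer_.push_back(s);
        if (buffer_.size() >= batchSize_) flush();
    }

    void flush() {                 // would also be called on a timeout
        if (buffer_.empty()) return;
        send_(buffer_);            // one network message per group of samples
        buffer_.clear();
    }

private:
    std::size_t batchSize_;
    std::function<void(const std::vector<Sample>&)> send_;
    std::vector<Sample> buffer_;
};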

6.4.2 Control layer

The control layer of the ALICE DCS consists of roughly ≈100 PCs and a number of PLCs or PLC-like devices, which collect information from the field layer and send control commands to the devices. Most of the PCs run PVSS along with the FED or OPC servers. These sub-detector PCs rely heavily on infrastructure servers that provide important services like database, network, storage, etc. As mentioned earlier, the core component of the control layer is PVSS. PVSS projects are built as a set of managers which communicate via TCP/IP. To operate and monitor ALICE, about ≈100 PVSS systems with more than 1 000 managers are deployed. The decoupled PVSS manager architecture allows scattering of the PVSS projects: heavy load PVSS processes run on dedicated PCs in order to balance the load. PVSS is also able to build distributed systems where PVSS projects share data. Each sub-detector runs several PVSS projects, each taking care of one or several field layer subsystems. These sub-detector PVSS projects are integrated into one large distributed system. The main ALICE PVSS system is built as a large distributed system of sub-detector distributed systems.

6.4.3 Supervisory layer

The computers which run and execute the DCS tasks are called Worker Nodes (WN) and they typically do not allow for interactive work. The operators use dedicated servers which are called Operator Nodes (ON) and provide a set of standardized user interfaces. These ONs form the supervisory layer of the ALICE DCS. The user interfaces running on the ONs are remotely connected to the individual detector systems on the WNs. The main advantage of this architecture, in which the interactive work (like trending) is separated from the control tasks, is that it provides a natural protection against the overload of critical systems. Each user interface which generates excessive load is automatically blocked.

6.5 The Finite State Machines

A Finite State Machine (FSM) is an intuitive, generic mechanism to model the functionality of a piece of equipment or a sub-system [82]. The entity to be modelled is thought of as having a set of stable states; it can switch between these states by executing actions that are triggered either by commands from an operator or another component, or by other events such as state changes of other components. Two types of objects can be defined in the FSM concept: abstract objects, representing a Control Unit (CU) in a control tree, and physical objects, representing a Device Unit (DU) in a control tree. The Control Units and the Device Units serve as the basic building blocks for the entire hierarchical control system. A control unit models and controls the sub-tree below it, and a device unit drives a device. The hierarchy can have as many levels as needed to provide the sub-detectors with as many abstraction layers as required. The behavior and functionality of each DCS component is described in terms of an FSM using the CERN SMI++ toolkit [84]. Standardized state diagrams deployed in all sub-systems hide the underlying complexity and provide an intuitive representation of the system to be operated. The FSMs represent the operations plane, built on top of the systems plane. Each detector defines a hierarchical structure to represent the layout of the systems to be controlled, starting from channels and device modules up to the complete sub-systems. This structure is further extended up to the ALICE top level, which collects information from all sub-systems and sends commands to them (see Fig. 6.7). Each object has a defined set of stable states between which it can transit. A transition can be either triggered by an operator or executed automatically, for example as a reaction to an anomaly which requires an intervention. An automatic action can be launched by a command sent by a parent (the object above in the hierarchy), or by a state change of leaves (the objects below in the hierarchy). The described architecture allows for automatic

and centralized operation of all components. A single operator is able to send commands which propagate across the tree and execute pre-programmed actions. The set of commands used by the operator is reduced to a minimum, reflecting the ALICE operational needs, such as GO_READY, GO_STANDBY, etc. The FSM mechanism assures that the commands are executed by all targeted leaves and that the actions are synchronized. The physical execution of the actions is performed by PVSS on the systems plane, and the status is reported back to the operator via the FSM on the operations plane. The global status is computed as a combination of the states of all the sub-systems.
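The command/state mechanism can be illustrated with a few lines of C++: a unit has a small set of stable states, accepts the reduced command set and exposes its current state to its parent. The state and command names follow those quoted in the text; everything else is a simplifying assumption of mine and not SMI++ code.

enum class State   { OFF, STANDBY, READY };
enum class Command { GO_OFF, GO_STANDBY, GO_READY };

// A minimal control-unit-like object: reacts to commands from its parent
// and reports its current stable state.
class ControlUnit {
public:
    State state() const { return state_; }

    void handle(Command cmd) {
        switch (cmd) {
            case Command::GO_OFF:     state_ = State::OFF;     break;
            case Command::GO_STANDBY: state_ = State::STANDBY; break;
            case Command::GO_READY:
                // Only allowed from STANDBY in this simplified model; the real
                // standard state diagram (Figure 6.9) has many more states.
                if (state_ == State::STANDBY) state_ = State::READY;
                break;
        }
    }

private:
    State state_ = State::OFF;
};

In the real system a control unit would, in addition, combine the states of its children into its own state and forward commands down the tree, as sketched in Figure 6.7.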

Figure 6.7: A simplified schematic view of the FSM architecture in ALICE: commands propagate from the ECS and the DCS top node down the tree of Control Units to the Device Units and devices, while each CU logically combines the states (and alarms) of the units below it and reports them upwards. Figure adapted from [82].

6.6 Partitioning

Partitioning is the ability to control and monitor a part of a system (typically a sub-tree of the hierarchical control tree) independently of and concurrently with the rest of the control system. A partitioning mechanism implemented by the SMI++ toolkit extends the capabilities of the system and provides additional flexibility. Thanks to this mechanism, several operators can work in parallel, each operating a different sub-system. The logical view of the system can be created independently of the systems plane. Masking of hierarchy sub-trees or the creation of trees to be operated separately from the main ALICE tree does not affect the execution of the PVSS systems.

Figure 6.8: Left: The FSM control panel from which commands can be issued, provided the adequate privileges are granted to the operator. The open/closed coloured lock gives information about the take/release condition. Right: The main FSM control panel, from which all FSM processes, or a single one, can be started and stopped. Only expert operators are allowed to open this panel. [85]

This functionality was and still is essential for the installation and commissioning phase, where parts of the control system might not yet be available but sub-detectors need to control the installed equipment. It is also very important during operation for debugging purposes or for sub-detector test or calibration running; and during longer shutdown periods, sub-detectors might want to run their sub-detector control system while other parts of the control system are still switched off. Partitioning is also very useful for debugging, where for example a broken module can be excluded from the global run and tested separately without affecting the rest of the system. Once a part of the hierarchy is disconnected or masked, its FSM reporting is no longer propagated to the top node. To ensure a safe operation of the experiment, the central operator is nevertheless warned about anomalies in the entire system via the alert mechanism, which is implemented on the systems plane. If needed, the central operator can take full control of any sub-tree and perform the required tasks. The partitioning process is illustrated in Fig. 6.10.

6.7 The User Interface

The Graphical User Interface (GUI) used in ALICE is distributed as a standard component and used by all detectors. The supplied tools and guidelines assure a similar look and feel for all components of the DCS. The GUI provides a hierarchy browser, an alert overview, access to the FSM and status monitoring. Commands issued via the GUI are sent to the different components using either the FSM mechanism for standard actions or directly via PVSS for expert actions. An integrated role-based access control mechanism assures protection against inadvertent errors [86]. The GUI exposed to the operators is divided into specific control and monitoring zones and provides an intuitive overview of the system.

Figure 6.9: The standard state diagram for the sub-detectors in ALICE, with the states OFF, STANDBY, STBY_CONFIGURED, DOWNLOADING, CALIBRATING, MOVING_STBY_CONF, MOVING_BEAM_TUN, BEAM_TUNING, MOVING_READY, READY and READY_LOCKED, connected by commands such as GO_OFF, GO_STANDBY, CONFIGURE (run_mode), CALIBRATE (calib_mode), GO_STBY_CONF, GO_BEAM_TUN, GO_READY, STOP, LOCK and UNLOCK [85].

Figure 6.10: A schematic example of the partitioning process: once the partitioning requests are granted, sub-trees of the original hierarchy become independent partitions, each with a new root operated by a local operator, while the central operator keeps the remaining (original main) partition. Figure adapted from [82].

Figure 6.11: The ALICE standard UI with one of the possible experiment views is shown. Six out of the eighteen subdetector FSMs are already integrated below the ALI_DCS node [87].

6.8 The JCOP FRAMEWORK

The motivation for the creation of the JCOP FRAMEWORK was to simplify the design, standardize the communication, complement the PVSS functionality and ease the task of integrating the many different developments of the control systems of the LHC experiments, as mentioned earlier. Any development conducted within the Framework, but also any contribution from each of the experiments, is available to all experiments in the context of the Joint COntrols Project (JCOP), so common features are developed only once and reused many times within each of the experiments. The Framework also incorporates tools that are not included in PVSS, for example the DIP and DIM communication protocols, which shows that the Framework not only extends the functionality of PVSS but also benefits from stand-alone developments. The main tools cover the Finite State Machine (FSM), alarm handling, configuration, archiving, access control, user interfaces, data exchange and communication. The JCOP FRAMEWORK is complemented by ALICE-specific components to meet the specific needs of ALICE. The complete ALICE framework is then used by the sub-detector experts to build their own control applications, like high voltage control, front-end electronics control, etc. About 140 such applications are finally integrated into a global ALICE control system. Figure 6.12 shows where the Framework fits in the development of an experimental supervisory control system. There are three main types of components in the

Framework:

• Core, which contains the fundamental functionality needed for any other Framework component.

• Devices, which are used to monitor and control common hardware devices (for example analog and digital inputs, power supplies from Wiener, CAEN and Iseg, etc.).

• Tools, which are used to handle, display and store data (e.g. communication protocols, trending displays, user access control, storage and retrieval of device configuration data from a database).

Figure 6.12: Schematic view of the JCOP Framework in the context of a detector control system: the detector control applications sit on ALICE add-ons and the JCOP Framework, which are in turn built on PVSS and tools such as FSM, DIM and DIP, running on PCs (Windows, Linux) and communicating with the hardware via protocols such as OPC and DIM and with external systems via DIP. Figure adapted from [88]

The Framework components can be installed and removed using the Framework Installation Tool. This tool automatically installs the necessary files and performs various actions during the installation to configure the target system correctly or to perform migration tasks when installing a newer version of a Framework component. A typical Framework component consists of some code libraries, a set of graphical user interface panels, some configuration data and, if it relates to a hardware device (for example power supplies), the device definition. There are requirements on the Framework components from some experiments that are not relevant to other experiments. To meet those requirements, the Framework Installation Tool offers to install any required Framework component. If a particular Framework component is not useful for a certain development, it can simply not be installed, but it can be installed at any later time if it is discovered to be needed.

6.9 Data flow

In this section the data flow from the configuration database to the devices and from the monitored channels into the archive is described [89]. Data processing at different stages of

the system is explained. Information on data exchange with external systems (such as the data acquisition system, offline and the high level trigger) is also provided.

The primary task of the ALICE Detector Control System (DCS) [82] is to ensure the safe and correct operation of the ALICE experiment at CERN. It is in charge of the configuration, control and monitoring of the 18 ALICE sub-detectors, their sub-systems (Low Voltage, High Voltage, etc.) and interfaces with various services (such as cooling, safety, gas, etc.). The operation of the DCS is synchronized with the other ALICE online systems, namely the Data Acquisition System (DAQ), the Trigger (TRG) and the High Level Trigger (HLT), through a controls layer: the so-called Experiment Control System (ECS), as shown in Figure 6.13.

Figure 6.13: A general drawing of the data flow in the ALICE DCS: the SCADA layer connects the devices with the configuration and archival databases, the external services (electricity, ventilation, cooling, gas, magnets, safety, access control, LHC) and the Offline Conditions Database. The operation of the DCS is synchronized with the other ALICE online systems (DAQ, TRG and HLT) through a controls layer, the ECS.

As mentioned earlier, the core of the ALICE DCS is the commercial SCADA system PVSS [79]. It is built as a collection of autonomous software modules (managers) which communicate via TCP/IP. About 90 individual PVSS systems consisting of ≈900 managers are running on 150 computers. Together they form one global distributed system. The data flow in the ALICE DCS can be divided into two data categories:

• The synchronization data allows for coherent operation of all DCS subsystems and assures coordination between the online systems. It consists of commands and states transferred between different components of the controls system.

• The controls data includes all information needed to configure the detectors and the controls system itself. It also contains all information read back from the detectors.

6.9.1 The Synchronization Data Flow in ALICE DCS

In order to allow for coherent and parallel operation of many devices with different operational requirements, the DCS is internally grouped into logical blocks. Each subdetector is treated as an autonomous entity which reacts to commands and publishes states reflecting its own internal operation.

At the sub-detector level, the DCS is divided into sub-systems, representing control objects such as the Low Voltage subsystem (LV), High Voltage (HV), Front-end and Readout Electronics (FERO), Gas, Cooling, etc. Each subsystem consists of devices built from modules. The smallest controlled unit is a device channel, which can be for example a voltage channel, a temperature probe or a register of a front-end chip. There are ≈150 subsystems in total which form the ALICE DCS.

All units are organized in a hierarchical way as shown in Fig. 6.14. The upper layers provide a simplified view of the state of the underlying layers. At the top level, the global DCS object represents the whole controls system. It interacts with the ECS in order to synchronize with the other online systems.

Figure 6.14: A drawing of the logical blocks of the ALICE online systems, in particular the DCS with its subdetector nodes (SPD, TPC, …) and their subsystems (HV, LV, FERO, cooling, lasers, etc.). Each logical block is treated as an autonomous entity. The upper layers provide a simplified view of the state of the underlying layers. Logical blocks react to commands and publish their own internal states.

The communication between all units is handled via the already introduced Finite State Machine (FSM) mechanism, implemented in the SMI++ environment [84]. The operation of each unit is modeled as a finite state machine which reacts to the commands sent by its parent and reports back its own internal status. Standard state diagrams are used all across the DCS and provide an abstraction level for the controlled units. For example the same commands, such as OFF or ON, are recognized by all power supplies, even if these were produced by different manufacturers.
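The hierarchical command/state mechanism can be pictured with the short Python sketch below. It is an illustration only: the real DCS uses SMI++/PVSS objects, and the command and state names used here ("GO_ON", "MIXED") are simplifying assumptions rather than the actual ALICE state diagrams.

    class ControlUnit:
        """Sketch of an FSM-style control unit: commands propagate down,
        states are summarized upwards (simplified; the real DCS uses SMI++)."""
        def __init__(self, name):
            self.name = name
            self.state = "OFF"
            self.children = []

        def add_child(self, child):
            self.children.append(child)

        def send_command(self, cmd):
            for child in self.children:
                child.send_command(cmd)
            if not self.children:                      # leaf device channel
                self.state = "ON" if cmd == "GO_ON" else "OFF"
            else:                                      # parent summarizes its children
                states = {c.state for c in self.children}
                self.state = states.pop() if len(states) == 1 else "MIXED"

    dcs = ControlUnit("DCS")
    hv = ControlUnit("SPD_HV")
    dcs.add_child(hv)
    for ch in range(3):
        hv.add_child(ControlUnit("HV_channel_%d" % ch))
    dcs.send_command("GO_ON")
    print(dcs.state)   # -> ON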

6.9.2 The Controls Data Flow in ALICE DCS

The DCS controls data flow originates in the Configuration Database, which contains all information needed for detector operation. This consists of settings for all controlled devices, including the operational and alert limits, archival settings, readout refresh rates, etc. Monitoring data acquired from the detectors and devices and stored in the Archival Database forms the largest part of the DCS controls data flow.

Once configured, all devices are controlled and monitored and the resulting data is stored in the DCS Archive. If any of the monitored parameters exceeds a predefined range, the DCS takes action in order to correct the value by adjusting the device settings or to protect the equipment by initiating a software (soft) interlock.

Each archived value is tagged with a timestamp which indicates the time of the acquisition. If needed, this can be correlated with external events. For this reason the DCS archives data arriving from external systems along with its own data. All data stored in the archive is available for display and analysis using graphical User Interfaces (UI). A subset of this data is also transferred to the Offline Conditions Database (OCDB) at the end of each run for later use in the physics data analysis. This mechanism is explained in more detail in section 6.11.

The HV and LV subsystems consist of more than 270 power supplies, providing about 4000 channels. For each channel roughly 20-30 parameters are monitored and controlled. For example, a typical low voltage channel provides values of its voltage and current. In addition, the controls system must configure the desired set value, measurement refresh rate, alert limits and corresponding software actions, archival settings (such as smoothing parameters), etc.

Probably the most challenging part in terms of data flow is the front-end electronics. The DCS actively monitors ≈30 000 channels; an additional ≈70 000 channels are accessible via the DCS and can be read out on the operator's request (i.e. for debugging purposes). Many of these channels provide several parameters to the DCS. In total about 160 MB of data is read from the database and loaded into the chips. In addition, up to 6 GB of data created dynamically per run (such as pedestals) is loaded via the DCS.

The communication with the front-end chips is established either directly from the controls computer farm using a dedicated bus, or over the network using ≈800 single-chip computers mounted directly on the front-end modules. For example, the TPC uses 216 computers talking to 4356 front-end cards containing 34848 chips. There are in total ≈3.7 million registers and ≈558 000 pedestal memories (10 bit) to be programmed by the DCS for this detector.

A large variety of access technologies is deployed to communicate with the devices. At the field layer the DCS communicates over several buses including JTAG, VME, CANBUS, PROFIBUS, RS-232, ETHERNET, EASYNET, etc. Deployment of hardware abstraction layers significantly reduces the complexity of the communication mechanisms between the SCADA system and the devices. Commercial devices, such as power supplies, are interfaced to PVSS via an OPC mechanism, which is an industrial standard. Another layer, called the FED server [83], is used to hide the complexity of the ALICE front-end architectures. The FED API provides a uniform method for accessing the hardware and is used to access all architectures controlled by the DCS.

A number of additional devices such as temperature probes, pressure and flow sensors, NMR probes, etc. are needed to operate ALICE. The data from these devices, together with the data flowing from external services, is processed by the DCS. There are in total ≈1 000 000 parameters used by the DCS to control the ALICE experiment. Fig. 6.15 illustrates the controls data flow within the ALICE DCS.

Figure 6.15: A drawing of the controls data flow within the ALICE DCS, showing the per-subsystem field buses (CANBUS, ETHERNET, RS-232, JTAG), 28 + 176 power supplies, 10 FED servers, 780 DCS boards, more than 30 000 monitored channels and more than 161 MB of configuration data.

The amount of data handled by the ALICE DCS goes far beyond the scale seen in previous generations of control systems. Up to 6 GB of data is loaded from the DCS database to the detector devices at the start of a physics run. This includes PVSS recipes, such as nominal values of device parameters as well as alert limits, and FERO settings. About 1 000 000 parameters need to be configured to prepare ALICE for a physics run. PVSS constantly monitors all controlled parameters. About 300 000 values are read out by the OPC and FED servers each second. To minimize the data traffic, a first level of filtering is applied at the driver stage and only values exceeding a pre-programmed threshold are injected into the PVSS systems. A roughly 10-fold reduction is achieved by this mechanism. Each value processed by the PVSS system is first compared with the nominal one. Should the difference exceed the limit, an alert is generated and displayed on the operator screens. According to the severity of the alert an automatic action might be triggered. All values tagged for archival by the system experts are transferred to the DCS archival database. To reduce the storage requirement, an additional level of filtering is applied. Only values falling outside a band defined around the previously archived value are recorded, and a new band is then defined around the recorded value. This smoothing mechanism reduces the amount of data written by the ALICE DCS to the database to about 1 000 inserts per second. The database servers are configured to cope with a steady insertion rate of 150 000 inserts/s. This is largely sufficient to cope with the steady ALICE insertion rate as well as with the peak load during detector configuration and voltage ramping.
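The band-based smoothing described above can be illustrated with a few lines of Python. This is only a sketch of the idea, assuming a simple symmetric dead band; PVSS offers several smoothing types and the exact parameters are configured per channel.

    def smooth(readings, band):
        """Archive only readings that leave a +/- band around the last archived value."""
        archived = []
        last = None
        for value in readings:
            if last is None or abs(value - last) > band:
                archived.append(value)   # value escaped the band -> archive it
                last = value             # re-centre the band around the new value
        return archived

    readings = [10.0, 10.1, 9.95, 10.6, 10.7, 12.0, 11.9]
    print(smooth(readings, band=0.5))    # -> [10.0, 10.6, 12.0]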

The DCS Database Services

All PVSS systems use the same database for archival. The same database is used to store the configuration data of the front-end electronics and the PVSS devices. Most of the DCS channels are monitored at a ≈1 Hz refresh rate and are archived. To keep the database size within reasonable limits, data compression is applied at several stages of the data acquisition and processing.

Figure 6.16: An illustration of the controls data flow into the ALICE DCS archival database: the PVSS managers (user interface, event, control, database, archive, API and DIM managers, drivers, AMANDA and ASCII managers) on the engineering nodes communicate over the network infrastructure with the Oracle archive, the file exchange servers, the remote access servers and the interfaces to external systems.

The device channels are typically polled at frequencies of 1-2 Hz. Deadbands applied in the drivers themselves (OPC or FED servers) reduce the traffic from the hardware to PVSS down to ≈0.1 Hz. Additional smoothing is applied at the level of the PVSS archive managers. Every readout value is compared with the previous measurement and written into the archive database only if the difference exceeds the predefined threshold.

It is expected that the steady archival rate for all of ALICE will be ≈1000 inserts/s throughout the year. The database service is designed to cope with a steady state of 150 000 inserts/s, which corresponds to the peak load during the ramp-up periods, when most of the monitored channels change.

Roughly 250 MB of configuration data is read from the database and written to the ALICE devices. A versioning system assures that a configuration version is available for each ALICE running mode (cosmics, calibration, physics with protons or ions, …) and limits the data duplication.

The expected total size of configuration and archival data needed for one year of running is 20 TB; this also includes online backups for fast disaster recovery. The ALICE DCS database is discussed in more detail in section 6.10.

The Controls Data Exchange with Systems External to the DCS

The DCS is designed for 24/7 operation and is able to operate fully autonomously. However, during standard operation a significant amount of controls data needs to be exchanged with external systems. There are three mechanisms implemented to achieve this task (see Fig. 6.17):

• The data required for the offline reconstruction is stored in the offline conditions database. A mechanism based on an AMANDA server has been developed [90]. The AMANDA server receives requests from an offline client and retrieves the data from the archive. The results are formatted into blocks and sent to the client to be stored in the conditions database. It is expected that the AMANDA server will transmit 60 MB of data to the offline system at the end of each run. However, this requires fine-tuning of the archival and data smoothing; at the beginning of the ALICE operation these numbers can be significantly higher. In addition to offline, the same AMANDA server-client mechanism is also used to transmit data to the HLT.

• File Exchange Servers (FXS) are used to transfer large amounts of data. This includes for example pedestals computed by the DAQ or configuration parameters prepared on the HLT farm. Some parameters, like images produced by the alignment systems, are processed by the DCS and sent to offline through a dedicated FXS.

• A small amount of data is exchanged directly via a mechanism based on the DIM protocol [74]. The DCS publishes some parameters via the DIM server. This is used for example to transmit slowly changing detector parameters to the HLT, or to read back the HLT farm status and to communicate with the services and the LHC accelerator.

Figure 6.17: An illustration summarizing the controls data exchange with systems external to the DCS: the configuration and archive databases and the AMANDA server connect the DCS to the HLT, DAQ and Offline, complemented by the HLT, DAQ and DCS File Exchange Servers and the DIM services towards the detector.

6.10 ALICE DCS database

The ALICE DCS production database is located at LHC Point 2 in the CR3 (Counting Room 3). It is implemented as an ORACLE [91] Real Application Cluster (RAC) [92] Enterprise Edition, currently version 10.2.0.4. ORACLE RAC technology has been used at CERN since 1996 and is increasingly used in High Energy Physics. RAC is a shared-everything clustered system in which each of the clustered nodes directly accesses the storage. RAC provides a solution which scales in terms of performance and high availability. The mechanism for the coordination of the data changes from the different cluster nodes is called cache fusion. The hardware is illustrated in a simplified view in Fig. 6.18. It consists of 6 dual-core, 2 GHz, 4 GB RAM database server nodes comprising mirrored disks and 3 redundant SAN Infortrend EonStor R2431 [93] disk arrays, each having 16 disks of 500 GB. Each of these disk arrays has 2 redundant RAID (Redundant Array of Inexpensive Disks) controllers. These 6 RAID controllers communicate with the 2 redundant Host Bus Adapters (HBA) located in each of the 6 database server nodes over Fibre Channel (FC) via redundant QLogic SANbox 5600 [94] FC switches.

Figure 6.18: A simplified view illustrating the hardware used for the ALICE DCS database: 6 database servers connected to the Infortrend EonStor R2431 disk arrays through a QLogic SANbox 5600 switch with 16 ports enabled. The Fibre Channel communication is redundant, comprising redundant HBAs, QLogic FC switches and RAID array controllers. The network switches for inter-cluster communication and the disk array for backups are not shown. From [95]

The database server nodes are running RHEL4 (Red Hat Enterprise Linux ES release 4, Nahant Update 7) [96]. On the software level, the 48 disks are exposed as LUNs (Logical Unit Numbers) to the ORACLE Automatic Storage Management (ASM) [97], which takes care of all database I/O and also provides redundancy, a software RAID10 in the case of the ALICE DCS. Another SAN Infortrend disk array, having 16 disks of 1 TB each, is configured as RAID6 at the disk array level. This additional disk array is used for the ORACLE flash recovery area and backups. Its ≈14 TB are partitioned into seven ≈2 TB LUNs, of which five are exposed to the ASM and two are NFS mounted to allow for non-ASM backups. The ALICE DCS uses two basic databases for its operation:

• The Configuration database

• The Archive database

(A SAN, Storage Area Network, is a technology to attach remote data storage devices, i.e. disk arrays, to servers in such a way that the storage devices appear as if they were locally attached to the operating system.)

These two databases are only logical, as they reside in the same ALICE DCS ORACLE database. The ALICE DCS configuration database stores all parameters needed for PVSS operation, device settings and front-end electronics configuration data. During normal ALICE operation this logical database is used mostly for uploading data into the devices and is therefore optimized for query performance. The DCS archive is the main data storage for the DCS. It stores all monitored parameters tagged for archival. This database is optimized for data insertion, although read requests are also quite frequent. The DCS databases are used to store and manage configuration data as well as acquired data which has been archived for later processing. The mechanism for accessing data external to PVSS is based on JCOP FRAMEWORK tools; however, the tools for ALICE-specific configuration (e.g. for storing FEE parameters) had to be developed. A local FEE configuration database is essential for every ALICE subdetector. In some cases this data is shared between the DCS and the DAQ.

The internal PVSS archive provides an efficient mechanism for storing, accessing and manipulating historical data acquired by the DCS system (see section 6.10.3). Although it contains all information acquired during the DCS operation, it is not trivial to retrieve this information from outside of PVSS. The main reason is that the DCS is distributed and data access requires detailed knowledge about its structure. To overcome these and many other limitations, an external ORACLE based database is used (see section 6.10.4). An intensive testing campaign was therefore conducted over the past years (see section 6.10.5).

I participated in the definition, development and implementation of the ALICE configuration and archival databases and I was also involved in the evaluation of common tools in the JCOP framework working group. The work on the configuration database involved the installation of the database server and, in collaboration with the sub-detector groups, also the definition, creation and maintenance of local configuration databases. An important part of the work was the development of tools for data management and the creation of an interface between the database and the DCS system. The stability and performance of the systems were tested in conjunction with prototype and later with final detector systems.

I have tested the performance of a few database flavors, reported the test results and given a recommendation to use the Oracle database. The database access is an integral part of a non-trivial mechanism, called the FED Server [83]. The creation of the FERO configuration data stored in the database involves both online and offline systems. This task requires a deep understanding of detector techniques and calibration procedures. The whole process involves detector data taking in calibration mode (which needs a synchronized operation of the DCS, DAQ and Trigger systems), offline processing of the acquired data and the creation of new calibration records using the results of the offline analysis. I have created a prototype of a part of this mechanism for the ALICE Silicon Pixel Detector as a test bench, for which I developed and tested the required software tools. This comprised the design, creation and maintenance of an efficient database schema to handle the FERO configuration data and the corresponding software to allow this data to be uploaded and downloaded. For this process I have created and tuned a prototype versioning mechanism for the SPD employing automatic version incrementing, which saves enormous amounts of disk space by storing only pointers to duplicate data, not the data itself (see 6.10.1). I have transferred this knowledge to other subdetector groups. I have tested various access patterns to different database data types (numbers, chars, Binary Large Objects (BLOBs), etc.). I have helped other subdetectors to define their database schema and access patterns and provided consultancy and verification.

The DCS data required for the offline reconstruction is stored in the offline conditions database. Interfacing with offline is essential for data analysis and it also requires coordination with the ALICE offline team. I have co-developed and maintained a mechanism based on an AMANDA server (see 6.11). The AMANDA server receives requests from an offline client and retrieves the data from the archive. I have developed and maintained a PVSS project which simulates all data from all subdetectors relevant for offline reconstruction, in order to test and validate the AMANDA mechanism as well as the detector reconstruction algorithms. It is expected that the AMANDA server will transmit 60 MB of data to the offline system at the end of each run. The same AMANDA server-client mechanism is also used to transmit data to the HLT. The ALICE DCS system, its data archiving and the data transfer to the Offline and HLT have been successfully integrated in the experimental setup of ALICE and used during the SPD and other sub-detector pre-commissioning and commissioning, during the cosmic ray runs since 2007 and during the LHC startup.

In collaboration with the JCOP FRAMEWORK group I was also involved in the improvement proposals for and the testing of the archival DB. I prepared a set of guidelines and software tools which were used by the detector groups for implementing the access to the archival database. I have provided tools to change the PVSS archive type from local files to database archival for all subdetectors and assisted them in configuring PVSS archiving and making the archive type switch when needed. I have also provided them with tools to query the archived data. I have installed and maintained several Oracle test databases in the DCS lab and the production databases at LHC Point 2 (P2). I implemented data archiving for the test bench in the DCS lab. I was involved in the extremely successful PVSS-Oracle archiving performance tuning campaigns which resulted in the PVSS archiving capability being boosted from the initial ≈100 inserts per second to ≈150 000 inserts per second (see 6.10.5). The immense improvements achieved in PVSS-Oracle archiving have been made available not only for the SPD and the other ALICE detectors, but for all LHC experiments. I will describe this work in more detail in the following sections.

6.10.1 Configuration database

The Configuration database serves the applications which need to insert, update and retrieve the DCS configuration data, which can be divided into:

• System Static Configuration (e.g. which processes are running, managers, drivers, etc.)
• Device Static Configuration (device structure, addresses, etc.)
• Device Dynamic Configuration – Recipe (device settings, archiving, alert limits, etc.)
• ALICE add-on: FERO configuration, which is in fact also a device configuration, both dynamic and static.

The device static configuration uses one-dimensional versioning. It contains a complete description of the PVSS device. The device dynamic configuration (recipes) employs two-dimensional versioning – the operating mode (run type) and its version. It contains values for the device DPs and their proper alert configuration. Initially, a default recipe is loaded to the device as a part of its static configuration. The device dynamic configuration is handled via the JCOP FRAMEWORK configuration database tool.
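The two-dimensional versioning of recipes can be sketched in a few lines of Python. The keys and settings below are purely illustrative assumptions; the actual recipes are stored in the Oracle schema of the JCOP FRAMEWORK configuration database tool described next.

    # recipes keyed by (run type, version); "latest" = highest version for a given run type
    recipes = {
        ("PHYSICS", 1): {"HV_channel_0": 50.0, "HV_channel_1": 50.0},
        ("PHYSICS", 2): {"HV_channel_0": 48.5, "HV_channel_1": 50.0},
        ("COSMICS", 1): {"HV_channel_0": 45.0, "HV_channel_1": 45.0},
    }

    def latest_recipe(run_type):
        versions = [v for (tag, v) in recipes if tag == run_type]
        return recipes[(run_type, max(versions))] if versions else None

    print(latest_recipe("PHYSICS"))   # -> the settings of ("PHYSICS", 2)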

JCOP FRAMEWORK configuration database tool

The JCOP FRAMEWORK configuration database tool is a part of the JCOP FRAMEWORK distribution and is available for download on the JCOP FRAMEWORK pages. To optimize the performance and to simplify the database handling, the underlying database system is Oracle. The tool features an automated database setup. It offers user interfaces for simple schema creation and schema updates. The database connection setup is also straightforward, as there is no need to define an ODBC data source and the tool is independent of the actual Oracle client library version. Like PVSS and the JCOP FRAMEWORK, it is able to run on both Windows and Linux operating systems. For efficient data storage and lookup, it employs two-dimensional versioning – the operational mode (tag) and the version number. The tool handles the configuration data of any JCOP FRAMEWORK integrated device (e.g. Analog & Digital I/O, CAEN power supply, Wiener power supply, ELMB). Its user interface consists of several PVSS panels integrated in the Framework Device Editor and Navigator tool, which is the basic JCOP FRAMEWORK tool providing access to all other JCOP FRAMEWORK tools. The tool uses the concept of Recipes. A recipe contains grouped settings (values and alert configuration) for the dynamic configuration of a set of devices (device list) belonging to the same framework (also called hierarchy). The recipes are stored in the database, but for convenience and performance also in the recipe cache – a data point in PVSS. Recipes are manipulated by means of a transient recipeObject [80]. Recipes are identified by a Tag when used with the DB or by a Cache name when used with the recipe cache (e.g. voltages and alerts for HV channels in SuperModule0 for the low luminosity runs). I was responsible for the implementation and maintenance of the JCOP FRAMEWORK configuration database tool in the ALICE DCS environment. This consisted of creating user accounts, managing their data and providing guidance and first line support to the subdetectors.

FERO configuration database

The configuration of the Front-end and Readout Electronics (FERO) [83] is ALICE specific. Moreover, each detector has different requirements – e.g. access patterns, data volumes, data structures, etc. Upon a PVSS request, the FERO configuration data is downloaded by special software called the FED (Front-End Device) server. With simulated FERO configuration data I have evaluated the performance of a few database flavors. I reported the evaluation results and have given a recommendation to use the Oracle database. Depending on the architecture, the FERO configuration data can be stored either as human readable data or as BLOBs (Binary Large Objects). Based on the available information I have implemented and tested various access patterns and the data downloading performance using different database data types (numbers, chars, BLOBs). For these tests and also for the pre-commissioning I have installed and maintained several Oracle databases in the DCS laboratory. I also installed and maintain the production database at ALICE Point 2 with the assistance of the CERN IT division.

Data retrieval from Oracle server (BLOBs)

Large BLOBs were stored in the DB server connected to a private DCS network (1 Gbit/s server connection, 100 Mbit/s client connection). The concurrent retrieval rate of 150 MB FERO configuration BLOBs by 3 clients was measured to range from ≈3 to 11 MB/s per client, depending on the status of the database cache (there is no need to read the data from the disks if it is already in the cache, thus the first retrieval is slower and subsequent accesses are faster). The upper limit of 11 MB/s per client corresponds to the client network connection bandwidth. For comparison, the retrieval rate of the same 150 MB BLOB by 1 client on the CERN 10 Mbit/s network was measured to be at most 0.8 MB/s, even if cached.

Small BLOBs: the simulated test records consisted of 10 BLOBs of 10 kB each. There were 260 configuration records retrieved per test. The fraction of BLOBs to be retrieved was changed from random to shared (certain BLOBs were reused between configuration records) in steps of 10% to test the Oracle caching mechanism. The number of concurrent clients was varied from 1 to 4. The results of the transfer rate tests are shown in Fig. 6.19.
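The retrieval tests can be pictured with a small timing harness like the one below. It is only a sketch: the table and column names are hypothetical and the connection is assumed to be a Python DB-API connection to the Oracle server, whereas the actual measurements were performed with dedicated Oracle clients.

    import time

    def measure_retrieval(conn, record_ids):
        """Fetch configuration BLOBs and report the average transfer rate in MB/s."""
        cur = conn.cursor()
        total_bytes, start = 0, time.time()
        for rec_id in record_ids:
            # hypothetical table and column names, for illustration only
            cur.execute("SELECT config_blob FROM fero_config WHERE record_id = :id",
                        {"id": rec_id})
            blob = cur.fetchone()[0]
            data = blob.read() if hasattr(blob, "read") else blob
            total_bytes += len(data)
        return total_bytes / (time.time() - start) / 1e6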

Figure 6.19: The transfer rates (in kB/s) of simulated FERO configuration BLOBs as a function of the fraction of repeated (shared) BLOBs, for 1 to 4 concurrent Oracle clients.

The obtained results are comparable with the raw network throughput (direct copy between two computers). The DB server does not add significant overhead, even for concurrent client sessions. For the pre-installation phase, the configuration database server was a 2×Xeon (3 GHz) machine with a 2 TB SATA RAID array connected to a 100 Mbit/s switch. The production configuration database server consists of 6 dual-core, 2 GHz, 4 GB RAM database server nodes connected through FC to 3 disk arrays, each having 16 disks in an ASM RAID10 configuration, and using a 1 Gbit/s switch for the cluster interconnect. The measured transfer rates per client exceeded the requirements for normal operation already before the pre-installation phase.

Data retrieval from Oracle server tables using SPD data

The performance was also studied using a realistic example, the SPD FERO configuration data. The FERO configuration data of one SPD readout chip consists of 44 DAC settings. These are 8-bit, thus their values can vary from 0 to 255. There are 8192 pixels in one pixel readout chip. Each pixel can be masked and the threshold of each pixel can be adjusted by an additional 3-bit DAC. The additional pixel threshold adjustment feature, however, is not foreseen to be used and thus needs no configuration. There are in total 1 200 chips in the SPD. Each of the 120 half-staves also needs ≈64 kB of front-end configuration data for the MCM.

I have created, implemented and tuned a prototype versioning mechanism for the SPD. This included the database schema design and the software to upload and download the FERO configuration data to and from the database, employing automatic version incrementing which saves enormous amounts of disk space by storing only pointers to duplicate data, not the data itself. The schema design is shown in Fig. 6.20. The main table SPDVersion contains the global SPD version and its corresponding side A and side C versions. The version numbers are integers which are automatically incremented whenever a new version is stored. The Primary Keys (PK) and Foreign Keys (FK) ensure data integrity and consistency. The SIDAVersion and SIDCVersion tables contain the version information of the corresponding sector versions on side A and side C, respectively. The 20 sector version tables contain their half-stave versions, and these point to the 120 half-stave version tables, which themselves contain pointers to the actual FERO configuration data contained in the DACVersion, MCMVersion and MBRVersion tables. Since there are 10 pixel chips in one half-stave, 440 bytes are required to configure the 44 8-bit DACs. These are stored in the DACVect column. The MCM needs 64 kB of configuration data (the ACO BLOB), the API 6 bytes, the DPI 8 bytes and the GOL 4 bytes. The MBR column contains the noisy pixel coordinates in a variable length array of at most 1640 bytes. Similarly to the zero suppression in the detector readout, only the coordinates of pixels which are actually noisy are written. Since the SPD consists only of class I chips having less than 1% noisy pixels, and since the chip number and column number of a noisy pixel can be stored in one byte and its row number in a second one, 1640 bytes are enough to contain all noisy pixel coordinates for the commissioning and cosmic ray data taking. In case more noisy pixels emerge in the coming years, Oracle offers an easy way of enlarging the column size by altering the table.

The general policy for all subdetectors is that old configurations are never deleted or updated; instead new ones are created. The advantage of this schema design is that whenever a single parameter has to be changed, not all of the data except this parameter must be replicated, but only the corresponding part of the FERO configuration data for the given half-stave or, more generally, module.
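The space-saving idea – a new version stores pointers to unchanged configuration blocks, and only the changed block gets a new entry – can be sketched in plain Python as follows. This is an illustration of the mechanism only; the production implementation uses the Oracle tables of Fig. 6.20, and the payload sizes below are arbitrary.

    class ConfigStore:
        """Sketch of version storage that deduplicates unchanged half-stave data."""
        def __init__(self):
            self.data = {}        # data_id -> configuration payload (stored once)
            self.versions = []    # version number -> {half_stave: data_id}
            self._next_id = 0

        def _store(self, payload):
            # Reuse an existing entry if an identical payload is already stored.
            for data_id, stored in self.data.items():
                if stored == payload:
                    return data_id
            self.data[self._next_id] = payload
            self._next_id += 1
            return self._next_id - 1

        def new_version(self, halfstave_payloads):
            pointers = {hs: self._store(p) for hs, p in halfstave_payloads.items()}
            self.versions.append(pointers)       # automatic version increment
            return len(self.versions) - 1        # the new version number

    store = ConfigStore()
    v0 = store.new_version({"HS0": b"\x10" * 440, "HS1": b"\x20" * 440})
    v1 = store.new_version({"HS0": b"\x11" * 440, "HS1": b"\x20" * 440})  # only HS0 changed
    print(len(store.data))   # -> 3 payloads stored instead of 4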

The full SPD configuration was retrieved in less than 3 seconds on a single database instance using one disk.

Figure 6.20: The prototype of the SPD FERO configuration database schema design, employing a prototype automatic version incrementing mechanism which saves enormous amounts of disk space by storing only pointers to duplicate data, not the data itself. There are 147 look-up tables in total and 3 tables containing the actual configuration data (DACVersion, MCMVersion and MBRVersion). Primary and foreign keys ensure data integrity and consistency and boost data retrieval via their corresponding indexes.

The SPD has created its database schema design and access patterns based on these developments and tests and has also added more features. I have helped other subdetectors to define their database schemas and access patterns and provided consultancy and verification with help from the CERN IT department.

6.10.2 Archival Database

The DCS archive contains all measured values retrieved by PVSS which are tagged for archival. The archival database data is used by trending tools and allows access to historical values, e.g. for system debugging. The archival database is too big and too complex for offline data analysis, as it also contains information which is not directly related to physics data, such as safety, services, etc. One of the main priorities of the ALICE DCS is to assure reliable archival of its data. The conditions database contains a small subset of the measured values, extracted from the archival database.

For the PVSS archival all subdetectors have a separate schema created via the PVSS schema creation scripts located in any PVSS installation directory. This allows for easier maintenance: for example, only the concerned subdetector needs to stop its operation while its schema and clients are being upgraded. The schema versions are the same for all the subdetectors, currently version 8.1. Table 6.1 shows all schema names for the PVSS archival together with their starting suffix for history tables (see 6.10.3) and the last reserved suffix. The schema names follow a simple rule: the 3-letter detector naming convention + an "arch" suffix. COOARCH is dedicated to cooling related application data (the dcs_gas project). The DCSARCH schema is used by many DCS services like racks and global parameters (magnetic field, temperature, pressure, etc.). The following projects currently use the DCSARCH schema: dcs_env, dcs_sysmon, dcs_magnet, dcs_rack, dcs_globals, lhc_exchange, lhc_monitoring.

Schema name | Starting suffix for history tables | Last reserved suffix
trgarch | 00000000 | 02999999
ssdarch | 03000000 | 05999999
spdarch | 06000000 | 09999999
sddarch | 10000000 | 12999999
mtrarch | 13000000 | 15999999
cooarch | 16000000 | 18999999
zdcarch | 19000000 | 19999999
acoarch | 20000000 | 20999999
fmdarch | 21000000 | 21999999
t00arch | 22000000 | 22999999
v00arch | 23000000 | 23999999
dssarch | 24000000 | 24999999
pmdarch | 25000000 | 25999999
phsarch | 26000000 | 26999999
cpvarch | 27000000 | 27999999
emcarch | 28000000 | 28999999
trdarch | 35000000 | 39999999
mcharch | 40000000 | 49999999
tofarch | 50000000 | 59999999
tpcarch | 60000000 | 69999999
hmparch | 70000000 | 79999999
dcsarch | 90000000 | 99999999

Table 6.1: All schema names for the PVSS archival and their starting suffix for history tables. The last reserved suffix is shown as well.

6.10.3 PVSS Archiving

The initial PVSS archiving mechanism was only file-based: the PVSS archives are stored in local files, with one set of files per PVSS system. The simplified archiving mechanism is shown in Fig. 6.21. The last acquired data resides in memory and is written into the Current History files. If a file exceeds a predefined size, a file switch occurs: the Current History file(s) is changed to an Online History file while a new Current History file is created. Those Online History files which are no longer needed are changed to Offline History files and can be brought online again if necessary. Offline archives are stored on backup media. The tools for backup and retrieval are part of PVSS.
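A rough Python sketch of this size-triggered file switching is given below. The file naming and the size limit are assumptions chosen for illustration; the real PVSS archive manager handles several file sets and value groups.

    import os

    class FileArchive:
        """Sketch of size-triggered file switching (simplified PVSS-like behaviour)."""
        def __init__(self, directory, max_bytes=1_000_000):
            self.directory = directory
            self.max_bytes = max_bytes
            self.index = 0
            os.makedirs(directory, exist_ok=True)

        def _current_path(self):
            return os.path.join(self.directory, "history_%08d.dat" % self.index)

        def archive(self, line):
            path = self._current_path()
            with open(path, "a") as f:
                f.write(line + "\n")
            if os.path.getsize(path) > self.max_bytes:
                self.index += 1    # file switch: the old file becomes online history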

Although this approach had certain strengths – for example simplicity, little configuration needed and pre-setup of new PVSS projects – it also had major drawbacks. The files were written in a proprietary format, so no application except PVSS was able to read them. The database system (the RAIMA database) was not transaction-based, so an unexpected power outage or a computer crash often led to data corruption. Entries for different archived channels were pre-allocated, leading to huge space wasting in case a channel archived its value more often than the other channels. The impact of this could be slightly eased by using up to 6 different Archive Managers per PVSS project, each serving channels with roughly the same archival frequency.

Figure 6.21: A schematic drawing of the PVSS data archiving mechanism. The last acquired data is written from the memory (LastVal and LastValValues) into the Current History (or HistoryValues) file. Once the file exceeds a predefined size, a file switch occurs: the Current History file is changed to an Online History file and a new Current file is created. Online History files which are no longer needed are changed to Offline History files and can be brought online if necessary. Figure adapted from [98]

These drawbacks showed the necessity of a different PVSS archiving implementation. ETM has developed a separate manager called the Relational Database Manager (RDB), which is capable of archiving the data into an Oracle database. The ALICE DCS provided file-based archival during the pre-installation phase. This was replaced with the PVSS-Oracle archiving when it qualified for deployment, after its performance tuning (see 6.10.5). I have provided consultancy, support and a set of guidelines, procedures and tools to switch the subdetectors' PVSS projects from file-based archiving to database archiving. This process ranged from DB installation, configuration and maintenance through schema creation and configuration to PVSS project altering and tools development. The UI of one of these tools is shown in Fig. 6.22. It was used to change (and also remove) the archive class of DPEs. The tool was based on the developments of Jim Cook for ATLAS and proved to be very useful for switching from file to database archiving. Tools for migrating already existing archival data from files to the RDB were provided by ETM and tested and used by me.

6.10.4 PVSS-Oracle Archiving

The PVSS RDB archival is replacing the previous archiving method based on local files. An Oracle database is a prerequisite: although ETM had the intention to be database flavor independent, currently only an Oracle database is supported and this will not change in the coming years. The archiving architecture resembles the previous concept based on local files. For each schema and each defined group of archived values a set of tables is generated:

Figure 6.22: A picture of the PVSS panel developed for changing and removing the archive class of DPEs. This panel proved to be indispensable for switching PVSS projects already archiving to files over to Oracle archiving. It is based on the work of Jim Cook.

The LastVal table is an Oracle temporary table (residing in the server memory and available only to the current Oracle session) storing the most recently acquired values. The Current table holds the history record of the measured values; an Oracle trigger was initially used to write data to this table. If its size exceeds a predefined value the table is closed, put into read-only online status and a new Current table is created. The latest available values are copied to the new table automatically. Usually there is a maximum of 4 Online tables. Older Online tables are switched to Offline status, are thus no longer available and can be removed from the storage after their backup. Each schema also comprises a set of internal tables for the archival behavior and parametrization. All archive tables, except the temporary one, use their own Oracle tablespace, which roughly replaces the file from the file-based archival. Several parameters are stored for each datapoint element, of which the most notable are its value, the acquisition timestamp and reliability flags. All PVSS-based tools are compatible with the database archiving approach. To ease the retrieval of the archived data for the subdetectors, I have provided a PVSS panel (Figs. 6.23, 6.24) based on the developments done by the JCOP FRAMEWORK.

When the archiving mechanism based on Oracle was delivered by ETM, it was tested by JCOP and ALICE and also by the other LHC experiments. Several problems were discovered; the main concern was the performance, which was ≈100 inserts per second per PVSS system. This was clearly not enough to handle an alert avalanche in a reasonable time. The poor archival performance was confirmed also by other groups. ATLAS demonstrated a way of inserting 1000 changes/s by modifying the database. In order to exclude a possible incorrect server configuration, additional tests were carried out in the ALICE DCS lab: data was inserted into a 2-column table (number(38), varchar2(128), no index). The following results were obtained when inserting 10^7 rows into the database table:

Figure 6.23: A picture of the PVSS panel developed for querying the archived data in the database. It is capable of querying the data from different PVSS projects, provided the projects are running in a distributed system. The user can select one, several or all DPs of a particular DP type, then one or more DPEs, select the time range and bonus values, and even set the query to run at regular intervals. It is based on a PVSS panel developed by the JCOP FRAMEWORK which was used during the PVSS-Oracle archiving performance tuning.

Figure 6.24: A picture of the PVSS panel displaying the results (the DPE, value and timestamp) of the query executed from the panel above. It is based on a PVSS panel developed by the JCOP FRAMEWORK which was used during the PVSS-Oracle archiving performance tuning.

• OCCI autocommit of each row: ≈500 inserts/s

• PL/SQL using bind variables: ≈10 000 inserts/s

• PL/SQL using varrays: ≈73 000 inserts/s with a single client, ≈42 000 inserts/s per client with 2 concurrent clients, ≈25 000 inserts/s per client with 3 concurrent clients

Comparing these test results with the achieved performance of the PVSS archival, it became evident that the archiving and the interface implementation needed architecture and performance tuning. I will describe this in the following section.
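The gap between per-row inserts and array-bound (bulk) inserts measured above can be illustrated with a Python DB-API sketch. The table and column names are hypothetical and the connection object is assumed to come from an Oracle driver such as cx_Oracle; the point is only the difference between one round trip per value and one round trip per batch.

    def insert_row_by_row(conn, rows):
        """One execute and commit per value - the slow pattern."""
        cur = conn.cursor()
        for num, text in rows:
            cur.execute("INSERT INTO test_table (num_col, text_col) VALUES (:1, :2)",
                        (num, text))
            conn.commit()

    def insert_bulk(conn, rows):
        """Array binding: one round trip for the whole batch - the fast pattern."""
        cur = conn.cursor()
        cur.executemany("INSERT INTO test_table (num_col, text_col) VALUES (:1, :2)",
                        rows)
        conn.commit()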

6.10.5 PVSS-Oracle Archiving Performance

An extensive test campaign of the PVSS-Oracle archiving ([99], [100], [101]) took place at CERN during April and May 2006. Manpower from various groups was involved, including IT-CO (now EN-ICE), IT-DES, IT-PSS (now IT-PDB), ETM, Oracle and PH-AIT. The aim of this test campaign was to evaluate and improve the performance and scalability of the PVSS RDB archiving. The following hardware and software were used:

• On the client side: a mixture of Windows clients (20 PCs from the ALICE DCS lab, 4 PCs from the IT-CO lab) and Linux clients (44 PCs from LHCb Point 8, and later 120 PCs from the LXBATCH Computing Centre). The PCs had PVSS version 3.1 installed, with the cumulative patch, patches 167 and 203, and JCOP FRAMEWORK version 2.3.6.

• On the server side: an Oracle RAC server with 1, 2, 4 and 6 nodes respectively (3 GHz dual-processor Xeon, 2 GB RAM), 32 SATA disks used by ASM RAID10 over Fibre Channel. Oracle Real Application Cluster version 10.2.0.2 was used with Red Hat Enterprise Linux 4.

The data was generated with simulator drivers in the clients, configured to change the values of DPEs at the required rate.

Performance tuning in Oracle is based on the Oracle wait interface. In Oracle version 10, features like the Automatic Workload Repository (AWR) and Active Session History (ASH) were introduced. They provide automatic, server-wide collection of Oracle wait event information for the sessions. The PVSS-Oracle performance tuning method used during this test campaign was based on iterative AWR and ASH gathering and performance tuning, until the required performance of 150 000 inserts per second was achieved.

Initial Performance

The initial performance tests confirmed that the out-of-the-box PVSS RDB archiving performance was about ≈100 events inserted per second per client. The database server was waiting on the clients; the corresponding wait event is called SQL*Net message from client. This was due to the fact that each individual change of an archived DPE in the PVSS system was sent to the database without any grouping. Also, a generic C++ database API was used in order to be independent of the database flavor, which prevented the use of many optimizations available in Oracle. Another identified bottleneck was a database-side trigger which was responsible for inserting the data from the temporary LastVal table into the actual History table. This trigger fired each time a new value arrived.

Client API Change

Following the initial tests and recommendations, ETM implemented the RDB manager using the Oracle native client libraries (OCCI). The main advantage was the possibility of bulk insertion: values to be archived are first stored locally in a client memory block and, once the block is full or a timeout occurs, the data block is transferred to the temporary table residing in the database server. The block size and the timeout are both configurable in any PVSS project. Another speedup was achieved by removing the trigger: after bulk loading a block of values to be archived into a temporary table, the software now makes a call to a PL/SQL package. After implementing these changes the archiving performance was ≈2 000 values inserted each second from a client. This is satisfactory, since ≈1 000 inserts per second from one client are expected during the ramp-up and ramp-down phases before and after a physics run.
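The block-or-timeout buffering implemented in the RDB manager can be sketched as follows. The parameter names and the flush callback are assumptions made for the illustration; in the real system the flush bulk-loads the block into the temporary table and then calls a PL/SQL package.

    import time

    class ArchiveBuffer:
        """Buffer archived values; flush when the block is full or a timeout expires."""
        def __init__(self, flush, block_size=1000, timeout=5.0):
            self.flush = flush            # callback that bulk-loads one block into the DB
            self.block_size = block_size
            self.timeout = timeout
            self.block = []
            self.last_flush = time.time()

        def add(self, value):
            self.block.append(value)
            full = len(self.block) >= self.block_size
            expired = time.time() - self.last_flush >= self.timeout
            if full or expired:
                self.flush(self.block)
                self.block = []
                self.last_flush = time.time()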

Server Schema Change

After the optimizations on the client side, a test was run with a group of clients each inserting at a rate of 1000 changes/s. The database server could handle around 20-30 clients, depending on the client block size which holds the values to be archived. This number of clients was however below the 70 PVSS clients needed by the ALICE DCS (or even the 150 required for ATLAS). For a further speedup the Oracle RAC technology was needed. Adding a second node to the database resulted in a higher maximum insert rate, however it was far from double the single-node insert rate. The AWR and ASH pointed to Oracle I/O wait events (db file sequential read) and also showed that the nodes were taking exclusive locks on the history tables for long periods, thus interfering with each other. A first solution was to reduce the time the locks were held by using a direct path insert into the Oracle tables. During a direct path insert any integrity constraints are disabled and the indexes are only updated once the insertion finishes; the database buffer cache is also bypassed. To reduce the I/O needs, an index usage analysis was performed and resulted in the reduction from 3 indexes to 2. The possible use of Index Organized Tables (IOT) was also analyzed. The direct path insert and the I/O reduction gave a promising result: with 56 clients and a 2 (later 4) node RAC server we were able to insert continuously ≈56 000 (later 100 000) changes/s. The next step was to move to a 6 node database server in order to handle the 150 000 inserts per second. However, adding the 2 nodes did not have the assumed benefit. The Oracle top wait events indicated that the system was slowed down by cluster-related wait events: the system was not scaling properly because the nodes had to communicate often to keep the coherency of the tables they were inserting into. Further optimizations were needed on the server side.

RAC-aware Change

In order to reduce the cluster contention, the tables had to be partitioned, so that each client inserts its data into its own table partition and the systems do not interfere with each other. Advantage was taken of the fact that each client inserts, along with the archived data, its own identifier (the PVSS system ID) into the tables. The History tables are partitioned by this identifier. Partitioning finally made the application scalable on the server side, so it is possible to add more nodes to the server if higher performance is required. With 150 clients and a 6 node RAC server we were able to insert continuously ≈150 000 changes/s. However, a significant slowdown was encountered whenever the Current tablespace got full and a new one started to be created. The solution was to create the tablespaces with a background job well before they are needed.
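The server-side change can be pictured as list-partitioning the history table by the PVSS system identifier, so that each client writes into its own partition. The DDL below is only a schematic sketch generated as a Python string; the table and column names are assumptions and do not reproduce the actual PVSS RDB schema.

    def partitioned_history_ddl(system_ids):
        """Build a schematic DDL string for a history table list-partitioned by system ID."""
        partitions = ",\n  ".join(
            "PARTITION p_sys%d VALUES (%d)" % (sid, sid) for sid in system_ids
        )
        return (
            "CREATE TABLE event_history (\n"
            "  sys_id   NUMBER,\n"
            "  element  NUMBER,\n"
            "  ts       TIMESTAMP,\n"
            "  value    NUMBER\n"
            ") PARTITION BY LIST (sys_id) (\n  " + partitions + "\n)"
        )

    print(partitioned_history_ddl([1, 2, 3]))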

Although it looks as if only 2-3 RAC nodes would be needed to satisfy the subdetector archival needs during the ramp-up and ramp-down periods, this is an underestimate, since the tests did not include the additional load on the RAC server resulting from queries for archived data and from inserts and updates for alarms. A long-term stability test was also performed with 170 clients each inserting at ≈500 changes/s and a 6 node RAC server: ≈70 000 changes/s were inserted continuously over 14 hours. For inserts of 170 clients each at 1000 changes/s, when the clients were inserting the data at the same time because of their simultaneous start, the DB was not able to fully handle the load. The database could cope with the load when the clients inserted the data randomly, which corresponds better to reality.

6.10.6 ALICE DCS database operation and maintenance

The Expected Configuration Data Volumes during Normal Operation

There are four main DCS subsystems, scattered over ≈70 computers, contributing to the database load:

• The Low Voltage subsystem (LV)

• The High Voltage subsystem (HV)

• The Front-end and Readout Electronics subsystems (FERO)

• Infrastructure

The LV subsystem contains ≈3 000 channels. One LV channel stores ≈16 of its properties (each 64 Bytes) in the Configuration database. The HV subsystem consists of ≈20 000 channels. One HV channel stores ≈26 of its properties (each 104 Bytes) in the Configuration database. The FERO subsystem consists of ≈50 000 channels. One FERO channel stores ≈8 of its properties (each 64 Bytes) in the configuration database. The Infrastructure subsystem consists of ≈1000 channels. One infrastructure channel stores ≈8 of its properties (each 64 Bytes) in the configuration database. However, this data is not always enough to configure the system, since several versions of the configuration are and will be needed, e.g. for different running modes, debugging, etc. Therefore the total volume of data stored in the configuration database can grow by a factor of 100-1000, resulting in a total of 200-500 MB of configuration data to be stored in the ConfDB per year. The current data volume is 147 MB.

The configuration database also contains the FERO configuration data. These are stored as BLOBs with sizes ranging from 64 kB to 4 MB, amounting in total to 50-100 MB of information yearly. The largest part of the FERO configuration data is stored as data records with a total size of 400-600 MB yearly. The estimated total amount of FERO configuration data stored in the database is ≈1-5 GB yearly, as all data versions are archived. Currently, the FERO configuration data volume amounts to 601 MB.

To summarize: in order to configure the full controls system, roughly ≈100 MB of data needs to be downloaded from the configuration database, and roughly ≈1-6 GB of data is stored in the configuration database yearly.

The Archived data

The data acquired by the devices is not synchronized to any external event. A typical device performs an internal polling of its registers. The gathered values are then read out by the device drivers, which are based on OPC or DIM servers. In case values have changed since the previous measurement, they are transmitted to PVSS where, again, if these values exceed the predefined dead bands, PVSS will automatically archive them. The DCS is configured to archive data on change.

The ALICE DCS archives roughly ≈74 000 of the monitored channels, which are polled at a typical frequency of 1 Hz. Thus, in the worst case, the expected load is 74 000 changes per second. Each archived value is time stamped. The present model writes ≈150 Bytes for each archived value into the database. This does not include the database redo logs.

The operation of the experiment requires stable DCS conditions, so, as during the cosmic runs, we expect also during physics runs some hundreds of changes per second for the whole system. During the configuration of the experiment or the ramping of the voltages, the changes will be more frequent. We can expect a burst of data coming from all 74 000 channels at a rate of 1 Hz, lasting several minutes. A rough estimate of the typical DCS operation is summarized in table 6.2.

Phase | Typical action | Archival | Duration
1 Configuration | Read all data needed for DCS | 100-1000 changes/s | Minutes
2 Ramping up | Turn on the experiment | ≤ 74 000 changes/s | Minutes
3 Running | Several runs with different duration | 100-1000 changes/s | Hours
4 Ramping down | Turn off the experiment or go standby | ≤ 74 000 changes/s | Minutes

Table 6.2: The typical DCS operation cycle

Usually, the DCS configuration data needs to be read from the database once. This data will then be cached in Oracle, PVSS and the FED servers. Thus, the DCS can react to the LHC status, ramp the voltages to safe values and proceed immediately with phase 2 if the beam conditions allow for this.

If ALICE decides to change the running mode, the front-end electronics might need to be reconfigured. In this case the DCS operation cycle will start from the beginning, reading the data from the configuration database. Phase 2 might be skipped if the voltages are already ramped. This particular case will happen if the decision to change the running mode is taken during the run (stable beams).

During real operation it is expected that the detectors will benefit from the periods outside the runs to perform local tasks (maintenance, calibration, etc.). In this case it might be necessary to reconfigure the subdetector several times, which involves database access.

The regular read access to the archived DCS data in ALICE

There are three main access types to the DCS database: the operator access, the transfer of the DCS data to the offline conditions database and the data exchange with the High Level Trigger. The operator typically displays archived values in order to monitor or debug his system. This access covers minutes to hours of running and the operator might want to display 10-100 channels at the same time. We can predict that the shifters will make daily reports, which might require the retrieval of data for a period of 24 hours, covering 100-1000 channels. In total we can expect that all values archived during the last 24 hours will be retrieved at least once per day for every DCS channel. In case the need arises to limit the database resource usage, it is possible to restrict by policy the excessive access to the archive during critical periods when the database is heavily loaded (ramping, etc.). The Offline uses the shuttle service, which collects the conditions data at the end of each run. The shuttle will retrieve archived values of thousands of channels for a time period defined by the duration of the run. The DCS controls the access to the archive by the shuttle and can refuse read requests if the archive is heavily loaded. The interface to the HLT will deliver data at the beginning of each run. The data will contain actual readings for thousands of channels. This information will be refreshed roughly every 10 minutes. Also here, the access to the archive database can be restricted during high load periods, if needed.
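A typical operator or shuttle read can be pictured as a time-windowed query over the archived values. The sketch below again assumes hypothetical table and column names standing in for the PVSS RDB schema, and a Python DB-API connection.

    def read_channel_history(conn, element_id, t_start, t_end):
        """Fetch (timestamp, value) pairs for one archived channel in a time window."""
        cur = conn.cursor()
        cur.execute(
            "SELECT ts, value FROM event_history "
            "WHERE element = :el AND ts BETWEEN :t0 AND :t1 ORDER BY ts",
            {"el": element_id, "t0": t_start, "t1": t_end},
        )
        return cur.fetchall()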

The number of clients

The ALICE DCS will run approximately ≈70 RDB clients reading or writing data coming from PVSS into the Oracle archive. A few clients are the FED servers, which read or write the FERO data from the configuration database. Another 18 clients will be using the JCOP FRAMEWORK configuration database tool, and approximately 20 operators working on the DCS can also be expected. There is an AMANDA server (see 6.11) for the offline shuttle service, which can open up to 20 parallel connections to the database. The same AMANDA server will provide data to the High Level Trigger. In case the performance is unsatisfactory, a dedicated AMANDA server can be started for the HLT.

Expected PVSS Archival Data Volumes

The ALICE Detector Control System runs 24 hours a day and 365 days a year. We expect roughly 100-1000 changes/s to be written to the archive at any given time. On top of that, there will be ≈400 physics runs per year in the worst case (or in the best case from the physics point of view). The typical duration of a physics run, if nothing goes wrong, is 8 hours. As described earlier, the following data rates are expected: 74 000 changes/s during the ramp up (≈2 minutes) and ramp down (≈2 minutes), and some 100-1000 changes/s during the 8 hour physics runs. At the moment it is still not clear whether the stable rate is closer to the 100 changes/s or the 1000 changes/s side. The following simple formula was used to estimate the yearly data volume of the ALICE DCS archival DB:

V = (400 × (t_rampup + t_rampdown) × 74 000 + (t_year − 400 × (t_rampup + t_rampdown)) × D) × R    (6.1)

where: V is the estimated data volume for one year of operation in Bytes, t_rampup is the time of ramping up (typically 120 seconds), t_rampdown is the time of ramping down (typically 120 seconds), t_year is the number of seconds in one year (≈31 536 000 seconds), D is the data rate in changes/s at any time except ramping, and R is the number of Bytes in the DB per row, i.e. the Bytes needed in the DB to store one change including the corresponding indexes. Currently the PVSS archiving comes with 3 indexes for the EventHistory table. Investigations are currently being done on the possibility to implement a special function in PVSS so that only 1 index would be necessary for the ALICE online and offline operation. R is ≈150 Bytes if 1 index is used and ≈190 Bytes in case 3 indexes are used.

Using formula 6.1, an estimate can be made of the yearly data volumes for different scenarios (from 0 to the maximum possible 74 000 changes/s outside the ramping periods). The results are given in Table 6.3 in Gigabytes:

D (changes/s)        0        10       100     1 000    10 000     74 000
1 index         992.42   1036.34   1431.63   5384.53  44913.59  326009.09
3 indexes      1257.06   1312.70   1813.40   6820.41  56890.55  412944.85

Table 6.3: The estimated DCS archive database yearly data volumes for different scenarios (1 or 3 indexes on the EventHistory table and from 0 to 74 000 archived changes/s outside the ramping periods)
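
The numbers in Table 6.3 follow directly from formula 6.1. As a cross-check, the short Python sketch below reproduces the tabulated values; the only assumption not stated explicitly in the text is that the Gigabytes in the table are binary (2^30 Bytes), which is what the tabulated figures imply.

```python
# Cross-check of Table 6.3 using formula 6.1.
# Assumption (not stated in the text): 1 GB = 2**30 Bytes, inferred from the table.
RUNS = 400                 # physics runs per year
T_RAMP = 120 + 120         # ramp up + ramp down per run, in seconds
T_YEAR = 31_536_000        # seconds in one year
RAMP_RATE = 74_000         # archived changes/s while ramping

def yearly_volume_gb(d, r):
    """Formula 6.1: d = changes/s outside ramping, r = Bytes per archived change."""
    changes = RUNS * T_RAMP * RAMP_RATE + (T_YEAR - RUNS * T_RAMP) * d
    return changes * r / 2**30

for d in (0, 10, 100, 1_000, 10_000, 74_000):
    print(f"D = {d:>6} changes/s: "
          f"1 index {yearly_volume_gb(d, 150):10.2f} GB, "
          f"3 indexes {yearly_volume_gb(d, 190):10.2f} GB")
```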

As Table 6.3 shows, the expected yearly data volume of the PVSS archiving is dominated by the ramp up and ramp down periods if only some 100-1000 changes per second are written during the non-ramping periods. Therefore, an active application of a DCS policy is needed to strongly discourage the DB service consumers from writing more than 1000 changes/s in total. The ramp up and ramp down periods used in this estimate are 2 minutes to ramp up and 2 minutes to ramp down for each run. These are crucial parameters for the estimate of the total yearly data volume, yet it is still unknown how well these two estimated parameters reflect the real operation. The presented estimate reflects the worst case scenario, since some voltages in the subdetectors might not need to be ramped down and back up again between consecutive physics runs. Currently, after 2 years of pre-commissioning, commissioning and startup, the archival database data volume is 902 GB.

Backup Strategy

The backup strategy is based on ORACLE Recovery Manager (RMAN) 10g. This strategy allows for backups to tape and also to disk, thus substantially reducing the recovery time for many common recovery scenarios.

The PVSS RDB archiving mechanism is the only database application using the SQL UPDATE and DELETE commands. All other database applications only insert or query data. Thus there is no need for using the point-in-time database recovery mechanism, which is able to recover the database from a user error, and consequently the retention period does not need to be longer than a few weeks (currently 4 weeks). As high availability is required, in case the need to recover the database arises, the database is required to be back as soon as possible. Therefore the Oracle-suggested backup strategy is being used: Oracle RMAN with a retention period of a few days using Oracle's incremental backup and incrementally-updated backup features. The level 0 incremental backup (the full DB backup) and the current level 1 backup are stored on the same disk array. The archival DB will contain the data only for the last 2 years. Tablespaces containing older data are of no interest and will therefore be put offline and moved onto tapes. ORACLE RMAN first copies all data files into the flash recovery area residing on a different storage array than the data files, and then all the subsequent backups are differential. A technique called Incrementally Updated Backups is used to maintain this type of backup. ORACLE's block change tracking feature is used to significantly reduce the latency and the load of the incremental DB backups, since only changed DB blocks are read during a backup with this optimization. The backup-on-disk retention is set to 4 weeks and allows for database recoveries within that time frame. The schedule for the backup on disk is as follows: Full - at database creation; Incremental - daily.
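
The daily schedule described above maps onto a small set of standard RMAN commands. The following Python sketch is only a hypothetical illustration of the Oracle-suggested incrementally-updated backup strategy; the tag name, temporary command file and the assumption of OS authentication ("target /") are placeholders and not the production scripts used for the ALICE DCS database.

```python
# Hypothetical illustration: roll the daily level 1 incremental backup forward
# into the level 0 image copy (Oracle's "incrementally updated backup") and
# keep a 4-week recovery window. Tag name and invocation details are made up.
import subprocess
import tempfile

RMAN_SCRIPT = """
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 28 DAYS;
RUN {
  RECOVER COPY OF DATABASE WITH TAG 'dcs_incr_update';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY
    WITH TAG 'dcs_incr_update' DATABASE;
}
"""

def run_daily_backup():
    # Write the command file, then hand it to the RMAN command-line client.
    with tempfile.NamedTemporaryFile("w", suffix=".rcv", delete=False) as f:
        f.write(RMAN_SCRIPT)
        cmdfile = f.name
    subprocess.run(["rman", "target", "/", f"cmdfile={cmdfile}"], check=True)

if __name__ == "__main__":
    run_daily_backup()
```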

To summarize: The expected size of the ALICE DCS DB is ≈7 TB per year (in a reasonable worst-case scenario). During the first 2 years the backup on the disk and its copy on the tapes will be slightly bigger than the DCS DB, as we will use incremental backups. Later, the backup on the tapes is expected to grow at a rate of 7 TB yearly, while the size of the DCS DB and its disk backup remains almost constant.

ALICE DCS database migration

A few weeks before the ALICE cosmics/first physics run in summer 2008 started, a campaign was launched to migrate all data from the single-instance ALIDCSDB database onto the new 6-instance ALIONR database. The whole migration was accomplished very quickly and was performed one detector schema after another in order to minimize the impact on the commissioning phase of every subdetector.

In the migration, the PVSS reader application schemas (called XXXAPPL, where XXX is the 3-letter detector code according to the naming convention in [102]) were omitted, since they were not used on ALIDCSDB. Also the role R_APP_PVSSRDB was not created, since it is only used by the APPL schemas. All XXXARCH schemas now use a single global TEMP tablespace. For each XXXARCH schema: all EventHistory_ tables were put into a single tablespace (called XXXARCH_EVENT_NN000001), all AlertHistory_ tables were put into a single tablespace (called XXXARCH_ALERT_NN000000) and all other PVSS tables were put into a single tablespace (called XXXARCH_DATA_01).
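
As an illustration of the resulting layout, the fragment below generates the kind of ALTER TABLE ... MOVE TABLESPACE statements used to consolidate the per-schema history tables as described above. The detector codes and concrete table names are placeholders; only the tablespace naming scheme is taken from the text.

```python
# Illustrative only: emits Oracle DDL that moves the history tables of each
# XXXARCH schema into the three consolidated tablespaces named in the text.
# Detector codes and table names below are placeholders, not the real lists.
DETECTORS = ["SPD", "TPC", "TOF"]            # 3-letter codes per [102]
EVENT_TABLES = ["EVENTHISTORY_00000001"]     # placeholder table names
ALERT_TABLES = ["ALERTHISTORY_00000000"]

for det in DETECTORS:
    arch = f"{det}ARCH"
    for t in EVENT_TABLES:
        print(f"ALTER TABLE {arch}.{t} MOVE TABLESPACE {arch}_EVENT_NN000001;")
    for t in ALERT_TABLES:
        print(f"ALTER TABLE {arch}.{t} MOVE TABLESPACE {arch}_ALERT_NN000000;")
    # all remaining PVSS tables of the schema would go to XXXARCH_DATA_01
```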

Maximum Availability Architecture and Oracle Streams

Recently, a Physical Standby Database [103], which belongs to the Maximum Availability Architecture best practices [104], was set up in the IT Computer Centre, consisting of 2 nodes. The redo log apply latency is set to 3 days, during which a human error can be easily corrected if detected within the 3 days. This recovery window can be enlarged; however, the larger it is, the longer it takes to perform a database role switchover, should the need arise to switch the primary database to the physical standby database, since it is necessary to apply all the redo logs of the recovery window.

Lately, Oracle Streams replication [105] of 2 PVSS archiving schemas to the PDBR database in the IT Computer Centre was successfully established in order for the detector experts to access their data almost instantaneously: in most cases, the newly produced data in the ALIONR database can be accessed within a few seconds in the PDBR replica database. This is necessary to offer the possibility to access the PVSS data also from CERN and not only from the ALICE experimental pit. The plan is to replicate all subdetector PVSS schemas in the near future.

6.11 AMANDA

The AMANDA server (ALICE Manager for DCS Archives) [90] is a software tool to access the DCS archives. The first release of AMANDA was a PVSS manager which used the PVSS API to access both archive types, either file based or database based. The advantage was that the archive architecture was transparent to AMANDA, as it was connecting to it through the DM (Data Manager). However, the disadvantages were that AMANDA added additional load to the running PVSS system and, mainly, the limitation of the PVSS DM: as the DM was not multi-threaded, the requests were served sequentially as they arrived. This limited the achievable transfer rates and triggered the development of the AMANDA II server, which is independent of PVSS and accesses the database directly. Multiple threads can be spawned; however, access to file-based archives is no longer possible. Another advantage is that the AMANDA server has access to any archived DPEs, even if they are not part of a distributed PVSS system. Also, the load imposed by the AMANDA server is limited to the PC(s) where it runs and, of course, the database server.

AMANDA is used as an interface between the PVSS archive and non-PVSS clients. The AMANDA data transfer mechanism consists of several components: the Windows server, the AMANDA communication protocol and the ROOT client and interface to the offline. A Windows C++ GUI client and a Ruby [106] CLI (Command Line Interface) exist as well. The AMANDA communication protocol was developed in collaboration with the offline team. The ROOT client and the interface to the offline were developed by the offline team. The HLT client was built by the HLT team based on the offline client.

The AMANDA server returns data for the time period requested by the client. After receiving the connection request, AMANDA creates a thread which handles the client request. AMANDA checks the existence of the requested DPE in the archival database and returns an error if it is not available. AMANDA retrieves the archived data values and their corresponding time stamps from the archival database and sends them back to the client in formatted blocks. The time stamps are transformed into the Unix time stamp format inside the database by a PL/SQL function named TS2UTS.
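
The request-handling flow just described (one thread per client, existence check, block-wise reply) can be sketched as follows. This is a hypothetical Python analogue for illustration only, not the actual Windows C++ server: the wire format, port number and the two helper functions are invented placeholders.

```python
# Minimal sketch of the AMANDA II request-handling pattern: one thread per
# client connection, existence check of the requested DPE, then the archived
# values are streamed back in blocks. All protocol details are assumptions.
import socket
import threading

BLOCK = 4096  # hypothetical block size

def dpe_exists(dpe):                    # placeholder for the lookup in the archive
    return True

def fetch_archive(dpe, t_from, t_to):   # placeholder for the Oracle query
    yield b"..."                         # formatted (timestamp, value) blocks

def handle(conn):
    with conn:
        dpe, t_from, t_to = conn.recv(BLOCK).decode().split("|")
        if not dpe_exists(dpe):
            conn.sendall(b"ERROR: unknown DPE")
            return
        for chunk in fetch_archive(dpe, t_from, t_to):
            conn.sendall(chunk)

srv = socket.create_server(("", 8880))   # port number is illustrative
while True:
    conn, _ = srv.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```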

In order for the AMANDA server to be able to retrieve data from the archival database for any archived DPE from all subdetectors, a special lookup table named alias4offline was created. It contains a list of all archived DPEs, their corresponding aliases, unique PVSS identifications (DPEID), data types and their originating subdetector. The table is populated by concatenating all subdetector element tables. This is achieved by executing a PL/SQL script which currently has to be run manually, since the script is not final as new PVSS data types are still being introduced.
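
A query of the kind issued against this lookup table might look as follows; this is a hedged sketch using the cx_Oracle client, where the alias4offline table and the TS2UTS function come from the text, but the column names and the EVENTHISTORY view are assumptions about the PVSS RDB schema.

```python
# Sketch of resolving a DPE alias via alias4offline and reading its archived
# values. Column names (ALIAS, ELEMENT_ID, TS, VALUE_NUMBER) and the
# EVENTHISTORY name are schema assumptions, shown for illustration only.
import cx_Oracle

def read_dpe(conn, alias, t_from, t_to):
    sql = """
        SELECT TS2UTS(e.TS), e.VALUE_NUMBER
          FROM EVENTHISTORY e
          JOIN ALIAS4OFFLINE a ON a.ELEMENT_ID = e.ELEMENT_ID
         WHERE a.ALIAS = :alias
           AND e.TS BETWEEN :t_from AND :t_to
         ORDER BY e.TS"""
    with conn.cursor() as cur:
        cur.execute(sql, alias=alias, t_from=t_from, t_to=t_to)
        return cur.fetchall()
```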

For the pre-commissioning, commissioning and start-up phases the subdetectors need to validate their Detector Algorithms (DAs). To allow for this, I have created a PVSS panel (see Fig. 6.25) which creates all DPEs needed by the Detector Algorithms for offline reconstruction. The panel also assigns simulated reasonable values to them. Another PVSS panel (see Fig. 6.26) was developed for simulating reasonable values at predefined frequencies for all the created DPEs. These were archived into a test Oracle database, retrieved every night via the AMANDA server by the offline SHUTTLE and used for Detector Algorithm code validation. In case the PC running the simulation breaks, only these panels need to be restored to a new PC and not the whole PVSS project. Figure 6.27 shows the sample code used to simulate the values in the PVSS scripting language environment. The successful long-term running of the AMANDA server and the offline SHUTTLE client verified the concept, performance and stability of the DCS archival data transfer mechanism.
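
Since the PVSS CTRL code in Fig. 6.27 is only reproduced as a screenshot, the following rough Python analogue illustrates the same idea of writing plausible values at predefined frequencies; the DPE names, value ranges and the write_dpe() hook are purely illustrative and stand in for dpSet() and the PVSS archive.

```python
# Rough analogue of the value-simulation script: periodically assign plausible
# random values to a set of DPE names so the archive always has fresh test data.
# All names and ranges below are invented placeholders.
import random
import time

SIMULATED_DPES = {                      # name -> (low, high, period in s)
    "spd_hv.sector0.vmon": (45.0, 55.0, 60),
    "spd_temperature.stave0": (20.0, 30.0, 120),
}

def write_dpe(name, value):             # stands in for dpSet()/archiving in PVSS
    print(f"{time.time():.0f}  {name} = {value:.2f}")

def simulate(duration_s=300):
    next_due = {name: 0.0 for name in SIMULATED_DPES}
    t_end = time.time() + duration_s
    while time.time() < t_end:
        now = time.time()
        for name, (lo, hi, period) in SIMULATED_DPES.items():
            if now >= next_due[name]:
                write_dpe(name, random.uniform(lo, hi))
                next_due[name] = now + period
        time.sleep(1)

simulate()
```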

Figure 6.25: A picture of the PVSS panel developed for creating all DPEs which are required by the Detector Algorithms for offline reconstruction and for assigning simulated reasonable values to them.

Figure 6.26: A picture of the PVSS panel developed for simulating reasonable values at predefined frequencies for all DPEs which are required by the Detector Algorithms for offline reconstruction and archived into a test Oracle database. The simulated values are retrieved every night via the AMANDA server by the offline SHUTTLE and used for Detector Algorithm code validation.

Figure 6.27: A picture of the PVSS scripting language environment and the sample code used to simulate the values for all DPEs which are required by the Detector Algorithms for offline reconstruction.

6.12 System commissioning and first operation experience

The detector control system infrastructure, covering all back-end infrastructure and common services at the ALICE experimental site, was installed and has been operational since early 2007 [107]. The full ALICE DCS was integrated and commissioned during 2008 together with the detectors. About 100 individual and common integration sessions were needed to test all functionalities. The detector integration sessions focused on verification of the functionality of all system components and on compliance with ALICE conventions and rules. Special effort was devoted to detector safety and to the commissioning of the interlocks and alerts. During the common sessions, all participating detectors were operated in parallel. The aim of these sessions was to demonstrate that the ALICE DCS can be operated from a single post in the control room. The performance of the back-end infrastructure and systems (such as network traffic, archival rates, etc.) was closely studied. Several perturbations were introduced in a controlled way in order to discover possible irregularities. The acquired experience has been used for further improvements of the systems. As a result of carefully organized and executed development, integration and test campaigns, the ALICE DCS was fully operational before the LHC startup and contributed to a successful ALICE operation with the first beams.

Chapter 7

Conclusions

In this thesis I presented my involvement in the ALICE SPD project and summarized the design, construction and testing phases of the ALICE SPD. My involvement in the ALICE DCS project was also presented.

During the past few years the ALICE SPD collaboration has carried out four testbeams. The primary objective of these testbeams was the validation of the pixel ASICs, the sensors, the read-out electronics and the online systems - Data Acquisition System (DAQ), Trigger (TRG) and Detector Control System (DCS) - with their software, as well as the offline code. The pixel chip and sensor prototypes were studied under different conditions (threshold scan, different inclination angles with respect to the beam, bias voltage scan, etc.). Tests of thick and also thin single chip assemblies and chip ladders, as designed to be used in the ALICE experiment, were also performed. During and after the testbeams I developed software to verify the data quality, to merge 2 data streams coming from different planes with different formats, to find and eventually remove noisy pixels offline, to correlate the spatial information from different planes, and to run a complex offline analysis of the testbeam data, including hit maps, integrated hit maps, event-by-event analysis, efficiency, multiplicity, cluster size, etc. The prototype full read-out chain with two ladders, the DAQ, Trigger and DCS online systems with their software and also the offline code were tested and validated during the testbeams.

Configuration, readout and control of the SPD is performed via the Detector Control System (DCS). As a member of the ALICE Controls Coordination (ACC) team I had the possibility to participate in the design, development, commissioning and operation of this system. I took responsibility for the database systems and developed mechanisms for configuring the Front-end Electronics (FERO). The SPD has been used as a working example for other detector groups which adopted this approach. I developed and implemented a mechanism for conditions data archival and participated in the creation of the data exchange mechanism between the DCS and the ALICE offline. The DCS as well as the data handling tools were described in this thesis.

The primary task of the ALICE DCS is to ensure the safe and correct operation of ALICE. It is in charge of the configuration, control and monitoring of the 18 ALICE sub-detectors, their sub-systems (Low Voltage, High Voltage, etc.) and interfaces with various services (such as cooling, safety, gas, etc.). The operation of the DCS is synchronized with the other online systems - the DAQ, the TRG and the High Level Trigger (HLT) - through a controls layer: the Experiment Control System (ECS). The core of the DCS is a commercial Supervisory Control and Data Acquisition (SCADA) system named PVSS. It controls and monitors the detector devices, provides configuration data from the configuration database and archives acquired values in the archival database. It allows for data exchange with external services and systems through a standardized set of interfaces. The amount of data handled by the DCS goes largely beyond the scale seen in previous generations of control systems. At the start of a physics run, up to 6 GB of data is loaded from the DCS database to the detector devices, including PVSS recipes (such as nominal values of device parameters, alert limits, etc.) and Front-End and Readout Electronics (FERO) settings. About 1 000 000 parameters need to be configured to prepare ALICE for a physics run.

I have tested the performance of a few database flavours, reported the test results and given the recommendation to use the Oracle database. I have tested various access patterns for different database data types (numbers, chars, Binary Large Objects (BLOBs)). I have created and tuned a prototype versioning mechanism for the SPD, including the database schema design and the software to upload and download the FERO configuration data to and from the database, employing automatic version incrementing which saves enormous amounts of disk space by storing only pointers to duplicate data, not the data itself. I have helped other subdetectors to define their database schemas and access patterns and provided consultancy and verification.

The DCS data required for the offline reconstruction is stored in the offline conditions database. I have co-developed a mechanism based on an AMANDA server and maintained it. The AMANDA server receives requests from an offline client and retrieves the data from the archive. I have developed and maintained a PVSS project which simulates all data from all subdetectors relevant for offline reconstruction, in order to test and validate the AMANDA mechanism as well as the detector reconstruction algorithms. It is expected that the AMANDA server will transmit 60 MB of data at the end of each run to the offline system. The same AMANDA server-client mechanism is also used to transmit data to the HLT. The ALICE DCS system, its data archiving and the data transfer to the Offline and HLT have been successfully integrated in the experimental setup of ALICE and have been successfully used during the pre-commissioning and commissioning of the SPD and other sub-detectors, during the cosmic ray runs since 2007 and during the LHC startup.

I have provided tools to change the PVSS archive type from local files to database archival for all subdetectors and assisted them in configuring PVSS archiving and making the archive type switch when needed. I have also provided them with tools to query the archived data. I have installed and maintained several Oracle test databases in the DCS lab and production databases at LHC Point 2 (P2). I was involved in the extremely successful PVSS Oracle archiving performance tuning campaigns, which resulted in the PVSS archiving capability being boosted from the initial ≈100 inserts per second to ≈150 000 inserts per second. The immense improvements achieved in PVSS Oracle archiving have been made available not only for the SPD and the other ALICE detectors, but for all LHC experiments.

The results of this work were presented at several conferences and published in prestigious journals. I also participated in the DCS and SPD commissioning and cosmic run shifts during the ALICE cosmic runs. The ≈100 k cosmic ray events gathered successfully have significantly enhanced the alignment studies of the SPD and of the whole ITS.

Bibliography

[1] F. Karsch, Nucl. Phys. A 698, 199c (1996).

[2] R. Hagedorn, Nuovo Cim. Suppl. 3, 147 (1965).

[3] HEP Phase Diagram, GSI, Germany.URL:.

[4] COLLABORATION, L.: LHC Design Report Volume I, II, III, CERN-2004-003-V-1, CERN-2004-003-V-2, CERN-2004-003-V-3. 2004, URL: .

[5] CERN Document Server.URL:.

[6] The CERN Large Hadron Collider: accelerator and experiments. J. Instrum., 2008, vol. 3, pp. S08 001–S08 007, URL: .

[7] ALICE Collaboration: Technical Proposal for A Large Ion Collider Experiment at the CERN LHC. 1995.

[8] ATLAS Collaboration: Technical Proposal for a General-Purpose pp Experiment at the Large Hadron Collider at CERN. 1994.

[9] CMS Collaboration: The Compact Muon Solenoid Technical Proposal. 1994.

[10] LHCb Collaboration: LHCb Letter of Intent: A Dedicated LHC Collider Beauty Experiment for Precision Measurements of CP-Violation. 1995.

[11] LHCf Collaboration: Technical Proposal for the CERN LHCf Experiment. 2005, CERN/LHCC 2005-032.

[12] TOTEM Collaboration: TOTEM Technical Proposal. 1999, CERN/LHCC 99-7.

[13] EVANS, L.: LHC Status. EDMS Id: 976647, 10. Nov. 2008.

[14] CERN/AT/PHL: INTERIM SUMMARY REPORT ON THE ANALYSIS OF THE 19 SEPTEMBER 2008 INCIDENT AT THE LHC. EDMS Id: 406393, 10. Oct. 2008.

[15] Official communiqué of the Director General after a meeting involving the experiments, the machine people and the CERN management.

[16] AAMODT, K., et al.: The ALICE experiment at the CERN LHC. J. Instrum., 2008, vol. 3, p. S08 002. 259 p.

[17] ALICE Collaboration: Technical Design Report of the Inner Tracking System. 1999.

[18] FABJAN, C. W.: ALICE at the LHC: getting ready for physics. J. Phys. G, 2008, vol. 35, no. 10, p. 104 038. 8 p.

[19] CASTOR.URL:.

[20] KLUGE, A., et al.: The ALICE silicon pixel detector: electronics system integration. In Nuclear Science Symposium Conference Record, 2005 IEEE, vol. 2, 2005, ISSN 1082-3654, pp. 761–764, doi:<10.1109/NSSMIC.2005.1596367>.

[21] CALI, I. A.: The ALICE Silicon Pixel Detector Control and Calibration Systems. Ph.D. thesis, Universita Degli Studi Di Bari, Bari, 2008.

[22] LINDSTRÖM, G., et al.: . Nuclear Instruments and Methods in Physics Research A, 2001, pp. 60–69.

[23] CHOCHULA, P., et al.: The ALICE silicon pixel detector. Nuclear Physics A, 2003, vol. 715, pp. 849c–852c, ISSN 0375-9474, doi:, URL: , Quark Matter 2002, Proceedings of the 16th International Conference on Ultra-Relativistic Nucleus-Nucleus Collisions.

[24] KLUGE, A., et al.: The Read-Out System of the ALICE Pixel Detector. 2003.

[25] Alexander Kluge’s homepage.URL:.

[26] WYLLIE, K.: Front-End Pixel Chips for Tracking in ALICE and Particle Identification at LHCb. 1999.

[27] DINAPOLI, R., et al.: An Analog Front-end in Standard 0.25 μm CMOS for Silicon Pixel Detectors in ALICE and LHCb. In Proceedings of the 6th Workshop on Electronics for LHC Experiments, Krakow, Poland, 2000, p. 110.

[28] WYLLIE, K.: A Pixel Readout Chip for Tracking at ALICE and Particle Identification at LHCb. In Proceedings of the 5th Workshop on Electronics for LHC Experiments, Colorado, USA, 1999.

[29] CHOCHULA, P.: The ALICE silicon pixel detector. Presentation given at the Quark Matter 2002, 16th International Conference on Ultra-Relativistic Nucleus-Nucleus Collisions, 2002, Nantes.

[30] RIEDLER, P., et al.: Proceedings of the 10th International Workshop on Vertex Detectors, Brunnen, Switzerland, 2001. Nuclear Instruments and Methods in Physics Research A, 2003, vol. 501, pp. 111–118.

[31] RIEDLER, P., et al.: Recent test results of the ALICE silicon pixel detector. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2005, vol. 549, no. 1-3, pp. 65–69, ISSN 0168-9002, doi:, URL: , VERTEX 2003.

[32] CERPROBE Europe SAS, Bat. 7, Parc de la Saint Victoire, Quartier du Canet, F-13590 Meyreuille, France.

[33] RIEDLER, P., et al.: The ALICE Silicon Pixel Detector: System, components and test procedures. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2006, vol. 568, no. 1, pp. 284–288, ISSN 0168-9002, doi:, URL: , New Developments in Radiation Detectors - Proceedings of the 10th European Symposium on Semiconductor Detectors, 10th European Symposium on Semiconductor Detectors.

[34] MORETTO, S., et al.: The Assembly of the first Sector of the ALICE Silicon Pixel Detector. Journal of Physics: Conference Series, 2006, vol. 41, pp. 361–368, URL: .

[35] PEPATO, A., et al.: The mechanics and cooling system of the ALICE silicon pixel detector. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2006, vol. 565, no. 1, pp. 6–12, ISSN 0168-9002, doi:, URL: , Proceedings of the International Workshop on Semiconductor Pixel Detectors for Particles and Imaging - PIXEL 2005.

[36] RIEDLER, P., et al.: Overview and status of the ALICE silicon pixel detector. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2006, vol. 565, no. 1, pp. 1–5, ISSN 0168-9002, doi:, URL: , Proceedings of the International Workshop on Semiconductor Pixel Detectors for Particles and Imaging - PIXEL 2005.

[37] RIEDLER, P., et al.: Production and Integration of the ALICE Silicon Pixel Detector. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2007, vol. 572, no. 1, pp. 128–131, ISSN 0168-9002, doi:, URL: , Frontier Detectors for Frontier Physics - Proceedings of the 10th Pisa Meeting on Advanced Detectors.

[38] OSMIC, F., et al.: Infrared laser testing of ALICE silicon pixel detector assemblies. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2006, vol. 565, no. 1, pp. 13–17, ISSN 0168-9002, doi:, URL: , Proceedings of the International Workshop on Semiconductor Pixel Detectors for Particles and Imaging - PIXEL 2005.

[39] RIEDLER, P., et al.: The ALICE silicon pixel detector, Proceedings of the PIXEL 2002 Workshop, Carmel, USA. Published in SLAC Electronics Conference Archive, 2002.

[40] CONRAD, J., et al.: Beam test performance and simulation of prototypes for the ALICE silicon pixel detector. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2007, vol. 573, no. 1-2, pp. 1–3, ISSN 0168-9002, doi:, URL: , Proceedings of the 7th International Conference on Position-Sensitive Detectors - PSD-7, 7th International Conference on Position-Sensitive Detectors.

[41] NILSSON, P., et al.: Test beam performance of the ALICE silicon pixel detector. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2004, vol. 535, no. 1-2, pp. 424–427, ISSN 0168-9002, doi:, URL: , Proceedings of the 10th International Vienna Conference on Instrumentation.

[42] OSMIC, F.: The ALICE Silicon Pixel Detector System. Ph.D. thesis, Technische Universität Wien, Fakultät für Physik, Vienna, 2005.

[43] CONRAD, J., NILSSON, P.: ALICE Public Internal Note. 2005, ALICE-INT-2005-003.

[44] ALICE.URL:.

[45] National Instruments.URL:.

[46] Mitutoyo.URL:.

[47] National Instruments LabView.URL:.

[48] KAPUSTA, S.: Kremíkové pixlové detektory pre experiment ALICE. Master's thesis, Comenius University, Bratislava, 2003.

[49] BURNS, M., et al.: The ALICE Pixel Detector Readout Chip Test System. 2001.

[50] NILSSON, P.: Test beam performance of the ALICE silicon pixel detector. 2004, Vienna.

[51] RIEDLER, P.: ALICE SPD beam test 2003 summary. Presentation at CERN, 2003.

[52] FABRIS, D.: Test Beam Data Analysis at Padova/LNL. Presentation at CERN, 2004.

[53] PULVIRENTI, A., et al.: Test of prototypes of the ALICE silicon pixel detector in a multi-track environment. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2006, vol. 565, no. 1, pp. 18–22, ISSN 0168-9002, doi:, URL: , Proceedings of the International Workshop on Semiconductor Pixel Detectors for Particles and Imaging - PIXEL 2005.

[54] ELIA, D., et al.: Performance of ALICE silicon pixel detector prototypes in high energy beams. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2006, vol. 565, no. 1, pp. 30–35, ISSN 0168-9002, doi:, URL: , Proceedings of the International Workshop on Semiconductor Pixel Detectors for Particles and Imaging - PIXEL 2005.

[55] MANZARI, V. (for the SPD project in the ALICE Collaboration), et al.: The silicon pixel detector (SPD) for the ALICE experiment. Journal of Physics G: Nuclear and Particle Physics, 2004, vol. 30, no. 8, pp. S1091–S1095, URL: , Proceedings of the 17th Quark Matter Conference.

[56] ALICE SPD Test Beams.URL:.

[57] ANELLI, G., et al.: ALICE Public Internal Note. ALICE SPD Collaboration, 2005.

[58] CALI, I. A., et al.: Test, qualification and electronics integration of the ALICE silicon pixel detector modules. Prepared for 9th ICATPP Conference on Astroparticle, Particle, Space Physics, Detectors and Medical Physics Applications, Villa Erba, Como, Italy, 17-21 Oct 2005.

[59] AliRoot Engineering Data Management System.URL:.

[60] NOUAIS, D.: Common ITS beam test. Presentation given at the Technical Forum at CERN, 2004.

[61] CONRAD, J.: ITS beam test: offline software. Presentation given at the SPD general meeting, Nov. 2004.

[62] SAILOR, W. C., W. W. K., ZIOCK, H. J., HOLZSCHEITER, K.: A Model for the performance of silicon microstrip detectors. Nuclear Instruments and Methods in Physics Research A, 1991, vol. 303, p. 285.

[63] SANTORO, R., et al.: The ALICE Silicon Pixel Detector: readiness for the first proton beam. Journal of Instrumentation, 2009, vol. 4, no. 03, p. P03 023, URL: .

[64] DAINESE, A.: ALICE commissioning and prospects for beauty and quarkonia measurements. Presentation given at the 12th international conference on B-physics at hadron machines, 08. Sep. 2009, Heidelberg.

[65] MANZARI, V., et al.: Commissioning and first operation of the ALICE Silicon Pixel Detector. Proceedings from the Vertex 2008 Conference, to be published.

[66] TORCATO DE MATOS, C., et al.: The ALICE level 0 pixel trigger driver layer. CERN-2008-008.

[67] DAINESE, A.: Charm and Beauty at the LHC. Presentation given at the ALICE Physics Forum, 25. Mar. 2009, CERN.

[68] COLLABORATION, A.: Alignment of the ALICE Inner Tracking System with cosmic-ray tracks. Journal of Instrumentation, 2010, vol. 5, no. 03, p. P03 003, URL: .

[69] BLOBEL, V., KLEINWORT, C.: A new method for the high-precision alignment of track detectors. 2002, .

[70] GROßE-OETRINGHAUS, J. F.: Measurement of the Charged-Particle Multiplicity in Proton–Proton Collisions with the ALICE Detector. Ph.D. thesis, Mathematisch-Naturwissenschaftliche Fakultät der Westfälischen Wilhelms-Universität, Münster, 2009.

[71] DAINESE, A.: Alignment of the ALICE Si tracking detectors. Presentation given at Vertex 2009, 14. Sep. 2009, Veluwe.

[72] PÉREZ, J. M.: Development of the control system of the ALICE Transition Radiation Detector and of a test environment for quality-assurance of its front-end electronics. Ph.D. thesis, Physikalisches Institut, Universität Heidelberg, Heidelberg, 2008.

[73] MYERS, D. R.: The LHC experiments' joint controls project, JCOP. Prepared for 7th International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS 99), Trieste, Italy, 4-8 Oct 1999.

[74] GASPAR, C., DÖNSZELMANN, M.: DIM: a distributed information management system for the DELPHI experiment at CERN. In Proceedings of the 8th Conference on Real-Time Computer Applications in Nuclear, Particle and Plasma Physics, Vancouver, Canada, June 1993.

[75] LEWIS, S. A.: Overview of the Experimental Physics and Industrial Control System (EPICS), Technical Report. Lawrence Berkeley National Laboratory, 2000.

[76] DANEELS, A., SALTER, W.: Selection and evaluation of commercial SCADA systems for the controls of the CERN LHC experiments. In Proceedings of the 7th International Conference on Accelerator and Large Experimental Physics Control Systems, Trieste, Italy, 1999.

[77] MAZZA, C., et al.: Software Engineering Guides. ISBN 0-13-449281-1, Prentice-Hall, 1996.

[78] SALTER, W.: ATLAS DCS Workshop. 2001.

[79] ETM. URL:.

[80] CERN IT-CO Division.URL:.

[81] BURKIMSHER, P. C.: Scaling up PVSS. 10th International Conference on Accelerator and Large Experimental Physics Control Systems ICALEPCS, 10 - 14 Oct 2005.

[82] COLLABORATION, A.: ALICE TDR of the Trigger, Data Acquisition, High-Level Trigger and Control System, CERN-LHCC-2003-062. 2003.

[83] CHOCHULA, P., et al.: Control and Monitoring of Front-End Electronics in ALICE. 9th Workshop on Electronics for LHC Experiments LECC, 2003.

[84] SMI++ State Management Interface.URL:.

[85] CATALDO, G. D., et al.: Finite state machines for integration and control in ALICE. 11th International Conference on Accelerator and Large Experimental Physics Control Systems ICALEPCS, 2007.

[86] CHOCHULA, P., et al.: Cybersecurity in ALICE. 11th International Conference on Accelerator and Large Experimental Physics Control Systems ICALEPCS, 2007.

[87] JIRDEN, L., et al.: ALICE control system – ready for LHC operation. 11th International Conference on Accelerator and Large Experimental Physics Control Systems ICALEPCS, 2007.

[88] HOLME, O., et al.: The JCOP Framework. 10th International Conference on Accelerator and Large Experimental Physics Control Systems ICALEPCS, 10 - 14 Oct 2005.

[89] KAPUSTA, S., et al.: Data Flow in ALICE Detector Control System. Poster presented at the 11th International Vienna Conference on Instrumentation, 2007.

[90] AMANDA.URL:.

[91] ORACLE Corporation.URL:.

[92] ORACLE Real Application Clusters.URL:.

[93] Infortrend.URL:.

[94] QLogic.URL:.

[95] CHOCHULA, P.: DB Service for ALICE DCS. Presentation given at an ALICE DCS meeting, 2007, CERN.

[96] Red Hat Enterprise Linux.URL:.

[97] ORACLE Automatic Storage Management. URL:.

[98] CHOCHULA, P.: ALICE DCS Review. Presentation at CERN, 14. Nov. 2005.

[99] KAPUSTA, S.: PVSS Oracle Archiving. Presentation given at the ALICE week in Bologna, 19. Jun. 2006.

[100] GRANCHER, E., TOPUROV, A.: Oracle RAC (Real Application Cluster) application scalability, experience with PVSS and methodology. RAC = Oracle clustering technology, Real Application Cluster. Tech. Rep. CERN-IT-Note-2007-049, CERN, Geneva, Nov 2007, not published in the proceedings.

[101] GONZALEZ-BERGES, M.: The High Performance Database Archiver for the LHC Experiments. 11th International Conference on Accelerator and Large Experimental Physics Control Systems ICALEPCS, 2007.

[102] L. BETEV, P. C.: Naming and Numbering Convention for the ALICE Detector Part Identification – Generic Scheme. EDMS Id: 973073, 03. Oct. 2003.

[103] ORACLE Data Guard.URL:.

[104] ORACLE Maximum Availability Aritecture.URL:.

[105] ORACLE Streams.URL:.

[106] Ruby.URL:.

[107] CHOCHULA, P., et al.: The ALICE Detector Control System. In preparation.

[108] CORTESE, P., et al.: ALICE physics performance: Technical Design Report. Technical Design Report ALICE, Geneva: CERN, 2005, revised version submitted on 2006-05-29 15:15:40.

[109] KLUGE, A., et al.: The ALICE Silicon Pixel Detector System (SPD). 2007.

[110] AUGUSTINUS, A., et al.: The ALICE Controls System: a Technical and Managerial Challenge. 9th International Conference on Accelerator and Large Experimental Physics Control Systems ICALEPCS, 13 - 17 Oct 2003.

[111] CHOCHULA, P., et al.: Handling large amounts of data in ALICE. 11th International Con- ference on Accelerator and Large Experimental Physics Control Systems ICALEPCS, 2007.

[112] RIEDLER, P., et al.: Review of Particle Physics. Journal of Physics G, 2006, vol. 33.

[113] CARMINATI, F., et al.: ALICE Physics Performance Report, Volume I. J. Phys. G: Nucl. Part. Phys., 2004, vol. 30, pp. 1517–1763.

[114] ROPOTAR, I.: An investigation of silicon pixel tracking detectors and their application in a prototype vertex telescope in the CERN NA50 heavy-ion experiment. Ph.D. thesis, Fachbereich Physik, Bergische Universität Gesamthochschule Wuppertal, Wuppertal, 2000.

[115] ALICE Collaboration: Technical Design Report of the High-Momentum Particle Identification Detector. 1998.

[116] ALICE Collaboration: Technical Design Report of the Photon Spectrometer. 1999.

[117] ALICE Collaboration: Technical Design Report of the Zero-Degree Calorimeter. 1999.

[118] ALICE Collaboration: Technical Design Report of the Forward Muon Spectrometer. 1999.

[119] ALICE Collaboration: Technical Design Report of the Forward Muon Spectrometer Addendum-1. 2000.

[120] ALICE Collaboration: Technical Design Report of the Photon Multiplicity Detector. 1999.

[121] ALICE Collaboration: Technical Design Report for the Photon Multiplicity Detector Addendum-1. 2003.

[122] ALICE Collaboration: Technical Design Report of the Time-Projection Chamber. 2000.

[123] ALICE Collaboration: Technical Design Report of the Time-Of-Flight Detector. 2000.

[124] ALICE Collaboration: Technical Design Report of the Transition-Radiation Detector. 2001.

[125] ALICE Collaboration: Technical Design Report of the Forward Detectors. 2004.

[126] ELIA, D., et al.: ALICE Internal Note 2005–007. 2005.

[127] CERN IT-CO Division.URL:.

[128] Joint Controls Project.URL:.

[129] ELIA, D., et al.: ALICE Internal Note 2005–011. 2005.

[130] CERN Engineering Data Management System.URL:.

List of Figures

1.1 The phase diagram of nuclear matter summarizing the present understanding about the structure of nuclear matter at different densities and temperatures. The lines illustrate the results achieved by the different ultra-relativistic collider experiments. From [3] ...... 4

2.1 The LHC and its area, courtesy of the CERN photography service. An aerial view towards the Geneva area and the Alps on the left. An underground schematic, showing the SPS, LHC and the four main LHC experiments, is shown on the right. 8

2.2 The LHC dipoles, courtesy of the CERN photography service. On the left, a detail of the beam lines inside the superconducting dipole magnets. On the right, the dipole magnets installed in the LHC tunnel, also showing two beam pipes in the front. ...... 10

2.3 The CERN accelerator complex showing LHC and also non-LHC experiments. The picture is not to scale and is adapted from the public CERN Document Server [5]. The accelerating path for the protons and Pb ions is marked in red and blue respectively. ...... 13

2.4 The ATLAS and CMS detectors. The picture is adapted from the public CERN Document Server [5]. ...... 14

2.5 Side view of the LHCb detector on the left. ALICE detector on the right. The picture is adapted from the public CERN Document Server [5]. ...... 16

2.6 A schematic view of the placement of the TOTEM detectors [12]. The near detectors are placed inside the CMS cavern. One set of far detectors RP1, RP2, RP3 is also shown. ...... 17

2.7 A picture of the beam monitor of the first beam that passed through the LHC ring [13]. ...... 18

2.8 A picture of the beam profile monitor made on the 10th of September (left) and on the 12th of September (right). Every line represents one bunch pass. One can see on the left monitor that with every pass the beam was more and more spatially dispersed. On the right, the RF successfully captured the beam and a stable circulating beam was present [13]. ...... 18

2.9 A picture of the dipole electrical busbar splice interconnects welding during the reparation works in sector 3-4 [13]. ...... 19

3.1 Cross-sections and rates for several key channels versus √s. From [18] ...... 22

3.2 3D computer view of the ALICE detector and its sub-detectors. From [18] . . . . 23

4.1 A drawing of the ALICE Inner Tracking System. From [16] ...... 29

4.2 A CAD drawing of the SPD and an artistic view of one sector and one half stave. From [20] ...... 30

4.3 Image of one SPD sector - 8 outer half staves in the middle and power connections on the sides. From [20] ...... 30

4.4 Picture of the SPD with the 0.8 mm-thick beryllium beam-pipe as currently installed in Point 2. From [5] ...... 31

4.5 A drawing of one SPD sector with the numbering convention. From [21]. . . . . 31

4.6 An artistic view of one "exploded" half stave. From [20] ...... 33

4.7 Picture of one sensor with 5 bonded chips on the bottom. From [20] ...... 33

4.8 An artistic view of a chip bonded to a sensor on the right and a picture of the Sn-Pb bump bond. From [20] ...... 34

4.9 An artistic view of the bus with a sensor and a chip (left) and a picture of the bus bonding pads. From [20] ...... 34

4.10 Picture of the wire bonds between the MCM and the bus (right) and between the chips and the bus (left). From [16] ...... 35

4.11 Picture of the Multi Chip Module mounted on the half-stave. The MCM consists of the Analog Pilot (left), Digital Pilot (middle), GOL (right) and the optical cables (very right). From [20] ...... 35

4.12 A closer look on the Multi Chip Module and the three chips - the Analog Pilot (left), the Digital Pilot (middle), and the GOL (right). From [25] ...... 35

4.13 A block diagram of the SPD electronics. From [25] ...... 36

4.14 A detailed block diagram of the data readout electronics. From [16] ...... 36

4.15 Picture of the router with three link receiver (LRx) cards. From [25] ...... 36

4.16 A picture of the ALICE1LHCb chip (left) containing 8192 pixel cells. On the right a block diagram of the electronics in one pixel cell. From [29] ...... 37

4.17 A wafer contains 86 ALICE1LHCb pixel chips (left). Class I chips are green. The histogram on the right shows the mean measured threshold of all class I chips from one wafer. From [39] ...... 38

5.1 A drawing of the testbeam setup. Crossed scintillators in the front followed by 3 planes of chips, the middle one mounted on the X-Y table. From [30] ..... 40

5.2 A drawing of the testbeam setup indicating the readout and monitoring electronics controlled by 2 computers (left). A photo of the testbeam setup. From [44], [30] ...... 40

5.3 The software I developed for an easy and reliable control of the X-Y table. Picture taken from [48] ...... 41

5.4 The beam profile. Hitmap on the left. In the center the beam profile in z (425 μm pixels): ≈7 pixels = 3 mm. On the right the beam profile in x (50 μm pixels): ≈50 pixels = 2.5 mm. From [30] ...... 42

5.5 The efficiency dependence on the threshold (left) and on the bias voltage (right) determined online via scintillators. From [30] ...... 42

5.6 The strobe delay scan. From [30] ...... 43

5.7 The cluster size dependence on the different incident particle angles measurement (left) and earlier measurements and predictions (right). From [30] ..... 43

5.8 DAQ software (left), the beam profile (center) and the first reconstructed track (right). From [49], [29] ...... 43

5.9 Picture of the ladder testbeam setup. Crossed scintillators in the front followed by 5 planes of chips, the middle one mounted on the X-Y table (left) and a minibus (right). From [44] ...... 44

5.10 The software I created for data quality checks and extended for offline data analysis, featuring integrated hit maps, event-by-event analysis, correlation plots, etc. ...... 45

5.11 The tested two-ladder prototype mounted on the pixel extender card. A similar pixel extender card was used in the testbeam with a 5-chip ladder mounted. From [29] ...... 46

5.12 The first reconstructed track from the second testbeam. [39] ...... 46

5.13 The efficiency dependence on the bias voltage for a single assembly (left) and a ladder (right) [29] ...... 47

5.14 The efficiency dependence on the threshold for a single assembly (left) and a ladder (right) [29] ...... 47

5.15 The efficiency dependence on the delay of the external strobe signal for a single assembly (left) and a ladder (right) [29] ...... 48

5.16 The spatial resolution algorithm (left) and the results for the short pixel side (right). From [50] ...... 48

5.17 A picture (left) and a drawing (right) of the experimental setup for the heavy ion beam runs. From [50], [51] ...... 50

5.18 A drawing of the crossed geometry for the proton beam with the offline definitions. From [52] ...... 50

5.19 Picture of the PCI-2002 comprising the two ladders with their proper MCM. From [44] ...... 51

5.20 The online monitoring software showing the hit maps of each of the 10 chips of the PCI-2002. From [44] ...... 52

5.21 The correlation between the planes 2 and 0 in the focused proton beam runs is clearly visible by the straight line. ...... 53

5.22 The software I developed to find and visualize noisy pixels. In this picture I chose 0.4% as the noise level threshold to visualize all noisy pixels. The beam profile and the noisy pixels can be clearly seen in the upper part, while the middle part shows the found noisy pixels. The bottom part enables the user to select and visualize a single event. ...... 53

5.23 The software I developed to find and permanently remove the noisy pixels from the data files, which speeds up and simplifies considerably the later testbeam data analysis. ...... 54

5.24 The software I developed to analyze thoroughly the testbeam data offline. . . . . 55

5.25 The reconstruction efficiency as a function of threshold for 200 μm thick sensors. A DAC threshold setting of 214 is equivalent to approximately 2000 e−. The normal working point is around DAC = 200. [40] ...... 55

5.26 The intrinsic precision (200 μm sensor) as a function of threshold for tracks at normal incidence angle (left) and as a function of angle for two different thresholds (right) [40] ...... 56

5.27 The geometrical setup used during the joint ITS testbeam. From [56] ...... 57

5.28 Picture of the experimental setup of the joint ITS testbeam. From [60] ...... 58

5.29 Picture of the SPD (left), SSD (center) and SDD (right) in the joint ITS testbeam. From [60] ...... 59

5.30 Schematic layout of the DAQ (left) and also of all online systems (right) in the joint ITS testbeam. From [60] ...... 60

5.31 On the left a simplified block diagram of the LTU is shown. On the right a picture of the trigger crate with the three LTUs. From [60] ...... 61

5.32 No correlation due to the multi event buffer problem of the SDD (left) and a visible correlation between the SPD and SDD planes (right). From [61] ...... 61

5.33 The noisy pixel removal with new AliRoot classes - the left histogram shows the beamspot before the noisy pixel removal, while the histogram on the right shows no more signs of noisy pixels. From [61] ...... 62

5.34 The distribution of different cluster types for simulation (histogram) and from data from the ladder testbeam (stars). The definition of the four most common cluster types is shown. The definition of the remaining cluster types is in [57]. From [40] ...... 62

5.35 A visualization of reconstructed hits in the first event with particles generated in the LHC ever seen by an LHC experiment. The muons generated far away from the interaction point by the beam dump traveled parallel to the beam axis, making more than 10 cm long tracks in the SPD. From [63] ...... 64

5.36 A visualization of reconstructed hits of the first event during the circulating beams on the 11th of September 2008 in the ITS, triggered by the SPD, showing the LHC beam which interacts with the detector materials. The event has been reconstructed with the final vertexing algorithm. From [64] ...... 64

5.37 On the left: A visualization of the trigger algorithm (top-bottom-outer-layer) used to collect cosmic events with tracks in the SPD in 2008. A trigger is generated if there were at least two hits in the outer layer, of which one is located in the top half-barrel and one in the bottom half-barrel. This trigger condition is safe against noise, however it loses most of the horizontal tracks. On the right: The proposal of an algorithm to include also horizontal cosmic rays, currently under discussion. From [67] ...... 65

5.38 On the left, the idea of measuring the quality of the alignment is illustrated for the SPD. Each cosmic ray track is reconstructed twice - once in the upper half-barrel, once in the lower half-barrel. For the reconstruction software, both tracks appear to originate from the 'center' of the detector at y = 0. The pictures in the center and on the right show cosmic ray tracks used for the alignment of the whole ITS. From [71] ...... 67

5.39 On the left: An integrated visualization of all the reconstructed clusters in the ITS from the events taken during the cosmic run and used for the alignment [71]. As mentioned earlier, the dramatically reduced occurrence of clusters around y = +/-5 cm is due to the fast-OR trigger algorithm which excluded most of the horizontal tracks. On the right: The residuals for the SPD alignment using the Millepede software with cosmic ray tracks. The figure shows the track-to-track distance of the same cosmic ray track reconstructed twice - once in the upper and once in the lower half-barrel of the SPD. The distribution before (blue solid) and after (black solid) alignment is shown, as well as the distribution from simulated data with ideal geometry (red dashed). The inset in the top right shows a zoom in the central region. The simulated data is scaled to the same maximum value as the aligned distribution. From [70] ...... 67

6.1 Schematic view of a typical controls system in the LHC era. Picture adapted from [78]. ...... 71

6.2 A drawing of a typical PVSS project. Not all managers are shown. Picture adapted from [80]. ...... 75

6.3 A drawing of a PVSS project consisting of various managers running on a single PC (left) or running scattered on two PCs. ...... 75

6.4 A drawing of a PVSS Distributed System of several PVSS projects communicating via a Distribution Manager. ...... 75

6.5 A picture of the PVSS software console. This one serves the PVSS project which simulates all Data Point Elements to be transferred to the Offline. ...... 76

6.6 A schematic drawing of the ALICE DCS hardware architecture. Figure adapted from [82]. ...... 77

6.7 A simplified schematic view of the FSM architecture in ALICE. Figure adapted from [82]. ...... 80

6.8 Left: The FSM control panel from where commands can be issued provided the adequate privileges are granted to the operator. The open/closed coloured lock gives information about the take/release condition. Right: The main FSM control panel. From here start/stop of all or a single FSM process can be done. Only expert operators are allowed to open this panel. [85] ...... 81

6.9 The standard state diagram for the sub-detectors in ALICE [85]. ...... 82

6.10 A schematic example of the partitioning process. Figure adapted from [82]. . . 83

6.11 The ALICE standard UI with one of the possible experiment views is shown. Six out of eighteen subdetector FSMs are already integrated below the ALI_DCS node [87]. ...... 84

6.12 Schematic view of the JCOP Framework in the context of a detector control system. Figure adapted from [88] ...... 85

6.13 A general drawing of the data flow in the ALICE DCS. The operation of the DCS is synchronized with the other ALICE online systems, DAQ, TRG and HLT, through a controls layer: ECS ...... 86

6.14 A drawing of the logical blocks of the ALICE online systems, especially the DCS. Each logical block is treated as an autonomous entity. The upper layers provide a simplified view of the state of the underlying layers. Logical blocks react to commands and publish their own internal states. ...... 87

6.15 A drawing of the controls data flow within the ALICE DCS. ...... 89

6.16 An illustration of the controls data flow into the ALICE DCS archival database. . 90

6.17 An illustration summarizing the controls data exchange with systems external to the DCS. ...... 91

6.18 A simplified view illustrating the hardware used for the ALICE DCS database. The Fibre Channel communication is redundant, comprising redundant HBAs, QLogic FC switches and RAID array controllers. The network switches for inter-cluster communication and the disk array for backups are not shown. From [95]. ...... 92

6.19 The transfer rates of simulated FERO configuration BLOBs as a function of the ratio between random and shared BLOBs for several concurrent Oracle clients. . 96

6.20 The prototype of the SPD FERO configuration database schema design employing a prototype automatic version incrementing mechanism which saves enormous amounts of disk space by storing only pointers to duplicate data, not the data itself. There are 147 look-up tables in total and 3 tables containing the actual configuration data. Primary and foreign keys ensure data integrity and consistency and boost data retrieval via their corresponding indexes. ...... 98

6.21 A schematic drawing of the PVSS data archiving mechanism. The last acquired data is written from the memory (LastVal and LastValValues) into the Current History (or HistoryValues) file. Once the file exceeds a predefined size, a file-switch occurs: the Current History file is changed to an Online History file and a new Current file is created. Online History files which are no longer needed are changed to Offline History files and can be brought online if necessary. Figure adapted from [98]. ...... 100

6.22 A picture of the PVSS panel developed for changing and removing the archive class of DPEs. This panel proved to be indispensable for switching PVSS projects already archiving to files to Oracle archiving. It is based on the work of Jim Cook. 101

6.23 A picture of the PVSS panel developed for querying the archived data in the database. It has the capability of querying the data from different PVSS projects, provided the projects are running in a distributed system. The user can select one or more or all DPs of a particular DP type, then one or more DPEs, select the time range, bonus values and even set the query to run at regular intervals. It is based on a PVSS panel developed by the JCOP FRAMEWORK which was used during the PVSS-Oracle archiving performance tuning. ...... 102

6.24 A picture of the PVSS panel displaying the results (the DPE, value and timestamp) of the executed query from the panel above. It is based on a PVSS panel developed by the JCOP FRAMEWORK which was used during the PVSS-Oracle archiving performance tuning. ...... 102

6.25 A picture of the PVSS panel developed for creating all DPEs which are required by the Detector Algorithms for offline reconstruction and for assigning simulated reasonable values to them. ...... 111

6.26 A picture of the PVSS panel developed for simulating reasonable values at predefined frequencies for all DPEs which are required by the Detector Algorithms for offline reconstruction and archived into a test Oracle database. The simulated values are retrieved every night via the AMANDA server by the offline SHUTTLE and used for Detector Algorithm code validation. ...... 112

6.27 A picture of the PVSS scripting language environment and the sample code used to simulate the values for all DPEs which are required by the Detector Algorithms for offline reconstruction. ...... 112

List of Tables

1.1 An overview of fundamental fermions. ...... 2

2.1 Relevant LHC beam parameters for the peak luminosity and proton operation (data taken from [4]). ...... 12

3.1 Dimensions of the ITS detectors (active areas). From [16] ...... 24

4.1 The material budget of one SPD layer. Table data taken from [20] ...... 32

6.1 All schema names for the PVSS archival and their starting suffix for history tables. The last reserved suffix is shown as well. ...... 99

6.2 The typical DCS operation cycle. ...... 106

6.3 The estimated DCS archive database yearly data volumes for different scenarios (1 or 3 indexes on the EventHistory table and from 0 to 74000 archived changes/s outside the ramping periods). ...... 108

List of Publications

The ALICE Silicon Pixel Detector - P. Riedler, G. Anelli, F. Antinori, M. Burns, M. Campbell, M. Caselle, P. Chochula, R. Dinapoli, D. Elia, R.A. Fini, F. Formenti, J.J. van Hunen, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, V. Manzari, F. Meddi, M. Morel, P. Nilsson, A. Pepato, R. Santoro, G. Stefanini, K. Wyllie, Proceedings of the PIXEL2002 Conference, Carmel, USA, Sept. 9-12, 2002, SLAC Conference Proceedings C 02/09/09

ALICE Silicon Pixel Detector - P. Chochula, F. Antinori, G. Anelli, M. Burns, M. Campbell, M. Caselle, R. Dinapoli, D. Elia, R.A. Fini, F. Formenti, J.J. van Hunen, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, V. Manzari, F. Meddi, M. Morel, P. Nilsson, A. Pepato, P. Riedler, R. Santoro, G. Stefanini, K. Wyllie, Proceedings of the QM2002 Conference, Nantes, France, published in Nuclear Physics A715 (2003), p. 849c-852c

Testbeam Performance of the ALICE Silicon Pixel Detector - P. Nilsson, G. Anelli, F. Antinori, M. Burns, I.A. Cali, M. Campbell, M. Caselle, P. Chochula, M. Cinausero, A. Dalessandro, R. Dima, R. Dinapoli, D. Elia, H. Enyo, D. Fabris, R. Fini, E. Fioretto, F. Formenti, K. Fujiwara, J.M. Heuser, H. Kano, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, H. Onishi, A. Olmos Giner, F. Osmic, G. Pappalardo, A. Pepato, G. Prete, P. Riedler, R. Santoro, F. Scarlassara, F. Soramel, G. Stefanini, K. Tanida, A. Taketani, R. Turrisi, L. Vannucci, G. Viesti, T. Virgili, Proceedings of the VCI 2004 Conference, Vienna, Austria, published in Nuclear Instruments and Methods in Physics Research A, vol. 535 (2004) p.424-7

The ALICE Silicon Pixel Detector: Electronics System Integration - A. Kluge, G. Anelli, F. Antinori, A. Badala, A. Boccardi, G.E. Bruno, M. Burns, I.A. Cali, M. Campbell, M. Caselle, S. Ceresa, P. Chochula, M. Cinausero, J. Conrad, R. Dima, D. Elia, D. Fabris, R.A. Fini, E. Fioretto, F. Formenti, S. Kapusta, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, F. Osmic, G.S. Pappalardo, V. Paticio, A. Pepato, G. Prete, A. Pulvirenti, P. Riedler, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato De Matos, R. Turrisi, L. Vannucci, G. Viesti, T. Virgili, Nuclear Science Symposium Conference Record, IEEE, Puerto Rico, vol. 2, p. 761-764, October 2005

Beam Test Performance of a Prototype Assemblies for the ALICE Silicon Pixel Detector - D. Elia, G. Anelli, F. Antinori, A. Badala, G.E. Bruno, M. Burns, I. A. Cali, M. Campbell, M. Caselle, S. Ceresa, P. Chochula, M. Cinausero, J. Conrad, R. Dima, D. Fabris, R.A. Fini, E. Fioretto, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, F. Osmic, G. S. Pappalardo, V. Paticio, Adriano Pepato, G. Prete, Alberto Pulvirenti, P. Riedler, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato Matos, R. Turrisi, L. Vannucci, G. Viesti and T. Virgili, Proceedings of the Quark Matter Conference 2005, Budapest, Hungary, Submitted

Test, qualification and electronics integration of the ALICE silicon pixel detector modules - I.A. Cali, G. Anelli, F. Antinori, A. Badala, A. Boccardi, G.E. Bruno, M. Burns, M. Campbell, M. Caselle, S. Ceresa, P. Chochula, M. Cinausero, J. Conrad, R. Dima, D. Elia, D. Fabris, R.A. Fini, E. Fioretto, F. Formenti, B. Ghidini, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, F. Nava, P. Nilsson, F. Osmic, G.S. Pappalardo, V. Paticio, A. Pepato, G. Prete, A. Pulvirenti, P. Riedler, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato De Matos, R. Turrisi, L. Vannucci, G. Viesti, T. Virgili, Proceedings of the 9th ICATPP Conference on Astroparticle, Particle, Space Physics, Detectors and Medical Physics Applications, Villa Erba, Como, Italy. p. 1054-1059. World Scientific, October 2005

Recent Test Results of the ALICE Silicon Pixel Detector - P. Riedler, G. Anelli, F. Antinori, A. Boccardi, M. Burns, I.A. Cali, M. Campbell, M. Caselle, P. Chochula, M. Cinausero, A. Dalessandro, R. Dima, R. Dinapoli, D. Elia, D. Fabris, R.A. Fini, E. Fioretto, F. Formenti, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, P. Nilsson, A. Olmos Giner, F. Osmic, G.S. Pappalardo, A. Pepato, G. Prete, R. Santoro, F. Scarlassara, G. Segato, L. Sándor, F. Soramel, G. Stefanini, R. Turrisi, L. Vannucci, G. Viesti and T. Virgili, Proceedings of the VERTEX Conference, Lake Windermere, UK, September 2003, Nuclear Instruments and Methods in Physics Research A, vol. 549 (2005), p. 65-69

The Silicon Pixel Detector for the ALICE Experiment - V. Manzari, G. Anelli, F. Antinori, A. Boccardi, G.E. Bruno, M. Burns, I.A. Cali, M. Campbell, M. Caselle, P. Chochula, M. Cinausero, A. Dalessandro, R. Dima, R. Dinapoli, D. Elia, D. Fabris, R.A. Fini, E. Fioretto, F. Formenti, B. Ghidini, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, F. Nava, P. Nilsson, A. Olmos Giner, F. Osmic, G.S. Pappalardo, A. Pepato, G. Prete, P. Riedler, R. Santoro, F. Scarlassara, G. Segato, L. Sándor, F. Soramel, G. Stefanini, R. Turrisi, L. Vannucci, G. Viesti and T. Virgili, Proceedings of the 17th Quark Matter Conference, Oakland, USA, 2004, Journal of Physics G, vol. 30, p. 1091-1095, 2005

The ALICE Silicon Pixel Detector: System, Components and Test Procedures - P. Riedler, G. Anelli, F. Antinori, A. Badala, A. Boccardi, G.E. Bruno, M. Burns, I.A. Cali, M. Campbell, M. Caselle, S. Ceresa, P. Chochula, M. Cinausero, J. Conrad, R. Dima, D. Elia, D. Fabris, R.A. Fini, E. Fioretto, F. Formenti, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, P. Nilsson, F. Osmic, G.S. Pappalardo, V. Paticchio, A. Pepato, G. Prete, A. Pulvirenti, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato Matos, R. Turrisi, L. Vannucci, G. Viesti and T. Virgili, Proceedings of the 10th European Symposium on Semiconductor Detectors, Wildbad Kreuth, Germany, 2005, Nuclear Instruments & Methods in Physics Research Section A (2006), vol. 568, no. 1, p. 284-288

Infrared Laser Testing of ALICE Silicon Pixel Detector Assemblies - F. Osmic, G. Anelli, F. Antinori, A. Badala, A. Boccardi, G.E. Bruno, M. Burns, I.A. Cali, M. Campbell, M. Caselle, S. Ceresa, P. Chochula, M. Cinausero, J. Conrad, R. Dima, D. Elia, D. Fabris, R.A. Fini, E. Fioretto, F. Formenti, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, P. Nilsson, G.S. Pappalardo, V. Paticchio, A. Pepato, G. Prete, A. Pulvirenti, P. Riedler, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato Matos, R. Turrisi, L. Vannucci, G. Viesti and T. Virgili, Proceedings of the PIXEL 2005 Conference, Bonn, Germany, 2005, Nuclear Instruments & Methods in Physics Research A (2006), vol. 565, p. 13-17

Overview and Status of the ALICE Silicon Pixel Detector - P. Riedler, G. Anelli, F. Antinori, A. Badala, A. Boccardi, G.E. Bruno, M. Burns, I.A. Cali, M. Campbell, M. Caselle, S. Ceresa, P. Chochula, M. Cinausero, J. Conrad, R. Dima, D. Elia, D. Fabris, R.A. Fini, E. Fioretto, F. Formenti, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, P. Nilsson, F. Osmic, G.S. Pappalardo, V. Paticchio, A. Pepato, G. Prete, A. Pulvirenti, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato Matos, R. Turrisi, L. Vannucci, G. Viesti and T. Virgili, Proceedings of the PIXEL 2005 Conference, Bonn, Germany, 2005, Nuclear Instruments & Methods in Physics Research Section A (2006), vol. 565, p. 1-5

Performance of ALICE silicon pixel detector prototypes in high energy beams - D. Elia, G. Anelli, F. Antinori, A. Badala, G.E. Bruno, M. Burns, I.A. Cali, M. Campbell, M. Caselle, S. Ceresa, P. Chochula, M. Cinausero, J. Conrad, R. Dima, D. Fabris, R.A. Fini, E. Fioretto, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, P. Nilsson, F. Osmic, G.S. Pappalardo, V. Paticchio, A. Pepato, G. Prete, A. Pulvirenti, P. Riedler, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato De Matos, R. Turrisi, L. Vannucci, G. Viesti, T. Virgili, Proceedings of the PIXEL 2005 Conference, Bonn, Germany, 2005, Nuclear Instruments & Methods in Physics Research Section A (2006), vol. 565, p. 30-35

Test of prototypes of the ALICE silicon pixel detector in a multi-track environment - A. Pulvirenti, G. Anelli, F. Antinori, A. Badala, G.E. Bruno, M. Burns, I.A. Cali, M. Campbell, M. Caselle, S. Ceresa, P. Chochula, M. Cinausero, J. Conrad, R. Dima, D. Elia, D. Fabris, R.A. Fini, E. Fioretto, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, F. Osmic, G.S. Pappalardo, V. Paticchio, A. Pepato, G. Prete, P. Riedler, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato De Matos, R. Turrisi, L. Vannucci, G. Viesti, T. Virgili, Proceedings of the PIXEL 2005 Conference, Bonn, Germany, 2005, Nuclear Instruments & Methods in Physics Research Section A (2006), vol. 565, p. 18-22

The Mechanics and Cooling System of the ALICE Silicon Pixel Detector - A. Pepato, G. Anelli, F. Antinori, A. Badala, G.E. Bruno, M. Burns, I.A. Cali, M. Campbell, M. Caselle, S. Ceresa, P. Chochula, M. Cinausero, J. Conrad, R. Dima, D. Elia, D. Fabris, R.A. Fini, E. Fioretto, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, F. Osmic, G.S. Pappalardo, V. Paticchio, G. Prete, A. Pulvirenti, P. Riedler, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato Matos, R. Turrisi, L. Vannucci, G. Viesti and T. Virgili, Proceedings of the PIXEL 2005 Conference, Bonn, Germany, 2005, Nuclear Instruments & Methods in Physics Research Section A (2006), vol. 565, p. 6-12

The assembly of the first sector of the ALICE silicon pixel detector - S. Moretto, G. Anelli, F. Antinori, A. Badala, A. Boccardi, G.E. Bruno, M. Burns, I.A. Cali, M. Campbell, M. Caselle, S. Ceresa, P. Chochula, M. Cinausero, J. Conrad, R. Dima, D. Elia, D. Fabris, R.A. Fini, E. Fioretto, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, P. Nilsson, F. Osmic, G.S. Pappalardo, V. Paticchio, A. Pepato, G. Prete, A. Pulvirenti, P. Riedler, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato De Matos, R. Turrisi, L. Vannucci, G. Viesti, T. Virgili, Journal of Physics: Conference Series, vol. 41, issue 1, p. 361-368, 2006

Beam test performance and simulation of prototypes for the ALICE silicon pixel detector - J. Conrad, G. Anelli, F. Antinori, A. Badala, R. Barbera, A. Boccardi, M. Burns, G.E. Bruno, I.A. Cali, M. Campbell, M. Caselle, P. Chochula, S. Ceresa, M. Cinausero, R. Dima, D. Elia, D. Fabris, E. Fioretto, R.A. Fini, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, A. Mors, P. Nilsson, M.L. Noriega, F. Osmic, G.S. Pappalardo, V. Paticchio, A. Pepato, G. Prete, A. Pulvirenti, P. Riedler, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato De Matos, R. Turrisi, L. Vannucci, G. Viesti, T. Virgili, Proceedings of the PSD7 Conference, Liverpool, UK, 2005, Nuclear Instruments & Methods in Physics Research A (2007), vol. 573, p. 1-3

ALICE control system - ready for LHC operation - L. Jirden, A. Augustinus, M. Boccioli, P. Chochula, G. De Cataldo, S. Kapusta, P. Rosinsky, C. Torcato de Matos, L. Wallet, M. Nitti, Proceedings of the 11th International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS 2007), Knoxville, Tennessee, USA

Handling large amounts of data in ALICE - Peter Chochula, André Augustinus, Vladimír Fekete, Lennart Jirdén, Svetozár Kapusta, Peter Rosinský, Proceedings of the 11th International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS 2007), Knoxville, Tennessee, USA

Production and Integration of the ALICE Silicon Pixel Detector - P. Riedler, G. Anelli, F. Antinori, A. Badala, G.E. Bruno, M. Burns, I.A. Cali, M. Campbell, M. Caselle, S. Ceresa, P. Chochula, M. Cinausero, R. Dima, D. Elia, D. Fabris, R.A. Fini, E. Fioretto, F. Formenti, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, M. Morel, S. Moretto, F. Osmic, G.S. Pappalardo, A. Pepato, G. Prete, A. Pulvirenti, F. Riggi, L. Sandor, R. Santoro, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato De Matos, R. Turrisi, H. Tydesjö, L. Vannucci, G. Viesti, T. Virgili, Nuclear Instruments & Methods in Physics Research A (2007), vol. 572, p. 128-131

The ALICE experiment at the CERN LHC - K. Aamodt et al., J. Instrum., 2008, vol. 3, p. S08002, 259 p.

The ALICE Silicon Pixel Detector: readiness for the first proton beam - R. Santoro, G. Aglieri Rinella, F. Antinori, A. Badala, F. Blanco, C. Bombonati, C. Bortolin, G.E. Bruno, M. Burns, I.A. Cali, M. Campbell, M. Caselle, C. Cavicchioli, A. Dainese, C. Di Giglio, R. Dima, D. Elia, D. Fabris, J. Faivre, R. Ferretti, R.A. Fini, F. Formenti, S. Kapusta, A. Kluge, M. Krivda, V. Lenti, F. Librizzi, M. Lunardon, V. Manzari, G. Marangio, A. Mastroserio, M. Morel, S. Moretto, M. Nicassio, A. Palmeri, G.S. Pappalardo, V. Paticchio, A. Pepato, A. Pulvirenti, P. Riedler, F. Riggi, R. Romita, L. Sandor, F. Scarlassara, G. Segato, F. Soramel, G. Stefanini, C. Torcato de Matos, R. Turrisi, H. Tydesjo, L. Vannucci, P. Vasta, G. Viesti, T. Virgili, Proceedings of the PIXEL 2008 International Workshop, Fermilab, Batavia, IL, USA, 23-26 September 2008, JINST 4 P03023, p. 516-521

Alignment of the ALICE Inner Tracking System with cosmic-ray tracks - K. Aamodt et al., J. Instrum., 2010, vol. 5, no. 03, p. P03003

Acknowledgments

First of all, I would like to thank Jozef Masarik, Karol Hollý and Branislav Sitár for giving me the opportunity to carry out my thesis work at CERN. My greatest thanks go to my supervisor, Peter Chochula, for his valuable advice and enduring guidance, and for encouraging and supporting me in all my efforts.

My special thanks go to the members of the ALICE Silicon Pixel Detector group, in particular Giorgio Stefanini and Vito Manzari, for giving me the opportunity to be part of the team. It was always a pleasure to work with the members of the pixel collaboration, especially Ivan Amos Cali, César Torcato de Matos, Romualdo Santoro, Petra Riedler, Fadmar Osmic, Mike Burns, Michel Morel, Alexander Kluge, Michael Campbell, Ken Wyllie, Federico Antinori and many others. Many thanks to the Detector Control System group for an excellent and entertaining working and personal atmosphere. I am grateful to Lennart Jirdén for his kindness and leadership, for taking me into the group, for the enjoyable working environment he created and for his organizational skills, also outside working hours. I had a great time facing DCS-related challenges with André Augustinus, Giacinto De Cataldo, Lionel Wallet, Marco Boccioli, Peter Chochula and Peter Rosinský.

I really enjoyed working with Vladimír Fekete, Alberto Colla, Chiara Zampolli and Jan Fiete Grosse-Oetringhaus on the AMANDA-Shuttle data exchange, and I thank them for their efficient and honest cooperation.

My gratitude goes to the IT-CO (currently EN-ICE) and IT-DM (currently IT-DB) groups, namely Manuel Gonzalez Berges, Piotr Golonka, Jacek Wojcieszuk and Luca Canali, for their excellent collaboration and for providing professional advice and consultancy. The tremendous PVSS archiving improvements were only possible thanks to their cooperative spirit. Acknowledgments are also due to the IT-DES group, ETM, Oracle and all members of the performance tuning team, namely Wayne Salter, Chris Lambert, Nilo Segura Chinchilla, Eric Grancher, Anton Topurov, Bjorn Engsig and Lothar Flatz.

Many thanks go to Karel Šafařík, Peter Chochula, Peter Rosinský, Pavol Vojtyla, Pavel Stejskal and Asen Christov for countless fruitful professional and personal discussions.

I would like to thank the people who supported me during the writing of this thesis and who were of particular importance for it: my girlfriend, my parents, my brother, my family and everyone patiently waiting for me to submit the thesis, and Matthew Buican for providing corrections and suggestions.