
Femtoscopy of proton-proton collisions in the ALICE experiment

DISSERTATION

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By Nicolas Bock, B.Sc., B.Eng., M.Sc.

Graduate Program in Physics

The Ohio State University 2011

Dissertation Committee: Professor Thomas J. Humanic, Advisor

Professor Michael Lisa #1

Professor Klaus Honscheid #2

Professor Richard Furnstahl #3

© Copyright by

Nicolas Bock

2011

Abstract

A Large Ion Collider Experiment (ALICE) at CERN has been designed to study matter at extreme conditions of temperature and pressure, with the long term goal of observing deconfined matter (free quarks and gluons), studying its properties and learning more details about the phase diagram of nuclear matter. The ALICE experiment provides excellent tracking capabilities in high multiplicity proton-proton and heavy ion collisions, making it possible to carry out detailed research on nuclear matter. This dissertation presents the study of the space-time structure of the particle emission region, also known as femtoscopy, in proton-proton collisions at 0.9, 2.76 and 7.0 TeV. The emission region can be characterized by taking advantage of the Bose-Einstein effect for identical bosons, which causes an enhancement in the number of produced identical pairs at low relative momentum. The geometry of the emission region is related to the relative momentum distribution of all pairs by the Fourier transform of the source function, so measuring the final relative momentum distribution allows the initial space-time characteristics to be extracted. Results show that there is a clear dependence of the femtoscopic radii on event multiplicity as well as transverse momentum, a signature of the transition of nuclear matter into its fundamental components and also of strong interaction among these. The present work also considers a physics-motivated parametrization of non-femtoscopic correlations to characterize the background signal. It is shown that at these high energies and multiplicities this parametrization does not work as it does for lower energies. A special chapter containing a detailed study of possible signatures of black hole formation is also presented.

Acknowledgments

I would like to thank my advisor Dr. Tom Humanic for all your support and guidance during these Ph.D. years; it has been a great pleasure to learn from you and work with you on such a great project as the ALICE experiment at CERN. Too bad we did not detect any black holes, but maybe at 14 TeV. Thank you also for all your encouragement to advance my career at conferences and workshops.

My co-advisor Dr. Mike Lisa: It was always very enlightening to go to your office and show you results or even code, as you always had an answer to my questions.

The Experimental Heavy Ion Group at Ohio State: Chris Anson, Zibi Chajecki, Dave Truesdale, Matt Steinpreis.

The Femtoscopy Group at CERN: Adam Kisiel, Dariusz Miskowiec, Sebastian Huber, Jorge Mercado, Johanna Gramling, and Mads Stormo Nielsen.

My collaborators and friends at ALICE and CERN: Mario Sitta, Francesco Prino, Dhevan Ghandagaran, Lucy Renshall, Carnita Hervet, Ulla Tihinen, Bjorn Nielsen, Emmanuelle Biolcati and Stephania Beole.

I would also like to thank all the professors at OSU that contributed in one way or another to my formation as a physicist (in no particular order): Dr. David Stroud, Dr. Eric Braaten, Dr. John Beacom, Dr. Junko Shigemitsu, Dr. Dick Furnstahl, Dr. Ulrich Heinz, Dr. Klaus Honscheid, Dr. Yuri Kovchegov, Dr. Julia Meyer, Dr. Samir Matur.

Many thanks to the OSU Physics Department staff with whom I interacted on many occasions: Mary Kay Jackson, Arnay Tate, Beth Deinlein, Brenda Mellet, Karen Kitts, Yavonne McGarry.

I wish to acknowledge the support of the U.S. National Science Foundation under grant PHY-0970048, and to acknowledge computing support from the Ohio Supercomputing Center.

I would like to thank the Physics Department at OSU for the continued support and the great opportunity of pursuing a PhD here. I enjoyed and learned a lot from all the classes I took, and the experience as a Teaching Associate was also very rewarding.
Finally I wish to thank my family for their constant support.

Vita

April 10, 1979 ...... Born, Bogotá, Colombia

January 2002 - September 2004 ...... Undergraduate Teaching Associate, Universidad de los Andes, Bogotá, Colombia
September 18, 2004 ...... B.S. Physics, Universidad de los Andes, Bogotá, Colombia
September 18, 2004 ...... B.S. Mechanical Engineering, Universidad de los Andes, Bogotá, Colombia
January 2005 - July 2006 ...... Adjunct Faculty Professor, Universidad Javeriana, Bogotá, Colombia
January 2006 - July 2006 ...... Adjunct Faculty Professor, Universidad del Bosque, Bogotá, Colombia
September 2006 - August 2009 ...... Graduate Teaching Associate, The Ohio State University, Columbus, Ohio
August 31, 2009 ...... M.S. Physics, The Ohio State University, Columbus, Ohio
September 2009 - August 2011 ...... Graduate Research Associate, The Ohio State University, Columbus, Ohio

Publications in refereed journals

1. Nicolas Bock for the ALICE Collaboration, Femtoscopy and energy-momentum conservation effects in proton-proton collisions at 900 GeV in ALICE. Workshop for young scientists on the physics of ultra-relativistic heavy ion collisions, June 21-26, La Londe Les Maures, France. 2011 J. Phys.: Conf. Ser. 270 012022, arXiv:1009.3157v1 [hep-ex].

2. Nicolas Bock, Thomas J. Humanic, Quantitative Calculations for Black Hole Production at the Large Hadron Collider, Int. J. Mod. Phys. A 24:1105-1118, 2009.

3. B. Alessandro et al., Operation and calibration of the Silicon Drift Detectors of the ALICE experiment during the 2008 data taking period, 2010 JINST 5 P04004.

Conference Proceedings

1. Nicolas Bock, Energy momentum conservation effects on two-particle correlation functions. VI Workshop on Particle Correlations and Femtoscopy, September 14-17, 2010, Kiev, Ukraine. arXiv:1101.5241v1 [hep-ex] (2010). Submitted to Physics of Elementary Particles and Atomic Nuclei, Letters.

Publications with the ALICE Collaboration

1. The ALICE Collaboration, Two-pion Bose-Einstein correlations in pp collisions at √s = 900 GeV at the LHC, Phys. Rev. D 82, 052001 (2010)

2. The ALICE Collaboration, Femtoscopy of pp collisions at √s = 0.9 and 7 TeV at the LHC with two-pion Bose-Einstein correlations, arXiv:1101.3665v1 [hep-ex], to be published in Physical Review C.

3. The ALICE Collaboration, Two-pion Bose-Einstein correlations in central Pb-Pb collisions at √sNN = 2.76 TeV, Phys. Lett. B 696:328-337, 2011, arXiv:1012.4035v2 [nucl-ex].

Invited Presentations

1. HBT Interferometry with the ALICE experiment. 26th Lake Louise Winter Institute, Lake Louise, Canada. Feb 20-26, 2011

2. Energy-momentum conservation effects on the two-particle correlation function. VI Workshop on Particle Correlations and Femtoscopy, Bogolyubov Institute for Theoretical Physics, Kiev, Ukraine. Sep 14-18, 2010

3. Femtoscopy and energy-momentum conservation effects in ALICE. Hot Quarks: Physics of ultra relativistic nucleus-nucleus collisions, La Londe Les Maures, France. June 21-27, 2010

4. Improving femtoscopy by characterizing energy-momentum conservation effects. Hayes Graduate Research Forum, The Ohio State University, Columbus, Ohio. May 1, 2010

5. Two-pion femtoscopy at 7 and 14 TeV pp collisions. V Workshop on Particle Correlations and Femtoscopy, CERN, Geneva, Switzerland. October 14-17, 2009.

Full List of Publications with the ALICE Collaboration

1. The ALICE Collaboration, Higher harmonic anisotropic flow measurements of charged particles in Pb-Pb collisions at √sNN = 2.76 TeV, arXiv:1105.3865v1 [nucl-ex], to be published in Physical Review Letters

2. The ALICE Collaboration, Production of pions, kaons and protons in pp collisions at √s = 900 GeV with ALICE at the LHC, Eur. Phys. J. C 71(6): 1655, 2011

3. The ALICE Collaboration, Rapidity and transverse momentum dependence of inclusive J/psi production in pp collisions at √s = 7 TeV, arXiv:1105.0380v1 [hep-ex], to be published in Phys. Lett. B

4. The ALICE Collaboration, Strange particle production in proton-proton collisions at √s = 0.9 TeV with ALICE at the LHC, Eur. Phys. J. C 71 (3), 1594 (2011)

5. The ALICE Collaboration, Centrality dependence of the charged-particle multiplicity density at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV, Phys. Rev. Lett. 106, 032301 (2011)

6. The ALICE Collaboration, Suppression of Charged Particle Production at Large Transverse Momentum in Central Pb-Pb Collisions at √sNN = 2.76 TeV, Phys. Lett. B 696 (2011) 30-39

7. The ALICE Collaboration, Elliptic flow of charged particles in Pb-Pb collisions at √sNN = 2.76 TeV, Phys. Rev. Lett. 105, 252302 (2010)

8. The ALICE Collaboration, Charged-particle multiplicity density at mid-rapidity in central Pb-Pb collisions at √sNN = 2.76 TeV, Phys. Rev. Lett. 105, 252301 (2010)

9. The ALICE Collaboration, Transverse momentum spectra of charged particles in proton-proton collisions at √s = 900 GeV with ALICE at the LHC, Physics Letters B 693 (2010) 53-68

10. The ALICE Collaboration, Midrapidity Antiproton-to-Proton Ratio in pp Collisions at √s = 0.9 and 7 TeV Measured by the ALICE Experiment, Phys. Rev. Lett. 105(7) (2010)

11. The ALICE Collaboration, Charged-particle multiplicity measurement in proton-proton collisions at √s = 7 TeV with ALICE at LHC, Eur. Phys. J. C (2010) 68: 345-354

12. The ALICE Collaboration, Charged-particle multiplicity measurement in proton-proton collisions at √s = 0.9 and 2.36 TeV with ALICE at LHC, Eur. Phys. J. C (2010) 68: 89-108

13. The ALICE Collaboration, Alignment of the ALICE Inner Tracking System with cosmic-ray tracks, J. Instrum. 5, P03003

14. The ALICE Collaboration, First proton-proton collisions at the LHC as observed with the ALICE detector: measurement of the charged-particle pseudorapidity density at √s = 900 GeV, Eur. Phys. J. C (2010) 65: 111-125

Fields of Study

Major Field: Experimental Nuclear Physics

Studies in Identical Pion Femtoscopy in proton-proton collisions with the ALICE detector: Thomas Humanic

Table of Contents

Page
Abstract ...... ii
Acknowledgments ...... iii
Vita ...... iv
List of Figures ...... xi
List of Tables ...... xiii
List of Algorithms ...... xiv

Chapters

1 Introduction 1
1.1 The proton-proton and heavy ion programs at the LHC ...... 1
1.2 Motivation for HBT studies at the LHC ...... 2
1.2.1 The QCD phase diagram ...... 2
1.2.2 Description of a heavy ion collision ...... 3
1.2.3 The HBT contribution ...... 5
1.3 In this thesis ...... 5

2 A Large Ion Collider Experiment ALICE 6
2.1 The Large Hadron Collider ...... 6
2.1.1 Current Status of the LHC ...... 7
2.2 The ALICE experiment ...... 7
2.2.1 The physics program at ALICE ...... 8
2.2.2 The ALICE detector Layout ...... 8
2.3 The ALICE offline framework ...... 15
2.3.1 AliFemto ...... 15

3 HBT Formalism 17
3.1 Hanbury Brown and Twiss interferometry ...... 17
3.2 The theoretical correlation function ...... 18
3.2.1 The one dimensional correlation function ...... 18
3.2.2 The three dimensional correlation function ...... 20
3.2.3 Coulomb and residual strong force effects ...... 20
3.2.4 Non femtoscopic correlations ...... 21
3.3 The experimental correlation function ...... 22
3.3.1 The experimental correlation function in one dimension ...... 22

3.3.2 The experimental correlation function in three dimensions ...... 23
3.4 Spherical Harmonic Decomposition of the Correlation Function ...... 25
3.4.1 Spherical Harmonic Decomposition ...... 25
3.5 Simulating HBT events ...... 26

4 Energy Momentum Conservation Induced Correlations 29
4.1 The EMCICs parametrization ...... 29

5 Femtoscopy of proton-proton collisions with the ALICE experiment 32
5.1 Momentum Resolution effects on the correlation function ...... 32
5.2 Femtoscopy of proton-proton collisions at √s = 2.76 TeV ...... 34
5.2.1 One dimensional analysis ...... 34
5.2.2 Three dimensional analysis ...... 38
5.3 Femtoscopy of proton-proton collisions at √s = 900 GeV and 7 TeV ...... 40
5.3.1 One dimensional analysis ...... 40
5.3.2 Three dimensional analysis ...... 40

6 Femtoscopy and Energy-Momentum Conservation effects 44
6.1 Preliminary Studies ...... 44
6.2 EMCICs and Femtoscopy of proton-proton collisions at 900 GeV and 7 TeV ...... 46
6.2.1 One dimensional EMCIC analysis ...... 46
6.2.2 Three Dimensional EMCIC Analysis ...... 50

7 Summary of femtoscopy results from ALICE 56

8 Quantitative Calculations For Black Hole Production At The Large Hadron Collider 58
8.1 Introduction ...... 58
8.2 Black hole formation models in CATFISH ...... 59
8.2.1 Event simulation in CATFISH ...... 61
8.3 Black hole formation signatures ...... 61
8.3.1 Black hole signal in the hadronic channel ...... 61
8.3.2 Signal from black hole Remnants ...... 68
8.4 Summary ...... 72
8.4.1 Have black holes been observed with the ALICE detector? ...... 73

Bibliography 75

Appendices

A Useful algorithms 80
A.1 Algorithm to calculate the correlation functions ...... 80
A.1.1 The correlated pairs ...... 80
A.1.2 The uncorrelated pairs ...... 81
A.2 Algorithm to obtain the Qout, Qside, Qlong projections of the three dimensional correlation function ...... 82

List of Figures

Figure Page

1.1 The QCD phase diagram ...... 3
1.2 Representation of the evolution of a heavy ion collision ...... 4

2.1 A schematic view of the LHC accelerator ring, the beam injection system and the experiments. Figures courtesy of CERN ...... 7
2.2 The ALICE Detector ...... 9
2.3 Inner Tracking System ...... 10
2.4 ...... 11
2.5 Transition Radiation Detector ...... 12

3.1 Two identical particles are emitted and detected simultaneously. The two possible ways in which the particles can be emitted results in the Bose-Einstein quantum mechanical effect ...... 17
3.2 The measurement of the two particle momenta, and Q = p1 − p2, allows extraction of the system source size R ...... 18
3.3 A general Gaussian source ...... 20
3.4 Momenta of two particles as seen in the laboratory frame ...... 24
3.5 View of the transverse plane and the three axes of the LCMS coordinate system ...... 24
3.6 Numerator and denominator of a simulated correlation function ...... 27
3.7 Simulated correlation function with a Gaussian fit ...... 28

5.1 Momentum Resolution Study Flow Diagram ...... 33
5.2 Momentum Resolution Study ...... 33
5.3 Comparison of Pythia and Phojet baselines ...... 35
5.4 Fitting the correlation function ...... 36
5.5 kT and Multiplicity dependence of Rinv and λ at 2.76 TeV ...... 37
5.6 Fit Range Study of the correlation function at 2.76 TeV ...... 37
5.7 Gaussian vs. Exponential Rinv ...... 38
5.8 Projections of the fit to the three dimensional Correlation Function ...... 38
5.9 kT dependence of the three dimensional radii ...... 39

5.10 Average pair momentum kT and Multiplicity dependence of Rinv and λ at 900 GeV ...... 41
5.11 Average pair momentum kT and Multiplicity dependence of Rinv and λ at 7 TeV ...... 41
5.12 Average pair momentum and multiplicity dependence of 3D Gaussian radii at 900 GeV and 7 TeV ...... 42
5.13 Multiplicity dependence of 3D Gaussian radii at 900 GeV and 7 TeV ...... 43
5.14 Comparison of femtoscopic radii from heavy ion and proton-proton systems ...... 43

6.1 The four EMCIC histograms {X} are shown together with CEMCIC(Q) (yellow) ...... 45
6.2 Equation (4.9) is used to fit Monte Carlo Data ...... 45
6.3 Numerator and denominator of the correlation function ...... 46
6.4 Correlation function from 900 GeV pp events ...... 46
6.5 Fit to the kT and multiplicity integrated correlation function from simulated p+p events at 900 GeV ...... 47
6.6 EMCIC Fit to Monte Carlo data at 900 GeV ...... 48
6.7 Comparison of data at 900 GeV to Monte Carlo simulations ...... 49
6.8 EMCIC Fit to 900 GeV data ...... 50
6.9 EMCIC Fit to 7 TeV p+p Monte Carlo data ...... 51
6.10 EMCIC Fit to 7 TeV data ...... 52
6.11 EMCIC Fit to 7 TeV p+p data ...... 52
6.12 Two dimensional projection of the correlation function ...... 53
6.13 3D EMCIC Fit to the correlation function from 900 GeV p+p Monte Carlo ...... 54
6.14 3D EMCIC Fit to the correlation function from 900 GeV p+p data ...... 55

8.1 Comparison of the PT for different values of MP with the QCD background ...... 62
8.2 Comparison of the PT with applied cuts for different values of MP and the QCD background ...... 63
8.3 Combined signal of the QCD background and the black hole signal for MP = 1 TeV ...... 64
8.4 Comparison of PT for MP of 1 TeV for different numbers of extra dimensions and the QCD background ...... 65
8.5 Comparison of the three different black hole formation models for MP = 1 TeV ...... 66
8.6 Comparison of two values of the black hole mass at formation (Xmin) and black hole mass at evaporation (Qmin) and the QCD background. Model: NGL with MP = 1 TeV ...... 67
8.7 Comparison of the PT distribution in the NGL model with and without black hole remnants ...... 68
8.8 Comparison of the PT in the YN model with and without black hole remnants ...... 69
8.9 Comparison of the PT in the YR model with and without black hole remnants ...... 70
8.10 Comparison of black hole remnant mass with the reconstructed mass from the TOF for MP = 1 TeV, Qmin = 1 ...... 71
8.11 Charge Distribution among the black hole remnants for MP = 1 TeV, Qmin = 1 ...... 72

8.12 Difference in the black hole remnant mass and the reconstructed mass for MP = 1 TeV, Qmin = 1 ...... 73
8.13 Comparison of different black hole remnant mass distributions for MP = 1 TeV ...... 74

List of Tables

Table Page

3.1 Source functions and their corresponding correlation functions ...... 19

5.1 Summary of momentum resolution effect on fit parameters...... 34

List of Algorithms

1 Algorithm to calculate the numerator of the two particle correlation function ...... 81
2 Algorithm to calculate the denominator of the two particle correlation function ...... 81
3 Algorithm to calculate the projections of the 3D fit function on the Qout, Qside and Qlong axes ...... 83

Chapter 1

Introduction

1.1 The proton-proton and heavy ion programs at the LHC

The European Organization for Nuclear Research (CERN), located in Geneva across the border of Switzerland and France, was founded in 1954 with the idea of doing state of the art research in nuclear physics to learn about the fundamental interactions between particles. It started as a collaboration of 12 European countries that saw the possibility of joining efforts to have a specialized physics laboratory. Since then, many experiments have been carried out there, starting in 1957 with the 600 MeV Synchrocyclotron, then in 1959 with the Proton Synchrotron running at 28 GeV; in 1971 the world's first proton-proton collider was built, the Intersecting Storage Rings (ISR). In 1976 the Super Proton Synchrotron, an accelerator 7 km in circumference, was commissioned, and it provided proton-antiproton beams at 400 GeV. This project probed the inner structure of protons, studied matter-antimatter asymmetries and discovered the W and Z bosons in 1983. The next accelerator was commissioned in 1989: the Large Electron-Positron collider (LEP), with a 27 km circumference, operated for 11 years at up to 100 GeV producing millions of Z and W bosons, and it also served to prove that there are only 3 generations of particles in the Standard Model of particle physics. The Large Hadron Collider (LHC) was conceived in the 1980s, approved in 1994, and its construction started in 2000 using the same tunnel that was used by LEP. There are six experiments that use the beam of the LHC. A Toroidal LHC Apparatus (ATLAS) and the Compact Muon Solenoid (CMS), the two largest detectors at the LHC, will search for the Higgs boson, an elusive particle predicted by the Standard Model which would explain how particles get their mass. Other research topics are top quark production, electro-weak symmetry breaking and the mysterious dark matter and dark energy of the Universe, among others. A Large Ion Collider Experiment (ALICE) will study the quark-gluon plasma, a state of matter that would have existed at the very beginning of the Universe. The LHC beauty (LHCb) experiment will study CP violation, which would help to explain why the Universe is

made of matter and not antimatter. The LHC forward (LHCf) experiment uses forward particles created inside the LHC as a source to simulate cosmic rays in laboratory conditions. Finally, the Total Elastic and diffractive cross section Measurement experiment (TOTEM) studies forward particles to focus on physics that is not accessible to the general-purpose experiments. Among a range of studies, it will measure, in effect, the size of the proton and also monitor accurately the LHC's luminosity.

The LHC was first commissioned on September 10, 2008, when the first beams circulated in the accelerator in both directions. A few days after the successful start up, an electrical failure caused a helium leak in the tunnel, and the LHC had to be turned off for more than one year to repair and upgrade the magnets to prevent these problems in the future. The delay was long but it proved to be worth the effort, because on November 23, 2009 the LHC came back online without problems. During the initial data taking period between November and December 2009 the ALICE experiment recorded 330k proton-proton events at 900 GeV. After a short technical stop at the beginning of 2010, more than 200M proton-proton events at 7 TeV and 8M additional 900 GeV events were recorded. The first heavy ion run at the LHC took place in November of 2010 and delivered 30M Pb-Pb events at 2.76 TeV, followed in February 2011 by a proton-proton run at the same energy, providing the reference for certain heavy ion studies, such as studies of the quark gluon plasma and of nuclear modification factors. The luminosity has been increased for the 7 TeV runs in 2011 in order to gather statistics on low cross section events. This is the dawn of the LHC era, an exciting time and a great opportunity for the particle physics community to discover new physics as well as rediscover all the known particles and processes at higher center of mass energies.

1.2 Motivation for HBT studies at the LHC

1.2.1 The QCD phase diagram

High energy heavy ion and proton-proton collisions are the tools used to study matter at extreme temperatures and low baryon density, and to determine its properties in different regions of the phase diagram of nuclear matter (also referred to as the QCD phase diagram, because Quantum Chromo-Dynamics describes the interactions among quarks). Figure 1.1 shows the phase diagram as it is known or predicted to be now. At very low temperature and very low chemical potential, or baryon density, the state of matter is simply vacuum; above µ ∼ 922 MeV there is nuclear matter, and as baryon density increases further, matter is eventually expected to be in a superfluid and superconducting phase, which is thought to exist in neutron stars. High temperatures are reached by colliding two beams of particles traveling in opposite directions, or by colliding a beam of particles with a fixed target. When nuclear matter is

Figure 1.1: The QCD phase diagram. Figure taken from [1].

heated, it can reach a hadronic fluid phase and, at higher energies, the quark gluon plasma (QGP) phase, which is a deconfined state of matter where quarks and gluons are not bound together and the chiral symmetry is restored. Results from RHIC experiments [2] and from lattice model calculations [3] estimate a critical point at a temperature of T ∼ 170 MeV. Increasing the energy of the collision makes it possible to follow different paths along the diagram: SPS energies get close to the phase transition, and RHIC energies are large enough to produce a phase transition. It is still debated whether it is a first order phase transition with an associated latent heat, or a second order phase transition with a smooth cross over. LHC energies would allow for a second order phase transition into the QGP phase.

1.2.2 Description of a heavy ion collision

Heavy ion experiments at RHIC have shown that the matter created in high energy collisions has fluid-like behavior, because the particles show collective motion in the final state. The particle multiplicities obtained at 7 TeV will be for the first time comparable to multiplicities observed in peripheral heavy ion collisions at lower energies, allowing the study of

Figure 1.2: Representation of the evolution of a heavy ion collision, showing the different stages and the models used to describe them. Figure from [10].

particle production mechanisms. Figure 1.2 shows a representation of the time evolution of temperature in a heavy ion collision. The collision occurs at time t = 0 and the system rapidly reaches a state of deconfined quarks and gluons, which interact strongly and rescatter with very short mean free path lengths initially, allowing the system to thermalize quickly. The red line represents the temperature as a function of time, and the state of matter is specified above the curve: deconfined quarks and gluons in the very early stage, QGP after thermalization, a hadron resonance gas after the phase transition, and free streaming particles after freeze-out. The curve flattens at the phase transition only if there is a latent heat (first order phase transition); otherwise it would be continuous in the first derivative. Each stage is labeled by the theoretical model used to describe it: the early times are described by the Glauber model [4] and the Color Glass Condensate (CGC) [5], the early QGP stage by ideal hydrodynamics [6, 7] as well as partonic rescattering models [8, 9], and viscous hydrodynamics works better as the system approaches the phase transition. After the phase transition, viscous hydrodynamics and hadronic cascade models are used. This qualitative description of a heavy ion collision also applies to a proton-proton collision, but with shorter timescales and lower temperatures. There is still a lot to learn about nuclear matter, which is a big motivation for all the studies being done at the ALICE experiment.

1.2.3 The HBT contribution

HBT interferometry has been widely used in particle physics since Goldhaber, Goldhaber, Lee and Pais first used it at the Bevatron in 1960 to measure the radii of particle emission sources in proton-antiproton collisions. Other laboratories that have measured HBT radii in proton-proton and heavy ion systems are CERN with the SPS, the E795 experiment, and the Brookhaven National Laboratory, first with the Alternating Gradient Synchrotron (AGS) E895 experiment and later with the Relativistic Heavy Ion Collider (RHIC) STAR, PHENIX and PHOBOS experiments. The range in energies covered in these studies is from 2 AGeV to 2.76 ATeV in heavy ions and from 2 GeV to 7 TeV in proton-proton. HBT studies have shown that the system created is not just a source of free streaming particles, but a system with a high level of interaction among the particles. It provides us with a picture of the freeze-out configuration for particles with different momenta, as well as information about the initial eccentricity in space, which together with the measured final eccentricity in momentum-space allows one to study the evolution of the system. The lifetime of the system can also be extracted from a full three dimensional analysis. The variety of topics that can be studied with HBT interferometry has direct relevance to the study of the QGP and the QCD phase diagram, making it an exciting subject.
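The quantitative link between the measured momentum correlations and the source size is developed in Chapter 3; as a notational preview only, the standard one-dimensional Gaussian parametrization of the two-particle correlation function can be written as:

```latex
% Two-particle correlation function: pair distribution over the
% product of single-particle distributions.
C(\mathbf{q}) \;=\;
\frac{P_2(\mathbf{p}_1,\mathbf{p}_2)}{P_1(\mathbf{p}_1)\,P_1(\mathbf{p}_2)},
\qquad
q_{\mathrm{inv}} = \sqrt{-(p_1 - p_2)^\mu \,(p_1 - p_2)_\mu}\,.

% For a chaotic Gaussian source, Bose-Einstein statistics produce a
% low-q enhancement whose width is inversely related to the source size:
C(q_{\mathrm{inv}}) \;=\; 1 + \lambda\, e^{-q_{\mathrm{inv}}^{2}\,R_{\mathrm{inv}}^{2}}.
```

Here λ is the chaoticity parameter and R_inv the invariant radius: a larger source gives a narrower low-momentum enhancement, which is the Fourier-transform relation between source function and correlation function.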

1.3 In this thesis

In this thesis I present the systematic study of HBT radii at 900 GeV, 2.76 TeV and 7 TeV in the ALICE experiment at the LHC. The radii are studied in multiplicity and pair average momentum bins in order to access different kinematic regions and get a better picture of the radii at freeze-out, given the idea that faster particles leave the system earlier and slower particles interact for a longer time. An important feature of the study of HBT radii is the determination of the baseline in the two-particle correlation function. This topic is studied here by using Monte Carlo simulations tuned to the data, and also using the method of Energy Momentum Conservation Induced Correlations (EMCICs), which is a first-principles approach to non femtoscopic correlations. I also present a study on the formation of mini black holes at the LHC nominal energy of 14 TeV.
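The experimental correlation function mentioned above is built by comparing same-event pairs to pairs drawn from different events. The actual analysis uses the AliFemto framework (Chapter 2) and the algorithms of Appendix A; the following is only a minimal illustrative sketch, with simplified kinematics (three-momentum difference instead of the invariant relative momentum) and hypothetical function names:

```python
import itertools
import math

def qinv(p1, p2):
    """Magnitude of the three-momentum difference; a simplified stand-in
    for the invariant relative momentum used in the real analysis."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def correlation_function(events, bins=20, qmax=1.0):
    """C(q) = A(q)/B(q): same-event pairs over mixed-event pairs."""
    width = qmax / bins
    num = [0] * bins  # A(q): pairs from the same event (carry the correlation)
    den = [0] * bins  # B(q): pairs from different events (uncorrelated reference)
    for ev in events:
        for p1, p2 in itertools.combinations(ev, 2):
            q = qinv(p1, p2)
            if q < qmax:
                num[int(q / width)] += 1
    for ev1, ev2 in itertools.combinations(events, 2):
        for p1 in ev1:
            for p2 in ev2:
                q = qinv(p1, p2)
                if q < qmax:
                    den[int(q / width)] += 1
    # Bin-by-bin ratio; empty reference bins are returned as 0.
    return [a / b if b else 0.0 for a, b in zip(num, den)]
```

In the real analysis the denominator is built by mixing events with similar multiplicity and vertex position, and the ratio is normalized in a large-q region where no femtoscopic correlation is expected.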

1 Solenoid Tracker At RHIC
2 Pioneering High Energy Nuclear Interaction eXperiment
3 Not an acronym

Chapter 2

A Large Ion Collider Experiment ALICE

This chapter will introduce the reader to the LHC accelerator and the ALICE experiment.

2.1 The Large Hadron Collider

The Large Hadron Collider (LHC) [11] is a so-called discovery experiment, in that it will be probing energies never explored before in the search for the Higgs boson, the quark gluon plasma, dark matter, electro-weak symmetry breaking and even more exotic physics such as the production of mini black holes, extra dimensions and supersymmetric particles, among others. The LHC is located at the French-Swiss border, west of Geneva. It has a circumference of 27 km, a design energy of √s = 14 TeV in the center of mass for protons, and √sNN = 5.5 TeV for lead ions. The LHC is a synchrotron that accelerates two counter rotating beams made out of 2808 bunches, each consisting of 1.15 × 10^11 protons, that circulate in separate beam pipes. The layout can be seen in Figure 2.1 (left). The accelerator bends the beams around the ring with help from 1232 superconducting dipole magnets that operate at a temperature of 1.9 K and a magnetic field between 0.535 T and 8.33 T. The nominal bunch length is 7.55 cm at collision and the bunch separation is 25 ns. The spatial dimension of the bunches has to be minimized to provide a high number of collisions per time interval at the collision points, i.e. a high luminosity. The design luminosity is 10^34 cm^-2 s^-1 for protons and 10^27 cm^-2 s^-1 for lead ions. In mid 2011 the integrated luminosity (amount of data) delivered by the LHC has been on the order of 1 pb^-1 for ALICE and about 1 fb^-1 for ATLAS and CMS. Particles are injected before points 2 and 8, see Figure 2.1 (right). The radio-frequency (RF) system that accelerates the particles is located at point 4; the beam dumping system is located at point 6. At points 3 and 7, collimation systems are placed that "clean" the beam
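As a quick sanity check on these machine parameters, the interaction rate is just the luminosity times the cross section, R = L·σ. The ~70 mb inelastic pp cross section used below is an assumed illustrative value, not a number quoted in this chapter:

```python
MB_TO_CM2 = 1e-27  # 1 millibarn = 1e-27 cm^2

def interaction_rate(luminosity_cm2_s, cross_section_mb):
    """Interactions per second: R = L * sigma."""
    return luminosity_cm2_s * cross_section_mb * MB_TO_CM2

# Design pp luminosity quoted in the text: 1e34 cm^-2 s^-1.
rate = interaction_rate(1e34, 70.0)  # ~7e8 inelastic interactions per second
# With 25 ns bunch spacing there are 4e7 crossings per second (assuming
# every crossing is filled), i.e. rate * 25e-9 = 17.5 interactions/crossing.
print(rate, rate * 25e-9)
```

This order-of-magnitude pileup is why the general-purpose experiments run at the full design luminosity while ALICE, which needs cleaner events for tracking, takes data at a reduced luminosity.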

Figure 2.1: A schematic view of the LHC accelerator ring, the beam injection system and the experiments. Figures courtesy of CERN.

by removing particles that either have too large a spatial distance to their bunch (particles in the so-called beam halo) or are too fast or too slow, and thus separated in momentum-space.

2.1.1 Current Status of the LHC

The LHC is scheduled to run through the end of 2012 at √s = 7 TeV with proton beams. There will be lead ion runs at √sNN = 2.76 TeV for about a month at the end of 2011 and 2012, and there is a special request for a proton-ion run, which will be fundamental for the study of the QGP phase. A long technical stop will come after that, closing three years of continuous data taking, in order to prepare the accelerator for the nominal energies.

2.2 The ALICE experiment

The ALICE experiment is a dedicated experiment for heavy ion collisions, and for the study of QCD matter, the strong interaction part of the Standard Model. It is designed to address the physics of the quark gluon plasma at extreme values of energy density and temperature, and it will allow the study of hadrons, electrons, muons and photons at the highest multiplicities [12]. ALICE was first proposed in 1990 [13] and approved in 1997. The detector performance and its capacity for physics analysis are described in detail in the Physics Performance Report [14, 15]. The ALICE collaboration has approximately 1300 members from 105 institutes in 33 countries.

2.2.1 The physics program at ALICE

ALICE addresses a broad range of observables which will complement the research done at previous accelerators (AGS, SPS, RHIC). The following list describes the physics observables accessible in ALICE:

• Global event features such as multiplicity and transverse or zero-degree energy flow define the geometry of the collision, i.e. the impact parameter, reaction plane orientation and number of participating nucleons.

• Nuclear modifications of the parton distribution functions can be extracted by comparing global features to specific hard processes (direct photons or heavy flavors in p+p, p+A and A+A).

• Parton kinematics and energy loss in the plasma will be probed by measurements of heavy-flavor production and jet fragmentation.

• Elliptic flow (v2) is sensitive to the equation of state and the fluid viscosity.

• Prompt photons can reveal the thermal radiation from the early phase.

• Quarkonia production (mesons consisting of a quark and its antiquark) probes deconfinement.

• Resonance parameters such as masses and branching ratios are sensitive to chiral symmetry restoration.

• Transverse momentum distributions and particle yields are governed by thermodynamic properties and the hydrodynamical evolution close to the phase transition.

• Particle interferometry (HBT) measures the space-time evolution of the system.

2.2.2 The ALICE detector layout

The ALICE detector is composed of 17 sub-detectors with different functions, such as vertex tracking, particle identification (PID), photon detection, muon detection, spectator detection and cosmic-ray detection. In this section I will briefly explain the layout and functionality of each detector component. The ALICE detector is shown in Figure 2.2.

Inner Tracking System (ITS)

Figure 2.2: The ALICE detector.

The tracking system of ALICE is composed of the Inner Tracking System (ITS) and the Time Projection Chamber (TPC). It is worth noting at this point that these are the two detectors with the largest impact on the femtoscopic analysis, because the pion tracks are mostly reconstructed from hits in the ITS together with hits and energy deposition in the TPC. The Time-Of-Flight detector is also used to improve particle identification, although the ITS and TPC remain the detectors most relevant for femtoscopy. A study of the effect of the momentum resolution on the extracted HBT radii is presented in Section 5.1. The ITS is composed of six layers of silicon detectors. The two innermost layers are Silicon Pixel Detectors (SPD) at r = 3.9 cm and r = 7.6 cm from the beam axis. The next two layers are Silicon Drift Detectors (SDD) at r = 15.0 cm and r = 23.9 cm, and the outermost two layers are Silicon Strip Detectors (SSD) at r = 38.0 cm and r = 43.0 cm. The acceptance is η = ±2 for the SPD and η = ±0.97 for the SDD and SSD. The rapidity of a particle with respect to a longitudinal axis z is

y = (1/2) ln[(E + pz)/(E − pz)]  (2.1)

If the mass of the particle is very small compared to its energy, then p ≈ E and pz ≈ E cos θ, so the rapidity becomes independent of the particle type. In this case it is called the pseudorapidity:

η = (1/2) ln[(E + E cos θ)/(E − E cos θ)] = (1/2) ln[(1 + cos θ)/(1 − cos θ)]  (2.2)

In an experiment with identical particle beams, a pseudorapidity of η = 0 corresponds to the direction perpendicular to the beam axis. For reference, the rapidity of the LHC beams is 6.6, 5.68 and 5.25 for the 3.5, 1.38 and 0.45 TeV beams, respectively. The tasks of the ITS are to localize the primary vertex with a resolution better than 100 µm, to reconstruct secondary vertices from resonance decays, to track and identify low-momentum particles in the non-relativistic region of the dE/dx spectrum, and to improve the tracking resolution together with the TPC. The SPD is also used as a trigger by requiring at least two fired pixel chips with a vertex pointing to the interaction region, which allows background events to be rejected. More complicated trigger combinations are also possible.

Figure 2.3: Schematic view of the 6 layers of silicon detectors in the Inner Tracking System [12].

The Time Projection Chamber (TPC)

The TPC was designed as the primary tracking detector; it can track and identify up to 20000 particles in one event. It is located between 0.85 m and 2.5 m from the beam pipe, and its acceptance is η = ±1.5 at r = 1.4 m and η = ±1.0 at r = 2.5 m. Its volume is filled with a Ne/CO2/N2 gas mixture in a 90/10/5 ratio, and ionization electrons drift up to 2.5 m to the readout plates. The diameter of the TPC guarantees a dE/dx resolution better than 5%, so that it also contributes to particle identification in the region of the relativistic rise, up to momenta of 50 GeV/c. The momentum resolution is better than 2.5% for tracks with momenta below 4 GeV/c. The material budget of the ITS and TPC is on average less than 11% of a radiation length X0 (the characteristic amount of matter traversed by electrons and photons). The TPC is, due to its drift time of about 90 µs, the slowest detector in ALICE.

Figure 2.4: Schematic view of the Time Projection Chamber [12].

Transition Radiation Detector (TRD)

The TRD's task is to distinguish electrons from pions, especially at high momentum (above 1 GeV/c). Furthermore, it contributes to the tracking of particles and acts as a trigger on high-momentum electrons. The detector is based on transition radiation: photons with wavelengths in the region of soft X-rays, emitted when a charged particle propagates through boundaries between media that have different dielectric constants. The detector is located at radii from 2.9 m to 3.7 m. It is segmented into 18 sectors, each consisting of six layers, see Figure 2.5. In the Xe/CO2 gas mixture, the transition radiation is converted at the beginning of the drift region into an electron cluster which is subsequently detected. A built-in tracklet processor combines the information from the six layers to form tracklets; these are used to identify high-momentum electrons, which in turn provide an L1 trigger. Such a trigger is useful, for example, to increase the yield of Υs and high-pT J/Ψs.

Figure 2.5: Schematic view of the Transition Radiation Detector. It has 18 super modules, each containing 30 readout chambers (red) arranged in five stacks of six layers [12].

The Time-Of-Flight Detector (TOF)

The TOF detector's main task is to identify protons, kaons and pions by measuring the time between the collision and the arrival of the particles at the TOF. The K/p separation up to 4 GeV/c and the π/K separation up to 2.5 GeV/c are better than 3σ. The TOF system provides a pre-trigger signal to the TRD to turn on its electronics, and an L0 trigger for ultra-peripheral collisions. The detector consists of 18 sectors and is located at a radius of 3.8 m. The 140 m^2 active area is a high-resolution array of so-called multigap resistive plate chambers. These are stacks of very thin structures (250 µm) featuring a high and uniform electric field and a C2H2F4/i-C4H10/SF6 gas mixture, so that any traversing particle immediately triggers an avalanche. The setup achieves a very good time resolution of about 40 ps. Combined with other uncertainties, e.g. the uncertainty in determining the exact time of the interaction, the time-of-flight measurement for single particles has an overall resolution better than 100 ps [12].

The Photon Spectrometer (PHOS)

The PHOS is a high-granularity calorimeter for measuring photons. It allows, for example, the measurement of π0 and η via their decay photons. For this purpose photons have to be discriminated from charged hadrons, which is partly achieved by a topological shower analysis. It features an excellent energy resolution: for 1 GeV photons, σE/E is about 4% [12]. The detector consists of an electromagnetic calorimeter of dense scintillating cells (about 20 radiation lengths X0) made of lead-tungstate crystals (PbWO4). It is located at a radius of 4.6 m and covers about 3.7% of the phase space of the central region. A set of multi-wire proportional chambers in front of PHOS is used to reject charged particles; this part of the detector is called the Charged-Particle Veto (CPV).

The Electro-Magnetic Calorimeter (EMCal)

The EMCal is a Pb-scintillator sampling calorimeter that, like the PHOS detector, measures photons, π0 and η via their decay photons. It is larger than PHOS, with an acceptance of about 23% of the phase space of the central region, but offers lower granularity and resolution. The detector is located approximately opposite to PHOS. The EMCal was added at a late stage of the experiment's design, and its construction therefore only started in 2008.

The High-Momentum Particle Identification Detector (HMPID)

The HMPID is a proximity-focusing Ring-Imaging Cherenkov (RICH) detector for the identification of high-momentum hadrons. It extends ALICE's π/K and K/p separation capability to 3 and 5 GeV/c, respectively, and therefore allows the inclusive measurement of charged particles within 1-5 GeV/c. The detector's acceptance covers about 5% of the central-region phase space. The detector consists of 10 m^2 of active CsI photocathode area, which represents the largest-scale application of a RICH.

The ALICE Cosmic Ray Detector (ACORDE)

ACORDE consists of 60 large scintillator modules that are used as a Level-0 trigger on cosmic rays. Cosmic-ray events are used for calibration and alignment. ACORDE was used during the detector commissioning in 2007 and 2008. The rate of muons reaching the ALICE detector is about 4.5 Hz/m^2.

Forward Detectors

The Photon Multiplicity Detector (PMD) The PMD measures the multiplicity distribution of photons (e.g. decay products from π0 and η) in the forward region (2.3 < η < 3.7, full azimuth). It consists of two gas proportional chambers with a lead converter between them. The plane in front of the converter is used as a veto for charged particles, while the information from the second plane is used to identify photons. The detector is positioned at a distance of 3.64 m from the nominal interaction point. The PMD cannot be used as a trigger because of its slow readout.

The Forward Multiplicity Detector (FMD) The FMD measures the charged-particle multiplicity over a large fraction of phase space, -3.4 < η < -1.7 and 1.7 < η < 5.0, both in full azimuth. The detector is composed of silicon strips arranged in five rings at z = 3.2 m, 0.83 m, 0.75 m, -0.63 m and -0.75 m. Due to its slow readout (>1.2 µs) it cannot be used as a trigger either.

The V0 detector The information from the V0 detector is used as a minimum-bias trigger, to reject beam-gas events, and to provide a pre-trigger to the TRD. It consists of two arrays of segmented scintillator counters located at z = 3.4 m (2.8 < η < 5.1) and z = -0.9 m (-3.7 < η < -1.7). The time resolution is about 1 ns, which allows beam-gas events that occurred outside of the nominal interaction region to be identified.

The T0 detector The T0 ("time 0") detector measures the collision time with a precision of 25 ps. This information is used as a time reference for the TOF detector and to determine the vertex position with a precision of about 1.5 cm. A vertex position outside the region where collisions should occur is used as a beam-gas rejection signal. The detector consists of two units, each comprising twelve Cherenkov counters with quartz radiators. The units are located around the beam pipe at distances of 3.75 m (positive z) and 0.73 m (negative z) from the nominal interaction point.

The Zero-Degree Calorimeter (ZDC) The ZDC provides an estimate of the impact parameter of heavy-ion collisions through the measurement of the number of spectator nucleons, which is related to the energy carried forward, i.e. in the beam direction. The detector is located on both sides of ALICE, at a distance of 116 m from the nominal interaction point. The measurement is performed by two calorimeters, one for neutrons (called ZN, |η| < 8.8) and one for protons (called ZP, 6.5 < |η| < 7.5). At this distance from the interaction point, neutrons and protons are separated by the magnets in the beam line. When they are not in use, the calorimeters are moved out of the beam line by a lifting platform to reduce their exposure to ionizing radiation. The measurement of the impact parameter is complemented by an electromagnetic calorimeter (called ZEM, 4.8 < η < 5.7) which measures the total forward energy at z = 7.25 m. This allows central and very peripheral heavy-ion events, both of which deposit little energy in the forward ZDCs, to be distinguished. In very peripheral A-A collisions, a significant number of spectator nucleons is bound into fragments with a charge-to-mass ratio similar to that of Pb. These fragments stay in the beam pipes and therefore cannot be detected by the ZDCs. As a consequence, there can be a small amount of energy in the ZDCs for central events, where the number of spectators is small, but also for very peripheral events.

The MUON Spectrometer The task of the MUON spectrometer is to measure the complete spectrum of quarkonia (J/Ψ, Ψ′, Υ, Υ′, Υ″) with a mass resolution good enough to separate these states, as well as the φ meson. The separation of the Υ states requires a resolution of 100 MeV/c^2 in the 10 GeV/c^2 invariant-mass region. The production of open charm and beauty can also be studied. The spectrometer accepts particles in the range −4 < η < −2.5 and has full azimuthal coverage for muons with p > 4 GeV/c.

2.3 The ALICE offline framework

The ALICE offline framework AliRoot provides ALICE users with an extensive set of tools for purposes such as simulation (including the detector response, through GEANT3 and FLUKA), reconstruction, calibration, alignment, visualization and analysis. The development of AliRoot started in 1998, and it was used during the construction of ALICE to evaluate physics performance and design considerations. It is implemented in C++, with some FORTRAN modules. AliRoot is complemented by AliEn (ALICE Environment), which provides access to the data and distributed analysis on the computing grid. The amount of data that ALICE will analyze is on the order of 1 PB per year. One proton-proton event is on average 40 kB, a Pb-Pb event is 3 MB, and simulated events are about twice that size. This large amount of data is stored in about 60 computer clusters around the world, accessible through the computing grid. For the data analysis the user writes what is known as an "Analysis Task", which is executed for all tracks in a sequence of events. The femtoscopic analysis presented in this thesis uses the AliFemto analysis module from the soft physics working group (PWG2). Its functionality will be described in the next section.

2.3.1 AliFemto

The AliFemto analysis library provides all the functionality for the femtoscopic analysis, which will be explained in Chapter 3. The analysis task is configured with the use of a macro, in which the user can set all the required analysis parameters such as:

• Particle types: π+π+, π−π−, or non-identical particles as well.

• Particle masses.

• Vertex and multiplicity binning.

• Particle cuts: pT , η.

• Pair cuts: kT .

• Track cuts: merging and splitting.

• Track reconstruction status: TPC-track or global track.

AliFemto has a wide variety of analysis classes, the most relevant being:

AliFemtoQinvCorrFctn: A Qinv correlation function, see Section 3.2.1.

AliFemtoBPLCMSCorrFctn: A 3D correlation function in the LCMS system, see Section 3.2.2.

AliFemtoCorrFctn3DSpherical: A 3D correlation function in the LCMS system binned in spherical coordinates.

AliFemtoCorrFctnDirectYlm: A spherical harmonic decomposition of the correlation function, see Section 3.4.

AliFemtoModelCorrFctn: A model correlation function that uses Monte Carlo data, generates freeze-out coordinates and induces femtoscopic correlations, as explained in Section 3.5.

AliFemtoQinvCorrFctnEMCIC: A Qinv correlation function that also quantifies the energy-momentum conservation induced correlations (EMCICs), see Chapter 4 for details.

AliFemtoBPLCMSCorrFctnEMCIC: A 3D correlation function and EMCIC histograms binned in the Bertsch-Pratt Longitudinally Co-Moving System (Qout, Qside and Qlong).

AliFemtoCorrFctn3DSphericalEMCIC: A 3D correlation function and EMCIC histograms binned in spherical coordinates (Q, cos(θ) and φ).

Chapter 3

HBT Formalism

3.1 Hanbury Brown and Twiss interferometry

Hanbury Brown and Twiss developed in 1956 [16, 17] a method to study the angular size of stars, which was complementary to and, at the time, more effective than Michelson interferometry, because it uses the intensity instead of the amplitude and phase of the incoming photon waves. The method uses two detectors to measure the light arriving from two separate emission points on the star. Since the light waves consist of photons, which are indistinguishable, there are actually two possible routes the light waves can travel in order to reach the detectors on Earth, see Figure 3.1.

Figure 3.1: Two identical particles are emitted and detected simultaneously. The two possible ways in which the particles can be emitted result in the Bose-Einstein quantum mechanical effect.

The idea behind the experimental setup is to exploit this quantum mechanical effect, which arises from the symmetrization of the two-boson wave function and ultimately leads to an increased rate of observed pairs with small relative momentum. The problem with amplitude interferometry was that in order to increase the resolution the two detectors had to be placed further apart, and it was difficult to transmit the phase information across large distances. In intensity interferometry the phase of the waves is not important, because the signal is averaged over a long time and the phases cancel due to their random nature. Intensity interferometry proved to be useful in another field of physics as well, namely particle physics. In 1960 Goldhaber, Goldhaber, Lee and Pais [18] observed that the π+π+ and π−π− distributions in proton-antiproton collisions at the Bevatron showed a higher probability at low momentum difference than opposite-sign pairs did. In solving this puzzle, they represented the pions with plane waves, and soon discovered that the necessary symmetrization of the wave function described the phenomenon. The initial spatial separation of the particles is observed in the final state as an increased probability of finding pairs with similar momentum. HBT interferometry became an important part of the analysis in particle physics because it gives a clear picture of the geometry of the particle emission region (in one and three dimensions), the lifetime of the system and the particle dynamics. This chapter reviews the basic principles of HBT interferometry, or femtoscopy as it has been called in more recent papers [19]. The two terms will be used interchangeably in this thesis.

3.2 The theoretical correlation function

3.2.1 The one dimensional correlation function

Figure 3.2 shows a particle source emitting two particles with momenta p1 and p2 and the two possible paths which they can take to reach the detector.

Figure 3.2: The measurement of the two particle momenta and of their difference Q = p1 − p2 allows the extraction of the source size R.

If the particles are described as plane waves, the two-particle wave function for identical particles can be written as:

Ψ(p1, r1, p2, r2) = u(p1, p2) [e^{i(p1·r1 + p2·r2)} + e^{i(p1·r2 + p2·r1)}],  (3.1)

where u(p1, p2) is a normalization constant. The wave function squared gives the probability of detecting the two particles:

|Ψ|^2 = |u(p1, p2)|^2 [1 + cos((p1 − p2)·(r1 − r2))] = |u(p1, p2)|^2 [1 + cos(Q·Δr)],  (3.2)

where Q = p1 − p2 is the momentum difference and Δr = r1 − r2 is the relative emission point of the particles, which is identified with the source radius, see Fig. 3.2. Pairs with small relative momentum Q will be observed at a higher rate because identical bosons are more likely to be found in the same state. By measuring the probability distribution as a function of Q we can extract the relative emission point, i.e. the HBT radius R = Δr. The best way to achieve this is with the two-particle correlation function, which is obtained by integrating over all possible pair combinations from the event. Assuming that each pair is emitted according to a source function ρ(r'), the correlation function is obtained from the Koonin-Pratt equation [20, 21]:

Cth(Q) = ∫ dr' |Ψ(r1, p1, r2, p2)|^2 ρ(r') = 1 + ρ̂(Q),  (3.3)

which is basically the Fourier transform of the source function, ρ̂(Q). The theoretical correlation function can be calculated if the functional form of the particle source ρ(r) is known. It is very common to assume a Gaussian source, because it provides the standard minimal description of experimental data. Other source functions that might describe the data better are the exponential or the Lorentzian. The following table shows three possible functional forms for the source function and the corresponding correlation functions:

Source:       Gaussian                      Exponential                      Lorentzian
ρ(r) =        exp(−r^2/2R^2inv)             exp(−r/Rinv)                     (R^2inv r^2 + 1)^−1
C.F.:         Gaussian                      Lorentzian                       Exponential
Cth(Qinv) =   1 + λ exp(−R^2inv Q^2inv)     1 + λ (R^2inv Q^2inv + 1)^−1     1 + λ exp(−Rinv Qinv)

Table 3.1: Source functions and their corresponding correlation functions. λ is the correlation strength, and a factor of 1/ħc, with ħc = 0.19733 GeV·fm, is implied in every RinvQinv term.

3.2.2 The three dimensional correlation function

The particle emission source can also be characterized in three dimensions. A general Gaussian source is parametrized in the following way:

ρ(r) = exp(−Aij ri rj),  (3.4)

where Aij is a 3×3 symmetric matrix with six independent parameters related to the sizes Rout, Rside, Rlong and the tilt angles of the out-side-long axes with respect to the principal axes, see Figure 3.3.

Figure 3.3: A general Gaussian source can be tilted. Figure taken from [19]

The general correlation function for identical particles in the non-symmetric case reads:

C(Q) = 1 + λ exp(−R^2ij Qi Qj),  (3.5)

where the R^2ij comprise the three radii R^2out, R^2side, R^2long and the cross terms R^2out−side, R^2out−long, R^2side−long. If the analysis is integrated over all directions and the source is symmetric, the cross terms vanish, yielding

C(Qout, Qside, Qlong) = 1 + λ exp(−R^2out Q^2out − R^2side Q^2side − R^2long Q^2long),  (3.6)

where the three directions Qout,Qside,Qlong are the axes of a pair dependent coordinate system, the Longitudinal Co-Moving System (LCMS). This reference frame will be explained in detail in section 3.3.2.

3.2.3 Coulomb and residual strong force effects

The functional forms of the correlation functions shown in Table 3.1 are for free-streaming particles that do not interact after they are emitted. However, when charged particles are used in the femtoscopic analysis, the Coulomb and strong interactions can have an effect on the wave function. Particles that are traveling very close in phase-space experience a Coulomb interaction with the emission source as well as with the other emitted particles, which affects the correlation function at low Q. The Bowler-Sinyukov formula [22, 23] for the correlation function takes the Coulomb interaction into account as a function of Q:

Cth(Qinv) = (1 − λ) + λ K(Q) [1 + exp(−R^2inv Q^2inv)],  (3.7)

where K(Q) is the like-sign Coulomb pair wave function squared, averaged over a Gaussian source with a radius similar to that of the colliding system (in proton-proton collisions a 1 fm radius is used, and in heavy-ion collisions the average of Rout, Rside and Rlong is used). The quarks inside a hadron are bound together by the strong nuclear force, which is mediated by gluon exchange. Two hadrons that are close in phase-space, such as a pion pair formed in a proton-proton collision or protons and neutrons inside a nucleus, experience a residual strong force, which diminishes rapidly with distance [24]. For pions emitted from a region with an expected size of no more than 2-3 fm, the effective radius of the emission region can be considered much larger than the range of the strong interaction potential; its contribution is therefore relatively small and can be safely ignored [25].

3.2.4 Non-femtoscopic correlations

The femtoscopic correlations described in the previous sections, and the theoretical correlation functions obtained from them, arise strictly from spatial correlations in the initial state, and they manifest as an enhancement at low relative momentum in the final state. There are, however, other particle correlations present that may appear in other kinematic regions and could affect the femtoscopic correlation. Some of these are:

Jets: Jets introduce non-femtoscopic particle correlations, as all the particles that emerge from a parton shower have their energy and momentum constrained to that of the jet.

Mini-jets: Mini-jets are jets that are not a product of hard parton scattering during the initial collision, but rather of soft parton re-scattering, which introduces particle correlations in a local fashion. This can produce short- and mid-range correlations.

Energy-Momentum Conservation: Overall energy-momentum conservation will also cause correlations among particles, because the total phase-space available for particle production is finite. These correlations can affect the whole kinematic range, from small to large values of Q. In Chapter 4 a method to quantify this type of correlation will be explained in greater detail.

The traditional technique used to account for the non-femtoscopic correlations is to use Monte Carlo simulated events, tuned to the particle multiplicity and rapidity distributions of the real data, and to fit different functional forms, e.g. a polynomial of the form Bnf(Qinv) = a Q^2inv + b Qinv + 1, or sometimes a Gaussian. The fit to the Monte Carlo data is then used as an additional component of the fitting function:

Cth = (1 + λ exp(−R^2inv Q^2inv)) · Bnf  (3.8)

The polynomial Bnf does not have any physical meaning; it is only used to account for the non-flat baseline of the correlation function, and it is estimated from the Monte Carlo production associated with the real data.

3.3 The experimental correlation function

3.3.1 The experimental correlation function in one dimension

The experimental correlation function is obtained by taking the ratio of the measured two-particle distribution f(p1, p2) to the product of the single-particle distributions:

Cexp(Q) = f(p1, p2) / [f(p1) f(p2)]  (3.9)

The two-particle distribution is measured from same-event pairs and the single-particle distributions are measured from mixed-event pairs, by calculating the momentum difference of all possible pair combinations. The numerator and denominator are in principle identical, except that femtoscopic correlations are only present in the numerator and will appear as a deviation from 1.0 after proper normalization.

The numerator and denominator of Equation (3.9) are histograms of the variable Qinv = √(−(p1^µ − p2^µ)^2), with 20 to 40 MeV bins for proton-proton collisions and 5 to 10 MeV bins for Pb-Pb. Qinv is usually calculated in the pair rest frame, where the total momentum of the pair vanishes. The particle momenta are transformed into this frame with a longitudinal and a transverse boost. In the pair rest frame the time component of Qinv cancels if the particles have the same mass, and Qinv reduces to the magnitude of a vector Q. To make the correlation signal as clean as possible it is necessary to impose single- and two-track cuts in the same manner on the numerator and denominator, so that detector acceptance effects divide out in the ratio. Single-track cuts include particle identification and pT cuts. Two-track cuts include splitting and merging cuts. Splitting occurs when a single track is reconstructed as two separate tracks with very close momenta, thereby adding undesired fake pairs at low Q. Different methods have been developed for identifying possibly split tracks, usually based on the number [26] or topology [2] of space-points associated with the track. Merging occurs when two different tracks with very close momenta are reconstructed as one single track, resulting in the loss of pairs at low relative momentum. In order to obtain a clean background signal in the denominator, the events used for mixing should have similar vertex positions and multiplicities. It is also important that the events being mixed are recorded close in time, so that the detector acceptance is uniform. Mixing is not the only method to obtain the background; other methods are rotation and unlike-sign pairs. The rotation method works with detectors that have a symmetric acceptance, like STAR and ALICE: the idea is to flip the sign of the momentum of one of the particles and construct the correlation. The unlike-sign pair method is useful in low-energy and low-multiplicity systems, because event mixing might violate energy-momentum conservation there. The background is calculated by taking opposite-sign pairs and removing the resonance regions with cuts, or by normalizing with the like- to unlike-sign pair ratio from Monte Carlo simulations. The algorithm to obtain Cexp(Q) with mixed events is explained in Appendix A.1. The one-dimensional HBT analysis allows the study of the average particle source size, and it has the advantage that the analysis can be done even when the available statistics of the experiment are low. While this brings flexibility to the experimentalist, the combination of the spatial and temporal information in one single parameter makes the interpretation more difficult, because the radius extracted from Qinv does not represent the physical size of the system. By increasing the number of dimensions one can study not only the geometric size but also properties of the evolution of the particle-emitting source, as will be explained in the next section.

3.3.2 The experimental correlation function in three dimensions

The HBT analysis can also be done in three dimensions, as already mentioned in section 3.2.2. The coordinate system most commonly used for this purpose is the Longitudinal Co-Moving System (LCMS) [27, 28], in which the longitudinal momentum of the pair vanishes: p1z + p2z = 0. The coordinate axes of the LCMS are Qout along the average pair transverse momentum kT = (1/2)(p1T + p2T), Qlong along the beam axis, and Qside perpendicular to the other two, see Figures 3.4 and 3.5. The conversion between the lab frame and the LCMS frame can be done as follows: first one needs to find the longitudinal velocity βz of the LCMS frame such that pz1^LCMS + pz2^LCMS = 0:

βz = −(pz1^lab + pz2^lab) / (E1^lab + E2^lab),  (3.10)

Figure 3.4: Momenta of two particles as seen in the laboratory frame. The LCMS frame is where pz1 = −pz2.

Figure 3.5: View of the transverse plane and the three axes of the LCMS coordinate system.

Now the momenta of each particle in the lab frame are boosted by βz. The three coordinates can then be found using the total transverse momentum of the pair PT = pT1 + pT2 and the momentum difference of the pair QT = pT1 − pT2, as follows:

Qout = [(PT · QT) / |PT|^2] PT  (3.11)

Qside = QT − Qout  (3.12)

Qlong = pz1^LCMS − pz2^LCMS  (3.13)

The longitudinal radius Rlong is sensitive to the lifetime of the system. The total longitudinal size of the system at the time when the mid-rapidity hadrons freeze out can be estimated in the following way. The size of the homogeneity region is inversely proportional to the velocity gradient of the expanding system. The longitudinal velocity gradient in a high-energy nuclear collision decreases with time as 1/τ. Therefore, the magnitude of Rlong is proportional to the total duration of the longitudinal expansion, i.e. to the decoupling time τf of the system [29]. The decoupling time τf can be obtained by fitting Rlong with

R^2long(kT) = (τ^2f T / mT) · K2(mT/T) / K1(mT/T),  (3.14)

where mT = √(m^2π + k^2T) is the transverse mass, mπ is the pion mass (in the case of a pion analysis), T is the kinetic freeze-out temperature, and K1 and K2 are integer-order modified Bessel functions [30, 29].

3.4 Spherical Harmonic Decomposition of the Correlation Function

Recent femtoscopy studies [31, 32] have decomposed the correlation function into spherical harmonics. This method presents an advantage over the full 3D analysis because the coefficients of the harmonic expansion are functions of Q only. They represent the correlation function in full, because they are calculated using all of the data. This means that the correlation function can be represented by a set of one-dimensional histograms, which is usually small given that most of the coefficients vanish due to symmetry. The method followed in reference [31] obtains the correlation function in spherical coordinates first and then calculates the coefficients of the expansion. Another method [33] calculates the coefficients while running the algorithm that computes Q, described in Appendix A.1. Both methods produce the same results, but the latter is more computationally intensive. This section briefly explains the spherical harmonic decomposition.

3.4.1 Spherical Harmonic Decomposition

The spherical coordinates can be obtained from the LCMS system coordinates as follows4:

Q = \sqrt{Q_{out}^2 + Q_{side}^2 + Q_{long}^2}, \quad \cos\theta = \frac{Q_{long}}{Q}, \quad \phi = \tan^{-1}\!\left(\frac{Q_{side}}{Q_{out}}\right)   (3.15)

The correlation function binned in spherical coordinates can now be expanded in spherical harmonics:

C(Q, \cos\theta, \phi) = \sqrt{4\pi} \sum_{l=0}^{\infty} \sum_{m=-l}^{l} A_{l,m}(Q) \, Y_{l,m}(\cos\theta, \phi)   (3.16)

where the coefficients of the expansion Al,m(Q) are functions of Q. The orthonormality of the Yl,m harmonics makes it possible to isolate individual Al,m's as follows:

A_{l,m}(Q) = \frac{1}{\sqrt{4\pi}} \int_{-1}^{1} d(\cos\theta) \int_{0}^{2\pi} d\phi \; C(Q, \cos\theta, \phi) \, Y_{l,m}^{*}(\cos\theta, \phi)   (3.17)

Since the correlation function obtained from experiment has finite bin sizes in cos(θ) and φ, calculating the integral in Eqn. (3.17) is only approximate. Therefore a correction factor can be calculated, as explained in detail in reference [31].

In general the correlation function could be calculated using an infinite number of Al,m components. However, under certain conditions in the analysis most of the components vanish. These conditions are:

• Identical colliding particles, i.e. p+p or Pb+Pb, instead of p+Pb or Au+Si.

4Note that the angle φ has to be calculated in the correct quadrant; in C/C++ one could use the function atan2(Qside, Qout).

• Identical particles are used to measure the correlation, i.e. π+π+ or π−π−.

• The analysis is reaction plane integrated and symmetric about mid rapidity y = 0.

Under these conditions only real Al,m's with even l and m do not vanish. High-l components are also statistically irrelevant, therefore the correlation function can be represented by A0,0, A2,0 and A2,2. These components are identified as follows:

• A0,0 is similar to C(Qinv)

• A2,0 > 0 if Rtransverse > Rlong and vice versa.

• A2,2 > 0 if Rside > Rout and vice versa.
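As a numerical illustration of Eq. (3.17), the sketch below extracts A0,0, A2,0 and A2,2 from a toy correlation function binned in (cos θ, φ) at fixed Q, using only numpy and real-valued harmonics (the Y2,2 used here is real and only correct up to an overall factor, which is enough to check that it vanishes; the toy input contains a pure l=2, m=0 modulation):

```python
import numpy as np

def real_alm(corr, cos_th, phi):
    """Discretized Eq. (3.17) for A_00, A_20, A_22 on a uniform
    (cos(theta), phi) grid of bin centers."""
    ct, ph = np.meshgrid(cos_th, phi, indexing="ij")
    dct = cos_th[1] - cos_th[0]
    dph = phi[1] - phi[0]
    y00 = np.full_like(ct, 1.0 / np.sqrt(4 * np.pi))
    y20 = np.sqrt(5.0 / (16 * np.pi)) * (3 * ct**2 - 1)
    # real part of Y_22, up to an overall factor (only used to see it vanish)
    y22 = 0.25 * np.sqrt(15.0 / (2 * np.pi)) * (1 - ct**2) * np.cos(2 * ph)
    norm = 1.0 / np.sqrt(4 * np.pi)
    return tuple(norm * np.sum(corr * y * dct * dph) for y in (y00, y20, y22))

# toy correlation function: isotropic part plus an (l=2, m=0) modulation
nct, nph = 40, 40
cos_th = np.linspace(-1, 1, nct, endpoint=False) + 1.0 / nct      # bin centers
phi = np.linspace(0, 2 * np.pi, nph, endpoint=False) + np.pi / nph
ct, ph = np.meshgrid(cos_th, phi, indexing="ij")
corr = 1.0 + 0.1 * 0.5 * (3 * ct**2 - 1)

a00, a20, a22 = real_alm(corr, cos_th, phi)
print(a00, a20, a22)  # a00 ~ 1 (like C(Qinv)), a20 > 0, a22 ~ 0
```

The recovered A0,0 ≈ 1 for the isotropic part, consistent with "A0,0 is similar to C(Qinv)" above.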

3.5 Simulating HBT events

Monte Carlo simulations of particle production in models such as PYTHIA [34] do not recreate femtoscopic correlations, since the quantum-statistical Bose-Einstein effect is not implemented in the generators. There is, however, a solution that makes it possible to simulate events with HBT correlations. So-called afterburners can introduce correlations among particles by generating freeze-out distributions. Two of these models are the Hadronic Rescattering Model [9] and Therminator [35]. Femtoscopic correlations can be added to an event in the following way:

• Generate random emission positions r⃗1 . . . r⃗n for all particles in the event according to a distribution, e.g. a three dimensional Gaussian with given widths. This distribution can be taken as the freeze-out distribution, or as an initial distribution used to calculate the evolution of each particle until all the particles cease interacting with other particles.

• Calculate the weight for each pair in the freeze-out distribution as expected from the two-particle wave function Eqn. (3.2), w = 1 + \cos(\vec{Q} \cdot (\vec{r}_i - \vec{r}_j)), and add w to the corresponding Qinv or (Qout, Qside, Qlong) bin in the histogram.
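The weighting step can be sketched as follows. This toy example (not the Hadronic Rescattering Model or Therminator) draws emission points from a Gaussian source and checks the average pair weight against the analytic expectation C(q) = 1 + exp(−q²R²/(ħc)²):

```python
import numpy as np

HBARC = 0.19733  # GeV*fm, converts q [GeV] times r [fm] into a phase

rng = np.random.default_rng(42)
R = 0.8  # fm, Gaussian source radius per axis

def mean_weight(q_gev, n_pairs=200_000):
    """Average Bose-Einstein weight w = 1 + cos(q . dr) over pairs whose
    emission points are drawn from a Gaussian of width R per axis."""
    # the separation of two Gaussian emission points has width sqrt(2)*R
    dr = rng.normal(0.0, np.sqrt(2.0) * R, size=n_pairs)  # fm, along q
    return np.mean(1.0 + np.cos(q_gev / HBARC * dr))

for q in (0.0, 0.1, 0.3, 1.0):
    analytic = 1.0 + np.exp(-(q * R / HBARC) ** 2)
    print(f"q = {q:.1f} GeV: <w> = {mean_weight(q):.3f}, expected {analytic:.3f}")
```

At q = 0 the weight is exactly 2 (full enhancement), and it falls to 1 once q is well above ħc/R, reproducing the Gaussian correlation shape with radius R.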

An example of a simulated HBT correlation can be seen in Figures 3.6 and 3.7. PYTHIA was used for the simulation of 7 TeV proton-proton events and weights were added to the pairs as described above, using a Gaussian distribution to generate the freeze-out positions. In this case one expects a correlation strength of 1.0 and a radius parameter equal to the width of the Gaussian distribution of the emission points. Figure 3.6 shows the numerator and denominator of the correlation function. The denominator has more entries because it was obtained with the mixing technique, which uses particles from 10 events at a time to obtain Qinv, whereas the numerator only gets counts from one event. Figure 3.7 shows the correlation function obtained after taking the ratio of the numerator and denominator from Figure 3.6. The correlation function is

Figure 3.6: Numerator and denominator of a simulated correlation function, taken from [36].

normalized to 1.0 using a factor which is the ratio of the integrals of the denominator and the numerator in a range where there is no femtoscopic effect (e.g. 0.6 GeV to 1.0 GeV). Two types of fitting were used here: minimization of the χ² function weighted by errors, and not weighted by errors. When the function is weighted by the error bars of the data points, the curve adjusts very well to most of the points on the Gaussian, except the points at low Qinv, which have large error bars. The non-error-weighted function describes the low Qinv points better, but not the overall shape of the Gaussian. The two fits differ by 11%, but the one weighted by the error bars is preferred because it takes into account the statistical significance of each data point, and it is used throughout this thesis.
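The two fitting options can be compared with scipy on synthetic data; the noise model below, with inflated error bars at low Qinv, is an assumption made only for the demonstration:

```python
import numpy as np
from scipy.optimize import curve_fit

HBARC = 0.19733  # GeV*fm
rng = np.random.default_rng(1)

def gauss_cf(q, lam, r_fm):
    """1D Gaussian correlation function; R in fm, q in GeV."""
    return 1.0 + lam * np.exp(-(r_fm * q / HBARC) ** 2)

q = np.linspace(0.01, 1.0, 50)
sigma = 0.01 + 0.05 * np.exp(-q / 0.05)   # large error bars at low Q_inv
y = gauss_cf(q, 1.0, 0.8) + rng.normal(0.0, sigma)

# chi^2 weighted by the error bars (the preferred option)
pw, _ = curve_fit(gauss_cf, q, y, p0=(0.5, 1.0), sigma=sigma, absolute_sigma=True)
# unweighted chi^2
pu, _ = curve_fit(gauss_cf, q, y, p0=(0.5, 1.0))
print(f"weighted:   lambda = {pw[0]:.2f}, R = {pw[1]:.2f} fm")
print(f"unweighted: lambda = {pu[0]:.2f}, R = {pu[1]:.2f} fm")
```

The `sigma`/`absolute_sigma` arguments of `curve_fit` implement the error-weighted χ²; omitting them gives the unweighted fit.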

Figure 3.7: Simulated correlation function with a Gaussian fit, taken from [36].

Chapter 4
Energy Momentum Conservation Induced Correlations

Energy Momentum Conservation Induced Correlations (EMCICs) are non-femtoscopic correlations that are caused by the finite phase-space available for particle production. Their effect is stronger in low multiplicity events, where the available phase-space is smaller. The EMCICs can be calculated analytically and quantified directly from the experimental data in order to remove their effect from the correlation function [31]. The method has been used previously by the STAR collaboration in pion femtoscopy of proton-proton collisions at √s = 200 GeV [32]. This chapter will summarize the derivation of the EMCICs parametrization.

4.1 The EMCICs parametrization

There are previous studies of EMCIC-like effects on two particle azimuthal correlations [37, 38, 39, 40]. For further details about the derivation presented here see [31]. The use of EMCICs is desirable in that their quantification is obtained from first principles; it is a physics motivated parametrization instead of an ad hoc one. The derivation starts by considering the single particle momentum distribution, unaffected by EMCICs:

f(\vec{p}_i) = \frac{d^3 N}{dp_i^3}   (4.1)

Now, the k-particle momentum distribution is in principle a product of k single particle momentum distributions, but in order to introduce the EMCICs constraint, an integration over the remaining k+1 to N single particle distributions is needed, normalized by the integral over all single particle distributions:

f_c(\vec{p}_1 \ldots \vec{p}_k) = \left( \prod_{i=1}^{k} f(\vec{p}_i) \right) \times \frac{\int \left( \prod_{j=k+1}^{N} d^3\vec{p}_j \, f(\vec{p}_j) \right) \delta^4\!\left( \sum_{i=1}^{N} p_i - P \right)}{\int \left( \prod_{j=1}^{N} d^3\vec{p}_j \, f(\vec{p}_j) \right) \delta^4\!\left( \sum_{i=1}^{N} p_i - P \right)},   (4.2)

where the energy-momentum conservation comes in the form of the four-dimensional delta function and P = (\sqrt{s}, \vec{0}) is the total energy-momentum of the colliding particles. The distribution of a large number of uncorrelated momenta as in Equation (4.2) is, by the Central Limit Theorem [31], a multivariate normal distribution. The 1- and k-particle momentum distributions take the following form:

f_c(\vec{p}_i) = f(\vec{p}_i) \cdot \left( \frac{N}{N-1} \right)^2 \times \exp\!\left( -(p_i^\mu - \langle p^\mu \rangle) \frac{b_{\mu\nu}}{2(N-1)} (p_i^\nu - \langle p^\nu \rangle) \right)   (4.3)

f_c(\vec{p}_1 \ldots \vec{p}_k) = \left( \prod_{i=1}^{k} f(\vec{p}_i) \right) \cdot \left( \frac{N}{N-k} \right)^2 \times \exp\!\left[ -\left( \sum_{i=1}^{k} (p_i^\mu - \langle p^\mu \rangle) \right) \frac{b_{\mu\nu}}{2(N-k)} \left( \sum_{i=1}^{k} (p_i^\nu - \langle p^\nu \rangle) \right) \right]   (4.4)

where b_{\mu\nu} is the covariance matrix of the distribution in the integrand of Equation (4.2). The ratio of the k-particle distribution to k single particle distributions gives the k-particle correlation function:

C(p_1 \ldots p_k) = \frac{f_c(\vec{p}_1 \ldots \vec{p}_k)}{f_c(p_1) \ldots f_c(p_k)} = \frac{\exp\!\left[ -\sum_{i,j=1}^{k} (p_i^\mu - \langle p^\mu \rangle) \frac{b_{\mu\nu}}{2(N-k)} (p_j^\nu - \langle p^\nu \rangle) \right]}{\exp\!\left[ -\sum_{i=1}^{k} (p_i^\mu - \langle p^\mu \rangle) \frac{b_{\mu\nu}}{2(N-1)} (p_i^\nu - \langle p^\nu \rangle) \right]}   (4.5)

The two-particle correlation function is obtained to order 1/N by a Taylor expansion:

C(p_1, p_2) = 1 - \frac{1}{N} (p_1^\mu - \langle p^\mu \rangle) \, b_{\mu\nu} \, (p_2^\nu - \langle p^\nu \rangle)   (4.6)

For the special case where only EMCICs are present (i.e. no dynamical correlation due to flow), the covariance matrix b_{\mu\nu} is diagonal, and the correlation function reduces to:

C_{EMCIC}(p_1, p_2) = 1 - \frac{1}{N} \left( 2 \frac{\vec{p}_{T1} \cdot \vec{p}_{T2}}{\langle p_T^2 \rangle} + \frac{p_{z1} \, p_{z2}}{\langle p_z^2 \rangle} + \frac{(E_1 - \langle E \rangle)(E_2 - \langle E \rangle)}{\langle E^2 \rangle - \langle E \rangle^2} \right)   (4.7)
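Equation (4.7) is straightforward to evaluate per pair. The sketch below uses hypothetical single-particle averages (the numbers are illustrative only) and shows that a back-to-back pair is favored by momentum conservation (C > 1) while a collinear pair is suppressed (C < 1):

```python
def c_emcic(p1, p2, n, mean_pt2, mean_pz2, mean_e, mean_e2):
    """First-order EMCIC correlation of Eq. (4.7).
    p1, p2 are four-momenta (E, px, py, pz) in GeV; the remaining
    arguments are event-level averages."""
    pt_dot = p1[1] * p2[1] + p1[2] * p2[2]
    e_term = (p1[0] - mean_e) * (p2[0] - mean_e) / (mean_e2 - mean_e**2)
    return 1.0 - (2.0 * pt_dot / mean_pt2 + p1[3] * p2[3] / mean_pz2 + e_term) / n

# illustrative averages for a low-multiplicity event
avgs = dict(n=30, mean_pt2=0.25, mean_pz2=1.0, mean_e=0.8, mean_e2=1.0)
back_to_back = c_emcic((0.5, 0.3, 0.0, 0.2), (0.5, -0.3, 0.0, -0.2), **avgs)
collinear = c_emcic((0.5, 0.3, 0.0, 0.2), (0.5, 0.3, 0.0, 0.2), **avgs)
print(back_to_back, collinear)  # > 1 and < 1 respectively
```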

When femtoscopic correlations are present the total correlation function can be written as:

C(p1, p2) = Φfemto(p1, p2) × CEMCIC (p1, p2) (4.8)

where Φfemto is the femtoscopic or Bose-Einstein correlation. This expression implies that if all the quantities in Equation (4.7) could be measured, the EMCICs would be completely characterized and it would be possible to remove their effect from the experimental correlation function. However, in an experimental setup N, ⟨pT²⟩, ⟨pz²⟩, ⟨E⟩ and ⟨E²⟩ cannot be measured, simply because not every particle is detected. Parameterizing the unknown quantities yields an equation that can be used to fit experimental data:

C_{EMCIC}(p_1, p_2) = 1 - M_1 \{ \vec{p}_{T1} \cdot \vec{p}_{T2} \} - M_2 \{ p_{z1} \, p_{z2} \} - M_3 \{ E_1 E_2 \} + M_4 \{ E_1 + E_2 \} - \frac{M_4^2}{M_3}   (4.9)

The notation {X} represents histograms of two-particle quantities that can be measured in experiment and are binned at the same time as the numerator and denominator of the correlation function. The Mi parameters are related to the quantities that are not directly measurable in the following way:

M_1 = \frac{2}{N \langle p_T^2 \rangle}, \quad M_2 = \frac{1}{N \langle p_z^2 \rangle}   (4.10)

M_3 = \frac{1}{N (\langle E^2 \rangle - \langle E \rangle^2)}, \quad M_4 = \frac{\langle E \rangle}{N (\langle E^2 \rangle - \langle E \rangle^2)}   (4.11)

After a fit has been performed on the data, the unmeasurable physical quantities can be calculated by inverting the equations for the Mi's, using an additional equation for the energy of a characteristic particle with mass m∗ in the system:

\langle E^2 \rangle = \langle p_T^2 \rangle + \langle p_z^2 \rangle + m_*^2   (4.12)

The total multiplicity of the event is then calculated as:

N = \frac{M_3^{-1} - 2 M_1^{-1} - M_2^{-1}}{m_*^2 - (M_4/M_3)^2}   (4.13)

The estimated physical quantities can be used as a consistency check to tell if the fit results are reasonable.
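A round-trip check of Eqs. (4.10)-(4.13): compute the Mi from assumed event-level quantities, then invert them back. The input numbers below are arbitrary but chosen to satisfy Eq. (4.12) with m∗ = mπ:

```python
M_PI = 0.13957  # GeV, characteristic particle mass m*

def m_params(n, pt2, pz2, mean_e, mean_e2):
    """Forward relations, Eqs. (4.10)-(4.11)."""
    var_e = mean_e2 - mean_e**2
    return 2.0 / (n * pt2), 1.0 / (n * pz2), 1.0 / (n * var_e), mean_e / (n * var_e)

def invert(m1, m2, m3, m4, m_star=M_PI):
    """Recover the event-level quantities from the fit parameters,
    Eqs. (4.12)-(4.13)."""
    mean_e = m4 / m3
    n = (1.0 / m3 - 2.0 / m1 - 1.0 / m2) / (m_star**2 - mean_e**2)
    return n, 2.0 / (n * m1), 1.0 / (n * m2), mean_e

# assumed event-level quantities; <E^2> fixed by the constraint (4.12)
n, pt2, pz2, mean_e = 40.0, 0.25, 1.0, 1.0
mean_e2 = pt2 + pz2 + M_PI**2
recovered = invert(*m_params(n, pt2, pz2, mean_e, mean_e2))
print(recovered)  # (40.0, 0.25, 1.0, 1.0) up to rounding
```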

It is important to note that p1, E1, p2, and E2 should be calculated in a pair-independent frame, like the collision center-of-momentum (CCM) frame. The reason is that Equation (4.2) assumes some fixed total energy and momentum to be conserved. In a pair-dependent frame (e.g. the LCMS), the total energy and momentum of the event fluctuate pair-by-pair. Unlike the femtoscopic peak, EMCICs are visible across the full Q range, but especially at large Q, where the long range correlations are clearly visible. It is therefore desirable that during the fitting procedure the EMCIC parameters are determined well enough at large Q, so that the correlation between parameters becomes smaller.

Chapter 5
Femtoscopy of proton-proton collisions with the ALICE experiment

5.1 Momentum Resolution effects on the correlation function

In order to be able to determine systematic errors on the measured radii, a study of the momentum resolution effect on the radii was done and is presented here. Monte Carlo (MC) simulated data is used for this purpose because it holds the information of the originally generated particle momenta, as well as the reconstructed momenta obtained from the simulated detector response. The procedure, shown schematically in Figure 5.1, is the following:

Run the AliFemto code to obtain the Qinv correlation function in two ways:

1. A correlation function affected by the detector response:

• Use the MC momenta to generate the Bose Einstein simulated correlation, as explained in Section 3.5.

• Use the reconstructed momenta to calculate Qinv.

2. A correlation function not affected by the detector response:

• Use the reconstructed momenta both to generate the Bose-Einstein simulated correlation and to calculate Qinv.

For this study 580k pp events at 900 GeV from PYTHIA were used, and the Rinv radius introduced in the weight generator was 0.8 fm. The two correlation functions were fit with the Bowler-Sinyukov formula (3.7) to account for the Coulomb interaction; EMCIC effects were ignored. Similar results were obtained with a Gaussian function (Section 3.2.1) used in

32 Figure 5.1: Momentum Resolution Study Flow Diagram: Two correlation functions are produced from the same Monte Carlo production, but the true and reconstructed momenta are used differently to estimate the effect of momentum resolution on the measured radii.

Figure 5.2: Momentum Resolution for ITS+TPC reconstructed tracks (left) and ITS-only reconstructed tracks (right).

the fit and are not shown here. Figure 5.2 shows how the radius Rinv and the λ parameter extracted from the correlation functions are affected by the momentum resolution. The left panel is for tracks reconstructed by the ITS+TPC and the right panel for ITS-only reconstructed tracks. The results are summarized in Table 5.1 and show that momentum resolution effects are smaller than 1% on the extracted radii. Tracks reconstructed with the ITS+TPC refit have better resolution, as expected.

Parameter   ITS      ITS+TPC
Rinv        0.82%    0.40%
λ           1.28%    1.10%

Table 5.1: Summary of the momentum resolution effect on the fit parameters.

5.2 Femtoscopy of proton-proton collisions at √s = 2.76 TeV.

5.2.1 One dimensional analysis

The analysis of the 2.76 TeV data was done in the following way: the correlation function was obtained from identical π+π+ pairs, initially with eight multiplicity bins and six kT bins. There were enough statistics for eight multiplicity bins; however, this many bins did not provide additional information, and it was decided to keep only three bins. The analysis details are:

• System: proton-proton collisions at √s = 2.76 TeV (30M events).

• Monte Carlo Sets: Pythia (3M events) , Phojet (3.3M events).

• Identical π+π+pairs.

• Qinv bin size: 20 MeV.

• 0.0 < Qinv < 2.0 GeV

• Multiplicity bins: 1-11, 12-29, 30 and above

• kT bin edges (GeV): 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7

• 0.1 < pT < 1.0 GeV

• -1.2 < η < 1.2

• Track reconstruction requiring the TPC+ITS refit.

The estimation of the baseline was done using the method explained in Section 3.2.4, by fitting the Monte Carlo data sets with different functional forms; a quadratic polynomial and a Gaussian were used in this case. This gives a total of four different fits used to estimate the baseline (Pythia and Phojet datasets, each fit with quadratic and Gaussian baselines). The radii obtained by the four fits are then averaged and the standard deviation is used as the systematic uncertainty in the measurement. Figure 5.3 shows the Monte Carlo baseline from Pythia (red) and Phojet (blue) for the lowest multiplicity bin and for all kT bins. Both models exhibit very similar behavior.

Figure 5.3: Comparison of Pythia and Phojet baselines in √s = 2.76 TeV p+p collisions. The average pair momentum kT increases left to right.

Figure 5.4 shows an example fit to the six kT bins of the correlation function, in this case for the lowest multiplicity and the background from Pythia. This fit is repeated for all multiplicities to obtain the kT and multiplicity dependence of Rinv.

Figure 5.5 shows the results for the multiplicity and kT dependence of Rinv. The error bars are dominated by the systematic uncertainty; the statistical error is smaller than the marker size. The multiplicity dependence is very clear: Rinv increases with multiplicity in all kT bins. The kT dependence is flat within the error bars, with the largest multiplicity showing

Figure 5.4: Fitting the correlation function in √s = 2.76 TeV p+p collisions. The average pair momentum kT increases left to right. The black lines are fits to the data (blue) and to the Monte Carlo simulated data from Pythia (red).

a very slight decrease in Rinv with increasing kT. The λ parameter is very consistent at around 0.4 for all multiplicities and kT bins.

Another source of uncertainty is the range in Qinv used for the fits. Four different ranges were chosen (0.8, 1.0, 1.3 and 1.6 GeV) to study the effect of the fit range on the radius. The results are shown in Figure 5.6. It can be seen that as the fit range increases the extracted radii increase, by up to 10% for most of the data points, but qualitatively the multiplicity and kT dependence remains the same. The λ parameter is less affected by the fit range, especially in the lower kT bins. It was discussed in Section 3.2.1 how the source function is not known beforehand, and there are different functional forms that can be used to fit the correlation function (see the table in Section 3.2.1). The radii were also obtained with an exponential fit to the correlation function; the results are qualitatively the same and are therefore not shown here. Figure 5.7 shows the difference between the Gaussian radii and the exponential radii for all the data points. The difference is below 5% for most of the points, and taking the error bars into consideration the difference is consistent with zero, meaning that using a Gaussian or an exponential function for the femtoscopic peak will not affect the kT and multiplicity dependence of Rinv.

Figure 5.5: kT and multiplicity dependence of Rinv and λ in √s = 2.76 TeV p+p.

Figure 5.6: Fit range study of the correlation function for p+p data at 2.76 TeV. Each kT range has four points corresponding to the four fit ranges used (slightly shifted for clarity). The four ranges are 0.8, 1.0, 1.3 and 1.6 GeV.

Figure 5.7: Difference between the Gaussian and exponential radii shown as a percent of the Gaussian radii.

Figure 5.8: Qout, Qside, Qlong projections of the fit to the three dimensional correlation function.

5.2.2 Three dimensional analysis

The analysis details for the three dimensional case are the same as in the previous section, except that the bin size in Qout, Qside, Qlong is 0.4×0.4×0.4 MeV and the range is -1.2 to 1.2 GeV. This choice of bin size improves the statistics available per bin and is still large enough to observe the Bose-Einstein effect. The fit in the three dimensional case is done with the maximum likelihood method [41]. This method works better than the minimization of the χ² function because it takes into account bins with low counts as well as empty bins, which happen very often

Figure 5.9: kT dependence of the three dimensional radii Rout, Rside and Rlong.

in the 3D analysis. The fit procedure is different in that the numerator (correlated pairs) of the correlation function is fit with the functional form times the denominator (uncorrelated pairs), under the assumption that the denominator has negligible error bars, given that it has about ten times more data than the numerator. Using a Gaussian source, the numerator is fit with the following function:

Num(Q_{out}, Q_{side}, Q_{long}) = Den(Q_{out}, Q_{side}, Q_{long}) \times \left( 1 + \lambda \exp(-R_{out}^2 Q_{out}^2 - R_{side}^2 Q_{side}^2 - R_{long}^2 Q_{long}^2) \right)   (5.1)
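A one-dimensional sketch of this likelihood fit (toy data, not AliFemto code): the numerator counts are modeled as Poisson with mean C(Q) times the denominator, the denominator is treated as exact, and the same-event/mixed-event normalization is assumed known for simplicity:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
HBARC = 0.19733  # GeV*fm
SCALE = 0.1      # same-event / mixed-event pair ratio, assumed known here

q = np.linspace(0.005, 0.5, 50)          # GeV, bin centers
den = rng.poisson(5000, size=q.size)     # mixed-event (uncorrelated) pairs

def model(lam, r_fm):
    """1D Gaussian correlation function."""
    return 1.0 + lam * np.exp(-(r_fm * q / HBARC) ** 2)

# same-event pairs, generated with lambda = 0.5 and R = 1.2 fm
num = rng.poisson(model(0.5, 1.2) * den * SCALE)

def nll(params):
    """Negative Poisson log-likelihood of Num given mu = C(Q)*Den*SCALE;
    Den is treated as exact since it has ~10x the statistics."""
    mu = np.maximum(model(*params), 1e-12) * den * SCALE
    return -np.sum(num * np.log(mu) - mu)

res = minimize(nll, x0=(0.3, 1.0), method="Nelder-Mead")
print(res.x)  # near the generated (0.5, 1.2)
```

Unlike a χ² fit, the Poisson likelihood handles low-count and empty bins gracefully, which is the point of using it in the sparsely populated 3D histograms.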

The fit range used in this analysis was -0.5 to 0.5 GeV in all three directions, which is a region large enough to contain the femtoscopic features. Fitting larger ranges is possible but it requires more computation time and the results are similar. As an example, Figure 5.8 shows the projections of the three dimensional correlation function and the corresponding

fit function onto the Qout,Qside,Qlong axes for the lowest multiplicity and lowest kT bin. The projection of the fit function and the data are not entirely on top of each other because

the normalization is calculated for the already projected data and used for the fit function as well. The algorithm to obtain the projections is explained in Appendix A.2.

Figure 5.9 shows the kT and multiplicity dependence of the 3D radii obtained from fits to the correlation function without estimation of the background from MC data. The error shown is statistical only. The multiplicity dependence observed in the one dimensional case continues here: all three radii increase with increasing multiplicity for all kT bins.

The kT dependence is slightly different for each of the radii. In the lowest multiplicity bin Rout increases at low kT, then falls and remains fairly constant after that. The two highest multiplicities show a clear tendency of Rout to fall with kT. Rlong shows a very consistent fall with kT for all multiplicities. Rside, in contrast to Rlong and Rout, does not show such a steep fall with kT, but it is clear that there is a small kT dependence. The

λ parameter remains approximately constant for all multiplicities and kT, except for the lowest multiplicity, where it increases with kT.

5.3 Femtoscopy of proton-proton collisions at √s = 900 GeV and 7 TeV.

5.3.1 One dimensional analysis

A similar analysis was previously done for the 900 GeV and 7 TeV energies, and the results were published in Refs. [42, 43]. The results for Rinv are shown in Figures 5.10 and 5.11 for 900 GeV and 7 TeV, respectively. The multiplicity bins were chosen so that approximately the same number of pairs falls in each bin, resulting in four bins for the 900 GeV data, reaching up to 80 charged particles, and eight bins for the 7 TeV data, reaching up to 140 charged particles. It can be seen clearly in both plots that the radii increase consistently with increasing multiplicity for all kT bins. The kT dependence of Rinv is flat for the lowest multiplicity bins and develops a negative slope as multiplicity increases. This is visible for both energies; however, the very high statistics available at 7 TeV yield a very clear distribution of the data points.

5.3.2 Three dimensional analysis

The three dimensional analysis is done in the LCMS, using Equation 3.6 to fit the correlation function. The results are shown in Figure 5.12 as a function of kT, together with multiplicity integrated results from 200 GeV p+p at STAR, as well as in Figure 5.13 as a function of charged particle multiplicity (dNch/dη). The first observation is that the radii are universally similar for all three energies, showing that the space-time characteristics of soft particle production in proton-proton collisions are very weakly dependent on collision energy.

Figure 5.10: Average pair momentum kT and multiplicity dependence of Rinv and λ at 900 GeV. The systematic error is on the order of 10% and is not shown here.

Figure 5.11: Average pair momentum kT and multiplicity dependence of Rinv and λ at 7 TeV. The systematic error is on the order of 10% and is not shown here.

The correlation strength λ is fairly independent of kT with value 0.55 for the lowest multiplicity and decreasing to 0.42 for the highest multiplicity. The three radii have slightly different behaviors:

Rlong: Rlong decreases monotonically with kT for all multiplicities, and increases consistently with multiplicity for all kT bins.

Rside: Rside starts out rather flat for the lower multiplicities and builds up a slight negative slope at higher multiplicity. It also shows a consistent increase in size with increasing multiplicity for all kT bins.

Rout: Rout possibly shows the most complex behavior of the three radii. The lowest multiplicity bin first increases with kT and then remains constant. The next few multiplicity bins also increase at first, and then fall with kT. The highest multiplicities fall monotonically. What is most striking about Rout is that the increase in size with multiplicity observed at lower kT values diminishes with increasing kT, unlike Rlong and Rside, which showed an approximately constant rate of growth. This is highlighted by the trend lines added to the lowest and highest multiplicities in Figure 5.12.

Rout/Rside: The Rout/Rside ratio is very useful to visualize the different behaviors of the radii. The ratio is close to 1.0 for the lowest kT bins, and it falls below 1.0 as kT increases. This is seen more clearly in panel d) of Figure 5.13.

Figure 5.12: Average pair momentum kT and multiplicity dependence of 3D Gaussian radii at 900 GeV and 7 TeV. Lines were added to the lower and higher multiplicity bins to highlight the trends.

The tendency of femtoscopic radii to decrease with increasing kT has also been seen at RHIC energies in heavy ion collisions [44, 45, 46]. This has been considered a signature that the system of particles is strongly interacting, thereby causing a fluid-like motion of the constituents. Figure 5.14 shows the ALICE femtoscopic radii from proton-proton collisions together with peripheral Au-Au and Cu-Cu radii from STAR and Pb-Au radii from CERES. The comparison here is very interesting because the charged particle multiplicities from proton-proton collisions are comparable to peripheral heavy ion multiplicities. It is observed that the heavy ion data and the proton-proton data each follow a multiplicity scaling, with different slopes. The fact that they do not follow a single universal scaling is because the two systems have different mechanisms for particle production: the high multiplicity in pp comes from a hard scattering

Figure 5.13: Multiplicity dependence of 3D Gaussian radii at 900 GeV and 7 TeV. Lines show linear fits to combined 900 GeV and 7 TeV points.

Figure 5.14: Comparison of femtoscopic radii from heavy ion and proton-proton systems.

producing many particles, whereas the multiplicity in a heavy ion collision comes from many softer nucleon collisions.

Chapter 6
Femtoscopy and Energy-Momentum Conservation effects

In this chapter I present the results of studying the Energy-Momentum Conservation Induced Correlations using the methodology from Chapter 4.

6.1 Preliminary Studies

In this preliminary study, published in [47], the effectiveness of the EMCIC parametrization was tested. For this purpose 250k proton-proton events at √s = 900 GeV were used, as well as the corresponding Monte Carlo production from PYTHIA 6. These events belong to the early physics run at ALICE recorded in December 2009 (LHC09d). The correlation function was obtained using π+π+ and π−π− pairs with a momentum cut of 0.1 GeV < pT < 1.2 GeV, integrated over multiplicity and kT. Before fitting the experimental correlation function, a fit to the Monte Carlo data was performed in order to have only the non-femtoscopic correlations and see how well Equation (4.9) adjusts to the curve; see Figures 6.1 and 6.2. The left plot shows the individual components of the EMCICs ({pT1 · pT2}, {pz1 · pz2}, {E1 · E2}, {E1 + E2}) and the yellow line shows how these four histograms and the Mi parameters form the non-femtoscopic correlation function

CEMCIC(Q). The right plot shows that the EMCIC fit to the MC data adjusts very well to the data points, describing the baseline of the correlation function very well. As explained in Chapter 4, the femtoscopic effect is independent of the EMCICs and can be included in the fitting function as:

C(Q) = Φfemto(Q) × CEMCIC (Q) (6.1)

where Φfemto = 1 + λ exp(−R²Q²) is the pure femtoscopic effect. Figure 6.3 shows the numerator and denominator of the correlation function, scaled so that the differences can be

Figure 6.1: The four EMCIC histograms {X} are shown together with CEMCIC(Q) (yellow).

Figure 6.2: Equation (4.9) is used to fit Monte Carlo data.

observed. The Bose-Einstein enhancement is clearly seen at low Qinv, as well as the long range correlations. Figure 6.4 shows the fit to the experimental correlation function with Equation (6.1), and it can be seen that both the baseline and the femtoscopic effect are described well by this method. The oscillations in the black line at large Q are due to the limited statistics, which propagate to the fitting function because the CEMCIC(Q) term is formed from histograms.

Careful examination of the fit results up to this point indicates that the Mi parameters are highly correlated, which is expected for a one-dimensional fit with 7 free parameters. The following initial values were tried to see if convergence of the fit was consistent:

• Set initial Mi =0.

• Set initial Mi to values found at RHIC [32].

• Calculate the average Mi on a subset of data to have an estimate of what the real values could be.

Using the different starting values yielded different minima, an indication that there are many local minima in parameter space, making convergence to a global minimum very difficult. Additionally, the values obtained for the Mi parameters were sometimes negative, but from their definition in Equation (4.10) they should only be allowed to take positive values. The conclusion of this preliminary study is that the EMCICs parametrization can describe the baseline of the correlation function, but in order to obtain significant results stronger constraints on the parameter space are needed. Some possibilities are:

Figure 6.3: Numerator and denominator of the correlation function (scaled to coincide).

Figure 6.4: Correlation function from 900 GeV pp events, with EMCICs fit.

• Use 4 or 5 kT bins and perform a simultaneous fit to all of them with one set of Mi parameters.

• The full three dimensional analysis in multiplicity and kT bins.

6.2 EMCICs and Femtoscopy of proton-proton collisions at 900 GeV and 7 TeV

6.2.1 One dimensional EMCIC analysis

900 GeV p+p collisions

In 2010 the ALICE experiment recorded 8M minimum bias events at 900 GeV and more than 200M events at 7 TeV. For the present analysis only about 100M events were used from the 7 TeV data set, but this is sufficient to carry out femtoscopic studies in one and three dimensions.

Figure 6.5 shows the same kT and multiplicity integrated analysis as in the previous section, but for the Monte Carlo production associated with the large statistics 900 GeV data set. The increased statistics do contribute to an improvement in the results: the EMCIC fit adjusts well to the data points, and it was possible to find a minimum in which all the Mi's were positive. The physical quantities extracted from these parameters, assuming that the characteristic mass of a particle in the system is that of the pion, are all positive but not entirely accurate. The average energy per particle ⟨E⟩ = 2.337 GeV is reasonable; however, the average momenta ⟨pT⟩ = 5.1 GeV and ⟨pz⟩ = 4.9 GeV are on the high side of the expectations (∼1 GeV), with large uncertainties, and are only 2 or 3 standard deviations away from 0. The total event multiplicity N = 1.5 is very low compared to a realistic value, which would be between 30 and 100, but it has a large uncertainty as well and is just 2σ away from 0. These parameters with large uncertainties are not very representative of the system.

Figure 6.5: Fit to the kT and multiplicity integrated correlation function from simulated p+p events at 900 GeV.

The strategy to improve these results is to fit several kT bins of the correlation function with a common set of EMCIC parameters. The reasoning is that EMCIC parameters are global quantities and by fitting different kT bins simultaneously the minima in parameter space will be better defined. The limits of the kT bins used for the 900 GeV data are 0.1,

0.25, 0.4, 0.55, 0.7 and 1.0 GeV. Figure 6.6 shows a simultaneous fit to 4 kT bins, and it can be seen that the fit adjusts very well to the data. The Mi parameters are positive and are not too different from the ones obtained in the integrated fit shown in Figure 6.5. The physical quantities are still in the same range obtained in the integrated fit, with similar uncertainties as well. Given the large uncertainties on the physical parameters, the only one that remains relevant is the average energy ⟨E⟩ = 2.224 ± 0.016 GeV, which is a reasonable value.
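The idea of sharing global parameters across kT bins can be sketched as follows. This is a toy stand-in, not the ALICE fit code: a single shared linear baseline slope plays the role of the common EMCIC parameters, while each kT bin keeps its own (λ, R):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
HBARC = 0.19733  # GeV*fm
q = np.linspace(0.02, 1.0, 50)
SIGMA = 0.01     # assumed per-point statistical error

def cf(q, slope, lam, r_fm):
    """Femtoscopic peak times a shared long-range baseline."""
    return (1.0 + lam * np.exp(-(r_fm * q / HBARC) ** 2)) * (1.0 + slope * q)

# four kT bins: one common baseline slope, bin-dependent (lambda, R)
true_slope = 0.05
true_bins = [(0.60, 1.3), (0.55, 1.2), (0.50, 1.1), (0.45, 1.0)]
data = [cf(q, true_slope, *t) + rng.normal(0.0, SIGMA, q.size) for t in true_bins]

def residuals(p):
    slope = p[0]  # shared across all bins
    return np.concatenate(
        [(y - cf(q, slope, p[1 + 2 * k], p[2 + 2 * k])) / SIGMA
         for k, y in enumerate(data)])

fit = least_squares(residuals, x0=[0.0] + [0.5, 1.0] * len(data))
print(fit.x[0])  # shared slope, near 0.05
```

Fitting all bins at once constrains the shared parameter with four times the data, which is exactly the motivation for the simultaneous EMCIC fit.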

A comparison of 5 kT bins of the correlation function from identical pions in proton-proton collisions at 900 GeV with the Monte Carlo simulation can be seen in Figure 6.7. The simulated and real data show the same long range effects in the baseline. It is observed that at low Qinv the MC data show a slope developing with kT, as was also seen in Section 5.2.

47 Figure 6.6: Simultaneous fit to 4 kT bins of the correlation function from simulated p+p events at 900 GeV.

The coincidence of the MC and the real correlation functions at large Qinv vary depending on the range where the correlation function is normalized; in this case the range used was

0.4 to 0.6 GeV. The largest kT bin (0.7-1.0 GeV) was not included in the simultaneous fits due to the low statistics available. As previous ALICE femtoscopic studies have shown [43], the shape of the source function is not represented accurately by a Gaussian, and an exponential functional form works better:

Φ(femto) = 1 + λ exp(−Qinv · Rinv) (6.2)

The exponential functional form was chosen for the EMCICs analysis so that the fitting function adjusts better to the femtoscopic effect, thereby removing the need for the EMCIC parameters to accommodate this peak. Figure 6.8 shows a fit to 4 kT bins of the real 900 GeV data with equations (4.7), (4.8) and (6.2). The femtoscopic peak is described very well by the exponential function and the long range correlations are also described successfully by the EMCICs parametrization. The physical quantities calculated

from the Mi parameters are all unphysical in this case. The insert shows the kT dependence of Rinv (not scaled by the √π factor): Rinv decreases with increasing kT, consistent with

Figure 6.7: Comparison of 5 kT bins of the correlation function from p+p data and MC events at 900 GeV.

the results presented in section 5.2.

7 TeV p+p collisions

The same 1-dimensional analysis was carried out for 30M events from 7 TeV proton-proton collisions. Figure 6.9 shows the simultaneous fit to 4 kT bins from MC data. Once again the EMCIC fit function reproduces the data very well and the Mi parameters are all positive. The calculated physical quantities are all negative (unphysical) except the average energy, ⟨E⟩ = 11.65 ± 0.35 GeV, which is a very large, though not entirely unphysical, value.

Figures 6.10 and 6.11 show the EMCIC fits to 4 and 5 kT bins of the correlation function, respectively. In both cases the data points are reproduced very well, but the calculated physical quantities are not representative of the system.

In summary, the simultaneous fit to 4 or 5 kT bins with one common set of EMCIC parameters is able to describe the data points successfully, but it was not possible to find a unique set of parameters because there are still many minima in parameter space, and the convergence is sensitive to starting values. Furthermore, the physical quantities calculated

Figure 6.8: Simultaneous EMCIC fit to 4 kT bins of the correlation function from p+p events at 900 GeV. The femtoscopic peak is fit with an exponential function. Insert: kT dependence of the extracted radii.

from the Mi parameters were either unphysical or not representative of the system, indicating that the energy-momentum correlations are not being quantified correctly.

6.2.2 Three Dimensional EMCIC Analysis

The second strategy to constrain the parameter space is to bin the correlation function and the four EMCIC histograms in the 3-dimensional Longitudinal Co-Moving System introduced in section 3.3.2. Figure 6.12 shows a projection on the Qout and Qlong axes summing over 0 to 160 MeV in the Qside axis. The Bose-Einstein enhancement can be clearly seen at the origin, its strength appearing modest given the large scale on the vertical axis. The long range correlations are also present here at larger values of Qout and

Qlong, but after about 1 GeV the statistics become very poor and the signal is quite noisy. In order to make the best out of the low statistics at large values of Q, it was decided to use variable bin sizes: the bin size is kept at 40 MeV for Q < 0.6 GeV, where the femtoscopic effect is strong, then increased to 80 MeV from 0.6 to 0.92 GeV and to 160 MeV from then on. Two multiplicity ranges (1-15, >15) with two kT bins (0.1-0.3 GeV, 0.3-0.6 GeV) were used. Following the results from [43], the femtoscopic part of the fitting function used to fit the data was chosen to be exponential in Qout and Qlong, and Gaussian in

Figure 6.9: Simultaneous fit to 4 kT bins of the correlation function from simulated p+p events at 7 TeV.

Qside, as this combination was the one that gave the lowest χ². Figure 6.13 shows the

Qout, Qside and Qlong projections of the fit to the correlation function from Monte Carlo data using Equation 4.7. The fit function describes the Monte Carlo data rather well. The lower right panel of the figure shows the physical quantities calculated from the Mi parameters. It is observed that the only quantity that has statistical significance is the average energy ⟨E⟩ = 0.45 ± 0.08 GeV, which is a reasonable value. The other quantities, however, have large uncertainties, as was the case with the simultaneous kT fit in the previous section.
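The variable-width Q binning described above (40 MeV below 0.6 GeV, 80 MeV up to 0.92 GeV, 160 MeV beyond) can be written down explicitly; the short sketch below builds such edges with NumPy. The 2.04 GeV upper edge is an illustrative choice of this sketch, not a value taken from the analysis.

```python
import numpy as np

# Variable-width Qinv bin edges: 40 MeV bins below 0.6 GeV, 80 MeV bins
# from 0.6 to 0.92 GeV, then 160 MeV bins up to an (assumed) 2.04 GeV.
edges = np.concatenate([
    np.arange(0.0, 0.6, 0.04),
    np.arange(0.6, 0.92, 0.08),
    np.arange(0.92, 2.04 + 1e-9, 0.16),
])
widths = np.diff(edges)

# such edges can be passed directly to a histogramming routine, e.g.
# counts, _ = np.histogram(q_values, bins=edges)
print(len(edges), widths[0], widths[-1])
```

Coarser bins at large Q pool the sparse high-Q pairs, trading resolution (not needed far from the peak) for statistical stability.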

Figure 6.14 shows the Qout, Qside and Qlong projections of the fit to the correlation function for 900 GeV data. It can be seen that the fitting function does a good job describing the data, but the extracted physical quantities are again not satisfactory except for the average energy. The EMCICs parametrization has proven able to describe the baseline in one and three dimensions; however, the parameters extracted from the fit are not in the range expected for them to be interpreted as physical quantities of the system. It is possible that a finer binning in multiplicity and kT of the three dimensional correlation function would help improve the results of the EMCICs parametrization, but in order to do that the analysis would require more events.

Figure 6.10: Simultaneous fit to 4 kT bins of the correlation function from p+p events at 7 TeV. The femtoscopic peak is fit with an exponential.

Figure 6.11: Simultaneous fit to 5 kT bins of the correlation function from p+p events at 7 TeV. The femtoscopic peak is fit with an exponential.

Figure 6.12: Two dimensional projection of the correlation function on the Qout and Qlong axes using the bins from 0 to 160 MeV in Qside. The Bose-Einstein enhancement can be clearly seen at the origin. Long range correlations are also visible as Qout and Qlong increase. The large peaks are a product of low statistics in the high Q bins.

Figure 6.13: 3D EMCIC fit to the correlation function from 900 GeV p+p simulated data. The projections shown are for low multiplicity (Mult < 15) and 0.1 < kT < 0.3.

Figure 6.14: 3D EMCIC fit to the correlation function from 900 GeV p+p data. The projections shown are for low multiplicity (Mult < 15) and 0.1 < kT < 0.3.

Chapter 7 Summary of femtoscopy results from ALICE

Femtoscopy has been successfully used for more than 40 years to study the space-time geometry of the emission region in particle collisions in a variety of experiments. The ALICE experiment is no exception, having already produced three publications on identical pion femtoscopy [42, 43, 48]. Even more studies are being prepared, such as charged and neutral kaon femtoscopy [49], non-identical particle femtoscopy and azimuthal HBT, among others. This dissertation presented results from the femtoscopic study of proton-proton collisions at 0.9, 2.76 and 7.0 TeV energies. The following observations and conclusions have been made for the HBT radii (Rinv, Rout, Rside and Rlong):

The HBT radii are independent of the center of mass energy: For a given multiplicity and kT bin the correlation functions are basically identical for the three energies, thereby yielding the same femtoscopic radii.

The HBT radii increase with multiplicity: This trend is consistent for all energies and all kT bins: the HBT radii increase as the event multiplicity increases, a signature that the more constituents the system has, the more they interact with one another, allowing the system to expand further.

The HBT radii have a defined kT dependence: The kT dependence of the HBT radii is the most important signature observed in femtoscopy. For the lower multiplicities it is observed to be flat, but a negative slope develops as multiplicity increases, showing that the HBT radii decrease with increasing kT. In other words, the source size looks smaller when measured with faster particles. This is the strongest signature of collective motion among the constituents, because through all the interactions of the particles a fluid-like motion is formed, causing particles with similar momentum to come from a region close in space, rather than from completely random points in the source.

The multiplicity mechanism in p+p and peripheral A+A is different: The multiplicities reached in proton-proton collisions at 7.0 TeV are comparable for the first time to those in non-central heavy ion collisions. The femtoscopic radii in p+p show a linear dependence on (dNch/dη)^(1/3), just as seen in heavy ion collisions, but with a different slope and intercept. This is a hint that the particle production mechanism in proton-proton and in heavy ion collisions is different. The interpretation is that in proton-proton collisions high multiplicity is reached through a very hard parton collision causing particle showers from its decays, whereas in non-central heavy ion collisions the multiplicity is built up from many soft nucleon-nucleon collisions.

EMCICs: Energy-momentum conservation induced correlations are definitely present, but their contribution to the non-femtoscopic signal is weak. The method used here to quantify the background signal proved useful in reproducing the data, but the parameters obtained from the fits were not in a physically acceptable range. This is an indication that the correlations induced by conservation of energy and momentum are not the main component of non-femtoscopic correlations. Apparently the background is affected mainly by the presence of mini-jets, and the most efficient way to reproduce their effect on the correlation function is with Monte Carlo event generators.

Chapter 8 Quantitative Calculations For Black Hole Production At The Large Hadron Collider

The present chapter presents a quantitative study of black hole formation at the LHC. The results were obtained using the black hole event generator "CATFISH" to investigate how these events could be observed in the hadronic channel at mid-rapidity using a particle tracking detector. This work was finalized before the start of the LHC and was published in [50].

8.1 Introduction

When the Large Hadron Collider (LHC) started running in late 2009, physics at the 7 TeV center of mass scale became accessible through p-p collisions. Running at the nominal design energy of 14 TeV has been postponed until at least 2014. Although searching for the Higgs particle will be one of the main research focuses, other interesting physics, such as the possible formation of black holes, will be another direction of intense research. This phenomenon is predicted by the framework of Large Extra Dimensions (LED) in the Arkani-Hamed, Dimopoulos and Dvali (ADD) model [51, 52]. This model is a proposed solution to the hierarchy problem, the question of why gravity is much weaker than the other forces in nature. The scale for gravity is characterized by the Planck mass, MP ≈ 10^19 GeV, whereas the strong and the weak forces have a scale on the order of 1 GeV/fm and 100 GeV, respectively. The approach to solve the problem in the ADD model uses the following four assumptions: 1) the hierarchy of the forces in nature only exists in 3+1 dimensions, but not in higher dimensional space-time with D = 3+1+n dimensions, 2) only gravity is allowed to propagate in the n extra dimensions through gravitons, 3) the LED are "compact" or finite, but too small to have been detected until (possibly) now, 4) Standard Model particles only propagate in 3+1 dimensional space-time, which is embedded in the 3+1+n higher dimensional space-time. Gravity seems to be weaker in 3+1 dimensions because it is diluted in the 3+1+n higher dimensional space. The Planck mass in a theory with n LED is given by

MP^(n+2) ≈ MP0^2 (ħ/(2πrc))^n , (8.1)

where MP0 = √(ħ/G) = 1.22 × 10^19 GeV is the Planck mass in D = 3+1 dimensions and rc is the compactification length of the extra dimensions. The compactification length is a free parameter of the theory, and it can be seen [53] that for a given Planck mass, it decreases as

the number of extra dimensions increases, e.g. for MP = 1 TeV, rc is close to 1 fm for n = 7.

For fixed rc the Planck scale is lowered as the number of large extra dimensions increases and the hierarchy of the forces is largely suppressed [53]. One of the consequences of this model is the formation of D-dimensional black holes smaller than the size of the extra dimensions and centered on the brane. The gravitational radius of the D-dimensional black holes is up to 10^32 times larger than that of a usual black hole in 3+1 dimensions with the same mass. This considerably increases the possibility of creating such an object at the LHC. The main purpose of the work presented in this chapter was to study the possible signatures of black hole formation in p-p collisions, which could be observed in the hadronic channel at mid-rapidity with a charged particle tracking detector. The CATFISH (Collider grAviTational FIeld Simulator for black Holes) [54] Monte Carlo event generator was used to generate black hole events. CATFISH is a Monte Carlo code based on the ADD model and is used to generate black hole events at the LHC with a center of mass energy of 14 TeV. It is similar to other MC generators like TRUENOIR [55] and CHARYBDIS [56], but it is more up to date since it uses the most recent theoretical results on black hole formation and evolution. The updates include inelasticity effects during the black hole formation phase, exact field emissivities, corrections to the Hawking semi-classical evaporation phase, black hole recoil on the brane, and additional final black hole decay modes (including remnants). CATFISH links to the PYTHIA [57] Monte Carlo code to simulate the evolution of the decay products of the black hole into Standard Model particles. The flexibility of CATFISH allows the study of different theoretical models of black hole formation. A comparison between the features of CATFISH and CHARYBDIS has been carried out before [58], and a similar study of the observables of black holes at the LHC has also been done using the CHARYBDIS code [53, 59].
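Equation (8.1) can be inverted for the compactification length. The sketch below does so in natural units (c = 1, ħc ≈ 0.1973 GeV·fm) and reproduces the rough rc ≈ 1 fm quoted above for MP = 1 TeV and n = 7; the function name is an illustrative choice.

```python
import math

HBARC = 0.1973   # GeV*fm, hbar*c in natural units
M_P0 = 1.22e19   # GeV, Planck mass in D = 3+1 dimensions

def compactification_length(M_P, n):
    """rc in fm, obtained by inverting Eq. (8.1):
    MP^(n+2) ~ MP0^2 (hbar/(2*pi*rc))^n  =>  rc ~ (hbar*c/2pi)*(MP0^2/MP^(n+2))^(1/n)."""
    return HBARC / (2.0 * math.pi) * (M_P0**2 / M_P**(n + 2)) ** (1.0 / n)

# For MP = 1 TeV and n = 7, rc comes out near 1 fm, as quoted in the text;
# for fewer extra dimensions rc grows rapidly.
print(compactification_length(1000.0, 7))
```

The numerical prefactor depends on the compactification convention, so the result should be read as an order-of-magnitude estimate.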

8.2 Black hole formation models in CATFISH

CATFISH includes the following three models to calculate the black hole evolution:

• No Gravitational Loss model (NGL): This model works in the semi-classical limit of black hole formation. The hoop conjecture [60] is used to estimate the possibility of black hole formation in a particle collision. It states that an apparent horizon is formed in D = 4 dimensions if a mass M is compacted in a region with a circumference C such that

HD ≡ C / (2πrh(M)) ≤ 1, (8.2)

where rh(M) is the Schwarzschild radius for the mass M given by

rh(M) = [16πGD M / ((D − 2)ΩD−2)]^(1/(D−3)) . (8.3)

Here ΩD−2 is the volume of the (D − 2)-sphere and GD is the gravitational constant.

The impact parameter b of two colliding partons has to be smaller than rh(M) to produce a black hole. The cross section is given approximately by the geometrical Black Disk (BD) cross section

σBD = πR^2(s, n) Θ[R(s, n) − b], (8.4)

where R is the horizon radius and depends on the center of mass energy √s and the number of extra dimensions n. The black hole mass is equal to the center of mass energy of the partons forming the black hole. This model is the same one used in TRUENOIR and CHARYBDIS.

• Yoshino-Nambu model (YN) [61, 62, 63, 64]: This model uses the Trapped Surface approach, which gives a bound on the inelasticity of a collision by modeling two incoming partons as Aichelburg-Sexl shock waves [65]. The apparent horizon is found in the union of the two shock waves. The condition for black hole formation is better described in higher dimensions by the volume conjecture than by the hoop conjecture, and is given by

HD ≡ [VD−3 / (ΩD−3 rh^(D−3)(s, n))]^(1/(D−3)) ≤ 1, (8.5)

where ΩD−3 is the volume of the (D − 3)-sphere and VD−3 is the characteristic (D − 3)-dimensional volume of the system. The volume conjecture reduces to the hoop conjecture in D = 4 dimensions. The cross section is calculated in this model as

σBH = F(D) πrh^2(s, n), (8.6)

where F(D) is a numerical factor close to unity. The black hole mass is less than the center of mass energy of the partons forming the black hole due to the emission of gravitons, and it depends on the impact parameter.

• Yoshino-Rychkov model (YR) [66, 67]: This model is an improved version of the YN model in that the apparent horizon is constructed from a slice of the future light cone in the shock collision plane. The slice lies in the future of the one used in the YN model. The condition for black hole formation is also given by the volume conjecture, and the cross section calculation is similar to the calculation in Equation 8.6. The black hole mass is also reduced in comparison to the black hole mass in the NGL model.
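The geometry entering Eqs. (8.3) and (8.4) is straightforward to code up. The sketch below works in units with c = ħ = 1 and takes the D-dimensional gravitational constant GD as an input rather than fixing a convention for it; in D = 4 it reduces to the familiar rh = 2GM.

```python
import math

def sphere_volume(n):
    """Volume Omega_n of the unit n-sphere: 2*pi^((n+1)/2) / Gamma((n+1)/2)."""
    return 2.0 * math.pi ** ((n + 1) / 2.0) / math.gamma((n + 1) / 2.0)

def horizon_radius(M, D, G_D):
    """D-dimensional Schwarzschild radius, Eq. (8.3); G_D and M in consistent units."""
    return (16.0 * math.pi * G_D * M
            / ((D - 2) * sphere_volume(D - 2))) ** (1.0 / (D - 3))

def black_disk_cross_section(M, D, G_D, b):
    """Geometrical black-disk cross section, Eq. (8.4): pi*rh^2 if b < rh, else 0."""
    r = horizon_radius(M, D, G_D)
    return math.pi * r * r if b < r else 0.0
```

As a sanity check, Omega_2 = 4π, so for D = 4 and G = M = 1 the horizon radius is exactly 2, and the cross section vanishes once the impact parameter exceeds it.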

8.2.1 Event simulation in CATFISH

The event simulation in CATFISH occurs in three steps:

1. The initial black hole mass is sampled from the differential cross section.

2. The black hole is decayed through the Hawking mechanism and final hard events.

3. The unstable quanta emitted are hadronized or decayed instantaneously by PYTHIA, except top quarks, which are decayed as t → bW first.

The results of our calculations are presented in the following section.

8.3 Black hole formation signatures

The signatures of black hole production are studied in this section, first for black holes decaying completely into Standard Model particles and then for events with black hole remnants.

8.3.1 Black hole signal in the hadronic channel

The hadronic channel is considered as a possible method to detect black holes. To be conservative, the first-year luminosity, the transverse momentum range and the rapidity coverage were assumed to be L = 10^31 cm^−2 s^−1, 0.1 GeV/c < pT < 300 GeV/c and −1 < y < 1, respectively (i.e. these characteristics are similar to those of the central tracking detectors of the ALICE experiment [12] at CERN). The transverse momentum distribution (1/pT)(dN/dpT) was considered in earlier works as a possible signature of black hole formation [53, 59], and it was found that the distribution flattens

more in black hole events than in the QCD background events at pT > 200 GeV/c for

MP = 1 TeV and at higher pT for larger values of MP. This would make the detection of black hole events difficult for tracking detectors, since the momentum resolution of these detectors is poor at large pT. To overcome this problem another observable can be used

instead, namely the sum of the transverse momenta of all charged hadrons in each event, calculated using

PT = Σ_i pTi , (8.7)

where i runs over all charged hadrons in one event. CATFISH was used to generate black hole events, PT was calculated for each event, and the distribution was analyzed in a histogram with bins of size 0.1 TeV/c. The total number of counts per bin for an arbitrary period of time N can be calculated using the following relation:

N = Nbin σ L t / NTot , (8.8)

where Nbin is the number of counts in a particular bin, NTot is the total number of events generated, and σ is the cross section. For each simulation in this work, NTot = 10^5 events were generated and a period of time of four months was chosen, so that t ≈ 10^7 s.
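Equation (8.8) is a simple rescaling of simulated bin contents to an expected event yield. The sketch below illustrates it with made-up numbers (a 1 nb cross section is an assumption for illustration), keeping the luminosity and running time quoted in the text.

```python
def expected_counts(n_bin, n_tot, sigma_cm2, lumi, t_sec):
    """Eq. (8.8): scale the simulated bin fraction n_bin/n_tot by the
    integrated luminosity L*t times the production cross section sigma."""
    return n_bin * sigma_cm2 * lumi * t_sec / n_tot

# Illustrative numbers only: sigma = 1 nb = 1e-33 cm^2, the first-year
# luminosity L = 1e31 cm^-2 s^-1, t = 1e7 s (~four months), and 10 out
# of 1e5 generated events landing in the bin.
print(expected_counts(10, 1e5, 1e-33, 1e31, 1e7))  # about 10 counts
```

With these inputs the integrated luminosity is 10^38 cm^−2 = 100 nb^−1, so a 1 nb process yields about 10^5 events overall, and a bin holding 10^−4 of the generated sample ends up with roughly 10 counts.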

Figure 8.1: Comparison of the PT for different values of MP with the QCD background. Model: NGL.

In Figure 8.1, N vs. PT is plotted for the NGL model with different values of the Planck mass, together with a normal QCD background run from PYTHIA. This run is a mixture of six different runs with increasing hardness of the 2 → 2 parton collision, which allows events with large PT to be produced. It is seen that at low PT the normal QCD events dominate, but above ∼ 0.5 TeV the MP = 1 TeV curve takes over. This suggests a possible method for detecting black holes: noticing the discontinuous change in the slope of the distribution dN/dPT from normal QCD background events to black hole events. For higher values of the Planck mass the curves intersect the QCD background at higher PT values. The black

hole production becomes more suppressed, but it can be seen that even at a MP of 5 TeV it would still be possible to detect a signal under these running conditions. Furthermore, by measuring the PT value at which the change in the distribution slope happens, the actual value of the Planck mass could be determined.

Figure 8.2: Comparison of the PT with applied cuts for different values of MP and the QCD background. Model: NGL.

Figure 8.2 is a plot of the same quantities as in Figure 8.1, except that various cuts are applied to simulate the acceptance of a realistic tracking detector. The cuts used are:

1. A rapidity cut (−1 < y < 1)

2. A transverse momentum cut (0.1 GeV/c < pT < 300 GeV/c)

3. A multiplicity cut (m > 65) within the rapidity cut.

The black hole signal is not degraded by these cuts, as can be seen in Figure 8.2. Their effect is particularly visible in the lowest PT bins, where the number of counts is reduced. In order to keep the detector acceptance realistic, these cuts are applied in all subsequent plots.

Figure 8.3: Combined signal of the QCD background and the black hole signal for MP = 1 TeV. Model: NGL.

Figure 8.3 plots the combined signal of the QCD background and the black hole contribution, as one would obtain experimentally. It can be seen that at PT ≈ 0.5 TeV/c the slope of the distribution changes abruptly, indicating the transition from normal QCD events to black hole events.

Figure 8.4: Comparison of PT for MP of 1 TeV for different numbers of extra dimensions and the QCD background.

In Figure 8.4 the effect of varying the number of extra dimensions is studied. As can be seen, there is no substantial difference in our results if five, six or seven extra dimensions are used in the simulations. String theory [68] favors n = 7, therefore this value is used for all of the results presented in the rest of the study.

In Figure 8.5 PT is plotted for the three theoretical models of black hole formation: NGL, YN and YR. The NGL model has the largest cross section of the three models since it uses the semi classical approximation to black hole formation. The YN and YR models take into account the gravitational energy loss at formation and this decreases the cross section. It can be seen that only the NGL curve is above the QCD background, meaning that if black holes are formed according to the YN or YR models it would not be possible

Figure 8.5: Comparison of the three different black hole formation models for MP = 1 TeV: No Gravitational Loss (NGL), Yoshino-Nambu (YN) and Yoshino-Rychkov (YR).

to detect any using the PT method. From this figure we assert that the NGL model is an upper bound on the formation of black holes, and the YN and YR models are the lower bounds for the known models. Black holes are formed when a minimum black hole mass is contained within the

Schwarzschild radius, and this can be controlled in CATFISH using the parameter Xmin ≥ 1, given in terms of the Planck mass:

Mmin(formation) = Xmin × MP .

Subsequently black holes evaporate through the Hawking mechanism until a minimum mass is reached. CATFISH provides the parameter Qmin ≥ 1 to control the minimum mass at evaporation:

Mmin(evaporation) = Qmin × MP .

For simplicity it is assumed in the present simulations that the minimum black hole mass at formation and at evaporation are the same, but in general these two values are independent.

Figure 8.6: Comparison of two values of the black hole mass at formation (Xmin) and black hole mass at evaporation (Qmin) and the QCD background. Model: NGL with MP = 1 TeV.

Figure 8.6 shows results for how the minimum black hole mass at formation and at evaporation affects the hadronic signal. The PT is plotted for a MP of 1 TeV and two values of Xmin = Qmin using the NGL model. It is observed that at low PT the black holes with lower initial mass have a stronger signal. This is because the decay products of the

black holes with larger masses would have a higher PT than that of black holes with lower

masses. For values above 1.5 TeV/c the PT distributions are about the same for both, showing that black

hole production is insensitive to this parameter for higher PT .

So far it has been shown that the possibility of detecting a black hole using the sum of the transverse momenta depends strongly on the model of black hole formation and evaporation. The NGL model provides optimistic chances of producing black holes, whereas the YN and YR models provide a more pessimistic expectation, because the black hole events would not be seen above the ordinary QCD background. Note that this does not imply that black holes cannot be detected in particle trackers at mid-rapidity. In the next

section we explore such a possibility by studying the detection of charged remnants, if they are formed.

8.3.2 Signal from black hole Remnants

Figure 8.7: Comparison of the PT distribution in the NGL model with and without black hole remnants.

The possibility of a black hole not decaying completely to Standard Model particles after the Hawking evaporation phase and leaving a remnant has also been discussed in the literature [69]. The black hole remnant would be a very massive and possibly charged [70] particle, and these facts could be used to detect it in mid-rapidity particle trackers. A remnant is produced in CATFISH when the evaporation phase is over and the mass has reached the minimum value MBHR = Qmin × MP, which is not decayed further into Standard Model particles. Using the technique of summing the transverse momenta, it is seen in Figures 8.7, 8.8

Figure 8.8: Comparison of the PT in the YN model with and without black hole remnants.

and 8.9 that having a black hole remnant significantly reduces the total number of counts per bin. Even in the most optimistic scenario, the NGL model, Figure 8.7 shows that the signal from black hole remnant events is below the QCD background. Therefore the possibility of detecting a black hole signal via the PT method is very small, because the QCD background dominates. However, if a black hole remnant is formed, another technique could be used to detect it. Black hole remnants may have a net charge [70], which would allow the detection of these massive particles using the time-of-flight (TOF) method in particle trackers. CATFISH does not assign a charge to the black hole remnants and assumes they are all neutral; therefore in these simulations a charge is assigned to each remnant, following reference [70]. The charge distribution among the black hole remnants is plotted in Figure 8.11. It is approximately Gaussian with a standard deviation of 1.47. The plot shows that about 75 percent of the black hole remnants will have a net charge and could be detected using the tracking detector plus the TOF technique.
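A charge assignment of this kind can be mimicked with a short sketch: each remnant gets the nearest integer of a draw from a zero-mean Gaussian with σ = 1.47, the width quoted above. The rounding scheme is an assumption of this illustration, not the prescription of reference [70].

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_remnant_charges(n, sigma=1.47):
    # round a Gaussian draw to the nearest integer charge for each remnant
    return np.rint(rng.normal(0.0, sigma, size=n)).astype(int)

charges = sample_remnant_charges(100_000)
charged_fraction = np.mean(charges != 0)
print(charged_fraction)  # roughly 0.73 for sigma = 1.47
```

With σ = 1.47, the probability of rounding to zero is P(|x| < 0.5) ≈ 0.27, so roughly three quarters of the remnants carry a net charge, consistent with the 75 percent quoted in the text.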

Figure 8.9: Comparison of the PT in the YR model with and without black hole remnants.

By knowing the momentum of the black hole remnant and the TOF, its mass can be reconstructed using the following relation:

MR = p0 / (γ0 β0) , (8.9)

where p0 is the momentum of the remnant. β0 and γ0 can be calculated respectively from

β0 = x / (ct) ,   γ0 = 1 / √(1 − β0²) . (8.10)

We assume a time resolution of 50 ps, a momentum resolution ∆p/p ≈ 10% and a flight path of x = 3 m. Figure 8.10 shows the counts per 0.05 TeV/c² bin of the actual black hole remnant mass obtained from the simulation and the reconstructed mass from the TOF and momentum. It can be seen that both curves are almost identical for this bin size, thus demonstrating that the reconstructed mass is accurate. Figure 8.12 shows the difference between the actual mass of the black hole and the

Figure 8.10: Comparison of the black hole remnant mass with the reconstructed mass from the TOF for MP = 1 TeV, Qmin = 1.

reconstructed mass using a 0.2 GeV bin size. The mass difference has an average of 0 GeV and a standard deviation of 0.76 GeV for this particular case, showing the high accuracy that is possible in determining the black hole mass using a charged particle tracker plus TOF with reasonable performance characteristics. The black hole remnant mass is observed to have a distribution which becomes broader as the value of the minimum black hole mass at evaporation gets closer to the Planck mass. Figure 8.13 shows how the width of the black hole remnant mass distribution changes depending on how far the minimum mass is above the Planck scale. One of the strong points of this method of detecting black holes is that the signal should be very clean in these large mass regions, since no QCD process generates such masses, allowing the recognition of even small signals.
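Equations (8.9) and (8.10) amount to a few lines of code. The sketch below performs a round-trip check with illustrative numbers: the 3 m flight path is taken from the text, while the remnant mass and momentum are made up for the example.

```python
import math

C = 0.299792458  # speed of light in m/ns

def reconstructed_mass(p_gev, path_m, tof_ns):
    """Eqs. (8.9)-(8.10): beta from flight path and TOF, then M = p/(gamma*beta)."""
    beta = path_m / (C * tof_ns)
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return p_gev / (gamma * beta)

def tof_ns_for(mass_gev, p_gev, path_m):
    """Helper: the TOF a particle of given mass and momentum would show."""
    e = math.sqrt(p_gev**2 + mass_gev**2)
    return path_m / (C * (p_gev / e))

# Round trip: an (assumed) 1 TeV/c^2 remnant with 500 GeV/c momentum over 3 m.
t = tof_ns_for(1000.0, 500.0, 3.0)
print(reconstructed_mass(500.0, 3.0, t))
```

Since γ0β0 = p/M exactly, the reconstruction is algebraically exact; in practice the mass resolution is set by the 50 ps timing and the ∆p/p smearing quoted above.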

Figure 8.11: Charge distribution among the black hole remnants for MP = 1 TeV, Qmin = 1.

8.4 Summary

This chapter presented possible methods to detect black holes at the LHC (if they are indeed formed) by looking in the hadronic channel at mid-rapidity. Summing the transverse momenta of the charged hadrons in each event is one such method, because the decay products of black hole events are expected to yield larger PT bin counts than normal QCD events at PT > 0.5 TeV in the NGL model. The black hole signature is the transition in the PT distribution from background QCD events to black hole events. This model represents the upper bound for black hole formation because there is no gravitational energy loss during formation. The other two models studied, the YN and YR models, take into account the energy loss at formation, and thus provide the lower bounds for the known models of black

hole formation. In this case lower counts per PT bin are expected and the hadronic signal will not be distinguishable from the QCD background events. Should the PT method not work, another way of detecting black holes is possible if they do not decay entirely and, instead, leave a massive remnant. Using tracking detectors plus the TOF, the mass of a

Figure 8.12: Difference between the black hole remnant mass and the reconstructed mass for MP = 1 TeV, Qmin = 1.

black hole remnant can be reconstructed. By looking at the spectrum of masses from many events it could be easily recognized that there is a very massive particle left over. This would be the signature for black hole events with remnants happening at the LHC. There are other possible black hole signatures which have not been considered here and have been studied elsewhere [54, 70, 71, 72, 73, 74, 75]. Examples include the missing energy, missing transverse energy and hadron energy signatures, as well as signatures due to suppression of back-to-back-correlated di-jets and di-lepton production with large transverse momentum.

8.4.1 Have black holes been observed with the ALICE detector?

The short answer is no. As an author of this study, I have looked at the sum of the transverse momentum spectrum PT in proton-proton collisions at 7 TeV and have not observed a transition from QCD events to black hole events. The data available as of September 2011 are not the type of data required for this analysis. The main reason is that the energy of the proton-proton collisions is only 7 TeV instead of

Figure 8.13: Comparison of different black hole remnant mass distributions for MP = 1 TeV.

the 14 TeV that has been used in this study. Another reason is that the ALICE detector has recorded data mostly with a minimum bias trigger, which only contributes to the lowest bins in the PT spectrum; in order to reach the tail of the distribution, a large number of events recorded with high-multiplicity triggers is needed.

Bibliography

[1] Norwegian University of Science and Technology. Phase diagram of nuclear matter.

[2] J. Adams et al. Pion interferometry in Au+Au collisions at √sNN = 200 GeV. Phys. Rev. C, 71(4):044906, Apr 2005.

[3] F. Karsch and E. Laermann. Thermodynamics and in-medium hadron properties from lattice QCD. 2003.

[4] R. J. Glauber. Lectures on Theoretical Physics, volume 1. Interscience, NY, 1959.

[5] Dmitri Kharzeev, Eugene Levin, and Marzia Nardi. Onset of classical QCD dynamics in relativistic heavy ion collisions. Phys. Rev. C, 71(5):054903, May 2005.

[6] Peter F. Kolb and Ulrich W. Heinz. Hydrodynamic description of ultrarelativistic heavy-ion collisions. 2003.

[7] Tetsufumi Hirano. Hydrodynamic approaches to relativistic heavy ion collisions. 2004.

[8] Thomas J. Humanic. Predictions of hadronic observables in Pb+Pb collisions at √sNN = 2.76 TeV from a hadronic rescattering model. 2010.

[9] Thomas J. Humanic. Hanbury-Brown-Twiss interferometry with identical bosons in relativistic heavy ion collisions: Comparisons with hadronic scattering models. Int. J. Mod. Phys., E15:197–236, 2006.

[10] N. Bock. Elliptic flow in relativistic heavy ion collisions, 2009.

[11] LHC Design Report, volume I+II+III, CERN-2004-003-V-1, CERN-2004-003-V-2, CERN-2004-003-V-3 (2004).

[12] K. Aamodt et al. The ALICE experiment at the CERN LHC. JINST, 3:S08002, 2008.

[13] H.J. Specht. Experimental aspects of heavy ion physics at LHC energies. CERN-90-10-v-2, 1990.

[14] ALICE Collaboration. ALICE: Physics performance report, volume I. Journal of Physics G: Nuclear and Particle Physics, 30(11):1517, 2004.

[15] ALICE Collaboration. ALICE: Physics performance report, volume II. Journal of Physics G: Nuclear and Particle Physics, 32(10):1295, 2006.

[16] R. Hanbury-Brown and R.Q. Twiss. Interferometry. Nature, 178:1046, 1956.

[17] R. Hanbury-Brown and R.Q. Twiss. A new type of interferometer for use in radio- astronomy. Phil. Mag., 45:663, 1954.

[18] Gerson Goldhaber, Sulamith Goldhaber, Won-Yong Lee, and Abraham Pais. Influence of Bose-Einstein statistics on the antiproton-proton annihilation process. Phys. Rev., 120:300–312, 1960.

[19] M.A. Lisa, S. Pratt, R. Soltz, and U. Wiedemann. Femtoscopy in relativistic heavy ion collisions. Ann. Rev. Nucl. Part. Sci., 55:311, 2005.

[20] S. Pratt. Pion interferometry for exploding sources. Phys. Rev. Lett., 53:1219–1221, 1984.

[21] S. E. Koonin. Proton pictures of high-energy nuclear collisions. Phys. Lett., B70:43–47, 1977.

[22] M. G. Bowler. Coulomb corrections to Bose-Einstein correlations have been greatly exaggerated. Phys. Lett., B270:69–74, 1991.

[23] Yu. Sinyukov, R. Lednicky, S. V. Akkelin, J. Pluta, and B. Erazmus. Coulomb corrections for interferometry analysis of expanding hadron systems. Phys. Lett., B432:248–257, 1998.

[24] M. G. Bowler. Extended sources, final state interactions and Bose-Einstein correlations. Zeitschrift für Physik C Particles and Fields, 39:81–88, 1988. doi:10.1007/BF01560395.

[25] Richard Lednicky. Finite-size effects on two-particle production in continuous and discrete spectrum. Phys.Part.Nucl., 40:307–352, 2009.

[26] M. A. Lisa et al. The E895 pi- correlation analysis: A status report. 2005.

[27] S. Pratt. Coherence and Coulomb effects on pion interferometry. Phys. Rev., D33:72–79, 1986.

[28] G. Bertsch, M. Gong, and M. Tohyama. Pion interferometry in ultrarelativistic heavy ion collisions. Phys. Rev., C37:1896–1900, 1988.

[29] A. N. Makhlin and Yu. M. Sinyukov. The hydrodynamics of hadron matter under a pion interferometric microscope. Z. Phys., C39:69, 1988.

[30] M. Herrmann and G. F. Bertsch. Source dimensions in ultrarelativistic heavy ion collisions. Phys. Rev., C51:328–338, 1995.

[31] Zbigniew Chajecki and Mike Lisa. Global Conservation Laws and Femtoscopy of Small Systems. Phys. Rev., C78:064903, 2008.

[32] M. M. Aggarwal et al. Pion femtoscopy in p+p collisions at √s = 200 GeV. 2010.

[33] Adam Kisiel and David A. Brown. Efficient and robust calculation of femtoscopic correlation functions in spherical harmonics directly from the raw pairs measured in heavy-ion collisions. Phys. Rev., C80:064911, 2009.

[34] Torbjorn Sjostrand et al. High-energy-physics event generation with PYTHIA 6.1. Comput. Phys. Commun., 135:238–259, 2001.

[35] Mikolaj Chojnacki, Adam Kisiel, Wojciech Florkowski, and Wojciech Broniowski. THERMINATOR 2: THERMal heavy IoN generATOR 2. 2011.

[36] N. Bock. Two-pion femtoscopy: extracting source sizes from two-pion correlation functions in p-p collisions at 7 and 14 TeV. Presented at the V Workshop on Particle Correlations and Femtoscopy, CERN, Geneva, 14–17 September 2009. Not published, 2009.

[37] P. Danielewicz et al. Collective motion in nucleus-nucleus collisions at 800 MeV/nucleon. Phys. Rev., C38:120–134, 1988.

[38] Nicolas Borghini, Phuong Mai Dinh, and Jean-Yves Ollitrault. Are flow measurements at SPS reliable? Phys. Rev., C62:034902, 2000.

[39] Nicolas Borghini. Multiparticle correlations from momentum conservation. Eur. Phys. J., C30:381–385, 2003.

[40] Nicolas Borghini. Momentum conservation and correlation analyses in heavy-ion collisions at ultrarelativistic energies. Phys. Rev., C75:021904, 2007.

[41] K Nakamura and Particle Data Group. Review of particle physics. Journal of Physics G: Nuclear and Particle Physics, 37(7A):075021, 2010.

[42] K. Aamodt et al. Two-pion Bose-Einstein correlations in pp collisions at √s = 900 GeV. Phys. Rev., D82:052001, 2010.

[43] K. Aamodt et al. Femtoscopy of pp collisions at √s = 0.9 and 7 TeV at the LHC with two-pion Bose-Einstein correlations. 2011.

[44] Z. Chajecki. Identical particle correlations in STAR. Nucl. Phys., A774:599–602, 2006.

[45] Michael Annan Lisa. Azimuthally sensitive interferometry and the source lifetime at RHIC. Acta Phys. Polon., B35:37–46, 2004.

[46] Z. Chajecki, T. D. Gutierrez, M. A. Lisa, and M. Lopez-Noriega. AA versus pp (and dA): A puzzling scaling in HBT at RHIC. 2005.

[47] N. Bock. Femtoscopy and energy-momentum conservation effects in proton-proton collisions at 900 GeV in ALICE. J. Phys. Conf. Ser., 270:012022, 2011.

[48] K. Aamodt et al. Two-pion Bose-Einstein correlations in central Pb-Pb collisions at √sNN = 2.76 TeV. Phys. Lett., B696:328–337, 2011.

[49] T. J. Humanic for the ALICE Collaboration. K0s-K0s correlations in 7 TeV pp collisions from the ALICE experiment at the LHC. 2011.

[50] N. Bock and T.J. Humanic. Quantitative calculations for black hole production at the Large Hadron Collider. Int. J. Mod. Phys., A24:1105–1118, 2009.

[51] Nima Arkani-Hamed, Savas Dimopoulos, and G.R. Dvali. The Hierarchy problem and new dimensions at a millimeter. Phys.Lett., B429:263–272, 1998.

[52] Nima Arkani-Hamed, Savas Dimopoulos, and G.R. Dvali. Phenomenology, astrophysics and cosmology of theories with submillimeter dimensions and TeV scale quantum gravity. Phys. Rev., D59:086004, 1999.

[53] Thomas J. Humanic. Extra-dimensional physics with p+p in the ALICE experiment, 2005.

[54] M. Cavaglia, R. Godang, L. Cremaldi, and D. Summers. Catfish: A Monte Carlo simulator for black holes at the LHC. Comput.Phys.Commun., 177:506–517, 2007.

[55] Savas Dimopoulos and Greg L. Landsberg. Black hole production at future colliders. Prepared for APS / DPF / DPB Summer Study on the Future of Particle Physics (Snowmass 2001), Snowmass, Colorado, 30 Jun – 21 Jul 2001.

[56] C.M. Harris, P. Richardson, and B.R. Webber. CHARYBDIS: A Black hole event generator. JHEP, 0308:033, 2003.

[57] Torbjorn Sjostrand, Leif Lonnblad, and Stephen Mrenna. PYTHIA 6.2: Physics and manual. 2001.

[58] Douglas M. Gingrich. Comparison of black hole generators for the LHC. 2006.

[59] Thomas J. Humanic, Benjamin Koch, and Horst Stoecker. Signatures for Black Hole production from hadronic observables at the Large Hadron Collider. Int.J.Mod.Phys., E16:841–852, 2007.

[60] K.S. Thorne. In J. Klauder, editor, Magic without Magic: John Archibald Wheeler. 1972.

[61] Hirotaka Yoshino and Yasusada Nambu. Black hole formation in the grazing collision of high-energy particles. Phys. Rev., D67:024009, 2003.

[62] M. B. Voloshin. More remarks on suppression of large black hole production in particle collisions. Phys. Lett., B524:376–382, 2002.

[63] Thomas G. Rizzo. Black hole production at the LHC: Effects of Voloshin suppression. JHEP, 02:011, 2002.

[64] Vyacheslav S. Rychkov. Black hole production in particle collisions and higher curvature gravity. Phys. Rev., D70:044003, 2004.

[65] O. I. Vasilenko. Trap surface formation in high-energy black holes collision. 2003.

[66] Hirotaka Yoshino and Vyacheslav S. Rychkov. Improved analysis of black hole formation in high-energy particle collisions. Phys. Rev., D71:104028, 2005.

[67] Steven B. Giddings and Vyacheslav S. Rychkov. Black holes from colliding wavepackets. Phys. Rev., D70:104026, 2004.

[68] Edward Witten. Comments on string theory. 2002.

[69] Benjamin Koch, Marcus Bleicher, and Sabine Hossenfelder. Black hole remnants at the LHC. JHEP, 10:053, 2005.

[70] Sabine Hossenfelder, Benjamin Koch, and Marcus Bleicher. Trapping black hole remnants. 2005.

[71] Horst Stoecker. Stable TeV - Black Hole Remnants at the LHC: Discovery through Di-Jet Suppression, Mono-Jet Emission and a Supersonic Boom in the Quark-Gluon Plasma. Int.J.Mod.Phys., D16:185–205, 2007.

[72] G. L. Alberghi et al. Probing quantum gravity effects in black holes at LHC. 2006.

[73] Junichi Tanaka, Taiki Yamamura, Shoji Asai, and Junichi Kanzaki. Study of black holes with the ATLAS detector at the LHC. Eur. Phys. J., C41:19–33, 2005.

[74] Leif Lonnblad, Malin Sjodahl, and Torsten Akesson. QCD-suppression by black hole production at the LHC. JHEP, 09:019, 2005.

[75] Arunava Roy and Marco Cavaglia. Discriminating supersymmetry and black holes at the Large Hadron Collider. Phys. Rev., D77:064029, 2008.

Appendix A: Useful algorithms

This appendix presents some useful algorithms that are needed when working with particle correlations.

A.1 Algorithm to calculate the correlation functions

The first algorithm is the one used to obtain the experimental correlation functions, and it is valid for both the one-dimensional and the three-dimensional analyses.

A.1.1 The correlated pairs

Pairs of correlated particles are formed from particles of a single proton-proton event; they form the numerator of the correlation function in equation (3.9). It is necessary to read all tracks in an event and select only those tracks that belong to the desired particle species (e.g. pions, kaons, protons) and that satisfy predefined particle-identification cuts, multiplicity cuts, etc. For the particles that pass the cuts, the four-momentum information is stored in an array particleList of size n. There will be n(n − 1)/2 possible pairs. For each pair the momentum difference (Qinv or Qout, Qside, Qlong) is calculated and a count is added to the corresponding bin of a histogram. The function CalculateQ(particle1, particle2) receives two particle tracks as arguments and returns the value of Q, where Q is either Qinv or an array/vector containing Qout, Qside, Qlong. The complexity of this algorithm is O(n²), since it iterates over all pairs of particles.

Algorithm 1: Algorithm to calculate the numerator of the two-particle correlation function.

for i = 1 to n − 1 do
  for j = i + 1 to n do
    Q = CalculateQ(particleList[i], particleList[j]);
    Add 1 to Numerator Histogram(Q);
  end for
end for
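As a concrete illustration, the pair loop above can be sketched in plain Python. All names here are illustrative (the actual analysis runs in the ALICE/ROOT framework); particles are assumed to be given as (E, px, py, pz) four-momentum tuples, with Qinv = √(|Δp⃗|² − ΔE²):

```python
import math

# Illustrative sketch of Algorithm 1; particles are (E, px, py, pz) tuples.
def calculate_qinv(p1, p2):
    """Q_inv = sqrt(|p_vec1 - p_vec2|^2 - (E1 - E2)^2) for a pair of tracks."""
    d_e = p1[0] - p2[0]
    d_p2 = sum((a - b) ** 2 for a, b in zip(p1[1:], p2[1:]))
    return math.sqrt(max(d_p2 - d_e * d_e, 0.0))

def fill_numerator(particle_list, bin_width=0.01, n_bins=100):
    """Histogram Q_inv over all n(n-1)/2 pairs of one event (the numerator)."""
    numerator = [0] * n_bins
    n = len(particle_list)
    for i in range(n - 1):
        for j in range(i + 1, n):
            q = calculate_qinv(particle_list[i], particle_list[j])
            b = int(q / bin_width)
            if b < n_bins:   # ignore pairs beyond the histogram range
                numerator[b] += 1
    return numerator
```

The same double loop applies unchanged to the three-dimensional case; only the return value of the Q calculation and the dimensionality of the histogram differ.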

A.1.2 The uncorrelated pairs

The uncorrelated pairs are obtained from mixed events. As the analysis runs over the correlated pairs, a given number of events NEventsToMix is saved in memory in a list (MixPartList) containing NEventsToMix lists of particles. There are four loops involved in the mixing of events. The outer two loops, running over k and l, are the loops over the lists of events; note that they are very similar to the loops over the correlated particles. The inner two loops, running over i and j, mix all particles of list k with all particles of list l. The counters i and j run up to the number of particles in lists k and l, respectively. The function EventsIn() returns the number of particles stored for a given event.

Algorithm 2: Algorithm to calculate the denominator of the two-particle correlation function.

for k = 1 to NEventsToMix − 1 do
  for l = k + 1 to NEventsToMix do
    for i = 1 to EventsIn(MixPartList[k]) do
      for j = 1 to EventsIn(MixPartList[l]) do
        Q = CalculateQ(MixPartList[k][i], MixPartList[l][j]);
        Add 1 to Denominator Histogram(Q);
      end for
    end for
  end for
end for
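A minimal plain-Python sketch of the event-mixing loops follows (illustrative names, not the production code). The pair-variable calculation is passed in as calc_q, so the same function serves the one-dimensional and three-dimensional analyses:

```python
# Illustrative sketch of Algorithm 2: mix particles of event k with event l (k < l).
# mix_part_list is a list of events, each event being a list of particles.
def fill_denominator(mix_part_list, calc_q, bin_width=0.01, n_bins=100):
    denominator = [0] * n_bins
    n_events = len(mix_part_list)
    for k in range(n_events - 1):
        for l in range(k + 1, n_events):
            for p1 in mix_part_list[k]:       # particles of event k
                for p2 in mix_part_list[l]:   # particles of event l
                    b = int(calc_q(p1, p2) / bin_width)
                    if b < n_bins:
                        denominator[b] += 1
    return denominator
```

Iterating directly over the particle lists, rather than over index counters, keeps the loop bounds tied to the correct event list by construction.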

A.2 Algorithm to obtain the Qout, Qside, Qlong projections of the three-dimensional correlation function

When working with the three-dimensional analysis one encounters the problem that the whole correlation function cannot be pictured directly, because it has three coordinates plus a function value. The solution is to draw projections of the correlation function on the individual axes, that is, to sum over the bin contents of the other two coordinates. To obtain the Qout projection, for example, one first obtains the projections of the numerator and denominator and then takes the ratio of these two to get the correlation function in the

Qout direction:

C(Qout) = Num(Qout) / Den(Qout)    (A.1)

The projections of the numerator and denominator are not done over the entire range of the axes, because that would overwhelm the Bose-Einstein enhancement at low Q. The approach is to sum only over the bins around the enhancement. For a typical correlation function from proton-proton collisions the enhancement has a width of about 0.2 GeV, so one sums over the bins up to 0.2 GeV. The class TH3D from ROOT has methods to obtain these projections very easily. What turns out to be more difficult is obtaining the projection of the fit function. The approach is similar to that used for the correlation function: the idea is to calculate a numerator and a denominator for the fit:

CFit(Qout) = NumFit(Qout) / DenFit(Qout)    (A.2)

The numerator is obtained as follows:

NumFit(Qout) = Σ_{Qside,Qlong} CFit(Qout, Qside, Qlong) · Den(Qout, Qside, Qlong)    (A.3)

where CFit is the full three-dimensional fit function and Den is the denominator built from the data. The denominator of the projection is obtained as follows:

DenFit(Qout) = Σ_{Qside,Qlong} Den(Qout, Qside, Qlong)    (A.4)
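Assuming the histograms are stored as plain nested lists rather than ROOT TH3D objects (an illustrative simplification; all names are hypothetical), Equations A.1–A.4 for the Qout axis amount to:

```python
# Illustrative sketch of Eqs. (A.1)-(A.4) for the Q_out axis, with histograms
# stored as nested lists num[i][j][k] and den[i][j][k].
def project_qout(num, den, c_fit, centers, max_bin):
    """Project data (Eq. A.1) and fit (Eqs. A.2-A.4) onto Q_out, summing the
    Q_side and Q_long axes only over bins 0..max_bin-1 (the enhancement
    region). c_fit(x, y, z) is the 3D fit function; centers[b] gives the
    bin-center coordinate, assuming identical binning on all three axes."""
    c_data, c_fit_proj = [], []
    for i in range(len(num)):
        num_i = den_i = num_fit_i = 0.0
        for j in range(max_bin):
            for k in range(max_bin):
                num_i += num[i][j][k]
                den_i += den[i][j][k]                       # Eq. A.4 term
                num_fit_i += (c_fit(centers[i], centers[j], centers[k])
                              * den[i][j][k])               # Eq. A.3 term
        c_data.append(num_i / den_i if den_i else 0.0)      # Eq. A.1
        c_fit_proj.append(num_fit_i / den_i if den_i else 0.0)  # Eq. A.2
    return c_data, c_fit_proj
```

Weighting the fit values by the denominator bin contents before summing (Eq. A.3) ensures the fit projection is averaged over phase space in the same way as the data.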

This procedure needs to be done for Qside and Qlong as well. The following algorithm calculates the histograms of Equations A.3 and A.4, as well as those for the Qside and Qlong projections:

Algorithm 3: Algorithm to calculate the projections of the 3D fit function on the Qout, Qside and Qlong axes.

for i = minBinFit to maxBinFit do
  for j = minBinUsed to maxBinUsed do
    for k = minBinUsed to maxBinUsed do
      // Calculate x, y, z to get the Qout projection
      x = CalculateXforBin(i);
      y = CalculateYforBin(j);
      z = CalculateZforBin(k);
      C = Evaluate Correlation Function at (x, y, z);
      DenContent = Get Counts from Bin(i, j, k) in Denominator;
      Add C * DenContent to Bin i of NumFitQout;
      Add DenContent to Bin i of DenFitQout;
      // Interchange x and y (i and j) to get the Qside projection
      C = Evaluate Correlation Function at (y, x, z);
      DenContent = Get Counts from Bin(j, i, k) in Denominator;
      Add C * DenContent to Bin i of NumFitQside;
      Add DenContent to Bin i of DenFitQside;
      // Interchange x and z (i and k) to get the Qlong projection
      C = Evaluate Correlation Function at (y, z, x);
      DenContent = Get Counts from Bin(j, k, i) in Denominator;
      Add C * DenContent to Bin i of NumFitQlong;
      Add DenContent to Bin i of DenFitQlong;
    end for
  end for
end for

This algorithm is very efficient because it produces the three projections at once, instead of running the three nested loops once per projection. The trick is that the counter i traverses the axis onto which the projection is being done, while j and k traverse the other two axes; by simply swapping these indices it is possible to obtain all three projections. The variables minBinFit and maxBinFit are the lower and upper limits of the fitting procedure. The variables minBinUsed and maxBinUsed are the lower and upper limits of the bins that are added in the projection. The algorithm produces six histograms, namely NumFitQout, NumFitQside, NumFitQlong, DenFitQout, DenFitQside and DenFitQlong, from which the CFit(Qout), CFit(Qside) and CFit(Qlong) projections can be obtained.
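The index-swapping trick can be sketched in plain Python as follows (illustrative names and nested-list histograms; the real implementation operates on ROOT histograms):

```python
# Illustrative sketch of Algorithm 3: one pass over (i, j, k) fills NumFit and
# DenFit for the Q_out, Q_side and Q_long axes simultaneously by permuting
# the bin indices and the corresponding coordinates.
def project_fit_all_axes(c_fit, den, centers, fit_bins, used_bins):
    """den[i][j][k]: 3D denominator histogram; c_fit(x, y, z): fit function;
    centers[b]: bin-center coordinate (identical binning assumed on all axes).
    fit_bins / used_bins play the roles of minBinFit..maxBinFit and
    minBinUsed..maxBinUsed."""
    n_bins = len(centers)
    num_fit = {ax: [0.0] * n_bins for ax in ("out", "side", "long")}
    den_fit = {ax: [0.0] * n_bins for ax in ("out", "side", "long")}
    for i in fit_bins:                 # axis projected onto
        for j in used_bins:            # first summed axis
            for k in used_bins:        # second summed axis
                x, y, z = centers[i], centers[j], centers[k]
                d = den[i][j][k]       # Q_out projection: bins (i, j, k)
                num_fit["out"][i] += c_fit(x, y, z) * d
                den_fit["out"][i] += d
                d = den[j][i][k]       # Q_side projection: i and j swapped
                num_fit["side"][i] += c_fit(y, x, z) * d
                den_fit["side"][i] += d
                d = den[j][k][i]       # Q_long projection: i and k swapped
                num_fit["long"][i] += c_fit(y, z, x) * d
                den_fit["long"][i] += d
    return num_fit, den_fit
```

Dividing num_fit[axis][i] by den_fit[axis][i] bin by bin then yields the three CFit projections of Equation A.2.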
