ACTA UNIVERSITATIS UPSALIENSIS
Uppsala Dissertations from the Faculty of Science and Technology 143

Alexander Burgman

Bright Needles in a Haystack
A Search for Magnetic Monopoles Using the IceCube Neutrino Observatory

Dissertation presented at Uppsala University to be publicly examined in Polhemsalen, 10134, Uppsala, Wednesday, 3 February 2021 at 13:15 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Professor David Milstead (Stockholm University, Stockholm, Sweden).

Abstract

Burgman, A. 2020. Bright Needles in a Haystack. A Search for Magnetic Monopoles Using the IceCube Neutrino Observatory. (Ljusstarka Nålar i en Höstack. En Sökning efter Magnetiska Monopoler med Neutrino-Observatoriet IceCube). Uppsala Dissertations from the Faculty of Science and Technology 143. 166 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-513-1083-1.

The IceCube Neutrino Observatory at the geographic South Pole is designed to detect the light produced by the daughter particles of in-ice neutrino interactions, using one cubic kilometer of ice instrumented with more than 5000 optical sensors. Magnetic monopoles are hypothetical particles with non-zero magnetic charge, predicted to exist in many extensions of the Standard Model of particle physics. The monopole mass is allowed within a wide range, depending on the production mechanism. A cosmic flux of magnetic monopoles would be accelerated by extraterrestrial magnetic fields to a broad final speed distribution that depends on the monopole mass. The analysis presented in this thesis constitutes a search for magnetic monopoles with a speed in the range [0.750; 0.995] in units of the speed of light in vacuum. A monopole within this speed range would produce Cherenkov light when traversing the IceCube detector, with a smooth and elongated light signature, and a high brightness. This analysis is divided into two main steps. Step I is based on a previous IceCube analysis, developed for a cosmogenic neutrino search, with similar signal event characteristics as in this analysis. The Step I event selection reduces the acceptance of atmospheric events to lower than 0.1 events per analysis livetime. Step II is developed to reject the neutrino events that Step I inherently accepts, and employs a boosted decision tree for event classification. The (astrophysical) neutrino rate is reduced to 0.265 events per analysis livetime, corresponding to a 97.4 % rejection efficiency for events with a primary energy above 10⁵ GeV. No events were observed at final analysis level over eight years of experimental data. The resulting upper limit on the flux was determined to be 2.54 × 10⁻¹⁹ cm⁻² s⁻¹ sr⁻¹, averaged over the covered speed region. This constitutes an improvement of around one order of magnitude over previous results.

Keywords: magnetic monopole, IceCube, astroparticle physics, neutrino telescope

Alexander Burgman, Department of Physics and Astronomy, High Energy Physics, Box 516, Uppsala University, SE-751 20 Uppsala, Sweden.

© Alexander Burgman 2020

ISSN 1104-2516
ISBN 978-91-513-1083-1
urn:nbn:se:uu:diva-425610 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-425610)

This thesis is dedicated to my children Olivia and Victor, who are all that is best in me.

Contents

1 Units and Conventions ...... 13
1.1 High Energy Physics Natural Units ...... 13
1.2 Speed and Relativistic Lorentz Factor ...... 13

2 Magnetic Monopoles ...... 15
2.1 Electric-Magnetic Duality ...... 15
2.2 The Dirac Monopole ...... 16
2.3 The ’t Hooft-Polyakov Monopole ...... 18
2.4 The Cosmic Monopole Population ...... 20
2.5 Magnetic Monopole Search Methods ...... 21
2.6 Monopole Search Results from Neutrino Telescopes ...... 23

3 The IceCube Neutrino Observatory ...... 26
3.1 The Detector ...... 26
3.1.1 The Detector Constituent Arrays ...... 27
3.1.2 The DOM ...... 29
3.1.3 The Detector Medium ...... 31
3.1.4 Coordinate System ...... 32
3.2 Data Acquisition and Triggering ...... 33
3.2.1 Data Filter Stream ...... 34
3.3 Typical Events ...... 34
3.4 High Energy in IceCube ...... 35
3.4.1 Typical Event Signatures ...... 36
3.5 Interpreting an Event View ...... 38

4 Magnetic Monopoles in IceCube ...... 40
4.1 Energy Loss in Matter ...... 40
4.2 Light Production ...... 41
4.2.1 Cherenkov Radiation ...... 42
4.2.2 Indirect Cherenkov Radiation ...... 44
4.3 Magnetic Monopole Signatures in IceCube ...... 45

5 Magnetic Monopole Event Simulation in IceCube ...... 48
5.1 Magnetic Monopole Generation ...... 48
5.1.1 Generation Disk Radius ...... 50
5.1.2 Generation Disk Distance ...... 51
5.2 Magnetic Monopole Propagation ...... 54
5.3 Light Production and Detection ...... 54
5.4 Simulation Validation ...... 55
5.4.1 Validation with Experimental Data ...... 56
5.4.2 Magnetic Monopole Light Yield Validation ...... 58

6 Event Cleaning and Reconstruction Methods ...... 60
6.1 Event Cleaning Algorithms ...... 60
6.1.1 The SeededRadiusTime Cleaning Method ...... 60
6.1.2 The TimeWindow Cleaning Method ...... 61
6.2 Event Reconstruction Algorithms ...... 62
6.2.1 A Particle Track Representation ...... 62
6.2.2 The LineFit Track Reconstruction Method ...... 62
6.2.3 The EHE Reconstruction Suite ...... 63
6.2.4 The CommonVariables Event Characterization Suite ...... 66
6.2.5 The Millipede Track Reconstruction Method ...... 67
6.2.6 The BrightestMedian Track Reconstruction Method ...... 68

7 Data Analysis and Statistical Tools ...... 72
7.1 Analysis Strategies ...... 72
7.1.1 Cut-and-Count Analyses ...... 72
7.1.2 Multi-Variate Analyses ...... 73
7.1.3 Analysis Blindness ...... 73
7.2 Determining an Upper Limit ...... 74
7.2.1 Effective Area ...... 74
7.2.2 Upper Limit ...... 74
7.2.3 Sensitivity ...... 75
7.2.4 Including Uncertainties in the Upper Limit ...... 75
7.3 Model Rejection and Discovery Potentials ...... 76
7.3.1 Model Rejection Potential ...... 77
7.3.2 Model Discovery Potential ...... 77
7.4 Boosted Decision Trees ...... 78

8 Analysis Structure, Exposure and Assumptions ...... 80
8.1 Analysis Structure ...... 80
8.1.1 Step I ...... 80
8.1.2 Step II ...... 80
8.2 Analysis Exposure ...... 81
8.2.1 Livetime ...... 81
8.2.2 Solid Angle ...... 82
8.3 Signal and Background Parameter Space ...... 82
8.3.1 Magnetic Monopole Flux Assumptions ...... 82
8.3.2 Astrophysical Neutrino Flux Assumptions ...... 84

9 Simulated Event Samples ...... 85
9.1 Signal Monte Carlo Event Samples ...... 85
9.2 Background Monte Carlo Event Samples ...... 86

10 Event Selection ...... 87
10.1 Event Triggers and Filters ...... 87
10.2 Step I ...... 88
10.2.1 The Offline EHE Cut ...... 88
10.2.2 The Track Quality Cut ...... 88
10.2.3 The Bundle Cut ...... 92
10.2.4 The Surface Veto ...... 93
10.3 Step II ...... 93
10.3.1 Additional Reconstruction ...... 96
10.3.2 Step II Variables ...... 96
10.3.3 BDT Implementation and Performance ...... 104
10.3.4 Placing the Cut Criterion ...... 108
10.4 Expected Numbers of Events ...... 115

11 Sensitivity ...... 118
11.1 Effective Area ...... 118
11.2 Sensitivity ...... 120

12 Uncertainty on the Magnetic Monopole Efficiency ...... 122
12.1 Systematic Variation of Monte Carlo Settings ...... 123
12.2 Total Uncertainty ...... 127

13 Uncertainty on the Astrophysical Neutrino Flux ...... 128
13.1 Statistical Uncertainty ...... 128
13.2 Uncertainties in the Astrophysical Flux Measurement ...... 128
13.3 Alternative Neutrino Flux Assumptions ...... 129
13.4 Expected Flux outside of the Simulated Energy Range ...... 131
13.5 Expected Background at Final Analysis Level ...... 133

14 Result ...... 134
14.1 Experimental Event Rate ...... 134
14.1.1 Step I Accepted Events ...... 134
14.1.2 Step II Accepted Events ...... 138
14.2 Final Result ...... 138

15 Summary and Outlook ...... 142

16 Swedish Summary — Svensk Sammanfattning ...... 145
16.1 Vad är en Magnetisk Monopol? ...... 145
16.2 Vad är IceCube? ...... 146
16.3 Att Söka Magnetiska Monopoler med IceCube ...... 148
16.4 Resultat ...... 149

17 Acknowledgements ...... 150

A The IceCube EHE Analysis ...... 152

B Step I Observed Events over BDT Variables ...... 156

References ...... 160

Preface

This Thesis

The topic of this thesis is an analysis that was conducted by the author with the goal of discovering a cosmic flux of magnetic monopoles with a speed above the Cherenkov threshold in ice, here restricted to the speed region between 0.750 c and 0.995 c. The analysis was conducted on data collected with the IceCube detector over the course of 8 yr, and the author has been associated with Uppsala University and the IceCube collaboration for the entire duration of the project.

The contents of this thesis can roughly be divided into three main groups of chapters — background, tools and methods, and the work by the author. In addition to this, there are a few supplementary chapters, such as Preface (this chapter), Acknowledgements and the Appendices.

The background chapters, Chapters 1–4, contain the background knowledge needed to fully grasp the remainder of the thesis. These cover magnetic monopoles, the IceCube detector, and the expected interactions between a magnetic monopole and the IceCube detector medium, as well as a few useful units and conventions.

The next set of chapters (5–7) covers different tools and methods that have been used in this thesis. The IceCube magnetic monopole event simulation methods are described in Chapter 5, and the relevant event reconstruction and cleaning methods are described in Chapter 6. Chapter 7 covers the tools that have been used for data analysis, statistical analysis and machine learning.

Finally, the work by the author is covered in Chapters 8–14. Chapter 8 marks the beginning by covering the overall strategy of this analysis, the analysis scope, and additional parameter space constraints that may exist. Next, the Monte Carlo samples that were used to develop this thesis are detailed in Chapter 9. Chapter 10 covers the event selection developed for this thesis, from beginning to end, and the results that can be expected from the application of this event selection are covered in Chapter 11. After this, Chapters 12 and 13 cover the uncertainty on the analysis efficiency for magnetic monopoles and the uncertainty on the astrophysical neutrino flux, respectively. The final chapter, Chapter 14, covers the final results produced by this analysis.

Author’s Contributions The majority of the time I have spent as a Ph.D. student with Uppsala Univer- sity has been devoted to the development of the analysis that is described in

this thesis. During this time, I have been an active collaborator in the IceCube collaboration, and have attended six IceCube collaboration meetings where I have presented and discussed my work, discussed the work of my collaborators, and formed strong professional bonds. I also attended the Particle Physics with Neutrino Telescopes conference in October 2019, PPNT-2019, where I presented my own work along with other active and finished magnetic monopole analyses in IceCube. Additionally, my work has been presented twice at the International Conference on New Frontiers in Physics, in August 2017 and July 2018, ICNFP-2017 and -2018 respectively, for the latter of which I co-authored the presentation material. I have attended four Partikeldagarna meetings, the Particle Days, which is the annual meeting of the Swedish particle physics community. I have had the opportunity of presenting my work during several of these.

During my Ph.D. student time, I have also completed a number of courses, both in and outside of Uppsala University. Within Uppsala University I have completed courses for a total of 42.5 HP, covering a wide scope of topics relevant for my Ph.D. education. I have also attended the Neutrinos Underground & in the Heavens summer school, given by the Niels Bohr Institute (Copenhagen University), and the Detector Technologies for Particle Physics course, jointly given by the Niels Bohr Institute and Helsinki University. Additionally, I have attended two internal IceCube courses — the IceCube bootcamp on collaboration and analysis, and the IceCube advanced course on C++.

Before commencing my IceCube data analysis work, I conducted an antenna characterization study for the ARIANNA collaboration over ∼ 6 months. The primary measurements were conducted by myself and my Ph.D. student colleague Lisa Unger, and secondary measurements were conducted later by a project worker.
I performed the data analysis in this project, which resulted in an internal ARIANNA report documenting the measurements and concluded antenna characteristics. The results of the study were also used as benchmarks in an antenna Monte Carlo study conducted at the University of California, Irvine.

In addition to my main tasks with IceCube and ARIANNA, I have also taken on various smaller engagements:
• I was a part of the IceCube Software Strike Team for two years, with monthly code sprints to maintain and develop the IceCube software
• I have conducted the Uppsala University monitoring shifts for the IceCube detector over three years
• I have performed teaching duties in laboratory exercises for two Uppsala University courses, on the topics of shearing in classical mechanics and the photoelectric effect

1. Units and Conventions

1.1 High Energy Physics Natural Units

In high energy physics it is common to adopt a simplified unit convention, usually denoted by high energy physics natural units, where:

c = ε0 = ħ = kB = 1 (1.1)

Here, c is the speed of light in vacuum, ε0 is the vacuum permittivity, ħ is the reduced Planck constant and kB is Boltzmann’s constant. Additionally, the vacuum permeability is one, µ0 = 1, due to:

1/c² = ε0 µ0 (1.2)

These constants are commonly omitted in equations, such that all units can be given as different powers α of energy, usually of the form GeVᵅ. Below follows a non-exhaustive list of the units used for different physical quantities in high energy physics natural units, and how they relate to the International System of Units, SI, that we are familiar with.

Energy: α = 1, 1 GeV ⇔ 1.60 × 10⁻¹⁰ J
Mass: α = 1, 1 GeV ⇔ 1.78 × 10⁻²⁷ kg
Time: α = −1, 1 GeV⁻¹ ⇔ 6.58 × 10⁻²⁵ s
Distance: α = −1, 1 GeV⁻¹ ⇔ 1.97 × 10⁻¹⁶ m
Temperature: α = 1, 1 GeV ⇔ 1.16 × 10¹³ K

High energy physics natural units will be used for energy and mass throughout this thesis, while time and length will be given in SI units.
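As an illustration of the conversion list above, the factors can be applied programmatically. The following is a minimal sketch (the constant and function names are illustrative choices, not from the thesis; the numeric factors are the ones listed above):

```python
# Conversion factors from high energy physics natural units to SI,
# as listed in the text.
GEV_TO_JOULE = 1.60e-10       # 1 GeV in J       (energy, alpha = 1)
GEV_TO_KG = 1.78e-27          # 1 GeV in kg      (mass, alpha = 1)
INV_GEV_TO_SECOND = 6.58e-25  # 1 GeV^-1 in s    (time, alpha = -1)
INV_GEV_TO_METER = 1.97e-16   # 1 GeV^-1 in m    (distance, alpha = -1)
GEV_TO_KELVIN = 1.16e13       # 1 GeV in K       (temperature, alpha = 1)

def mass_si(mass_gev: float) -> float:
    """Convert a mass given in GeV (natural units) to kilograms."""
    return mass_gev * GEV_TO_KG

# A GUT scale monopole mass of ~1e16 GeV corresponds to roughly 1.8e-11 kg.
print(mass_si(1e16))
```

The same pattern applies to the other quantities: multiply by the listed factor for positive powers of energy, and by the inverse-GeV factor for negative powers.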

1.2 Speed and Relativistic Lorentz Factor

A central quantity in the work described in this thesis is speed. Speed will here be denoted by β, which is the speed given in units of the speed of light in vacuum, c:

β = v / c (1.3)

The Lorentz factor γ is used to quantify how relativistic a particle is, and is directly related to the speed of the particle:

γ = 1 / √(1 − β²) (1.4)

The Lorentz factor also relates to the mass and the relativistic total energy of the particle through:

γ = Etot / m0 = (Ekin + m0) / m0 (1.5)

where m0 is the rest mass of the particle, Etot is its relativistic total energy and Ekin is its relativistic kinetic energy.

It is common to describe the speed of a high energy particle in terms of how relativistic it is, e.g. nonrelativistic, relativistic or ultrarelativistic. These terms are vague, and can be adapted to the use-case at hand. In IceCube, magnetic monopoles below a speed β ∼ 0.1 are usually referred to as nonrelativistic, while a speed of β ∼ 0.5 or above is commonly said to be relativistic. The ultrarelativistic regime usually denotes Lorentz factors above γ ∼ 10² to ∼ 10⁴, i.e. speeds above β ∼ 0.99995 to ∼ 0.999999995.
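The relations between β and γ are easy to sanity-check numerically. The sketch below (function names are the author's own, illustrative choices) evaluates Equation 1.4 and its inverse at the speed boundaries used in this thesis:

```python
import math

def lorentz_factor(beta: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - beta^2), Equation 1.4."""
    return 1.0 / math.sqrt(1.0 - beta**2)

def speed_from_gamma(gamma: float) -> float:
    """Invert Equation 1.4: beta = sqrt(1 - 1/gamma^2)."""
    return math.sqrt(1.0 - 1.0 / gamma**2)

# The speed range covered by this analysis:
print(lorentz_factor(0.750))   # ~1.51
print(lorentz_factor(0.995))   # ~10.0
# The lower edge of the ultrarelativistic regime quoted in the text:
print(speed_from_gamma(1e2))   # ~0.99995
```

This confirms the numbers quoted above: the analysis speed range corresponds to Lorentz factors of roughly 1.5 to 10, well below the ultrarelativistic regime.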

2. Magnetic Monopoles

The existence of magnetic monopoles seems like one of the safest bets that one can make about physics not yet seen. It is very hard to predict when and if monopoles will be discovered. [...] But we must continue to hope that we will be lucky, or unexpectedly clever, some day.

J. Polchinski [1]

The quote above is an apt illustration of the current state of the pursuit of magnetic monopoles. On the one hand, magnetic monopoles are explicitly allowed in most current theories describing the fundamental laws of physics. On the other hand, several properties, such as the monopole mass and its current abundance, are unknown and vary over many orders of magnitude between theories.

This chapter is an introduction to magnetic monopoles and current experimental efforts to find them. Magnetic monopole matter-interactions and their detectable signatures in the IceCube detector are described in Chapter 4.

2.1 Electric-Magnetic Duality

The electromagnetic interaction is classically described by J. C. Maxwell’s equations:

∇ · Ē = ρe (2.1)
∇ · B̄ = 0
∇ × Ē = −∂B̄/∂t
∇ × B̄ = ∂Ē/∂t + j̄e

where Ē and B̄ are the electric and magnetic fields, and j̄e and ρe are the electric current density and charge density, respectively.

Isolated magnetic charges are not included in these equations due to their absence in observation. They can, however, be trivially included in the form of a magnetic current j̄m and charge density ρm:

∇ · Ē = ρe (2.2)
∇ · B̄ = ρm
∇ × Ē = −∂B̄/∂t − j̄m
∇ × B̄ = ∂Ē/∂t + j̄e

This would introduce a symmetry to Maxwell’s equations known as electric-magnetic duality, which entails that the equations remain unchanged when introducing the substitutions [2]:

Ē, j̄e, ρe → B̄, j̄m, ρm (2.3)
B̄, j̄m, ρm → −Ē, −j̄e, −ρe
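As a toy illustration (an addition for this text, not from the original derivation), the substitution in Equation 2.3 can be treated as a map on a tuple of field quantities. Applying the map twice flips the sign of every quantity, showing that the duality transformation squares to −1, i.e. it acts as a 180-degree rotation in the electric-magnetic plane:

```python
def duality(e, je, rho_e, b, jm, rho_m):
    """One application of the substitution in Equation 2.3:
    (E, j_e, rho_e) -> (B, j_m, rho_m) and (B, j_m, rho_m) -> (-E, -j_e, -rho_e)."""
    return (b, jm, rho_m, -e, -je, -rho_e)

# Stand-in numeric values for (E, j_e, rho_e, B, j_m, rho_m):
fields = (1.0, 2.0, 3.0, 4.0, 5.0, 6.0)
once = duality(*fields)
twice = duality(*once)
print(twice)  # (-1.0, -2.0, -3.0, -4.0, -5.0, -6.0)
```

Four applications return the original configuration, consistent with the duality being a discrete rotation symmetry of the extended Maxwell equations.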

2.2 The Dirac Monopole

The electromagnetic interaction may also be expressed using the scalar and vector potentials φ and Ā, replacing the electric and magnetic fields of Maxwell’s equations via [2]:

Ē = −∂Ā/∂t − ∇φ (2.4)
B̄ = ∇ × Ā

This disallows a magnetic charge density, ρm = ∇ · B̄, as the divergence of the curl of a vector field is zero, ∇ · B̄ = ∇ · (∇ × Ā) = 0. The physical interpretation of this is that magnetic field lines may not have a beginning or end.

However, in 1931 P. A. M. Dirac included magnetic monopoles in quantum mechanics by modeling each pole as the end of a semi-infinitely long and infinitesimally thin idealized solenoid, a Dirac string [2; 3]. This allows the magnetic field around the end of the solenoid to be identical to that of a single magnetic pole, without explicitly including isolated magnetic charges in the theory. The apparent charge of the pole, g, corresponds to the magnetic flux inside of the Dirac string, ΦB = g. The Dirac monopole is illustrated in Figure 2.1.

Figure 2.1. An illustration of a Dirac magnetic monopole, with the magnetic field lines represented in blue and the Dirac string extending to the left. The shaded area represents the region where the magnetic field does not look like the field of a point charge, but like the end of a solenoid. This region is infinitesimally small for a Dirac monopole. Figure by the author.

In order for the Dirac monopole to truly act as a free pole, the Dirac string must be experimentally undetectable. Classically, an infinitesimally thin object is undetectable. However, in quantum mechanics, a solenoid with magnetic flux ΦB can be observed by the phase shift of qΦB that is introduced to the complex phase of a particle with electric charge q that is transported around the solenoid. In order for the solenoid to be undetectable, this phase shift must be a multiple of 2π [2], i.e. satisfy:

qΦB / (2π) = qg / (2π) ∈ ℤ (2.5)

This requires that both the electric and magnetic charges are quantized if magnetic monopoles exist, a statement known as the Dirac charge quantization condition [2]. Inserting the elementary electric charge as the charge of the transported particle, q = e, and the fine structure constant, α = e²/(4π), into Equation 2.5 yields that magnetic charges, g, are allowed as:

g = n · e / (2α) (2.6)

where n is an integer. Setting n = 1 yields the smallest allowed magnetic charge, known as the Dirac charge, gD:

gD = e / (2α) ≈ 68.5 e (2.7)

where the fine structure constant is approximated as α ≈ 1/137.

2.3 The ’t Hooft-Polyakov Monopole

The three fundamental forces that are included in the Standard Model of particle physics are each described by a symmetry gauge group — U(1) for electromagnetism, SU(2) for the weak interaction, and SU(3) for the strong interaction [4]. So-called grand unified theories, GUTs, attempt to unify these three symmetries into a single symmetry group at high energy scales, a symmetry which is broken at lower energy scales into the three gauge groups that are observed today. The energy scale of the grand unification, ΛGUT, depends on the details of the theory, with a current lower bound of ∼ 10¹⁶ GeV.

With this in mind, G. ’t Hooft and A. M. Polyakov independently described a different type of magnetic monopole than Dirac [2; 5; 6; 7]. ’t Hooft and Polyakov considered the Standard Model Higgs field, which contains several components φi in internal space that may be considered as individual components of a vector. At the high energy scales of a GUT symmetry, the values of these components may vary arbitrarily, but in the vacuum state their vector magnitude, v, is fixed. The magnitude v corresponds to the radius of the minimum of the Mexican hat potential. A higher symmetry, such as a GUT symmetry, may independently break into the electromagnetic U(1) symmetry in different spatial domains simultaneously, potentially resulting in vastly different directions of the Higgs field in adjacent domains. These domains may be arranged such that the field is always pointing away from (or always towards) a particular point in space, in the so-called hedgehog configuration. In this point the field direction is undefined, and the magnitude must be zero. This is not the vacuum configuration of the Higgs field, and it cannot be continuously transformed to the vacuum state. Thus, the point constitutes a (stable) topological soliton, and must contain a localization of energy to uphold the unbroken symmetry [2]. This results in an effective massive particle with mass mMM:

mMM ≳ ΛSB / αSB (2.8)

where ΛSB is the energy scale of the symmetry breaking and αSB is the coupling constant at this scale (αSB ∼ 10⁻² for the simplest allowed GUT gauge groups [8]).

Topologically, the ’t Hooft and Polyakov soliton (in the hedgehog configuration) can arise when any gauge symmetry of a grand unifying theory breaks into the electromagnetic U(1) symmetry. It is also shown that this topological soliton carries magnetic charge, and thus constitutes a magnetic monopole [2; 6]. Thus, all grand unified theories that unify the three forces (electromagnetism, weak and strong) predict ’t Hooft-Polyakov magnetic monopoles [2].

Figure 2.2. An illustration of the inner structure of a GUT scale magnetic monopole (regions labeled, from the core outwards: grand unification, EW unification, confinement, point charge magnetic field). Figure by the author, inspiration from [9].

In the case that a GUT symmetry breaks directly into the U(1) symmetry, the resulting magnetic monopole mass would be given by the energy scale of the GUT and the coupling constant at this scale. These are known as GUT

scale monopoles, with a mass between mMM ∼ 10¹⁴ GeV and 10¹⁷ GeV, depending on the details of the theory [2; 9]. If, on the other hand, the GUT symmetry does not break immediately into the U(1) symmetry, but does so via an intermediate symmetry at a lower energy scale, monopoles would arise with a correspondingly lower mass [2]. These monopoles are known as intermediate mass monopoles, and would have a mass between mMM ∼ 10⁵ GeV and 10¹³ GeV, depending on theoretical details [2; 9].

The mass of the magnetic monopole subsequently determines the inner structure of the monopole, illustrated in Figure 2.2 for a GUT scale monopole. A GUT scale monopole would have a core (radius r ∼ 10⁻³¹ m) where the GUT symmetry is upheld, containing virtual X and Y GUT gauge bosons [2; 9]. Here, otherwise forbidden baryon number violating processes, such as proton decay, are allowed, and a proton that passes this region may decay via p → π⁰ + e⁺. Outside of this region, there would be the electroweak unification region (r ∼ 10⁻¹⁸ m) containing virtual Z and W bosons, and yet outside of this would be the confinement region (r ∼ 10⁻¹⁵ m), containing virtual gluons, photons, fermion-antifermion pairs and four-fermion structures, e.g. ūūd̄e⁺. Further out from the core, only the “classical” monopole magnetic field is observed.

An intermediate mass monopole would have a similar structure, but with the inner region extending to the scale of its corresponding symmetry breaking. This scale does not reach the grand unification scale, i.e. the scale where baryon number violation is allowed, implying that intermediate mass monopoles do not induce baryon number violating processes.
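The charge and mass scales discussed in this chapter can be checked numerically. The sketch below evaluates Equations 2.6 and 2.8; the input values for the symmetry breaking scale and coupling are illustrative choices within the ranges quoted in the text, not values from the thesis:

```python
ALPHA = 1.0 / 137.0  # fine structure constant, approximated as in the text

def magnetic_charge(n: int) -> float:
    """Allowed magnetic charge in units of e: g = n e / (2 alpha), Equation 2.6."""
    return n / (2.0 * ALPHA)

def monopole_mass_scale(lambda_sb_gev: float, alpha_sb: float) -> float:
    """Lower bound on the monopole mass, m_MM >~ Lambda_SB / alpha_SB (Eq. 2.8)."""
    return lambda_sb_gev / alpha_sb

print(magnetic_charge(1))  # ~68.5, the Dirac charge in units of e (Eq. 2.7)
# Illustrative symmetry breaking at 1e15 GeV with alpha_SB ~ 1e-2
# gives a mass of ~1e17 GeV, within the GUT monopole range quoted above.
print(monopole_mass_scale(1e15, 1e-2))
```

Lowering the symmetry breaking scale in the same expression reproduces the intermediate mass range: for example, a breaking scale around 10⁷ GeV would correspond to a monopole mass around 10⁹ GeV.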

2.4 The Cosmic Monopole Population

If the grand unification hypothesis is correct, GUT scale magnetic monopoles would have formed as the ambient temperature of the early Universe decreased below the energy scale of the GUT symmetry breaking. As described above, the monopole formation takes place where several domains of simultaneous symmetry breaking meet, resulting in a monopole number density on the same order as that of the domains. This mechanism is known as the Kibble mechanism [2; 10].

Due to magnetic charge conservation, the only magnetic monopole destruction channel is annihilation with an oppositely charged monopole. This process is made exceedingly rare because of the expansion of the Universe, implying that a significant fraction of the monopoles that were formed in the early Universe should remain today. Calculations based on the Kibble mechanism suggest a present GUT scale magnetic monopole number density comparable to that of baryons [8], which clearly opposes everyday observations. This is known as the monopole problem in cosmology, and is resolved by postulating a period of rapid cosmic inflation in the early Universe, thus diluting the monopole density to the currently unobserved abundance [2; 8; 9].

The speed distribution of the cosmic monopole population would depend on the magnetic monopole mass, as well as the magnitude of the ambient accelerating magnetic fields. A magnetically charged particle is accelerated linearly along a magnetic field, in analogy to an electrical charge accelerated by an electric field. The Lorentz force, F̄L, acting on a particle with magnetic charge ngD and velocity β̄ is given by:

F̄L = ngD (B̄ + β̄ × Ē) (2.9)

The gained kinetic energy, Ekin, over a distance L along a path ds̄ is calculated as the line integral of the Lorentz force over the path. Assuming a vanishing electric field, Ē = 0̄, this is given by [9]:

Ekin = ∫L F̄L · ds̄ = ngD |B̄| L (2.10)

Here, the traversed distance L represents the coherence length of the ambient magnetic field, i.e. the size of the domain where the magnetic field direction remains constant. The typical domain length of the Milky Way galaxy is ∼ 300 pc with a magnetic field strength ∼ 3 µG. Given these values, a Dirac charged monopole would gain a kinetic energy of Ekin ∼ 6 × 10¹⁰ GeV [11].

Other astrophysical environments, with higher magnetic fields or larger sizes, would yield even higher kinetic energy. For example, active galactic nuclei jets, with ∼ 100 µG over ≲ 10 kpc, yield Ekin ≲ 2 × 10¹³ GeV, and large scale extragalactic magnetic field structures, called sheets, with ≲ 1 µG over ≲ 30 Mpc, yield Ekin ≲ 5 × 10¹⁴ GeV [11].

The total kinetic energy, Ekin^tot, of a magnetic monopole is then the sum of the kinetic energy gained in each domain that it passes:

Ekin^tot = √N × Ekin (2.11)

where the factor of √N approximates the effect of a random walk through an average of N domains.

A relic magnetic monopole is expected to pass N ∼ 10² extragalactic magnetic sheets, acquiring a total kinetic energy of ∼ 10¹⁶ GeV. This allows magnetic monopoles within a broad range of masses to be accelerated to relativistic, or even ultrarelativistic, speeds. The heaviest magnetic monopoles, on the other hand, are expected to be gravitationally bound to various astronomical systems. A magnetic monopole orbiting our solar system, the Milky Way galaxy or the local galaxy cluster would gain an orbital speed of β ∼ 10⁻⁴, 10⁻³ or 10⁻², respectively.
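The acceleration estimates above can be reproduced with a short sketch. The SI-based evaluation of Equation 2.10 below is an assumption of this text (it expresses the Dirac charge through c·e so that the energy comes out in eV); the benchmark field strengths and lengths are the ones quoted above:

```python
import math

C = 2.998e8          # speed of light, m/s
PARSEC = 3.086e16    # m
GAUSS = 1e-4         # T
DIRAC_CHARGE = 68.5  # g_D in units of e, Equation 2.7

def kinetic_energy_gev(b_gauss: float, length_pc: float, n: int = 1) -> float:
    """E_kin = n g_D |B| L (Equation 2.10), returned in GeV."""
    e_ev = n * DIRAC_CHARGE * C * (b_gauss * GAUSS) * (length_pc * PARSEC)
    return e_ev / 1e9

def total_kinetic_energy_gev(e_per_domain_gev: float, n_domains: int) -> float:
    """Random-walk total over N domains, E_tot = sqrt(N) * E_kin (Eq. 2.11)."""
    return math.sqrt(n_domains) * e_per_domain_gev

# One Milky Way domain: ~3 uG over ~300 pc -> ~6e10 GeV, as quoted in the text.
print(kinetic_energy_gev(3e-6, 300))
# ~100 extragalactic sheets at up to ~5e14 GeV each -> ~5e15 GeV,
# i.e. of the order of the ~1e16 GeV total quoted in the text.
print(total_kinetic_energy_gev(5e14, 100))
```

The per-domain function also reproduces the commonly cited figure that a Dirac charged monopole gains roughly 20 keV per gauss-centimeter of traversed field.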

2.5 Magnetic Monopole Search Methods

Magnetic monopoles may be detectable through several different channels here at Earth — trapped in terrestrial or extraterrestrial material, produced in high energy colliders or in the form of a cosmic flux [9]. Each of these channels can also be probed using several different experimental techniques. The available magnetic monopole search channels, and different experimental search techniques, are summarized in this section, and several current results on the cosmic flux of magnetic monopoles can be found in Figure 2.3.

The hypothetical cosmic flux of magnetic monopoles can be searched for using, for example, neutrino telescopes (e.g. IceCube [12; 13], ANTARES [14], BAIKAL [15]), where a large volume of water or ice is instrumented with a large number of optical modules that detect visible light. The typical scale of a neutrino telescope is ≲ 1 km³, with isotropic acceptance. For a wide portion of the allowed spectrum of magnetic monopole speeds, a monopole that passes through ice or water should interact with the medium in such a way that it produces optical light that is readily registered with the detector. The relevant matter-interactions of a magnetic monopole in ice are described in Chapter 4. The work described in this thesis is an example of a search for magnetic monopoles with a neutrino telescope, and previous results of similar analyses are given in Section 2.6.

An alternative technique to search for a cosmic flux of magnetic monopoles is to use large area air shower detectors (e.g. the Pierre Auger Observatory [16]). These are designed to register the particle showers produced by cosmic rays as they enter the atmosphere. Magnetic monopoles that enter the atmosphere with an ultrarelativistic speed produce similar particle showers

Figure 2.3. Current upper limits on the cosmic magnetic monopole flux as a function of speed, v/c, and of the base-10 logarithm of the Lorentz factor, log₁₀(γ), over a broad interval. Credit: A. Pollmann [19]. Included are results from IceCube [12; 13], ANTARES [14], BAIKAL [15], MACRO [20], RICE [17], ANITA [18] and the Pierre Auger Observatory [16], along with the Parker bound [21; 22; 5].

continually along their entire track, and can thus be readily detected by such detector arrays. Cosmic ray detector arrays are usually deployed over large areas of land in order to observe a large volume of the atmosphere, which also yields a very large effective volume for magnetic monopole detection. However, in order to produce enough atmospheric particle showers to be detected, the magnetic monopoles must be in the regime of γ ≳ 10⁸, thus limiting cosmic ray detectors to search for monopoles with mass mMM ≲ 10⁸ GeV.

Ultrarelativistic magnetic monopoles may also be detected by the Cherenkov radiation that they produce in the radio frequency range. This is produced as a magnetic monopole traverses a dielectric medium, and is detectable if the medium is transparent to radio waves. Upper limits on the cosmic monopole flux have been determined through radio observations of the Antarctic ice (e.g. RICE [17], ANITA [18]).

A dedicated magnetic monopole experiment, MACRO [20], used a combination of different detector techniques (liquid scintillation counters, limited streamer tubes and nuclear track detectors) to search for a cosmic magnetic monopole flux. This allowed them to analyze a wide range of allowed magnetic monopole speeds, from β = 10⁻⁵ to 1, with an acceptance for an isotropic flux of monopoles of ∼ 10⁴ m² sr (compare to the ∼ km² scale and 4π sr acceptance of neutrino telescopes).
Another approach to search for a cosmic monopole flux is by using a magnetometer, where the passage of a magnetic monopole through a superconducting coil would be registered by the induced current [23]. This type of device has an ideal detection efficiency for magnetic monopoles, independent

of the monopole speed. The limiting factor of this technique is, however, the effective area, as the magnetic monopole must pass through the loop of the superconducting coil in order to be detected, with a typical scale of ≲ 1 m^2 [24].

A magnetometer can also be used to detect terrestrially bound magnetic monopoles, i.e. monopoles that have been trapped in a piece of material. The piece of material is passed through the superconducting coil of the magnetometer, and a magnetic monopole present in the material would be readily detected. Searches of this type have been conducted on several classes of material, e.g. volcanic rocks [25] and meteoritic material [26].

Additionally, instrumentation components that have been close to the collision point at high energy particle colliders have been examined for trapped magnetic monopoles that were produced in the collisions (e.g. HERA [27]). Due to magnetic charge conservation, collider produced magnetic monopoles must be produced in pairs with opposite magnetic charge, and are thus limited to a mass that is lower than half of the center-of-mass energy of the collision. In addition to searching for magnetic monopoles that are trapped in the detector material, collider produced magnetic monopoles can be sought by looking for highly ionizing tracks in general purpose detectors surrounding the collision point (e.g. ATLAS [28]) or in dedicated detectors (e.g. MoEDAL [29]).

2.6 Monopole Search Results from Neutrino Telescopes

Several efforts have been made to experimentally examine the hypothetical flux of magnetic monopoles with a speed in the range that is relevant to the analysis described in this thesis. As previously mentioned, no magnetic monopole has been detected, but in this section four different works are highlighted, each producing an upper limit on the cosmic monopole flux in the speed range from β = 0.5 to 0.995. The resulting upper limits are shown in Figure 2.4 along with the so-called Parker bound [21]. The Parker bound limits the galactic flux of magnetic monopoles to less than 10^−15 cm^−2 s^−1 sr^−1, by arguing that a higher flux would disallow the presently observed galactic magnetic fields. This limit is valid for magnetic monopole masses below 10^17 GeV, above which the monopole is only slightly deflected by galactic fields [5; 22].

Each of the highlighted analyses was produced with a neutrino telescope, in which optical modules are immersed in a large volume of (liquid or solid) water. Each analysis is performed by first identifying a number of characteristic signatures of a magnetic monopole event that distinguish it from background events. From the characteristic event signatures, a number of analysis-specific variables were constructed, and used to reduce the background rate. All of the analyses focus on events with a track-like event signature, as magnetic monopoles should have large penetrative power in matter (see Chapter 4.1), and an upwards directed light distribution, in order to reject events

Figure 2.4. The (current) lowest upper limits on the magnetic monopole flux as a function of speed, in a speed interval relevant for the analysis presented in this thesis. These results (barring the Parker bound) were produced by the analysis of data collected with neutrino telescopes.

induced by atmospheric muons. After additional event selection the remaining events are used to obtain an upper limit on the flux of magnetic monopoles.

The IceCube-40 1 yr result was produced by the analysis of one year of data from the IceCube detector in the 40 string configuration [13]. For this analysis, four benchmark magnetic monopole speeds were selected (β = 0.760, 0.800, 0.900 and 0.995) and a sample of simulated magnetic monopoles was produced for each. The event selection criteria were the same for all magnetic monopole speeds, but differed for events that exhibited a higher or lower average number of detected photons per optical module. The selection criteria accepted a total of three events over the 1 yr of data collection with the IceCube-40 detector configuration, which is not compatible with the included atmospheric background. However, the light distributions of the three events do not correspond well to the expected distribution from a magnetic monopole event. The events may have been the product of the background flux of astrophysical neutrinos, which was not included in the background simulation. An upper limit was set for each of the four selected monopole speeds (see Figure 2.4), where the three observed events were treated as upwards fluctuations of the background flux.

A similar analysis was performed on the first year of data collected with the completed IceCube detector array, denoted by IceCube-86 1 yr [13]. This analysis targets magnetic monopoles in the speed range from β = 0.5 to 0.75, where the dominant light production process is indirect Cherenkov light (see Chapter 4.2.2).
A continuous spectrum of magnetic monopole events was simulated in the speed range from β = 0.4 to 0.99, and a boosted decision tree was

used to distinguish between possible magnetic monopole event candidates and background events at the final level of the analysis. The event selection criteria of this analysis accepted a total of three events in the experimental data of the first year of data taking with the IceCube-86 detector configuration, which is compatible with the expected background. Conservatively, the resulting upper limit was calculated using an expected background of 0 events, and was locally worsened around the reconstructed speeds of the observed events. The upper limit is world-leading between magnetic monopole speeds of about β = 0.5 and 0.8 (see Figure 2.4).

In addition to the two IceCube analyses, an analysis of five years of data collected with the ANTARES detector is included, denoted by ANTARES 5 yr [14]. This analysis targets magnetic monopoles over a wide speed range, from β = 0.5945 to 0.9950, and selects magnetic monopole candidate events based on their detected brightness and track-likeness. The speed range was divided into nine equal width bins, and an individual upper limit was set for each bin. Two events were observed in the full 5 yr of collected data, which is compatible with the expected atmospheric muon background. The resulting upper limit is world-leading above a speed of β = 0.8615 (see Figure 2.4).

Finally, I highlight the upper limit produced by the analysis of five years of data collected by the BAIKAL collaboration, here denoted by BAIKAL 5 yr [15]. In this analysis, a magnetic monopole event candidate was mainly identified by its track-likeness and brightness, and quality cuts considering the apparent direction of the event and its position relative to the detector array were applied. A total of 3.9 ± 2.2 background events were expected to be accepted by this event selection over the five years of collected data, and zero events were observed. This resulted in then world-leading upper limits for each of the three speeds β = 0.8, 0.9 and 1.0.
These have now been superseded by more than one order of magnitude, but are included here for completeness (see Figure 2.4).

3. The IceCube Neutrino Observatory

The IceCube Neutrino Observatory is a multi-purpose facility that can be used for a wide range of physics topics. One of the primary objectives of IceCube has been to discover the diffuse cosmic flux of high energy neutrinos, which was achieved in 2013, through the observation of an excess of high energy neutrino events over the expected background [30]. In addition to this, IceCube has contributed in many areas, including:

Neutrino astronomy: IceCube has several world leading searches for point-like neutrino emitters, both steady [31] and transient [32], and an excellent sensitivity for discovering a nearby supernova [33].

Neutrino oscillations: Using atmospheric neutrinos, where IceCube is sensitive to baseline lengths from ∼ 10 km (directly above the detector) to ∼ 10^4 km (≈ 2 R_Earth, directly below the detector) [34], several neutrino oscillation parameters have been measured.

Dark matter: Searches for neutrinos as dark matter decay or annihilation products from the galactic center [35], the Sun [36] or the center of the Earth [37].

New physics: A wide category, including searches for non-standard interactions [38], sterile neutrinos [39], and magnetic monopoles [13], the latter of which the work described in this thesis is an example.

3.1 The Detector

The IceCube detector is designed to detect the Cherenkov light produced by the neutrino interaction products in the deep Antarctic ice. In addition to neutrinos (astrophysical and atmospheric) the IceCube detector measures atmospheric muons, muon bundles and potentially also exotic particles. See Figure 3.1 for a schematic illustration of the IceCube detector and its constituent components.

The first IceCube string was deployed in 2005, and the detector was completed in 2010 [40]. During the construction years IceCube operated and collected data in partial configuration modes, where the detector grew larger with each year. Since 2011, the completed detector has operated in its full configuration. Each year of operations, denoted by detector season, is identified with a designation such as "IceCube-XX" where XX is given by the number of operating strings, e.g. 40, 79, 86. The seasons of full configuration are also

Figure 3.1. A schematic illustration of the IceCube detector. Credit: IceCube collaboration. An illustration of the Eiffel Tower in Paris (height 324 m) is inserted to demonstrate the scale.

appended with a roman numeral, representing the year since completion. The first full configuration season (2011) is thus denoted by IceCube-86 I, and the roman numeral is incremented by 1 for each subsequent year. The season designation may also be abbreviated as ICXX-I, where the XX numeral and final roman numeral behave correspondingly.

3.1.1 The Detector Constituent Arrays

The IceCube detector array consists of three sub-arrays, each one with a different purpose. Each sub-array is instrumented with a number of digital optical modules (DOMs), and they all feed their data to the IceCube Laboratory (ICL) at the surface for readout [40].

The main in-ice array makes up the bulk of the IceCube detector, and is also the main tool for the majority of the measurements, e.g. astrophysical neutrinos, atmospheric neutrinos and muons, and exotic particles. The main in-ice array consists of 78 cables extending deep into the ice, commonly called strings. These are placed in a hexagonal grid (see Figure 3.2) with a nearest-neighbor average spacing of 125 m. Each string is instrumented with 60 DOMs with a nearest-neighbor spacing of 17 m, deployed between depths of 1450 m and 2450 m. The DOMs on each string are labeled with a numeral from 1 to 60, increasing with depth. As such, the main in-ice array can be


Figure 3.2. A schematic view of the surface footprint of the IceCube detector. Credit: IceCube collaboration. An illustration of the Globen Arena in Stockholm (diameter 110 m) is inserted to demonstrate the scale.

divided into 60 layers of DOMs at similar depths, by selecting DOMs with identical identification numbers.

An additional eight strings, distributed around the center of the main in-ice array, allow the definition of the DeepCore sub-detector [41] (see Figure 3.2). The purpose of DeepCore is to lower the energy detection threshold for incident neutrinos in a region of the detector, for use in analyses on e.g. neutrino oscillations, astrophysical neutrino sources or various exotic topics. Similar to the main in-ice array, the DeepCore strings also hold 60 DOMs each, but with a denser spacing. Of these, 50 are placed below the main dust layer with a 7 m inter-DOM spacing, and the remaining 10 DOMs are placed above the dust layer with a 10 m spacing. The DOMs on the eight DeepCore strings cannot be trivially included in the DOM layers of the main array, due to the differing instrumentation depths. The definition of the DeepCore sub-detector varies by use-case, but always includes the bottom 50 DOMs of the eight DeepCore strings, often along with the DOMs on the adjacent main array strings that are vertically close to the DeepCore instrumented volume. The DeepCore strings are instrumented with DOMs with a higher quantum detection efficiency, apart from strings 79 and 80, which carry both standard and high quantum efficiency DOMs.

The main in-ice array and DeepCore together instrument the deep Antarctic ice with 5160 DOMs.

In addition to the deep detector arrays, IceCube includes a surface array, the IceTop array, which instruments the surface footprint of the deep detector array. The IceTop array consists of a total of 162 containers of frozen ice, tanks (approximately 1 m^3 each), each instrumented with two downwards facing DOMs. The tanks are placed in pairs at the surface coordinates of each of the 78 main array strings, and of the DeepCore strings numbered 79, 80 and 82. IceTop is used as a detector for cosmic ray air showers, with a primary cosmic ray energy from E = 300 TeV to 1 EeV. It can be used to measure the arrival direction, the flux and the mass composition of the southern hemisphere cosmic rays. Additionally, IceTop enables a veto for the in-ice constituents against events originating in a cosmic ray atmospheric interaction, by measuring the resulting particle air-shower and correlating the arrival times of the air shower and the in-ice detection.

3.1.2 The DOM

The basic building-block of the IceCube detector is the digital optical module (DOM). A total of 5484 DOMs constitute the IceCube detector, 5160 frozen into the deep Antarctic ice (the main in-ice array and DeepCore) and 324 frozen into ice-tanks at the surface level (IceTop). Each DOM operates as an independent detector, and communicates continuously with the surrounding DOMs and the control unit in the IceCube Laboratory (ICL) at the surface [40].

The main constituents of a DOM are a photo-multiplier tube (PMT), several calibration LEDs and a processing main-board. These are encased in a spherical pressure resistant 13 mm thick glass housing, made up of two equally dimensioned hemispheres. The glass housing is pierced by a cable for data exchange and power supply. See Figure 3.3 for a schematic illustration of a DOM.

The photo-detector in each DOM is a 10 inch (25.4 cm) HAMAMATSU R7081-02 PMT. Each PMT is operated with an individually calibrated voltage in order to achieve a gain (amplification factor) of ∼ 10^7, thereby allowing single-photon detection. The overall detection efficiency of a DOM also depends on the quantum efficiency of the PMT, i.e. the probability that an electron is ejected by a photon incident on the photo-cathode. By default, the IceCube DOMs have a quantum efficiency of ∼ 25 % (for λ ∼ 390 nm), while some (the majority of DOMs deployed on DeepCore strings) are instrumented with a higher efficiency PMT (model HAMAMATSU R7081-02 MOD) and therefore have a higher quantum efficiency of ∼ 34 %.

One source of noise in an IceCube event is the noise originating in the PMT itself. For example, a ∼ 300 Hz noise rate is caused by thermal

Figure 3.3. A schematic illustration of a digital optical module, DOM. Credit: Figure 2 from reference [42].

electrons that evaporate from the PMT dynodes, and may cause a detection unrelated to an incident photon. Additionally, photo-production on the first dynode (as opposed to the photo-cathode), or electrons that scatter past one or more dynodes, produce so-called pre-pulses. So-called after-pulses are produced by ionization of residual gas in the PMT. Finally, radioactive decays in the DOM glass housing contribute an additional ∼ 350 Hz noise rate.

The glass housing has a 93 % transmission efficiency for photons with a wavelength of λ = 400 nm, and 50 % and 10 % for 340 nm and 315 nm respectively. The PMT is coupled to the glass housing with an optical gel, which is transparent to light with wavelengths from λ ≈ 350 nm to 650 nm, and encased in a mu-metal grid for partial shielding against the Earth's magnetic field. The air is evacuated from the spherical glass housing and replaced with nitrogen gas at a pressure of ∼ 0.5 atm. The lack of oxygen prevents corrosion of the electronics and the low pressure ensures the tightness of the seal between the two hemispheres prior to deployment, even at the minimum recorded South Pole air pressure.

Mounted in the top hemisphere of the DOM is also the processing main-board. This is where the digitization of the registered PMT signal takes place before transmission from the DOM to the ICL. The digitization is done with two different systems, the fast Analog to Digital Converter (fADC) and the Analog Transient Waveform Digitizer (ATWD). Each DOM contains one fADC and two ATWDs, where the fADC allows a longer and coarser readout (256 samples at 40 MHz) and the ATWDs yield a shorter and more detailed readout (128 samples at 300 MHz). The two ATWDs alternate operations in order to minimize dead-time. Additionally, the ATWD digitization can be done with

one of three different multiplication factors (16, 2 or 0.25), which is determined by the amplitude of the incoming signal.

In order for a detected signal to be designated as a DOM launch (also known as a hit or pulse), the total charge registered by the PMT must surpass a discriminator threshold of 0.25 PE, where PE denotes photo-electrons. A photo-electron is a unit of electric charge that represents the average charge detected in a PMT per single detected photon. Thus, the total charge measured in an event is often given in units of PE, or as the number of photo-electrons, n_PE = [total registered charge] / 1 PE.

Each DOM also holds a flasher-board instrumented with 12 calibration LEDs. These are placed in pairs at equal intervals around the horizontal plane of the DOM, with each pair producing light directed horizontally through the ice as well as 48° upwards. The majority of the DOMs are instrumented with LEDs that produce light with a wavelength of λ = 399 nm, while sixteen are instrumented with LEDs that yield light with λ = 340 nm, 370 nm, 450 nm and 505 nm for wavelength dependence calibration.
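The discriminator condition and the n_PE bookkeeping described above can be sketched in a few lines. This is an illustrative reconstruction, not IceCube software; all names are my own, and charges are expressed in units of the average single-photon PMT charge (1 PE).

```python
PE = 1.0  # one photo-electron, in units of the average single-photon PMT charge
DISCRIMINATOR_THRESHOLD = 0.25 * PE  # minimum registered charge for a DOM launch

def is_dom_launch(registered_charge: float) -> bool:
    """A detected signal is designated a DOM launch (hit) above 0.25 PE."""
    return registered_charge >= DISCRIMINATOR_THRESHOLD

def n_pe(total_registered_charge: float) -> float:
    """Total event charge expressed as a number of photo-electrons, n_PE."""
    return total_registered_charge / PE
```

With this convention, a signal of 0.3 PE launches the DOM while a 0.1 PE signal does not, and the total event charge in PE is simply the charge divided by the single-PE charge.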

3.1.3 The Detector Medium

The Antarctic glacial ice, where the IceCube detector is placed, was formed over many millennia as consecutive layers of snow were compressed to ice by the pressure of new snow above. Therefore, the top layer of the glacier constitutes a cover of snow. Below the snow comes a layer of firn, which transitions into ice with trapped air bubbles. These air bubbles scatter light substantially, rendering the shallow ice opaque to optical light. As the depth increases further, the high pressure forces the air to diffuse into the ice and form air-hydrate crystals, so-called clathrates, with optical properties very similar to ice [43]. The majority of the air bubbles have dissipated into the ice around a depth of ∼ 1400 m, which is why the in-ice constituents of the IceCube detector are placed below 1450 m of depth.

Figure 3.4 shows the absorptivity a and the effective scattering coefficient b_e as functions of in-ice depth and photon wavelength. The absorptivity is the reciprocal of the characteristic absorption length λ_a, a = λ_a^−1, where the characteristic absorption length is the distance over which a traversing photon reaches a 1 − e^−1 probability of having been absorbed. Correspondingly, the effective scattering coefficient is the reciprocal of the effective scattering length λ_e, b_e = λ_e^−1. The effective scattering length relates to the scattering mean free path, λ_s, through Equation 3.1, where avg(cos(θ)) is the average cosine of the scattering angle [43].

λ_e = λ_s / (1 − avg(cos(θ)))    (3.1)
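Equation 3.1 is straightforward to evaluate numerically. The following is a minimal sketch (the function name is my own), using the value avg(cos(θ)) ≈ 0.94 for dust scattering quoted later in this chapter:

```python
def effective_scattering_length(lambda_s: float, avg_cos_theta: float) -> float:
    """Equation 3.1: lambda_e = lambda_s / (1 - avg(cos(theta)))."""
    return lambda_s / (1.0 - avg_cos_theta)

# With avg(cos(theta)) = 0.94, the effective scattering length is
# 1/0.06, i.e. about 16.7 times the scattering mean free path.
ratio = effective_scattering_length(1.0, 0.94)
```

For example, a scattering mean free path of 3 m then corresponds to an effective scattering length of 50 m, illustrating how strongly forward-peaked scattering stretches the effective length.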

Figure 3.4. The effective scattering coefficient b_e (left) and the absorptivity a (right) as functions of in-ice depth and photon wavelength. Credit: Figure 22 from reference [43].

Below the depth of 1400 m the main scattering and absorbing agents are micron-sized dust particles frozen into the ice. These mainly include mineral grains, sea salt, soot and drops of acid, and are concentrated in several roughly horizontal layers. The dust layers correspond to ancient geological events that resulted in a higher concentration of dust particles in the air, and, thus, in the Antarctic snow. A photon incident on a dust particle has an average cosine of the scattering angle of avg(cos(θ)) ≈ 0.94 [43].

The region in the instrumented ice with the highest concentration of dust, known as the main dust layer, is found around a depth of 2000 m. The ice in this region was formed ∼ 65000 yr ago [44], and the effective scattering and absorption lengths here are ∼ 5 m and ∼ 20 m respectively for light with a wavelength λ = 400 nm [43]. This can be compared to the typical scattering and absorption lengths in the instrumented volume, ranging within [20;50] m and [80;200] m, respectively. The ice is clearer below the main dust layer than above it, with the longest effective scattering and absorption lengths, ∼ 100 m and ∼ 250 m, respectively, for λ = 400 nm [43].

The (group velocity) index of refraction, n_λ, decreases monotonically from n_λ = 1.38 to 1.33 for wavelengths from λ = 337 nm to 532 nm [43; 45]. Some IceCube first-guess algorithms assume a constant index of refraction over the whole detector, with a value of n_λ = 1.34.
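The index of refraction determines the Cherenkov emission angle through the textbook relation cos(θ_c) = 1/(nβ); this relation is standard physics and not taken from the text above. A minimal sketch for the constant first-guess value n = 1.34:

```python
import math

def cherenkov_angle_deg(n: float, beta: float = 1.0) -> float:
    """Cherenkov emission angle from the standard relation cos(theta_c) = 1/(n*beta)."""
    return math.degrees(math.acos(1.0 / (n * beta)))

# For beta = 1 and n = 1.34 the angle comes out in the low 40s of degrees.
angle = cherenkov_angle_deg(1.34)
```

Since n_λ varies between 1.38 and 1.33 across the relevant wavelengths, the emission angle varies slightly with wavelength as well; a constant n = 1.34 fixes it to a single value for first-guess reconstructions.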

3.1.4 Coordinate System

An IceCube-local coordinate system is defined. The (x,y,z) coordinates of IceCube follow a right-handed configuration where the x-y plane is horizontally directed, with the y axis directed along the Global prime meridian, and the z axis is vertically directed. The origin of the IceCube coordinate system, (0,0,0)_IC, is located close to the center of the instrumented volume, at a vertical depth of 1946 m. The origin of the IceCube coordinate system is sometimes referred to as the center of the IceCube detector. The direction of a particle in IceCube can thus be given in terms of an (x,y,z) unit vector, or in spherical (azimuth, zenith) coordinates, (φ,θ), where:

tan(φ) = y / x ,    cos(θ) = z / √(x^2 + y^2 + z^2)    (3.2)

The instrumented volume of IceCube can be defined in many ways, depending on the purpose of the definition. For the purpose of the analysis presented in this thesis, the IceCube instrumented volume has been defined as a hexagonal prism extending 62.5 m outside of the outermost DOMs. This is used when determining if a set of coordinates represents a point inside or outside of the detector, e.g. when calculating the geometric length of a particle trajectory through the detector (see Chapter 6.2.1). The 62.5 m margin was chosen as half of the average horizontal nearest-neighbor distance between DOMs, thus containing the volume within which the detector is evenly instrumented (aside from the denser DeepCore volume).
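Equation 3.2 can be implemented directly. This is a minimal sketch (the function name is my own), using atan2 to resolve the quadrant ambiguity of tan(φ) = y/x:

```python
import math

def direction_to_angles(x: float, y: float, z: float):
    """Convert a direction vector to spherical (azimuth, zenith) in radians,
    following Equation 3.2: tan(phi) = y/x, cos(theta) = z/|r|."""
    azimuth = math.atan2(y, x)
    zenith = math.acos(z / math.sqrt(x * x + y * y + z * z))
    return azimuth, zenith
```

For example, a vertically upward direction (0, 0, 1) has zenith 0, while a horizontal direction along the x axis has zenith π/2.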

3.2 Data Acquisition and Triggering

The majority of IceCube data readout triggers are based on so-called hard local coincidence hits (HLC hits). An HLC hit is designated as any hit that is registered within a 1 µs time window around another hit in an adjacent or second-adjacent DOM on the same string. If a hit does not fulfill this condition it is labeled as a soft local coincidence hit (SLC hit) [40]. HLC hits are often required by the IceCube trigger conditions, as SLC hits are more likely to arise from noise.

Several trigger conditions are in place for triggering the data readout of the detector, where some are general purpose and others are tuned to specific use-cases. The most general trigger is the simple multiplicity trigger (SMT), which requires eight or more registered HLC hits within a 5 µs time window to trigger data readout. Other triggers may require fewer hits or longer time windows by setting additional spatial requirements (e.g. a certain number of hits occurring on the same string), or be specially designed for selecting slowly moving particles. In the analysis presented in this thesis, data collected with any active trigger is considered.

The data readout triggering system continuously monitors the detector for any satisfied trigger conditions. As a trigger condition is met, a time window of [−4 µs; +6 µs] is spanned around the trigger time window (of length 5 µs in the case of the SMT) to define the period of data readout. Overlapping periods of data readout are merged, excepting the longer running triggers (e.g. the slow

particle trigger, for which the time window is 10^2–10^3 times longer than for a regular event). Next, the (merged) readout time window is sent to the event builder software. The event builder collects all hits (HLC as well as SLC) within the time window into a so-called pulse-map for the event, which forms the basis for further data analysis. The average total trigger rate is 2.7 kHz, with a < ±10 % annual modulation that depends on atmospheric conditions, which yields an average daily data readout of ∼ 1 TB d^−1.

Both the trigger conditions and filter algorithms are implemented solely in software, and may therefore technically be changed at any time. However, it is agreed that these procedures should be changed no more than once per year, in the transition between data collection seasons.
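The padding and merging of readout windows described above can be sketched as follows. This is my own illustrative reconstruction (not IceCube DAQ code), assuming trigger windows are given as (start, stop) pairs in microseconds:

```python
def readout_windows(trigger_windows):
    """Pad each trigger time window by [-4 us; +6 us] and merge overlapping
    readout periods, as described in the text."""
    padded = sorted((start - 4.0, stop + 6.0) for start, stop in trigger_windows)
    merged = []
    for start, stop in padded:
        if merged and start <= merged[-1][1]:  # overlaps the previous readout period
            merged[-1] = (merged[-1][0], max(merged[-1][1], stop))
        else:
            merged.append((start, stop))
    return merged
```

For example, two 5 µs trigger windows at (0, 5) and (8, 13) µs pad to (−4, 11) and (4, 19), which overlap and merge into a single readout period (−4, 19).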

3.2.1 Data Filter Stream

A number of higher level filters exist that monitor the event stream. These are designed to define specialized data substreams with different characteristics, e.g. cascade-like events, events that start inside the detector or events contained in the DeepCore sub-detector. An event that passes one or several data filters is transferred via satellite to the central IceCube data storage for further analysis. This constitutes a transfer rate of ∼ 100 GB d^−1 [40]. The majority of the current data filters base their analysis on a common pulse map, called InIcePulses. As a part of the filter procedure, the filter algorithms construct a custom set of variables for use in further data analysis.

In the analysis presented in this thesis, events that pass the EHE filter are selected. The EHE filter was designed to be used as an initial step of the EHE analysis, searching for extremely high-energy neutrinos with energies in and above the PeV range (see Appendix A). The filter is set to reject events where the number of registered photo-electrons, n_PE, is less than 10^3, and the average data rate is 0.8 Hz [46].

3.3 Typical Events

The majority of events registered with the IceCube detector are caused by muons, and bundles of muons, that were produced by cosmic ray interactions in the atmosphere. Atmospheric muons are detected at a rate between 2.5 kHz and 2.9 kHz, depending on atmospheric conditions.

The second most common class of events is induced by atmospheric neutrinos. These are detected with a typical rate a factor of ∼ 10^−6 lower than the atmospheric muon rate. Like atmospheric muons, these neutrinos are produced when cosmic rays interact with nuclei in the atmosphere.

In addition to the atmospheric neutrinos and muons, certain IceCube events are induced by extra-terrestrially produced neutrinos, commonly called astrophysical neutrinos. The diffuse flux of astrophysical neutrinos can be experimentally measured by large neutrino telescopes, such as IceCube [30], but at very high energies the neutrino flux measurement suffers from low statistics and therefore exhibits a large uncertainty. At the highest energies the local flux is further reduced by neutrino absorption in the Earth.

In addition to the event classes described above, very low energy neutrinos may arrive at the IceCube detector, originating in e.g. the atmosphere, the Sun or a nearby supernova. Such low energy neutrinos may give rise to only a single photo-electron, and thereby mainly contribute to the ambient stochastic noise background in IceCube. However, a close enough supernova might still produce enough low energy neutrinos to be detected as a significant increase in the collective rate of the ambient background [33].

3.4 High Energy Neutrinos in IceCube

A neutrino interacts rarely with matter, and is only detectable in IceCube after a collision with the detector medium. The possible interaction channels are neutral current, NC, and charged current, CC, interactions with both the ambient nuclei (dominantly) and electrons. Neutral current interactions take place through the exchange of a Z boson, via:

ν/ν̄ + X −(Z)→ ν/ν̄ + X∗    (3.3)

where a momentum exchange between the neutrino ν (antineutrino ν̄) and the nucleus X has taken place. If the momentum transfer is large enough, the nucleus X will break up (indicated by the final state asterisk) and cause a particle shower through the medium. Charged current interactions take place through the exchange of a W boson, via:

ν/ν̄ + X −(W±)→ l−/l+ + Y    (3.4)

where the W exchange implies the production of an (electrically) charged lepton l±, as well as the conversion of the nucleus X into Y. Similar to the NC case, a momentum transfer takes place between the neutrino and the nucleus, which may break up in the impact.

In the analysis that is described in this thesis, neutrino events that deposit a high amount of light in the detector are interesting as the background channel that is most difficult to reject. This implies neutrino events induced by very high energy neutrinos. For neutrinos with E_ν ≳ 10 TeV the neutrino-nuclear interaction cross sections for NC and CC, σ_NC and σ_CC respectively, are very similar for neutrinos and antineutrinos. Both increase with the neutrino energy,

and are approximately given by the power laws [47]:

σ_νX^NC = (E_ν / 1 GeV)^α × 2.31 × 10^−36 cm^2    (3.5)

σ_νX^CC = (E_ν / 1 GeV)^α × 5.53 × 10^−36 cm^2    (3.6)

with the exponent α = 0.363 (for comparison, 10^−36 cm^2 = 1 pb). This is illustrated by the fact that a 30 TeV neutrino has a ∼ 50 % absorption probability when traversing the whole Earth, while this probability increases to above 95 % for 300 TeV neutrinos.

In addition to the dominant neutrino-nucleon interaction, electron antineutrinos incident on ambient electrons can produce resonant W− bosons when the collision center-of-mass energy coincides with the W boson mass, i.e. for an incident neutrino energy of E_ν ≈ 6.3 PeV. This is known as the Glashow resonance [48], and locally increases the electron antineutrino matter-interaction cross section by several orders of magnitude.

A neutral current interaction induced by a neutrino always manifests as a hadronic cascade in the IceCube detector, originating from the break-up products of the target nucleus. A charged current interaction may also yield a hadronic cascade, along with the final state charged lepton. This enables vastly different signatures, determined by the lepton flavor:
• A final state electron (or positron) produces a local electromagnetic shower in the detector.
• A final state (anti-)muon propagates long distances through the ice (energy dependent), producing light along its path both through Cherenkov and radiative loss processes.
• A final state (anti-)tauon propagates a shorter distance than a muon, as its path is promptly ended by the tauon decay (producing yet another particle shower). A tauon may thus produce a so-called double bang event, where two consecutive particle showers are produced and connected by the track of the tauon.
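As a numerical sketch of Equations 3.5–3.6 (function names are my own; energies in GeV, cross sections in cm²):

```python
ALPHA = 0.363  # power-law exponent from Equations 3.5-3.6

def sigma_nc(e_nu_gev: float) -> float:
    """Approximate NC neutrino-nucleon cross section in cm^2 (Equation 3.5)."""
    return (e_nu_gev ** ALPHA) * 2.31e-36

def sigma_cc(e_nu_gev: float) -> float:
    """Approximate CC neutrino-nucleon cross section in cm^2 (Equation 3.6)."""
    return (e_nu_gev ** ALPHA) * 5.53e-36
```

In this parametrization the CC/NC ratio is a constant 5.53/2.31 ≈ 2.4 at all energies, and both cross sections grow by a factor of 10^0.363 ≈ 2.3 per decade of neutrino energy, which is the energy dependence behind the growing Earth-absorption probability quoted above.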

3.4.1 Typical Event Signatures

The great majority of all events detected by IceCube can be categorized into one of two main event categories, track-like and cascade-like events.

A track-like event is an event that displays a clearly elongated light signature in the detector. The elongation arises when a particle produces light while propagating through the detector, usually over more than several hundred meters, which requires a particle with high penetrative power in ice. Outside of the realm of exotics, such as magnetic monopoles, this limits the particle options to muons or tauons, where the tauon is required to be highly energetic in order

Figure 3.5. Event view of a track-like event, here represented by a simulated muon antineutrino event with energy Eν = 3.1 × 10⁶ GeV.

Figure 3.6. Event view of a cascade-like event, here represented by a simulated event with energy Eν = 5.2 × 10⁶ GeV.

to propagate sufficiently before decaying. A track-like event may have a particle shower at the beginning, or at the end in the case of tauons. See Figure 3.5 for an event view of a simulated track-like event.

The track may start inside or outside of the detector. A track starting inside the detector, a starting track, must be induced by a neutrino (again, outside of the realm of exotics such as boosted dark matter), which enables the outer layers of strings of the detector to be used as a veto against non-neutrino events. A track that starts outside of the detector, a non-starting track, may still be induced by a neutrino, but may also be an atmospheric muon or muon bundle. It is impossible to distinguish a non-starting track induced by a neutrino from an atmospheric muon, so this must be done on a statistical level based on the directions and energies of the incoming tracks.

The light of a muon or tauon track is mainly produced as Cherenkov radiation from secondary interaction products that are produced by stochastic collisions along the trajectory of the particle. The radiative cross section increases with energy, with the result that more energetic particles produce more secondary light than less energetic particles do.

A cascade-like event displays a shorter and broader light signature in the detector. A cascade-like event is induced by a neutrino that interacts with the ice to produce a large particle shower, which may be either hadronic or electromagnetic. These are characteristically produced by electron and tauon neutrino charged current interactions, or neutral current interactions involving any neutrino flavor. The light is produced as direct Cherenkov light from the secondary charged particles in the particle shower, which spread out over a short distance, typically less than a few meters.
Therefore, the light in a cascade-like event appears to have a point-like production vertex compared to the characteristic instrumentation scale of the detector, and has a broad angular distribution. See Figure 3.6 for an event view of a simulated cascade-like event.

3.5 Interpreting an Event View

This thesis, as well as other IceCube literature, contains visual representations of registered events in the form of so-called event views. An example event view of a magnetic monopole event is shown in Figure 3.7. In Figure 3.7, the colored and gray spheres represent DOMs with and without registered charge in the selected pulse-map, respectively. The apparent size of a sphere represents the registered charge magnitude, with a customizable size-to-magnitude ratio. The color scale of the colored DOMs, from red to blue, represents the detection time of the first registered pulse in the DOM, from early to late, respectively. The time interval represented by the color scale is also customizable, in order to allow meaningful viewing of events with varying time widths.

Figure 3.7. Event view of a simulated magnetic monopole event, displaying the InIcePulses pulse-map along with the Monte Carlo true trajectory of the particle. The horizontal deficiency of registered pulses close to the center of the track is caused by the main dust layer present in the ice.

An event view may also contain one or several straight lines. These represent (reconstructed or Monte Carlo) particle trajectories through the detector (see Chapter 6.2.1). Note the horizontal deficiency of registered pulses close to the center of the track in the example event view, Figure 3.7. This is caused by the increased absorption in the main dust layer present in the detector volume.

4. Magnetic Monopoles in IceCube

As a magnetic monopole propagates through the IceCube detector medium it interacts with the surrounding matter via a number of different processes. Several of these processes produce optical light around the trajectory of the monopole, which is readily detected by the IceCube optical modules. This chapter is dedicated to the interactions between a magnetic monopole and the IceCube ice, the light that is produced, as well as the characteristic signatures of a magnetic monopole event in IceCube.

4.1 Energy Loss in Matter As a magnetic monopole traverses a medium, it will inevitably lose energy through interactions with the surrounding matter. These energy losses occur through several different processes, depending on the speed of the monopole. Over a large portion of the speed range, from β ∼ 0.1 to 0.99995 (γ ∼ 100), the interactions between a magnetic monopole and the surrounding medium can be modeled as the interactions of a heavy electrically charged particle with a charge corresponding to the effective charge of the magnetic monopole [49; 50]. Formally, the substitution ze → gβ is made, where ze is the charge of the electrically charged particle, g is the monopole charge and β is the speed of the monopole in units of the speed of light. In this speed region, the monopole mainly loses energy by ionizing and exciting the electrons in the surrounding medium, so the average energy loss dE per unit length dx is given by Equation 4.1 below [51; 52], similar to the Bethe-Bloch formula.

−dE/dx = (4π g² e² ne / (me c²)) [ ln( 2β²γ² me c² / I ) + K/2 − (1 + δ)/2 − B ]    (4.1)

Here, ne is the number density of electrons in the medium, me is the electron mass, and β and γ are the monopole speed and Lorentz factor respectively. Additionally, I is the mean excitation energy, δ is a density effect correction, K is the correction given by the δ-electron ionization cross section [53] (given by the Kazama-Yang-Goldhaber (KYG) cross section, see Section 4.2.2) and B is the Bloch correction. The Bloch correction accounts for interactions where the wavefunction of the incident monopole does not fully cover the scattering center of the ambient target electron.

The mean excitation energy and the density effect correction are given in [54], and the KYG and Bloch corrections in [51] as 0.406 and 0.248, respectively, for Dirac charged magnetic monopoles in ice. Equation 4.1 yields a slow and steady rise of the average monopole energy loss, −dE/dx, over the speed range from β = 0.1 to 0.99995 (γ = 100). At the lower boundary, β = 0.1, the energy loss is ∼350 GeV m⁻¹, and at the upper boundary, β = 0.99995, it is ∼1300 GeV m⁻¹ [52].
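Equation 4.1 can be evaluated numerically. The sketch below is an illustration under stated assumptions: water-ice constants (Z/A = 10/18, ρ = 0.917 g/cm³, mean excitation energy I ≈ 79.7 eV) and the K and B values quoted above, with the density effect correction δ set to zero for simplicity, so the quoted boundary values are reproduced only approximately.

```python
import math

# Illustrative evaluation of Equation 4.1 with delta = 0 (assumption).
K_BETHE = 0.307075        # MeV mol^-1 cm^2 (4*pi*N_A*r_e^2*m_e*c^2)
Z_OVER_A = 10.0 / 18.0    # for H2O (assumed)
RHO_ICE = 0.917           # g/cm^3 (assumed)
ME_C2_EV = 0.511e6        # electron rest energy in eV
I_ICE_EV = 79.7           # mean excitation energy of ice in eV (assumed)
G_OVER_E_SQ = 68.5 ** 2   # (g_D / e)^2 for one Dirac charge
K_KYG = 0.406             # delta-electron (KYG) correction, from the text
B_BLOCH = 0.248           # Bloch correction, from the text

def dedx_gev_per_m(beta):
    """Average monopole energy loss -dE/dx in GeV/m (delta = 0)."""
    gamma_sq = 1.0 / (1.0 - beta ** 2)
    bracket = (math.log(2.0 * beta ** 2 * gamma_sq * ME_C2_EV / I_ICE_EV)
               + K_KYG / 2.0 - 0.5 - B_BLOCH)
    mev_per_cm = K_BETHE * Z_OVER_A * RHO_ICE * G_OVER_E_SQ * bracket
    return mev_per_cm * 0.1  # MeV/cm -> GeV/m
```

With these assumptions the sketch gives roughly 320 GeV/m at β = 0.1 and roughly 1300 GeV/m at β = 0.99995, in line with the quoted values above.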

4.2 Light Production

As a magnetic monopole passes through a medium it would produce light over a wide range of frequencies through several processes, with different processes dominating in different monopole speed intervals. If the medium is transparent to visible light, such as ice or water, the visible part of the produced spectrum could be readily detected by optical modules immersed in the medium. This section will treat the production of visible light by magnetic monopoles propagating through ice.

As described in Chapter 2.3, a GUT scale magnetic monopole is hypothesized to induce proton decay in the surrounding medium. The final decay products of the proton decay would produce visible light in the form of Cherenkov radiation. This is the dominant source of visible light for magnetic monopoles passing through ice with a speed below β ∼ 0.1.

Faster magnetic monopoles, with a speed above β ∼ 0.1, induce luminescence light as they excite the surrounding matter, which subsequently deexcites [55]. The cross section for luminescence light production is currently uncertain, but this is the dominant light production process for magnetic monopoles with a speed β between ∼0.1 and ∼0.5. For monopoles in the speed range that is relevant for this analysis, β ∈ [0.750;0.995], luminescence light constitutes between ∼0.1% and ∼1% of the total light yield, which is dominated by direct Cherenkov light.

Magnetic monopoles with a speed above β ∼ 0.5 not only excite the surrounding medium, but also ionize it. The resulting unbound electrons may have a speed above the Cherenkov threshold, and produce Cherenkov light. In this manner, magnetic monopoles above β ∼ 0.5 produce indirect Cherenkov light [53]. This is the dominant light source for magnetic monopoles with a speed below the Cherenkov threshold in ice, and is described further in Chapter 4.2.2.

Magnetic monopoles that themselves have a speed above the Cherenkov threshold in ice induce Cherenkov light directly [56].
Due to the high effective charge of a magnetic monopole it will produce almost four orders of magnitude more Cherenkov light than a muon with the same speed. Direct Cherenkov light is the dominant light source for magnetic monopoles with a

speed above the Cherenkov threshold in ice, and is described further in Section 4.2.1.

In the final regime of magnetic monopole light production the light is produced by radiative energy losses (pair production, bremsstrahlung and photonuclear interactions) along the path of a monopole with ultrarelativistic speed (Lorentz factor γ ≳ 10³) [11]. This is the dominant source of visible light for ultrarelativistic magnetic monopoles passing through ice.

The amount of Cherenkov light produced by a magnetic monopole, directly and indirectly, is shown in Figure 4.1 expressed in number of photons per meter, along with the direct Cherenkov light produced by a muon.

4.2.1 Cherenkov Radiation

Cherenkov radiation is the electromagnetic (EM) radiation that is produced as an electromagnetically charged particle traverses a dielectric medium at a speed higher than the speed of light in the medium, cm. It was first observed by P. Cherenkov in 1934 [57], for which he received the Nobel prize in 1958. The 1958 Nobel prize was shared with I. Frank and I. Tamm, as they developed the theory describing the production of Cherenkov radiation, deriving the Frank-Tamm formula (Equation 4.3, adapted for monopoles) [58].

When a charged particle propagates through a dielectric medium it continuously polarizes the medium around it. As the particle passes, the polarization is relaxed and the medium returns to its ground state by emitting a wave of electromagnetic radiation. The EM waves propagate concentrically outwards from the point of origin with the speed of light in the medium. As the particle moves with a speed vp, the polarization origin also moves, so the interval between subsequent photons will be smaller in the forward direction than the backward. Thus, when the particle speed equals the speed of light in the medium, cm, it will propagate along with its own EM wave front, and constructive interference will induce a shock wave front of photons in the forward direction. When the speed of the particle is increased further, now above cm, the wave fronts will only be produced behind the particle, and will align with earlier fronts such that a conical shock wave front with the apex at the particle position is formed. The light in this wave front is called Cherenkov light. The direction of the wave front relative to the particle direction, θC, depends on the ratio between the speed of light in the medium and the particle speed according to:

cos(θC) = cm / vp = 1 / (nλ β)    (4.2)

Here nλ = c/cm is the refractive index of the medium and β = vp/c is the speed of the particle given as a fraction of the speed of light in vacuum. The production angle of the Cherenkov photons will thus increase with the particle speed,

Figure 4.1. The number of photons per meter produced by a magnetic monopole in ice through the Cherenkov (red line) and indirect Cherenkov (yellow line) processes. Also included is the number of direct Cherenkov photons produced by a 1e electrically charged particle, exemplified with a muon (blue line).

and as the speed of the particle approaches the vacuum speed of light, vp → c, the characteristic Cherenkov angle of the medium is reached. The refractive index for visible light in the deep Antarctic ice varies between nλ = 1.38 and 1.33 (see Chapter 3.1.3), and approximating it with nλ = 1.34 yields a characteristic Cherenkov angle of θC = 41.7°.

Equation 4.2 additionally reflects the fact that Cherenkov light is only produced for combinations of particle speed β and index of refraction nλ where the cosine function is defined, which implies that there is a medium-dependent minimal particle speed required to produce Cherenkov light. This speed is labeled the Cherenkov threshold, βCT, and is βCT = 0.746 for deep Antarctic ice.

The number of photons produced through the Cherenkov process, Nγ, per unit length dx and unit wavelength dλ is given by the Frank-Tamm formula for magnetic monopoles [56; 58; 59]:
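A quick numerical check of Equation 4.2 under the nλ = 1.34 approximation (illustrative code, not IceCube software):

```python
import math

# Cherenkov angle (Equation 4.2) and threshold speed for deep
# Antarctic ice, approximated with a constant refractive index.
N_ICE = 1.34  # assumed wavelength-averaged refractive index

def cherenkov_angle_deg(beta, n=N_ICE):
    """Cherenkov emission angle in degrees; ValueError below threshold."""
    cos_theta = 1.0 / (n * beta)
    if cos_theta > 1.0:
        raise ValueError("below the Cherenkov threshold")
    return math.degrees(math.acos(cos_theta))

beta_threshold = 1.0 / N_ICE            # Cherenkov threshold, ~0.746
theta_limit = cherenkov_angle_deg(1.0)  # limiting angle, ~41.7 degrees
```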

d²Nγ / (dλ dx) = (2πα / λ²) (g nλ / e)² ( 1 − 1/(β nλ)² )    (4.3)

where α is the fine structure constant and g is the magnetic charge of the monopole. Equation 4.3 differs from the Frank-Tamm formula for electric charges by the factor (g nλ / e)². This means that a magnetic monopole carrying the Dirac

charge, g = gD, that traverses deep Antarctic ice will produce (gD nλ / e)² ≈ 8430 times as much Cherenkov light as a muon propagating with the same speed. Integrating Equation 4.3 over the wavelength band detectable by an IceCube optical module, 350 nm – 650 nm, yields that a magnetic monopole traveling with a speed β = 0.995 through the deep Antarctic ice (assuming nλ = 1.34) will produce 2.23 × 10⁸ detectable photons per meter (see Figure 4.1). The corresponding number for a monopole with a speed close to the Cherenkov threshold, β = 0.8, is 6.61 × 10⁷ photons per meter.
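The wavelength integration can be sketched as follows (illustrative code); with the assumptions nλ = 1.34 and gD/e = 68.5 it reproduces the quoted photon yields to well within a percent.

```python
import math

# Illustrative integral of Equation 4.3 over the detectable band
# 350-650 nm, for a monopole carrying one Dirac charge.
ALPHA_EM = 1.0 / 137.036  # fine structure constant
N_ICE = 1.34              # assumed refractive index
GD_OVER_E = 68.5          # Dirac charge in units of e

def photons_per_meter(beta):
    """Direct Cherenkov photons per meter for a monopole of speed beta."""
    lam_integral = 1.0 / 350e-9 - 1.0 / 650e-9   # integral of dl/l^2, in 1/m
    charge_factor = (GD_OVER_E * N_ICE) ** 2     # ~8430
    cherenkov_term = 1.0 - 1.0 / (beta * N_ICE) ** 2
    return (2.0 * math.pi * ALPHA_EM * charge_factor
            * cherenkov_term * lam_integral)
```

For β = 0.995 this yields ≈ 2.23 × 10⁸ photons per meter, and for β = 0.8 ≈ 6.6 × 10⁷, matching the numbers above.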

4.2.2 Indirect Cherenkov Radiation

In addition to the processes described above, a high energy particle with electric or magnetic charge also ionizes the medium as it traverses it, thereby releasing free electrons (also known as δ-rays or δ-electrons) [53]. If a high enough energy is delivered to the electron, it will exceed the Cherenkov threshold of the medium and produce Cherenkov light. This Cherenkov light is known as indirect Cherenkov light, as it is not produced by the passage of the primary particle itself, but of secondary particles in the event.

The total amount of indirect Cherenkov light produced by a magnetic monopole depends on two factors — the number of produced δ-electrons, Nδ, and the Cherenkov light yield of each δ-electron. The latter is given by the Frank-Tamm formula, and the cross section for δ-electron production is calculated using one of several popular models. The cross section calculation used in IceCube is known as the Kazama-Yang-Goldhaber (KYG) cross section [53], which is calculated with full consideration of quantum electrodynamical and special relativistic effects. The KYG cross section is chosen over the Mott cross section [49], which neither accounts for quantum mechanical effects, nor allows the magnetic monopole to have a non-zero spin. The difference in light production between the KYG and Mott cross sections is negligible in the monopole speed range that is relevant for this analysis, β ∈ [0.750;0.995] [59].

The number of δ-electrons produced by a relativistic magnetic monopole propagating through ice, Nδ, is given per unit length dx and unit of electron kinetic energy dTe by [49; 59]:

d²Nδ / (dTe dx) = (2π ne β² g² e² / (me c² Te²)) FKYG(Te, β)    (4.4)

where ne is the electron number-density in the medium, me is the electron mass and Te is the electron kinetic energy. The form factor FKYG(Te, β) is the sum of the contributions corresponding to the helicity flipped and non-flipped final states.

The magnetic monopole does not itself need to exceed the Cherenkov threshold to produce δ-electrons, but only exceed a speed of β ∼ 0.5. At speeds of

β = 0.5, 0.8 and 0.995 a total of 2.08 × 10⁴, 5.16 × 10⁶ and 3.14 × 10⁷ photons are produced per meter (see Figure 4.1). This means that indirect Cherenkov light yields a detection channel for magnetic monopoles with speeds down to β ∼ 0.5, and also gives a significant contribution to the total light yield of monopoles with speeds above the Cherenkov threshold.

4.3 Magnetic Monopole Signatures in IceCube

The analysis that is described in this thesis is designed to detect magnetic monopoles with a speed above the Cherenkov threshold. These monopoles dominantly produce light via the Cherenkov process, and subdominantly as indirect Cherenkov light via δ-electrons. Direct Cherenkov light is produced with a characteristic shape around the magnetic monopole trajectory, which forces monopole events to have several characteristic signatures if detected with the IceCube detector array. In order for an event to be identified as a monopole event, it needs to satisfy each of these signatures. Several illustrations of possible event signatures in IceCube are included in Figures 4.2 and 4.3.

The major distinguishing signature of a magnetic monopole event in IceCube is its brightness, i.e. the large amount of produced, and detected, light. A magnetic monopole would produce almost four orders of magnitude more direct Cherenkov light than a muon. Similar to an ultrarelativistic magnetic monopole, a muon passing through the IceCube detector volume may also produce light through stochastic collisions with the medium, resulting in a non-uniform light pattern in the detector. However, the majority of muon events will exhibit less light than the average magnetic monopole event, due to the large effective charge of the monopole. Compare Figure 4.2a (monopole) with Figures 4.2b and 4.2c (dim track and cascade).

Additionally, the magnetic monopole event should be non-starting/-stopping, i.e. the event should neither appear to originate from inside the detector volume nor finish before exiting the detector. The monopole should pass through the detector with negligible changes in direction and speed due to its high penetrative power. Compare Figure 4.2a (monopole) with Figures 4.3a and 4.3b (cascade and starting track).
Cherenkov radiation is also produced with a highly consistent number of photons per unit length, yielding a very smooth, non-stochastic, light output over the length of the detector. Compare Figure 4.2a (monopole) with Figure 4.3c (stochastic track).

Finally, as this work aims to discover magnetic monopoles in the speed range β ∈ [0.750,0.995], the final event signature is the subluminal speed of the primary particle in the event.

The large fiducial volume, the dense and uniform instrumentation, and the excellent timing resolution of IceCube make it ideally suited to detect a cosmic flux of magnetic monopoles above the Cherenkov threshold.

(a) A magnetic monopole event.

(b) A dim cascade-like event.

(c) A dim track-like event, with minor stochastic losses.

Figure 4.2. Illustration of the IceCube detector volume (blue) along with a registered event. The color scale corresponds to the color scale of an actual event view, where red → blue represents early → late detected pulses.

(a) A bright cascade-like event.

(b) A bright track-like event with stochastic energy losses, starting inside the detector volume.

(c) A bright track-like event with stochastic energy losses, starting outside of the detector volume.

Figure 4.3. Illustration of the IceCube detector volume (blue) along with a registered event. The color scale corresponds to the color scale of an actual event view, where red → blue represents early → late detected pulses.

5. Magnetic Monopole Event Simulation in IceCube

This chapter covers the methods that are used to produce Monte Carlo (MC) simulated magnetic monopole events in IceCube, as well as the validation of the simulation process against theoretical prediction and experimental data. The event simulation process can be divided into several steps — particle generation, particle propagation, light generation, light propagation, and light detection.

In the particle generation step of the simulation process the initial parameters of the simulated particles are determined. These include the particle type, initial energy (or speed), and initial position and direction. The generation step can either involve an isotropic incident flux of particles within a large energy or speed range, or be conducted with more specific initial parameters.

In the particle propagation step, the generated particles are propagated through the detector until they either propagate too far away or lose enough energy to no longer produce a significant amount of light. The propagation step includes simulating any stochastic scattering that the particle might experience, along with any decay and resulting daughter particles.

The propagation step yields a list of particles that produce light in the detector. This list is evaluated in the light generation step of the simulation process, where the appropriate light production processes are set up for each particle, and the Monte Carlo photons are generated. The photons are then propagated through the detector in the light propagation step. This ties into the light detection step where the detector acceptance to incident photons is accounted for.

5.1 Magnetic Monopole Generation

The IceCube magnetic monopole generation software allows both for a general purpose generation of magnetic monopoles, and a targeted generation using specific generation parameters. This is achieved by allowing tuning of each parameter of the software to the specific use-case. Several of the simulation parameters can be set to hold a specific value or to be randomly sampled from a user-specified probability function. Each magnetic monopole event is generated independently and assigned the following parameters:


Figure 5.1. Illustration of the placement of the generation disk (red) around the IceCube detector volume (blue) and three examples of generated magnetic monopole particles (green). The center of the IceCube detector is marked by (0,0,0)IC, and the generation disk radius and the distance between the generation disk and the center of IceCube are labeled R and d, respectively. Figure by the author.

• Speed, β = v/c
• Mass, mMM
• Initial position, x̄ = (x,y,z)
• Direction, (θzen, φazi)

The speed of the magnetic monopole is given by the user, either as one specific value, or sampled from a given distribution (uniform or a power law) with specified boundaries. The mass of the magnetic monopole is given by the user as a single value. It is common to choose a mass that allows the monopole to pass "un-scattered" over the length of the detector, i.e. a mass for which the velocity vector of the monopole is not expected to change significantly. This is then generalized to represent a wide variety of masses where this assumption is valid. The default value for the magnetic monopole mass in the IceCube simulation software is 10¹¹ GeV, which is also used in this analysis.

The initial position of the magnetic monopole is generated at random coordinates on a generation disk placed close to the detector volume. The generation disk is placed with its normal vector directed towards the center of the

IceCube detector volume, (0,0,0)IC, and the magnetic monopole is directed along the normal vector of the generation disk. The monopole generation procedure is illustrated in Figure 5.1. The direction of the disk normal vector is sampled separately for each monopole event from a user specified interval, where the default settings yield an isotropic flux of magnetic monopoles. The radius of the generation disk and the distance between the disk and the center of IceCube are set by the user, and both have standard values of 1000 m. Two separate studies were performed in connection with this analysis to determine appropriate values for these parameters.
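The generation geometry described above can be sketched as follows. The function name and vector algebra are illustrative (not the IceCube generator code); the defaults follow the 1000 m standard values mentioned in the text.

```python
import math
import random

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _normalize(a):
    m = math.sqrt(sum(x * x for x in a))
    return tuple(x / m for x in a)

def generate_monopole(rng, radius=1000.0, distance=1000.0):
    """Sample an initial monopole position and direction (illustrative).

    The disk normal is drawn isotropically and points at the detector
    center (0,0,0)_IC; the start point is uniform over the disk.
    """
    cos_zen = rng.uniform(-1.0, 1.0)               # isotropic flux direction
    azi = rng.uniform(0.0, 2.0 * math.pi)
    sin_zen = math.sqrt(1.0 - cos_zen ** 2)
    n = (sin_zen * math.cos(azi), sin_zen * math.sin(azi), cos_zen)
    center = tuple(-distance * c for c in n)       # disk center, distance d away
    r = radius * math.sqrt(rng.random())           # uniform over the disk area
    phi = rng.uniform(0.0, 2.0 * math.pi)
    ref = (0.0, 0.0, 1.0) if abs(n[2]) < 0.9 else (1.0, 0.0, 0.0)
    u = _normalize(_cross(n, ref))                 # basis spanning the disk plane
    v = _cross(n, u)
    pos = tuple(center[i] + r * (math.cos(phi) * u[i] + math.sin(phi) * v[i])
                for i in range(3))
    return pos, n  # initial position and travel direction
```

The square-root sampling of the radial coordinate makes the points uniform over the disk area, and every generated monopole travels along the disk normal, as described above.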

5.1.1 Generation Disk Radius

In order to ensure that a sample of simulated magnetic monopole events appropriately represents the assumed natural magnetic monopole flux, the radius of the generation disk must be adequately large. If a too small value is chosen, the simulated sample will be lacking events that in reality would trigger a detector response. If, on the other hand, a too large value is chosen, a large number of magnetic monopole events would be generated with a trajectory far outside of the instrumented volume (that therefore do not trigger the detector), which would be computationally expensive. Therefore, a study was conducted to determine the minimal sufficiently large generation disk radius, and whether or not the 1000 m default value is appropriate to use in this analysis.

A total of 2.3 × 10⁴ magnetic monopole events were simulated in the speed interval β ∈ [0.800;0.995] with an isotropic flux generator configuration and a generator disk radius of R = 1200 m. Lower speed monopoles, down to the Cherenkov threshold, were excluded as these events would exhibit less light than higher speed monopoles, which is counterproductive in a study regarding the farthest radial distance a monopole can have and still trigger a detector response. The light yield for magnetic monopole events within β ∈ [0.800;0.995] is relatively similar.

The closest approach distance between each monopole trajectory and the center of the IceCube detector volume was recorded, as this is geometrically identical to the radial coordinate of the monopole generation position on the generation disk. The closest approach between a particle track and the center of IceCube is called the centrality of the particle track, dC.

Just over half of the generated magnetic monopole events, 52.6 %, induced a trigger in IceCube. The remaining monopoles passed too far from the detector volume to yield enough light in the detector to induce a trigger.
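The centrality used in this study is simply the point-line distance between the (straight) track and the detector center; a minimal sketch (illustrative code):

```python
import math

def centrality(point, direction):
    """Closest approach of the line point + t*direction to (0,0,0)_IC.

    `direction` is assumed to be a unit vector; illustrative code.
    """
    along = sum(p * d for p, d in zip(point, direction))      # projection on track
    perp = [p - along * d for p, d in zip(point, direction)]  # perpendicular part
    return math.sqrt(sum(c * c for c in perp))
```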
The distribution of the triggered monopoles over the centrality variable can be found in Figure 5.2. Included here is also the average distribution of the simulated flux of magnetic monopoles. It was found that no magnetic monopole with a centrality larger than ∼1060 m induced a trigger in IceCube. Therefore, the default disk radius value of 1000 m is disregarded, and a value of 1100 m is adopted for further simulations. Note that the present monopole trigger acceptance is ideal for smaller generation radial distances, with centrality dC ≲ 700 m.

Figure 5.2. The distribution of the detector triggering subsample of a simulated set of magnetic monopoles over their radial generation coordinate (red), generated with a disk radius of 1200 m. The radial generation coordinate is equal to the centrality of the monopole track, dC, which is the closest approach distance between the monopole track and the center of the IceCube detector. Included is also the average distribution of the simulated flux of magnetic monopoles (blue), with statistical uncertainty.

5.1.2 Generation Disk Distance

The distance between the generation disk and the center of IceCube is by default set to 1000 m. This section covers a confirmation study showing that the default value of 1000 m is large enough. A too small value for this distance could result in the magnetic monopole appearing to have been generated inside (or close to) the detector volume, as the amount of detected light in the outer DOM layers of the detector would be too small. An estimation of the minimum disk distance that appropriately emulates an infinite track that enters the detector can be obtained from first principles based on the absorptivity of the deep glacial ice. Typically, light that is produced further away from the instrumented volume than the characteristic absorption length will not be registered by the detector, which therefore indicates the minimum distance between the generation disk and the edge of the detector.

The described effect is most easily identified when the magnetic monopole trajectory is parallel to the IceCube strings, as the monopole then passes a series of consecutive DOM layers when traversing the detector, where the detected light can be measured as a function of depth. This is achieved when the magnetic monopole enters from e.g. the top of the instrumented volume, where the characteristic absorption length is ∼100 m. A dedicated Monte Carlo study was performed to validate this prediction, thus considering the full optical properties of the ice and the detector triggering logic.

Figure 5.3. Illustration of the IceCube detector volume (blue) along with the generation disks (red) that were simulated to determine the appropriate distance between the generator disk and the center of IceCube. Figure by the author.

A total of 6 different distances between the generation disk and the center of IceCube were tested, ranging from 500 m to 1000 m with an increment of 100 m. The generation disk was positioned directly above the detector volume with the normal directed parallel to the strings. The instrumented volume starts around 500 m above the center of the detector, which means that a disk distance of 500 m corresponds to the generation disk being placed directly at the edge of the detector volume. Correspondingly, a 1000 m disk distance places the disk 500 m outside of the instrumented volume. The radius of the disk was set to 250 m in this study to avoid simulating partially contained events along the edges of the detector. An illustration of the generation disk configurations above the IceCube detector volume is included in Figure 5.3.

For each tested disk distance a total of 1000 events were simulated in the speed interval β ∈ [0.800;0.995]. Lower speed monopoles, down to the Cherenkov threshold, were excluded as their light yield decreases rapidly with decreasing speed, while it remains relatively stable within β ∈ [0.800;0.995]. The small generation disk radius forced all magnetic monopoles to pass close to the center of IceCube, so all simulated events induced a trigger.

For each simulated event the detected light was summed per horizontal DOM layer. For each tested disk distance, the average registered charge per layer of DOMs was then calculated. The average registered charge per DOM layer is shown in Figure 5.4 for each tested disk distance.

Figure 5.4. The average registered charge per DOM layer for each tested disk distance. The variation over DOM layers is an effect of the depth-dependent optical properties of the Antarctic ice (see Chapter 3.1.3, compare to Figure 3.4).

It was found that a disk placed 500 m from the center of IceCube, i.e. directly above the instrumented volume, yielded a lower average registered charge in the most shallow DOM layers in comparison with the larger tested disk distances. As no deviation was found for larger disk distances (600 m and upwards), it was concluded that it is sufficient to place the disk at least 100 m above the instrumented detector volume, corresponding to the characteristic absorption length at this depth. The absorption length of the deep glacial ice is typically less than 200 m (maximum ∼250 m). This is combined with the largest distance between a DOM and the center of the detector, 764 m, to conclude that the default value of the disk distance parameter, 1000 m, can be accepted for use in this analysis.

5.2 Magnetic Monopole Propagation

The propagation of a magnetic monopole in the IceCube software is based on the assumption that the magnetic monopole has negligible interactions over its propagation in the detector volume. This implies that the monopole experiences negligible changes to its velocity vector (direction and speed) as it propagates through the full length of the detector.

The magnetic monopole propagation is realized as a series of consecutive track segments with a set length, s. The first segment is initiated at the coordinates given by the generation procedure, and is extended by s along the direction vector from the generation. The starting point of the track is given the time coordinate 0, and the end point the time coordinate s/(βc), representing the time it would have taken a magnetic monopole to propagate along the segment. Each following track segment is initiated at the end (t, x̄) coordinates of the previous segment, and given a length s and duration s/(βc). The series is finished when the sequence has propagated far outside of the detector volume. The length of a track segment, s, can be specified by the user, and has a default value of 10 m.
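The segmentation scheme can be sketched as follows (illustrative code, not the IceCube propagator): each segment of length s advances the position along the fixed direction and the time by s/(βc).

```python
C_VAC = 2.998e8  # speed of light in vacuum, m/s

def segment_track(start, direction, beta, total_length, s=10.0):
    """Split a straight monopole track into segments of length s.

    Returns the (t, x, y, z) starting coordinates of each segment;
    `direction` is assumed to be a unit vector. Illustrative code.
    """
    dt = s / (beta * C_VAC)  # duration of one segment
    segments = []
    for i in range(int(total_length // s)):
        pos = tuple(start[j] + i * s * direction[j] for j in range(3))
        segments.append((i * dt,) + pos)
    return segments
```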

5.3 Light Production and Detection

The IceCube light production simulation is based on the series of particle segments that are created by the particle propagation procedure. This procedure is identical for any type of light-producing particle, e.g. a magnetic monopole or a muon. For each particle segment in turn, the relevant light production processes are evaluated, and the appropriate amount of light with the appropriate angular distribution is simulated. A magnetic monopole in the speed range β ∈ [0.750;0.995] mainly produces light through the Cherenkov process. The light yield is thus calculated with the Frank-Tamm formula for magnetic monopoles (Equation 4.3), using the effective charge of the monopole, its speed and the refractive index of the ice. The sub-dominant light production process in this speed range is indirect Cherenkov light, where the simulated light yield is based on Equation 4.4 in conjunction with the Frank-Tamm formula. The light propagation and detection are governed by the ice-model of the IceCube simulation software. Here, “ice-model” is a slightly misleading term, as it not only describes how light propagates through the detector medium (the ice), but also how it propagates through the DOM glass housing, optical gel and PMT until its conversion to photo-electrons. An IceCube ice-model is defined by several sets of parameters, each guiding a different aspect of the light propagation and detection. Three of these parameter categories are particularly important for the uncertainty studies conducted in this analysis, and will therefore be highlighted below.

The scattering and absorption parameters in the ice-model represent the characteristic scattering and absorption lengths of photons propagating in ice. They are parametrized as functions of depth and photon wavelength. The DOM efficiency parameter in the ice-model represents the compounded probability that a photon that reaches the surface of a DOM will be registered as a detector hit. It comprises several different effects that together affect the probability that an incident photon gives rise to a photo-electron in the PMT, e.g. the transmittance of the DOM glass housing and optical gel, the PMT efficiency, and the shadowing effect of the cable running alongside the DOM. The magnitude of each of these effects has been measured in the laboratory, in situ, or with a combination of the two. Despite the outer spherical symmetry of a DOM, it does not have isotropic photo-sensitivity. The inner geometry of the DOM is (close to) rotationally symmetric around the vertical axis, with the active detection unit (the PMT) directed downwards. Thus, the photo-sensitivity of the DOM is assumed to have the corresponding symmetry. Therefore, the DOM angular sensitivity parameters in the ice-model, p0 and p1, describe the detection probability for a photon that reaches the DOM from a particular incident zenith direction. The default settings yield zero efficiency for photons incident from directly above the DOM, and maximum efficiency for photons from almost directly below. In addition to the ice-model, the detector response is governed by the detector configuration file, which describes the status of each part of the detector at a given time. This is where information about individual DOMs is stored, e.g. their spatial coordinates and whether they are active or not. At the start of each detector season, the current detector configuration is extracted to represent the season in Monte Carlo event production.
Notice that the detector configuration file must be chosen by the user, and that the configuration differs somewhat from year to year.
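As a rough illustration of the dominant light production mechanism in this speed range, the sketch below evaluates a Frank-Tamm-style photon yield. Equations 4.3 and 4.4 are not reproduced in this chapter, so the sketch assumes the standard substitution for a magnetic charge in a dielectric medium, replacing the electric charge number z by (g/e)·n·β with g = 68.5 e (one Dirac charge); the function name, constants and wavelength band are illustrative assumptions, not the IceCube implementation:

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant
G_OVER_E = 68.5        # one Dirac magnetic charge in units of e (assumption)

def cherenkov_yield(beta, n=1.33, lam1=300e-7, lam2=600e-7, monopole=True):
    """Photons emitted per cm in the wavelength band [lam1, lam2] (cm).

    Frank-Tamm photon yield for a particle of speed beta in a medium of
    refractive index n. For a monopole the effective squared charge is
    taken as ((g/e) * n * beta)**2 -- the usual substitution for a
    magnetic charge; this is a sketch, not Equation 4.3 itself.
    """
    if n * beta <= 1.0:
        return 0.0  # below the Cherenkov threshold
    z2_eff = (G_OVER_E * n * beta) ** 2 if monopole else 1.0
    return (2.0 * math.pi * ALPHA * z2_eff
            * (1.0 - 1.0 / (beta * n) ** 2)
            * (1.0 / lam1 - 1.0 / lam2))
```

With these assumptions, a fast monopole out-shines a unit-charge particle at the same speed by a factor ((g/e)·n·β)², i.e. several thousand near β = 1, which is the qualitative reason the signal events of this analysis are so bright.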

5.4 Simulation Validation

In order to produce reliable results, it must be validated that the simulation procedure that is used in an analysis represents the experiment well. This is commonly done by comparing the event distributions of experimental data and simulated event samples over several variables relevant to the analysis. It may also be done by comparing features of the simulated events with the results of analytical calculations. The modular design of the IceCube simulation chain allows all IceCube simulation to be produced with common software for the photon propagation and detection. This, in turn, means that validation of the simulation chain performed within one analysis may transfer to another analysis with similar event characteristics.

Figure 5.5. Experimental data and simulated event samples (atmospheric muons, atmospheric neutrinos, GZK neutrinos) at the EHE filter level over the base-10 logarithm of the number of registered photo-electrons, log10 NPE, and the cosine of the reconstructed zenith direction of the event, cos θ. Credit: Figure 4 from reference [60].

5.4.1 Validation with Experimental Data

Two validation procedures are described below, both performed within IceCube analyses that are relevant for the analysis that is presented in this thesis. The first, the Extremely High Energy (EHE) analysis, is described in Appendix A, and was developed to search for a population of high energy neutrinos (typically Eν ≳ 10⁸ GeV) that were produced in cosmic ray interactions with the cosmic microwave background (via the Greisen-Zatsepin-Kuzmin (GZK) mechanism). The EHE analysis event selection is used in the first step of the present analysis. The second analysis, measuring the flux of high energy astrophysical muon neutrinos from the northern hemisphere, is described in Chapter 8.3.2. The resulting flux measurement of this analysis is assumed as the background astrophysical neutrino flux in the present analysis. The variables that were used in the EHE analysis event selection have been extensively validated through comparison between simulated event samples and experimental data [60]. An example of this is shown in Figure 5.5, where experimental data collected during the first complete detector season, IceCube-86 I, is compared with simulated event samples representing the signal and background of the EHE analysis over two variables. The data is shown at the EHE filter level, over the base-10 logarithm of the number of registered photo-electrons, log10 NPE, and the cosine of the reconstructed zenith direction of the event, cos θ. The mass composition of the ultra-high energy cosmic ray (UHECR) flux is currently unknown, but is commonly bracketed

(a) The data distributions over the cosine of the reconstructed zenith angle of the event.

(b) The data distributions over a muon energy proxy variable.

Figure 5.6. The experimental data and simulated neutrino event samples (astrophysical neutrinos, prompt atmospheric neutrinos, conventional atmospheric neutrinos) that are included in the analysis over the cosine of the reconstructed zenith angle and a muon energy proxy variable. Notice that the sum of the simulated neutrino distributions largely falls directly on top of the conventional atmospheric distribution. Credit: Figures 1 and 2 from reference [61], respectively.

by two extreme composition cases — pure proton and pure iron nuclei [60]. The experimental data lies between the atmospheric muon flux expectations given by the two extreme UHECR cases in both of the tested variables, thus implying an agreement between the simulated samples and experimental data. A similar comparison was made in the second analysis. An example of this is displayed in Figure 5.6, where data collected during the detector seasons IceCube-79 and -86 I is shown along with simulated samples of astrophysical neutrinos, prompt atmospheric neutrinos and conventional atmospheric neutrinos [61]. Atmospheric muons do not enter this analysis, as they cannot traverse the Earth and therefore only arrive at the detector from above (the southern hemisphere). The data is shown over the cosine of the reconstructed zenith angle and a muon energy proxy variable, and good agreement is seen between the experimental data and the simulated event samples. Note that the high event count in a bin around the muon energy proxy value 1.4 × 10⁵ is attributed to a statistical fluctuation; a fluctuation of this size would arise in 9 % of an ensemble of identical experiments. This is supported by the fact that the present resolution of the energy proxy variable would smear any feature based in physics into surrounding bins [61].

5.4.2 Magnetic Monopole Light Yield Validation

Within the context of an IceCube analysis searching for magnetic monopoles within the speed range β ∈ [0.1;0.5], a study was conducted to confirm that the magnetic monopole light yield corresponds well to theoretical prediction [62]. This study was conducted by simulating a sample of magnetic monopoles with isotropic angular distribution around the detector for 17 discrete values of β, evenly spaced from β = 0.1 to 0.9. For speeds β ≤ 0.75 a total of 10⁵ events were simulated per speed, while 10³ events were simulated for each of the higher speeds. The result of this study is shown in Figure 5.7. Here, the left y axis, along with the analytical curves, indicates the theoretical light yield prediction of a magnetic monopole, given by luminescence light, indirect Cherenkov light and direct Cherenkov light. The right y axis, along with the included data points, indicates the average number of photo-electrons detected per event, N̄PE. The two y axes are normalized by aligning their values for the highest tested speed, β = 0.9. This point was separately validated through independent calculation and comparison. Note the two separate sets of data points. The first, denoted by arriving at DOM, accounts for all photons that produce a pulse in an optical module. The second, denoted by recorded by in-ice arrays, represents all photons that are part of a data acquisition event in IceCube. The difference between the two sets of data points in the low speed region is attributed to the low brightness

[Figure 5.7 shows log10(dNγ/dx / cm⁻¹) (theory, left axis) and log10 N̄PE (simulation, right axis) versus β, with curves for direct Cherenkov, indirect Cherenkov and luminescence light, data points for the total light arriving at DOM and recorded by in-ice arrays, and a reference point aligning the two axes.]

and low speed of these monopoles, which results in a decreased trigger rate and an increased event truncation probability. The conclusion of this study was that the magnetic monopole light yield in simulated events corresponds well to the analytically predicted light yield.

Figure 5.7. The theoretical light yield prediction of magnetic monopoles (blue) along with the average number of detected photo-electrons in simulated magnetic monopole events (red), as functions of the monopole speed. Credit: F. Lauber [62].

6. Event Cleaning and Reconstruction Methods

In order to properly understand a recorded event in a detector, the low level recorded data must be combined into a coherent picture. This can be done by combining the low level data into more complex features of the detection. This process is called reconstruction, and the derived features are called reconstructed features. Reconstruction can be divided into two main categories — low level reconstructions and high level reconstructions. Low level reconstructions concern transforming the raw electrical signal that is output from each detector module into a pulse that represents the incident signal. High level reconstructions concern combining the individual incident signals to deduce more physically relevant features of the event, such as the flavor or energy of the primary particle. This type of reconstruction is the topic of this chapter. High level reconstruction can be done to a desired complexity — where the rule of thumb is that simple reconstructions are computationally fast, e.g. fitting a simple function to the collected time series of pulses, while more complex reconstructions are slower. Therefore, a common strategy entails performing fast and basic reconstructions early in the data selection scheme, when the data volume typically is large, and only performing the more advanced and computationally heavy reconstructions later in the selection scheme, when the data volume is smaller. In order to implement an accurate reconstruction scheme, the features of the event that likely originate from the targeted physical phenomena must be selected, while disregarding other features. This process is known as event cleaning. In IceCube this typically entails removing DOM hits that are determined to originate from stochastic noise or coincident atmospheric muons. In this chapter, the event cleaning and reconstruction methods that are used in this analysis are listed.
The effects of several of the cleaning and reconstruction algorithms are exemplified with event views in Figures 6.1 and 6.2.

6.1 Event Cleaning Algorithms

6.1.1 The SeededRadiusTime Cleaning Method

The SeededRadiusTime (SRT) event cleaning algorithm is designed to accept detected light pulses that are close to each other in space and time. It is often used in conjunction with the TimeWindow cleaning algorithm.

The SRT algorithm accepts a time duration, TSRT, and a spatial radius, RSRT, as input, along with the name of the pulse-map that will be cleaned. First, a set of seed pulses is identified as the set of all HLCs with at least two other HLCs within the chosen radius-time interval. The seed pulses are registered on a list of pulses accepted by the algorithm. Next, all pulses (SLC and HLC) within the input radius-time range of any seed pulse are added to the accept list. A new set of seed pulses is now defined including any HLC present on the accept list, and the procedure is repeated. This is repeated a maximum of 3 times, or until no more pulses can be added to the accept list. Finally, the algorithm produces an output in the form of a pulse-map containing the pulses on the accept list. The values of TSRT and RSRT must be input by the analyzer, and appropriate values differ depending on the type of event that is cleaned. A common choice for very bright events is a time duration of TSRT = 1000 ns and a radius of RSRT = 150 m. The effect of the SRT cleaning method in combination with the TimeWindow algorithm is exemplified in Figure 6.1, where the two methods have been applied in the order SeededRadiusTime → TimeWindow.
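The seeding-and-growing procedure above can be sketched as follows. This is a minimal illustration with a hypothetical pulse layout; the real IceCube module operates on pulse-maps:

```python
# Minimal sketch of the SeededRadiusTime cleaning described above.
# A pulse is a tuple (t [ns], x, y, z [m], is_hlc); layout is hypothetical.
def _close(p, q, r_max=150.0, t_max=1000.0):
    """True if two pulses lie within the radius-time interval."""
    dr = sum((a - b) ** 2 for a, b in zip(p[1:4], q[1:4])) ** 0.5
    return dr <= r_max and abs(p[0] - q[0]) <= t_max

def srt_clean(pulses, max_iter=3):
    hlc = [p for p in pulses if p[4]]
    # Seeds: HLC pulses with at least two other HLCs nearby in (R, T).
    accepted = {p for p in hlc
                if sum(_close(p, q) for q in hlc if q is not p) >= 2}
    for _ in range(max_iter):
        seeds = [p for p in accepted if p[4]]  # re-seed with accepted HLCs
        new = {p for p in pulses
               if p not in accepted and any(_close(p, s) for s in seeds)}
        if not new:
            break
        accepted |= new
    return [p for p in pulses if p in accepted]
```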

6.1.2 The TimeWindow Cleaning Method

The TimeWindow (TW) event cleaning algorithm is a basic event cleaning algorithm commonly used in IceCube, and often applied in conjunction with the SRT algorithm. The TW method accepts a pulse-map as input, along with a fixed time window width, TTW. First, the pulses in the pulse-map are ordered in time. Next, a time window with duration TTW is set up, and its starting time is shifted until the largest number of registered pulses within the time window is found. The pulses inside this time window are accepted, while the remaining pulses are discarded. The TW algorithm produces a cleaned pulse-map as output. Similar to the SRT algorithm, the appropriate width of the time window depends on the type of the event that is cleaned. A common choice for very bright events is a time window of TTW = 6000 ns. The TW time window is typically much longer than the SRT time window, as the TW algorithm applies its time window once per event, considering the full duration, while the SRT algorithm applies the time window many times per event, once per registered pulse. The effect of the SRT and TW cleaning methods (applied in the order SRT → TW) is exemplified in Figure 6.1.
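The sliding-window maximization can be sketched directly. In this illustrative sketch (hypothetical data layout, not the IceCube implementation), candidate windows are anchored at pulse times, which is sufficient because an optimal window can always be shifted to start on a pulse:

```python
import bisect

def tw_clean(pulses, t_window=6000.0):
    """Keep the pulses inside the fixed-width time window that contains
    the largest number of pulses. pulses: list of (t, ...) tuples."""
    times = sorted(p[0] for p in pulses)
    best_start, best_count = times[0], 0
    for i, t0 in enumerate(times):
        # Number of pulses with time in [t0, t0 + t_window].
        j = bisect.bisect_right(times, t0 + t_window)
        if j - i > best_count:
            best_count, best_start = j - i, t0
    return [p for p in pulses
            if best_start <= p[0] <= best_start + t_window]
```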

6.2 Event Reconstruction Algorithms

In this analysis, both simple event reconstruction methods and more advanced algorithms are employed. A common practice is that an advanced reconstruction algorithm employs a simple reconstruction algorithm, or a variation thereof, as a sub-procedure in its larger reconstruction scheme. Examples of this can be found among the methods used in this analysis.

6.2.1 A Particle Track Representation

The term “track” is commonly used in IceCube to refer to the event class that features an elongated light signature, as opposed to the spheroid signature of the cascade event class. A track, when used in the context of event reconstruction features, refers to a geometrical representation of a particle trajectory with constant speed through the detector (or outside of it). This track carries both spatial and time information about the position of the particle, and is usually defined as an infinite straight line along which the particle propagates. A track is specified by the following parameters:

x̄0 — Spatial coordinates of an arbitrary point along the track
t0 — Time coordinate of the same arbitrary point along the track
x̂ — A directional unit vector
v — The particle speed, v = βc

These parameters together define all (t, x̄) points along the track. In this analysis, a track is also represented by two derived properties.

GeometricLength — The geometrical distance between the entry and exit points of the track in the detector volume. If the track does not enter the detector, but only propagates outside of it, the GeometricLength is 0 m.
Centrality — The closest approach distance between the track and the center of IceCube. This defines the centrality point, i.e. the point along the track that is closest to the detector central point.
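The track parameters and the Centrality property can be captured in a small container. This is an illustrative sketch with hypothetical names; GeometricLength is omitted since it requires the detector geometry:

```python
import math
from dataclasses import dataclass

@dataclass
class Track:
    """Straight-line track: x(t) = x0 + v * xhat * (t - t0)."""
    x0: tuple    # (x, y, z) of an arbitrary on-track point [m]
    t0: float    # time coordinate of that point [ns]
    xhat: tuple  # directional unit vector
    v: float     # particle speed, v = beta * c [m/ns]

    def centrality(self, center=(0.0, 0.0, 0.0)):
        """Closest approach distance between the infinite track line
        and the detector center (point-to-line distance)."""
        d = [c - p for c, p in zip(center, self.x0)]
        proj = sum(di * ui for di, ui in zip(d, self.xhat))
        perp2 = sum(di * di for di in d) - proj * proj
        return math.sqrt(max(perp2, 0.0))
```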

6.2.2 The LineFit Track Reconstruction Method The LineFit track reconstruction method (LF) is a first-guess algorithm that is used to obtain a quick estimation of the primary particle trajectory. It ignores the propagation of the photons in the detector medium, and instead assumes that each detected light pulse is an independent measurement of the primary particle position, as it moves along a straight line. The LF method accepts a pulse-map as input, and produces an output in the form of a track. The track is found by minimizing the following function:

χ² = ∑_{i=1}^{N} (x̄_i − x̄_0 − v̄_LF (t_i − t_0))²    (6.1)

Here, N is the total number of pulses, where (t_i, x̄_i) represent the pulse time and DOM position coordinates of pulse number i. Additionally, (t_0, x̄_0) is an arbitrary coordinate along the track and v̄_LF = v x̂ is the velocity vector. The χ² minimization is done with respect to the track parameters x̄_0, t_0, v and x̂. Before applying the χ² minimization procedure, the pulse-map is cleaned of outlier pulses and pulses determined to likely originate from noise. This cleaning is designed with the focus of enhancing the LF reconstruction performance, and is thus applied regardless of whether other cleaning methods have been applied beforehand. The cleaning is done by first rejecting pulses that are detected late in relation to pulses in neighboring DOMs, as well as rejecting pulses that are too far from a first approximation of the track (obtained by a Huber regression on the pulse-map, which penalizes outlier pulses). The main component of the LineFit algorithm, the χ² minimization, is analytically solvable, which makes it a very fast reconstruction algorithm. However, it entirely ignores that each light pulse is generated by a photon that has propagated a non-zero distance away from the primary particle, a propagation that depends on the scattering and absorption parameters of the detector medium, as well as the characteristic shape of the Cherenkov cone. These approximations reduce the accuracy of the algorithm. On the other hand, several of the advanced IceCube track reconstruction methods are unsuitable for use in the analysis that is presented in this thesis, as they assume that the primary particle is a muon. This includes assumptions on stochastic energy losses along the track as well as an assumption that the particle is propagating with the speed of light, both of which are incompatible with the physics of a magnetic monopole event in the speed interval relevant to this analysis.
The LineFit algorithm makes no assumption on the shape of the detected light and, importantly, leaves the speed of the fitted track as a free parameter.
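Because the χ² of Equation 6.1 is quadratic in the track parameters, its minimizer is an ordinary least-squares regression of DOM position against pulse time, solved per coordinate. A minimal sketch, fixing t0 = 0 and omitting the outlier/Huber pre-cleaning described above (names are illustrative):

```python
def linefit(pulses):
    """Closed-form minimizer of the LineFit chi-square.

    pulses: list of (t, x, y, z). Returns (x0, v) such that the fitted
    track is x(t) = x0 + v * t (i.e. t0 is fixed to 0). Each component
    is an ordinary least-squares fit of position versus time.
    """
    n = len(pulses)
    t_mean = sum(p[0] for p in pulses) / n
    var_t = sum((p[0] - t_mean) ** 2 for p in pulses) / n
    x0, v = [], []
    for k in (1, 2, 3):  # x, y, z components
        x_mean = sum(p[k] for p in pulses) / n
        cov_tx = sum((p[0] - t_mean) * (p[k] - x_mean) for p in pulses) / n
        vk = cov_tx / var_t
        v.append(vk)
        x0.append(x_mean - vk * t_mean)
    return tuple(x0), tuple(v)
```

The fitted speed is the norm of v, so a reconstructed β falls out as a free parameter, as required for sub-luminal monopoles.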

6.2.3 The EHE Reconstruction Suite

The EHE reconstruction suite is a set of reconstruction algorithms that produces a number of variables that are used to evaluate if an event passes the EHE filter (see Chapter 3.2.1). It starts by producing a custom pulse-map directly from the recorded DOM launch data, selecting only HLC pulses. Based on this pulse-map several collective statistics are produced, such as the number of registered photo-electrons, nPE, and the number of triggered detector channels (DOMs), nCH. Next, any pulses registered with the dedicated DeepCore strings are discarded, as the high DOM density of DeepCore otherwise biases further clean-

(a) Event view with the InIcePulses pulse-map and the true particle track.

(b) Event view with the InIcePulsesSRTTW pulse-map, the true particle track, and the EHE and BM reconstructed tracks.

Figure 6.1. Event views of the same simulated magnetic monopole event before (a) and after (b) event cleaning using the SRT and TW methods. The combined effect of the SRT and TW methods is most distinct when comparing (a) and (b) in the top-left and bottom-right outlier regions. The red, green and blue lines represent the true track of the magnetic monopole and the EHE and BM reconstructed tracks, respectively. See Chapter 3.5 for a description of how to interpret an event view.

(a) Event view with the InIcePulsesSRTTW pulse-map, the true particle track, the BM reconstructed track and the Millipede fitted series of energy losses.

(b) Event view with the BrightestMedian pulse-map, the true particle track, and the BM reconstructed track.

Figure 6.2. Additional event views of the same simulated magnetic monopole event as in Figure 6.1. The red and blue lines represent the true track of the magnetic monopole and the BM reconstructed track, respectively. The larger spheres along the BM track in (a) represent the Millipede fitted series of energy losses, with the size of the sphere representing the magnitude of the energy loss and the color scale corresponding to the color scale of the displayed DOMs. See Chapter 3.5 for a description of how to interpret an event view.

ing and reconstruction algorithms. The custom pulse-map is now cleaned using the SRT and TW methods. The SRT cleaning is applied using a default time of 1000 ns and a radius of 150 m. The TW cleaning is applied with a default time window of 6000 ns. The LineFit track reconstruction is applied on the cleaned pulse-map, which yields a best fit track for the registered pulses. The track fitting quality is preserved in the form of a reduced χ² value, where the number of degrees of freedom is taken as the number of track parameters subtracted from the number of DOMs with registered charge. An example of an EHE reconstructed track can be found in Figure 6.1b, along with the InIcePulsesSRTTW pulse-map.

6.2.4 The CommonVariables Event Characterization Suite

The CommonVariables (CV) event characterization suite is a collection of intuitively simple event characterization packages. Each of the CommonVariables packages focuses on a different type of event signature, e.g. the pulse timing information, and uses this to produce a number of statistics about the event. The CommonVariables suite was developed within IceCube with the purpose of providing a set of variables that could be useful in a wide variety of analyses. In this analysis three different characterization packages from the CommonVariables suite are used:

TimeCharacteristics — Produces statistics based on the timing information of the registered pulses of an input pulse-map.
TrackCharacteristics — Produces pulse distribution statistics based on an input pulse-map and an input reconstructed track.
HitStatistics — Produces general statistics related to the registered pulses of an input pulse-map.

The variables that are used from each of the event characterization packages are listed below.

TimeLengthFWHM (TimeCharacteristics) — This variable quantifies the duration of the event. It is given as the full width at half maximum of the time distribution of the first registered pulse in each DOM with registered charge in the input pulse-map.
AvgDomDistQTotDom (TrackCharacteristics) — This variable considers all DOMs with registered charge in the input pulse-map. The distance between each hit DOM and the input reconstructed track is obtained, and is assigned a weight equal to the total registered charge in that DOM. The variable finally constitutes the weighted average of the distance between all hit DOMs and the reconstructed track.
TrackHitsSeparationLength (TrackCharacteristics) — This variable represents the geometrical span of detected light in the detector. First,

the center of gravity coordinates of the first and fourth quartiles of the time distribution of the registered pulses in the input pulse-map are found. Then, the two center of gravity points are projected onto the input track. And finally, the variable is taken as the distance between the on-track projections of the two center of gravity points.
COG (HitStatistics) — This variable is the center of gravity of all registered pulses. It is calculated as the average position of all DOMs with registered charge in the input pulse-map, where the weight of each DOM is given by its total registered charge.
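As an example of how simple these statistics are, the COG variable amounts to a charge-weighted average. An illustrative sketch with a hypothetical data layout:

```python
def center_of_gravity(dom_hits):
    """Charge-weighted average position of all hit DOMs (the COG
    variable). dom_hits: list of ((x, y, z), total_charge) per DOM."""
    qtot = sum(q for _, q in dom_hits)
    return tuple(sum(pos[k] * q for pos, q in dom_hits) / qtot
                 for k in range(3))
```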

6.2.5 The Millipede Track Reconstruction Method

The Millipede reconstruction algorithm is an advanced IceCube track reconstruction package that enables full reconstruction of a track, including vertex, direction and intermediate energy losses [63]. For computational reasons, only the reconstruction of energy losses along the track is applied in this analysis. This requires a pulse-map and a track as input. The energy loss reconstruction algorithm begins by modeling the track as a series of consecutive independent cascade energy losses along a series of consecutive minimally ionizing track segments. The cascades and track segments are all placed at (t, x̄) points along the input track, directed in the forward direction of the track, and placed with a separation that is given as an input to the algorithm. The energies of the cascades and track segments are obtained by fitting them to the light pulses registered in the input pulse-map. The output of the Millipede energy loss reconstruction algorithm consists of the fitted series of consecutive cascades and track segments, for each cascade and segment including vertex, direction and energy information. In this analysis, the Millipede energy loss reconstruction algorithm is applied on the InIcePulsesSRTTW pulse-map, i.e. the standard InIcePulses pulse-map cleaned with the SRT and TW algorithms, using the BrightestMedian reconstructed track, and setting the energy loss separation to 10 m. An investigation was conducted to find the optimal track segment length and energy loss separation, based on obtaining the best separative power between events induced by magnetic monopoles and astrophysical muons in the energy loss RSD variable. This variable represents the relative standard deviation of the estimated energy losses of cascades and tracks inside of the detector volume (see Chapter 10.3.2).
An example of a Millipede fitted series of energy losses can be found in Figure 6.2a, along with the InIcePulsesSRTTW pulse-map.

6.2.6 The BrightestMedian Track Reconstruction Method

The BrightestMedian (BM) reconstruction method was developed specifically for this analysis, and is tailored to reconstruct events that exhibit the unique event signatures of a magnetic monopole event (see Chapter 4.3). The algorithm is divided into two consecutive stages — the pulse cleaning stage and the track reconstruction stage. In the track reconstruction stage, the LineFit algorithm is applied to the specialized pulse-map that is constructed in the pulse cleaning stage. The pulse cleaning stage is thus specifically designed to account for the characteristic monopole event features, as well as the machinery of the LineFit algorithm. Before applying the pulse selection, the SRT and TW cleaning methods are applied to the standard InIcePulses pulse-map with the settings used in the EHE reconstruction suite. The cleaned pulse-map is denoted by InIcePulsesSRTTW. Next, a custom pulse-map is constructed to form the basis of the following track reconstruction. Initially, 90 % of the DOMs with registered charge in the InIcePulsesSRTTW pulse-map are rejected, accepting only the 10 % of DOMs with the highest registered charge. For each remaining DOM, the distribution of pulses is sorted in time. The pulse at the median time position is then kept (favoring the earlier pulse in case of an even number of pulses), while the remaining pulses are discarded. The resulting pulse-map, i.e. the map containing the median time-positioned pulses of each of the 10 % brightest DOMs in the InIcePulsesSRTTW pulse-map, is labeled the BrightestMedianMap pulse-map. The brightest DOMs of the event are selected as these are likely to be located close to the true trajectory of the primary particle. Since the light output of a magnetic monopole is homogeneous over the full track length, the brightest DOMs should be distributed evenly along the full length of the track.
Additionally, only one pulse is chosen per accepted DOM as the LineFit method assumes each detected pulse to be an independent measurement of the (t, x̄) coordinates of the particle, which is incompatible with detecting multiple pulses in the same DOM. The pulse at the median time position is chosen in order to safeguard against outliers at the start and the end of the pulse series. In the next stage of the BM reconstruction method, the track reconstruction stage, the LineFit method is applied to the BrightestMedianMap pulse-map. The LineFit method is chosen as it leaves the particle speed as a free parameter, which is important as the monopoles that are pursued in this work propagate with a speed that may be significantly lower than c. An example of a BrightestMedianMap pulse-map along with the resulting BM reconstructed track can be found in Figure 6.2b. The same BM reconstructed track can also be found in Figure 6.1b along with the corresponding InIcePulsesSRTTW pulse-map.
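The pulse cleaning stage of the BM method can be summarized in a short sketch. The data layout and function name are hypothetical; the actual implementation operates on IceCube pulse-maps:

```python
# Sketch of the BrightestMedian pulse selection described above.
def brightest_median_map(dom_pulses, keep_fraction=0.10):
    """dom_pulses: dict DOM id -> list of (t, charge) pulses.

    Keeps only the 10 % of DOMs with the highest total charge and, for
    each kept DOM, the pulse at the median time position (favoring the
    earlier pulse for an even pulse count)."""
    ranked = sorted(dom_pulses,
                    key=lambda d: sum(q for _, q in dom_pulses[d]),
                    reverse=True)
    n_keep = max(1, int(round(keep_fraction * len(ranked))))
    out = {}
    for dom in ranked[:n_keep]:
        pulses = sorted(dom_pulses[dom])           # time-ordered
        out[dom] = pulses[(len(pulses) - 1) // 2]  # median, favor earlier
    return out
```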

In order to enable an evaluation of the BrightestMedian reconstruction algorithm, the reconstructed speed and direction are compared to the true particle speed and direction below. The magnetic monopole simulated event rate before Step II of the event selection scheme (see Chapter 10.3) is displayed in Figures 6.3, 6.4 and 6.5, as a function of the BM reconstructed and the true particle speed, cosine of the zenith direction and azimuthal direction, respectively. In each figure the 1:1 diagonal is marked with a black line. Note that there is a systematic tendency to overestimate the particle speed. This variable will only be used in event selection, so this effect will not reduce the performance of the variable. Had its purpose been to estimate the real particle speed, a correction would have been applied. Additionally, the difference between the BM reconstructed speed and the true particle speed is shown in Figure 6.6 along with the corresponding quantity for the EHE reconstruction for the same sample of magnetic monopole events. The root-mean-squares of the distributions are 0.0373 and 0.0803 for BM and EHE, respectively. Correspondingly, the angular difference between the BM reconstructed direction and the true particle track direction is shown in Figure 6.7 along with the corresponding quantity for the EHE reconstruction. The root-mean-squares are 5.69° and 7.90° for BM and EHE, respectively. It is concluded that the BM track reconstruction method reconstructs the track more accurately than the EHE track reconstruction method, considering both the speed and the direction of the magnetic monopole.

Figure 6.3. The magnetic monopole simulated event rate before Step II of the event selection scheme as a function of the BM reconstructed speed, βBM, and the true particle speed, βMC. The black line represents the 1:1 diagonal.

Figure 6.4. The magnetic monopole simulated event rate before Step II of the event selection scheme as a function of the cosine of the BM reconstructed zenith direction, cos(θzen,BM), and the cosine of the true particle zenith direction, cos(θzen,MC). The black line represents the 1:1 diagonal.

Figure 6.5. The magnetic monopole simulated event rate before Step II of the event selection scheme as a function of the BM recon- structed azimuthal direction, φazi,BM, and the true particle azimuthal direction, φazi,MC. The black line represents the 1:1 diagonal.

Figure 6.6. The difference between the BM reconstructed speed and the true particle speed (red) along with the corresponding quantity for the EHE reconstruction (blue) for the simulated magnetic monopole events before Step II of the event selection scheme.

Figure 6.7. The angular difference between the BM reconstructed direction and the true particle track direction (red) along with the corresponding quantity for the EHE reconstruction (blue) for the simulated magnetic monopole events before Step II of the event selection scheme.

7. Data Analysis and Statistical Tools

In order to conduct an analysis in contemporary physics research, a number of tools and methods may be employed to guide the procedure. These may, for example, guide the overall strategy of the analysis, the statistical data treatment or the event classification. The most important tools and methods that were used in the present analysis are described in this chapter.

7.1 Analysis Strategies A data analysis can be conducted using several overall strategies, concerning e.g. the data selection, the statistical data treatment, and what data may be used for analysis development. A brief introduction is given below for three common strategies.

7.1.1 Cut-and-Count Analyses
As the name suggests, a cut-and-count analysis is divided into two main stages: the cutting, where a series of acceptance criteria is employed to select a subset of the data, and the counting, where the number of events in the selected subset is counted and used to draw conclusions on the initial hypothesis. The cutting stage of a cut-and-count analysis is done with the purpose of enhancing the relative contribution of the signal (the phenomenon you are searching for) in the total data volume. Therefore, the cut criteria must be designed in such a way that they favor signal events over background events. To do this, a number of cut variables should be identified as quantifiable event features where signal and background events appear dissimilar. For each cut variable, a cut value should also be defined, which represents the boundary between the signal-like and background-like regions. After having designed a series of selection criteria, i.e. a multi-level data selection scheme, that increases the relative signal contribution to a satisfactory degree, the number of both signal and background events that survive to the final level (after all cuts have been applied) is estimated. The final result emerges from the statistical comparison of the expected number of final-level background events and the observed number of final-level events in real data.
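The cutting stage can be sketched as a sequence of one-dimensional criteria. The following minimal Python sketch is illustrative only; the variable names and cut values are hypothetical, not those of the present analysis.

```python
def select(events, cuts):
    """Keep only the events that pass every cut.

    events: list of dicts mapping variable name -> value
    cuts:   list of (variable, threshold, side), with side '>' or '<'
    """
    return [ev for ev in events
            if all(ev[var] > thr if side == '>' else ev[var] < thr
                   for var, thr, side in cuts)]

# Hypothetical events and cut values, for illustration only:
events = [
    {"charge_pe": 2.0e5, "speed": 0.90},   # bright and fast: signal-like
    {"charge_pe": 3.0e3, "speed": 0.95},   # too dim
    {"charge_pe": 5.0e5, "speed": 0.40},   # too slow
]
cuts = [("charge_pe", 1.0e4, ">"), ("speed", 0.75, ">")]
surviving = select(events, cuts)
n_final = len(surviving)   # the counting stage
```

The counting stage then reduces to comparing `n_final` with the expected number of background events at final level.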

7.1.2 Multi-Variate Analyses
A multivariate analysis is, as the name suggests, an analysis that considers multiple variables simultaneously. This differs from the cut-and-count approach, where each variable is evaluated separately, and yields increased separative power (between signal and background events) by examining the data sample shape in multi-dimensional space [64].
A multivariate analysis quickly grows in complexity with an increasing number of variables. Therefore, it is common to conduct such an analysis by the use of multivariate analysis tools, such as boosted decision trees or neural networks. These are trained on known samples of signal and background events with the purpose of creating an algorithm that is able to extract one or more features of each event. In the present analysis, a boosted decision tree is used to classify an event on a floating scale between background- and signal-like. Additionally, a neural network has recently been used within IceCube to classify events into event type categories such as cascade, track, double bang and starting track [65].
Due to the complexity of a multivariate analysis, a common approach entails including it as a final step in a cut-and-count analysis and defining a cut criterion based on the final classification result of the multivariate analysis tool.

7.1.3 Analysis Blindness
A blind analysis is an analysis where the data selection method is developed without investigating the full sample of experimental data. This avoids bias that can arise from the preconceptions and expectations of the analyzer [66]. An analysis can be either fully or partially blind:
Fully blind: The analysis strategy cannot be based on any specific aspect of the experimental data set.
Partially blind: Experimental data may be used in the design of the analysis strategy if one or several key features are kept hidden, e.g. through randomization.
In order to construct a well-grounded analysis strategy, it is common to base the analysis on simulated samples of events for both signal and background, or to use a subsample of experimental data (a burn sample, which is later discarded) as background. Relying solely on simulated event samples requires accurate knowledge of the detector response, as well as detailed models of the signal and background, in order to reliably represent nature.
When the full analysis strategy is finalized, it is applied to the experimental data, after which the strategy may not be altered. This process is called analysis unblinding.

7.2 Determining an Upper Limit
An analysis targeting an exotic phenomenon (e.g. the existence of a hypothetical particle) will have one of two possible outcomes: either the discovery of the phenomenon, or the setting of an upper limit (UL) on the abundance of the phenomenon. A discovery may be claimed when the collected data cannot reasonably be described without invoking the phenomenon (usually requiring a deviation of 5σ or more from the background-only expectation value).
The upper limit represents the upper bound of the confidence interval, i.e. the interval over the observable of interest that includes the true value of the observable in a fraction α of identical experiments. Here, α is called the confidence level (CL), and a larger CL implies a wider confidence interval over the observable (which, in the present analysis, is the magnetic monopole flux). In this analysis, a confidence level of 90 % is adopted.

7.2.1 Effective Area
The signal efficiency of an analysis scheme can be quantified in terms of the effective area, which represents the cross-sectional area of an ideal detector recording with 100 % efficiency. Thus, the effective area is a general measure of the efficiency of the analysis scheme that allows comparison with other experimental efforts. The effective area at event selection level LV of an analysis, A_eff^LV, is given by:

A_eff^LV = A_gen × N_LV / N_gen   (7.1)

Here, A_gen is the area of the generation disk that is used in the magnetic monopole simulation scheme, N_gen is the number of generated events, and N_LV is the number of registered events at analysis level LV. The ratio N_LV/N_gen thus represents the signal detection efficiency at level LV.
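Equation 7.1 amounts to scaling the simulated generation area by the surviving event fraction. A minimal sketch follows; the numbers are hypothetical, chosen only to illustrate the arithmetic.

```python
def effective_area(a_gen_cm2, n_level, n_gen):
    """Eq. 7.1: generation area times the selection efficiency at level LV."""
    return a_gen_cm2 * n_level / n_gen

# e.g. a hypothetical 1e10 cm^2 generation disk with 50 of 1000
# generated events surviving to level LV:
a_eff = effective_area(1.0e10, 50, 1000)   # -> 5e8 cm^2
```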

7.2.2 Upper Limit
In the analysis that is described in this thesis, the 90 % CL upper limit on the magnetic monopole flux, Φ_90^MM, is calculated with the method developed by G. Feldman and R. Cousins (F&C) [67], which is given by:

Φ_90^MM = Φ_0^MM × µ_90(n_OB, n_BG) / n_SG = µ_90(n_OB, n_BG) / (A_eff × t × Ω)   (7.2)

Here, Φ_0^MM is the assumed magnetic monopole flux, n_OB is the observed number of events, and n_SG and n_BG are the numbers of expected signal and background events, respectively. The F&C upper limit, µ_90, is the upper limit on the number of signal events that can be set given an observed number of events, n_OB, and an expected number of background events, n_BG. The second equality above follows from the calculation of the expected number of signal events,

n_SG = Φ_0^MM × A_eff × t × Ω   (7.3)

where A_eff represents the effective area, t the analysis livetime, and Ω the covered solid angle. In the analysis described in this thesis, the monopole flux assumption (and the expected number of signal events) is taken at the level of the previous best upper limit in the relevant β range, for illustrative purposes (see Chapters 2.6 and 8.3.1). The absolute level of this assumption is irrelevant for the calculation of the upper limit, as is clear from the final equality of Equation 7.2, where the factor Φ_0^MM has canceled with a factor Φ_0^MM in n_SG. The expected number of background events, on the other hand, is relevant, and enters the calculation of µ_90. In this analysis, this number is evaluated using simulated event samples.
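The second equality of Equation 7.2 can be evaluated directly once µ_90, the effective area, the livetime and the solid angle are known. The inputs below are placeholders, not the values of this analysis: µ_90 = 2.44 is the well-known F&C 90 % CL limit for zero observed events and negligible background, and the effective area is an arbitrary illustrative number.

```python
import math

def flux_upper_limit(mu90, a_eff_cm2, livetime_s, omega_sr):
    """Eq. 7.2 (second equality): 90% CL flux UL in cm^-2 s^-1 sr^-1."""
    return mu90 / (a_eff_cm2 * livetime_s * omega_sr)

# Placeholder inputs: a hypothetical 1e9 cm^2 effective area,
# 2715 d of livetime, and the full sky (4*pi sr):
ul = flux_upper_limit(2.44, 1.0e9, 2715 * 86400, 4 * math.pi)
```

With these placeholder inputs the limit comes out at roughly 8e-19 per square centimeter per second per steradian, which illustrates the order of magnitude quoted for this kind of search.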

7.2.3 Sensitivity
The sensitivity of an experiment is given by the average of the possible upper limits that can be expected to be set by an analysis before knowing the number of observed events, n_OB. It is calculated as a Poisson-weighted average of the possible upper limits over all possible values of n_OB, given the expected number of background events, n_BG.
The sensitivity, Φ̄_90^MM, is thus calculated through:

Φ̄_90^MM = Σ_{n_OB=0}^{∞} Φ_90^MM × e^{−n_BG} (n_BG)^{n_OB} / n_OB!
        = Σ_{n_OB=0}^{∞} µ_90(n_OB, n_BG) × (Φ_0^MM / n_SG) × e^{−n_BG} (n_BG)^{n_OB} / n_OB!
        = Φ_0^MM × µ̄_90(n_BG) / n_SG = µ̄_90(n_BG) / (A_eff × t × Ω)   (7.4)

where µ̄_90(n_BG) represents the F&C sensitivity for a given number of expected background events, n_BG, and is given by:

µ̄_90(n_BG) = Σ_{n_OB=0}^{∞} µ_90(n_OB, n_BG) × e^{−n_BG} (n_BG)^{n_OB} / n_OB!   (7.5)
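Equation 7.5 can be evaluated by truncating the sum once the Poisson weights become negligible. The µ_90 values below are placeholders standing in for the tabulated F&C limits at n_BG ≈ 0; a real analysis needs the full µ_90(n_OB, n_BG) table from the F&C paper.

```python
import math

# Placeholder F&C 90% CL upper limits mu90(n_obs) for n_BG ~ 0
# (illustrative values; consult the Feldman-Cousins tables):
MU90 = [2.44, 4.36, 5.91, 7.42, 8.60, 9.99]

def fc_sensitivity(n_bg, mu90=MU90):
    """Eq. 7.5: Poisson-weighted average of the per-n_obs upper limits.

    The sum is truncated where the table ends; for small n_bg the
    neglected Poisson weights are tiny.
    """
    return sum(mu90[n] * math.exp(-n_bg) * n_bg**n / math.factorial(n)
               for n in range(len(mu90)))

sens = fc_sensitivity(0.265)   # n_BG quoted for this analysis
```

For n_BG = 0 the weight collapses onto n_OB = 0 and the sensitivity equals µ_90(0), i.e. 2.44 events with the placeholder table.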

7.2.4 Including Uncertainties in the Upper Limit
In the upper limit calculations above, the expected numbers of final-level signal and background events have been treated as absolute numbers with negligible

uncertainty. This, however, is rarely true in real experiments, so the uncertainties on these quantities must be considered.
The uncertainties are included by transforming n_SG and n_BG in Equation 7.2 to their corresponding average expected equivalents with a modified Poisson weighting, given by Equation 7.6 [13]. The transformed n_BG enters the calculation of µ_90 as before. The n_SG and n_BG transformations are fully equivalent to each other, and based on their central values n_SG and n_BG and absolute uncertainties σ_SG and σ_BG, given by Monte Carlo estimation. The transformations are done as below, where A represents SG or BG:

n_A → n̂_A = Σ_{n'_A=0}^{∞} n'_A × P(n'_A | n_A, σ_A)   (7.6)

Here, n'_A is the summation variable, and P(n'_A | n_A, σ_A) is the modified Poisson weight with an included factor w that represents a Gaussian uncertainty with width σ_A:

P(n'_A | n_A, σ_A) = ∫_{−n_A}^{∞} [e^{−(n_A+x)} (n_A + x)^{n'_A} / n'_A!] × w(x|σ_A) dx   (7.7)

The Gaussian uncertainty, w(x|σ_A), is a weight factor with a mean value of 0 and a variance of σ_A², given by Equation 7.8. The uncertainty on the expected number of signal or background events, σ_A, is thus included through the Gaussian weight, w, along with the assumed number of events, (n_A + x). The lower boundary of the integral is set to −n_A in order to avoid unphysical negative values of the expression (n_A + x).

w(x|σ_A) = (1 / (σ_A √(2π))) × e^{−(1/2)(x/σ_A)²}   (7.8)

The second equality of the upper limit calculation (Equation 7.2) removes the explicit dependence on the value of n_SG. Therefore, the inclusion of the n_SG uncertainty in the upper limit requires an additional step, where a multiplicative factor n_SG/n̂_SG is included as such:

Φ_90^MM → (n_SG / n̂_SG) × Φ_90^MM   (7.9)
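Equations 7.6 through 7.8 can be evaluated numerically. The sketch below integrates the smeared Poisson weight with a simple trapezoidal rule; the integration cutoff at 10σ and the truncation of the sum are illustrative choices, not part of the method itself.

```python
import math

def gauss_w(x, sigma):
    """Eq. 7.8: zero-mean Gaussian weight of width sigma."""
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def smeared_weight(n_prime, n_a, sigma_a, steps=2000):
    """Eq. 7.7: Poisson probability of n_prime with the mean (n_a + x)
    smeared by the Gaussian, integrated over x in [-n_a, 10*sigma_a]
    (the Gaussian tail beyond 10 sigma is negligible)."""
    lo, hi = -n_a, 10.0 * sigma_a
    h = (hi - lo) / steps
    fact = math.factorial(n_prime)
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        lam = n_a + x                       # always >= 0 by construction
        f = math.exp(-lam) * lam**n_prime / fact * gauss_w(x, sigma_a)
        total += f * (0.5 if i in (0, steps) else 1.0)  # trapezoid ends
    return total * h

def averaged_expectation(n_a, sigma_a, n_max=40):
    """Eq. 7.6: n_hat_A as the weighted average over n'_A."""
    return sum(n * smeared_weight(n, n_a, sigma_a) for n in range(n_max + 1))

# For a small relative uncertainty the transformed expectation stays
# close to the central value:
n_hat = averaged_expectation(5.0, 0.5)
```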

7.3 Model Rejection and Discovery Potentials There are several methods for evaluating the performance of an event selection scheme before it is applied to data, each highlighting a different quality of the selection scheme. Two methods are used in this analysis: the model rejection potential and the model discovery potential.

7.3.1 Model Rejection Potential
The model rejection potential (MRP) constitutes the ratio between the F&C sensitivity, µ̄_90, and the number of signal events that can be expected assuming a model flux, n_SG [68]:

MRP = µ̄_90 / n_SG   (7.10)

This expression for the MRP is recognized as the coefficient that yields the flux sensitivity, Φ̄_90^MM, from the assumed flux in Equation 7.4.
The MRP thus represents the sensitivity that is achieved per expected signal event. Therefore, finding the minimal MRP over a range of possible cut values yields the optimal cut value with regard to optimizing both the sensitivity and the signal acceptance.
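Cut optimization by MRP then reduces to scanning candidate cut values and picking the minimum of µ̄_90/n_SG. All numbers in the sketch below are hypothetical: tightening the cut lowers both the expected signal and the F&C sensitivity, and the MRP finds the best trade-off.

```python
# Hypothetical per-cut expectations (illustrative values only):
cut_values = [0.0, 0.2, 0.4, 0.6, 0.8]
n_sg       = [10.0, 9.5, 8.0, 5.0, 2.0]    # expected signal after each cut
mu90bar    = [ 8.0, 4.0, 2.8, 2.5, 2.44]   # F&C sensitivity after each cut

mrp = [m / s for m, s in zip(mu90bar, n_sg)]      # Eq. 7.10 per cut value
best = min(range(len(cut_values)), key=mrp.__getitem__)
optimal_cut = cut_values[best]                    # 0.4 in this toy scan
```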

7.3.2 Model Discovery Potential
The model discovery potential (MDP) represents the relation between the expected number of signal events and the number of events that need to be observed in order to claim a discovery [69]. The MDP is calculated as the ratio between the least detectable signal, µ_LDS, and the expected number of signal events, n_SG:

MDP = µ_LDS / n_SG   (7.11)

Finding the minimal MDP over a range of cut values yields the optimal cut value with regard to optimizing both the signal acceptance and the least detectable signal.
In order to calculate the least detectable signal, the critical number of observed events, n_crit, must be determined. This is defined as the lowest number of observed events that is required to reject the background-only hypothesis at a confidence level (1 − α), and is found by solving:

P(n_OB ≥ n_crit | n_BG) < α   (7.12)

where P(n_OB ≥ n_crit | n_BG) represents the probability of observing a number of events, n_OB, larger than or equal to n_crit, given an expectation of n_BG background events. The least detectable signal, µ_LDS, is the lowest number of expected signal events that yields a (1 − β) probability of observing n_crit or more events. It is found by solving:

P(n_OB ≥ n_crit | n_BG + µ_LDS) = 1 − β   (7.13)

A significance level of α = 5.73 × 10^−7 is commonly required to claim a discovery in high energy physics research (corresponding to the area under the 5σ tails of a Gaussian distribution). This significance level is used to calculate the MDP in this analysis, along with a confidence level of (1 − β) = 50 % (commonly used for a discovery).
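Equations 7.12 and 7.13 can be solved directly with Poisson survival probabilities. The sketch below uses n_BG = 0.1 as an illustrative background expectation, not a value from this analysis.

```python
import math

def poisson_sf(n, lam):
    """P(n_OB >= n | lam), the Poisson survival function."""
    return 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k)
                     for k in range(n))

def critical_count(n_bg, alpha=5.73e-7):
    """Eq. 7.12: smallest n_crit with P(n_OB >= n_crit | n_BG) < alpha."""
    n = 0
    while poisson_sf(n, n_bg) >= alpha:
        n += 1
    return n

def least_detectable_signal(n_bg, alpha=5.73e-7, beta=0.5, step=1e-3):
    """Eq. 7.13: smallest mu_LDS giving a (1 - beta) probability of
    observing n_crit or more events, found by a coarse upward scan."""
    n_crit = critical_count(n_bg, alpha)
    mu = 0.0
    while poisson_sf(n_crit, n_bg + mu) < 1.0 - beta:
        mu += step
    return mu

n_crit = critical_count(0.1)            # 5 events for n_BG = 0.1
mu_lds = least_detectable_signal(0.1)   # ~4.6 expected signal events
```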

7.4 Boosted Decision Trees
A boosted decision tree (BDT) is a machine learning based multivariate analysis tool that can be trained to classify events into different categories based on their appearance in multiple variables [70; 71]. When a BDT is implemented in an event selection scheme it awards each event a score that signifies its likeness to a signal event, e.g. in the range [−1, +1] corresponding to [background-like, signal-like].
In order to employ a BDT for event classification purposes, it must be trained to distinguish between signal and background events. In the training stage, the BDT is exposed to two multivariate samples of events, one containing signal events and the other containing background. These are known as the training samples, and serve as the basis of the classification criteria that are developed for the BDT event classification scheme. The training samples can consist of simulated events, experimental data or a mix of both.
The training of a BDT is done by sequentially training a series of decision trees (DTs) to distinguish signal events from background. Each DT is itself a multivariate binary classifier, where the classification is defined as a series of divisions, or branchings, of the data, until the resulting subsamples are pure enough in either signal or background events. Branchings that yield a low gain in separative power may be reversed, or pruned. The DTs are trained sequentially in order to allow a boosting of wrongly classified events between the training of two trees, enhancing the separative performance. This is done by awarding each wrongly classified event in the training data a higher importance weighting than the correctly classified events before using the data to train the next tree in the sequence. Finally, the overall BDT score is formed as the average score awarded by the constituent DTs.
The branching depth, the pruning strength, the boost factor and the number of constituent decision trees are all tunable parameters of the BDT training stage. Additionally, it is common to include only a (randomly selected) subsample of the training data in the training of each DT, the size of which can also be tuned by the analyzer.
In the subsequent validation stage, the BDT is exposed to two additional data samples, one containing known signal and the other known background events. These are called the validation samples, and their events are classified using the newly trained BDT classification scheme. The resulting BDT score distributions are compared to the distributions of the training samples using a Kolmogorov-Smirnov test (KS-test) [72]. If the BDT score distributions of the validation samples are determined not to originate from the same underlying distribution as the training samples (using the p-value resulting from the KS-test), the BDT is labeled as overtrained, and unfit for use. This implies that the BDT has learned to recognize the individual event features in the training samples, as opposed to the common broad features of the full samples. The result is an event selection scheme that is specialized either in selecting the individual events that make up the training signal sample (and thus fails to select other signal events), or in rejecting the individual background training events (and thereby fails to reject other background events). An overtraining problem is usually solved by training a new BDT with tuned training parameters.
After the training and the validation are finished, the BDT algorithm should have produced a functioning and well-performing event scoring scheme. The BDT can now be implemented as an event classification algorithm directly in the event characterization and selection scheme. However, before applying the BDT event scoring algorithm to the chosen data samples, it is important to remove the training samples from the full data set. Using the training samples in the further event selection development and evaluation would result in an overestimation of the separative power of the event selection scheme, as the BDT performance is inherently optimal when applied to these samples. Additionally, it is important to remember that it is up to the analyzer to determine the BDT score range that corresponds to signal-like events, and the range that does not.
In this work, the BDT is implemented through the pyBDT software package, a standard BDT package in IceCube [71].
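pyBDT itself is IceCube-internal, but the boosting procedure described above can be illustrated with a self-contained toy: depth-1 decision trees ("stumps") on a single variable, trained sequentially with misclassified events up-weighted, and the final score formed as the weighted average vote in [−1, +1]. This is a generic AdaBoost-style sketch, not the pyBDT implementation, and the toy training samples are invented.

```python
import math

def train_bdt(data, n_trees=10):
    """data: list of (x, label) with label +1 (signal) or -1 (background).
    Returns a list of stumps (threshold, polarity, alpha)."""
    weights = [1.0 / len(data)] * len(data)
    stumps = []
    for _ in range(n_trees):
        # Train one stump: lowest weighted misclassification rate wins.
        best = None
        for t in sorted(set(x for x, _ in data)):
            for pol in (+1, -1):
                err = sum(w for (x, y), w in zip(data, weights)
                          if pol * (1 if x > t else -1) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)    # guard the logarithm
        alpha = 0.5 * math.log((1.0 - err) / err)  # stump importance
        stumps.append((t, pol, alpha))
        # Boost: up-weight the wrongly classified events for the next stump.
        weights = [w * math.exp(-alpha * y * pol * (1 if x > t else -1))
                   for (x, y), w in zip(data, weights)]
        norm = sum(weights)
        weights = [w / norm for w in weights]
    return stumps

def bdt_score(stumps, x):
    """Weighted average vote in [-1, +1]: +1 signal-like, -1 background-like."""
    vote = sum(a * pol * (1 if x > t else -1) for t, pol, a in stumps)
    return vote / sum(a for _, _, a in stumps)

# Toy training samples on one variable (signal clustered at x > 0):
signal = [(0.5, +1), (1.0, +1), (1.5, +1), (2.0, +1)]
background = [(-0.5, -1), (-1.0, -1), (-1.5, -1), (-2.0, -1)]
stumps = train_bdt(signal + background)
```

A final cut on `bdt_score` then plays the role of the last selection criterion, exactly as described for the cut-and-count scheme above.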

8. Analysis Structure, Exposure and Assumptions

The aim of the analysis that is described in this thesis is to examine the hypothetical cosmic flux of magnetic monopoles with speeds above the Cherenkov threshold in ice. This work constitutes a fully blind cut-and-count analysis with a multivariate final stage.

8.1 Analysis Structure The analysis described in this thesis is divided into two main steps, Step I and Step II.

8.1.1 Step I
Step I constitutes a one-fell-swoop procedure to drastically reduce the contribution of atmospheric neutrino and muon events, as well as dim neutrino events with astrophysical origin. The event selection was initially developed for an earlier IceCube analysis, the Extremely High Energy (EHE) analysis [46], searching for a so-far unobserved population of astrophysical neutrinos with extremely high energy (E_ν ≳ 10^6 GeV). These neutrinos would induce very bright events in IceCube, typically with a registered charge higher than ∼10^5 PE.
The EHE event selection consists of a small number of simple cuts, thus enabling a highly general selection of bright events that imposes minimal constraints on the shape of the light distribution.
Additionally, the event selection is developed to efficiently reject the background flux of atmospheric neutrinos and muons. Less than 0.085 atmospheric events are expected after the application of the event selection over the full EHE analysis livetime (9 yr: IceCube-40, IceCube-59, IceCube-79, and IceCube-86 I–VI).
The full procedure of the Step I event selection is described in Chapter 10.2, and the EHE analysis is detailed in Appendix A.

8.1.2 Step II
Step II was developed to remove any neutrino events that are accepted by the EHE analysis, both the hypothetical GZK neutrinos and additional astrophysical neutrinos. This step is designed using custom reconstruction methods and employs a boosted decision tree for particle type classification.
Using Monte Carlo simulated events for both the signal and the background samples, several distinguishing monopole event signatures were examined to determine the degree to which they show distinction between events induced by magnetic monopoles and astrophysical neutrinos. This involved, but was not limited to, designing a dedicated reconstruction algorithm to reconstruct the signal events more accurately (the BrightestMedian method), and employing an advanced IceCube reconstruction algorithm in an unconventional manner (the Millipede method). Finally, nine cut variables were chosen to train a BDT to construct an event scoring scheme that ranks each event according to how signal-like it is, and a final cut is made on the BDT score variable. The value of the final BDT score cut criterion is set where the best model rejection potential is obtained.
The procedure and performance of Step II are described in Chapter 10.3.

8.2 Analysis Exposure
8.2.1 Livetime
This analysis was applied to 8 yr of data collected with the IceCube detector array, IceCube-86 I through IceCube-86 VIII. This constitutes all of the data that had been collected with the completed detector array by the date of the unblinding of this analysis. The livetime per year is listed in Table 8.1.
The event selection scheme of Step I is based on the event selection of the EHE analysis. The most recent EHE analysis iteration covers 6 yr of data collected with the full detector configuration (IC86-I through IC86-VI), along with 3 yr of data collected in partial configuration (IC40, IC59 and IC79).
When developing the event selection scheme of Step II, the total amount of data that would be available at completion was not known. Therefore, the selection was developed assuming a six-year livetime (IC86-I through IC86-VI), corresponding to the IC86 portion of the EHE analysis livetime. The analysis is thus developed assuming a livetime of 1935 d, while the final results include data from a total of 2715 d.
For the purpose of this analysis, the experimental data can be divided into two main categories: the physics sample and the burn sample. The burn sample constitutes the portion of data that is reserved for developing the analysis event selection scheme. Therefore, in order to avoid bias, the data that belongs to the burn sample cannot be included in the evaluation of the final results. The remaining experimental data is denoted the physics sample.
The livetimes of the physics and burn samples, t_physics and t_burn respectively, are listed on a per-season basis in Table 8.1, where the burn sample constitutes roughly 10 % of the full per-season data sample. The total livetime, t_total, is also listed in Table 8.1.

Table 8.1. The per-season livetime — listing the physics, burn and total livetimes.

Detector season   t_physics [d]   t_burn [d]   t_total [d]
IC86-I                  309           31.5          341
IC86-II                 295           32.5          327
IC86-III                322           34.1          356
IC86-IV                 325           35.0          360
IC86-V                  328           37.5          365
IC86-VI                 357            0.0          357
IC86-VII                411            0.0          411
IC86-VIII               369            0.0          369
Total                  2715          170.6         2886

The experimental data is only divided into physics and burn subsamples for the first five seasons (IC86-I through IC86-V), while the full experimental data set is used as the physics sample for the remaining seasons (IC86-VI through IC86-VIII). This is because the Step I event selection (from the EHE analysis) was developed using only the first five seasons. The sixth season of IC86 that was used in the EHE analysis was included after the event selection strategy was finalized, and therefore did not need to be subdivided. Additionally, no burn sample is used for the design of the Step II event selection, as the experimental data rate is too low after the application of Step I.
For the remainder of this thesis, the full livetime refers to the total livetime available for physics analysis (t_physics over the 8 yr in Table 8.1), i.e. excluding the burn sample livetime.

8.2.2 Solid Angle
This analysis is designed to search for magnetic monopoles arriving isotropically over the full sky. The event selection scheme may reduce the signal efficiency to almost zero over certain portions of the sky. However, the results are translated to correspond to a solid angle, Ω, of 4π.

8.3 Signal and Background Parameter Space
8.3.1 Magnetic Monopole Flux Assumptions
Here I describe the conditions on the magnetic monopole parameter space that constrain the fiducial region. Magnetic monopoles that lie outside of the fiducial region are not targeted by this analysis, but may still be selected by the analysis event selection.
The first and most basic constraint is that the analysis is designed to search for magnetic monopoles carrying the Dirac magnetic charge, g_MM = 1 g_D. The results obtained in this analysis will not be trivially generalizable to other

charges, as the number of photons produced through the Cherenkov effect, N_γ, depends strongly on the charge of the monopole, N_γ ∝ g_MM² (see Chapter 4.2.1).
The second constraint is one that has been touched upon previously in this thesis, namely that the magnetic monopole should have a speed that is above the Cherenkov threshold in ice. The Cherenkov threshold for light within the sensitive range of IceCube in deep Antarctic ice is around a speed of β = 0.746. In addition, the speed is bounded from above to the region where direct Cherenkov light is the dominant light production mechanism, i.e. up to a Lorentz factor of γ ∼ 100, which corresponds to a speed of β ≈ 0.99995. In order to draw conclusions about monopoles in a higher speed range, radiative energy loss processes would need to be included in the simulation software, which is beyond the scope of this work. The speed region between β = 0.995 and 0.99995 is omitted in this analysis, which is developed using simulated magnetic monopole events in the speed range β ∈ [0.750; 0.995].
An upper bound on the magnetic monopole mass m_MM can be deduced from these speed constraints in combination with the allowed kinetic energy range (see Chapter 2.4 along with Equation 1.5). A magnetic monopole must be lighter than ∼10^15 GeV in order to fall within this speed range. A heavier monopole is expected to display the same event shape in the detector, and thus be accepted by the analysis, but is not expected to fully populate the presently studied speed range. Therefore, this upper mass bound represents a conservative order-of-magnitude estimate.
Additionally, the monopole velocity vector is required to remain unchanged over the monopole path through the detector, which in turn requires that the monopole has negligible energy losses along its trajectory.
The energy loss per unit length is independent of the monopole mass in the interval from β = 0.1 to γ = 100 (see Chapter 4.1), and ranges from 350 GeV m^−1 to 1300 GeV m^−1. The scale of IceCube, L_IC, is ∼1 km, yielding a maximal energy loss of ∼10^6 GeV when crossing the IceCube volume. The total energy loss over IceCube must be negligible compared with the monopole kinetic energy E_MM^kin, i.e.:

E_MM^kin ≫ L_IC × dE/dx   (8.1)

Equations 1.4 and 1.5 can be used to determine the ratio between the particle kinetic energy and its mass, depending on its speed, β. A particle speed β = 0.75 yields the kinetic energy as E_kin ∼ (1/2) m_0 (and β = 0.995 yields E_kin ∼ 9 m_0). This, in combination with Equation 8.1, yields the lower mass boundary of this analysis as

(1/2) m_MM ≫ L_IC × dE/dx   (8.2)

i.e. m_MM ≳ 10^8 GeV.
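The order-of-magnitude arithmetic behind Equation 8.2 can be checked explicitly. The factor of 100 used below to give "≫" a numerical meaning is an illustrative assumption, not a value from the text.

```python
# Inputs from the text above:
dedx_max_gev_per_m = 1300.0   # upper end of the quoted dE/dx range
l_ic_m = 1000.0               # IceCube scale, ~1 km
much_greater = 100.0          # assumed numerical meaning of ">>" (illustrative)

max_loss_gev = l_ic_m * dedx_max_gev_per_m        # ~1.3e6 GeV across IceCube
# At beta = 0.75 the kinetic energy is ~m/2, so m/2 >> max_loss gives:
m_lower_gev = 2.0 * much_greater * max_loss_gev   # ~2.6e8 GeV, i.e. ~1e8 GeV
```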

Formally, the analysis is developed using Monte Carlo simulated magnetic monopoles with a mass of m_MM = 10^11 GeV, which is assumed to be representative of the entire allowed mass range.
The final assumption considers the overall level of the monopole flux, Φ_0^MM, which is taken as the approximate average of the most recent flux upper limits in the relevant parameter range [13]:

Φ_0^MM = 3.46 × 10^−18 cm^−2 s^−1 sr^−1   (8.3)

The actual value of the assumed flux has no effect on the calculation of the final result. This flux assumption is thus included here for illustrative purposes only, and will be used to calculate concrete numbers for the expected magnetic monopole rate at each level of the analysis. Thus, for the remainder of this thesis, the expected number of magnetic monopole events refers to the expected number of events assuming the flux given above.

8.3.2 Astrophysical Neutrino Flux Assumptions
As described above, the Step I event selection efficiently rejects any significant atmospheric contribution. Therefore, the remaining background consists of astrophysical neutrinos. For this analysis the astrophysical neutrino flux Φ_ν is assumed to be well described by a single power law

Φ_ν = φ_ν × (E_ν / 100 TeV)^(−γ_ν) × 10^−18 GeV^−1 cm^−2 s^−1 sr^−1   (8.4)

with the astrophysical flux normalization φ_ν = 1.01^{+0.26}_{−0.23} and spectral index γ_ν = 2.19 ± 0.10. These values are the result of an IceCube analysis targeting the diffuse flux of upwards directed muon neutrino events with astrophysical origin, which was presented at the International Cosmic Ray Conference in 2017 (ICRC-2017) [73]. The analysis made use of 8 yr of recorded high energy events, with 90 % of the likelihood contribution originating in the energy range E_ν ∈ [199 TeV; 4.8 PeV]. The validation of an earlier iteration of this analysis, through the comparison of simulated event samples with experimental data, is presented in Chapter 5.4.1. For the remainder of this thesis, this astrophysical neutrino flux will be denoted by Φ_DIF-νµ^2017. Two alternative IceCube measurements of the astrophysical neutrino flux are discussed in Chapter 13.3.
In addition to this, the astrophysical neutrino flux is assumed to arrive at Earth with a 1:1:1 flavor ratio (for [νe : νµ : ντ]) and to consist of equal amounts of neutrinos and antineutrinos. The 1:1:1 ratio at Earth is expected after flavor oscillation of astrophysical neutrinos produced via pion decay, considering the IceCube energy resolution for high energy events [74; 75; 76], and is consistent with recent measurements [74].
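Equation 8.4 is straightforward to evaluate. The sketch below returns the per-flavor flux at a given neutrino energy, using the ICRC-2017 best-fit values as defaults; only the central values are used, not the quoted uncertainties.

```python
def astro_nu_flux(e_nu_gev, phi0=1.01, gamma=2.19):
    """Eq. 8.4: diffuse astrophysical neutrino flux in
    GeV^-1 cm^-2 s^-1 sr^-1, with the pivot energy at 100 TeV (1e5 GeV)."""
    return phi0 * (e_nu_gev / 1.0e5) ** (-gamma) * 1.0e-18

flux_100tev = astro_nu_flux(1.0e5)   # 1.01e-18 at the pivot energy
```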

9. Simulated Event Samples

The event selection that is used in this analysis is developed by only consid- ering Monte Carlo simulated events. It is crucial that these MC event samples are as similar as possible to the natural flux that they represent. Therefore, the parameters that are used to guide the simulation procedure must be chosen with care, and the software packages that are used must be thoroughly vali- dated. The validation of two IceCube analyses that are relevant for the present analysis is described in Chapter 5.4.1. Additionally, the magnetic monopole simulated light yield has been validated against theoretical prediction, which is described in Chapter 5.4.2. Several samples of simulated magnetic monopole events have been pro- duced specifically to represent the signal in this analysis. Additionally, several simulated neutrino event samples that are available for use within IceCube have been selected to represent the background. This chapter is dedicated to the description of these simulated event samples.

9.1 Signal Monte Carlo Event Samples

The magnetic monopole simulation procedure is detailed in Chapter 5. A set of 4 × 10^5 magnetic monopole events was simulated for the development of the event selection for this analysis. The simulation was performed assuming a uniform speed distribution over the speed interval β ∈ [0.750; 0.995], and with an isotropic directional distribution. The magnetic monopole mass was set to 10^11 GeV, and the IC86-VI detector configuration was used. This set of simulated magnetic monopole events is denoted the baseline set. In addition to the baseline set, 10 further event samples were simulated with varied simulation parameters, each one containing 10^5 magnetic monopole events. The purpose of the additional samples is to study the effect of systematic uncertainties in the modeling of the detector medium and its performance on the analysis signal efficiency. One type of parameter was varied per simulated event sample, while the others were kept at the standard values. The varied parameter settings and the full procedure of the systematic uncertainty study are described in Chapter 12.
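A minimal sketch of how the baseline sample's kinematic parameters could be drawn is given below. This is not the actual IceCube monopole generator; it only illustrates "uniform in speed, isotropic in direction", i.e. uniform in azimuth and in cos(zenith).

```python
# Illustrative sketch (not the IceCube generator): draw monopole kinematics
# with a uniform speed distribution over beta in [0.750, 0.995] and an
# isotropic directional distribution.
import math
import random

def draw_monopole_kinematics(rng):
    beta = rng.uniform(0.750, 0.995)            # uniform speed distribution
    cos_zen = rng.uniform(-1.0, 1.0)            # isotropic: uniform in cos(zenith)
    azimuth = rng.uniform(0.0, 2.0 * math.pi)   # uniform in azimuth
    return beta, math.acos(cos_zen), azimuth    # (beta, zenith [rad], azimuth [rad])

rng = random.Random(1)
sample = [draw_monopole_kinematics(rng) for _ in range(1000)]
```

Sampling uniformly in cos(zenith) rather than in the zenith angle itself is what makes the directions isotropic on the sphere.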

Table 9.1. The Monte Carlo simulated event samples that were used to represent the background flux of astrophysical neutrinos. The spectral indices given here were used to promote the simulation of high energy events. Before being used in the development of the analysis, the event samples were weighted to represent the Φ^2017_DIF-νµ spectrum. Blank table entries indicate the value(s) given above.

Identification  Neutrino  Energy range      Number of         Spectral  Detector config.
number          flavor    (GeV)             simulated events  index, γ  season
20364           νe        5 × 10^3 ; 10^7   1.2 × 10^8        1.5       2016
20493           νµ                          2.4 × 10^8
20622           ντ                          1.2 × 10^8
20407           νe        10^6 ; 10^8       4.9 × 10^4        1.0       2016
20536           νµ                          1.0 × 10^5
20665           ντ                          5.0 × 10^4
11070           νµ        10^7 ; 10^9       2.0 × 10^6        1.0       2012
11297           ντ                          2.0 × 10^6

9.2 Background Monte Carlo Event Samples

The simulated event samples that were used to emulate the background flux of astrophysical neutrinos were combined to represent the background to the best possible degree. A total of eight samples of neutrino events were used, each representing a different portion of parameter space. For each sample, the primary neutrino interaction vertices were simulated using the NuGen neutrino interaction package, and the particle propagation and detection was done with the standard IceCube software. The samples are listed with their respective internal IceCube identification numbers in Table 9.1. Here, the primary neutrino flavors and covered energy ranges are also listed, along with the numbers of simulated events, the power law spectral indices, and the detector configuration seasons. The selected MC event samples cover the energy ranges [5 × 10^3; 10^8] GeV for electron neutrinos and [5 × 10^3; 10^9] GeV for muon and tauon neutrinos. No available sample of νe events extends to a primary energy of 10^9 GeV. The impact of the high energy deficiency is evaluated in Chapter 13.4. Additionally, the samples were truncated from below at a primary energy of 10^5 GeV, which is allowed as the EHE analysis event selection has negligible acceptance for neutrinos with an energy below 10^5.5 GeV. Spectral indices of 1.5 and 1.0 were used in MC sample production instead of more realistic values (such as ∼ 2.2) in order to promote event generation with higher primary energy. The samples were subsequently weighted to represent the Φ^2017_DIF-νµ spectrum. The choice of using simulated events produced with two different detector configuration seasons, 2012 and 2016, has no effect on the final result.
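The reweighting step can be sketched as a per-event spectral weight. The following is a simplified illustration (overall normalization constants are omitted, and the function name is invented): events generated with a hard spectrum E^−γ_gen receive weights proportional to the ratio of the target and generation spectra, so that the weighted sample follows the softer target spectrum.

```python
# Sketch of power-law reweighting: events generated with a hard spectrum
# (gamma_gen = 1.0 or 1.5, chosen to populate high energies) are weighted
# to follow a softer target spectrum (here the 2017 best-fit index 2.19).
# Normalization constants are omitted for clarity.
def spectral_weight(e_gev, gamma_gen, gamma_target=2.19):
    # w(E) is proportional to E^(-gamma_target) / E^(-gamma_gen)
    return e_gev ** (gamma_gen - gamma_target)
```

With a softer target spectrum than the generation spectrum, high energy events are down-weighted relative to low energy events, as expected.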

10. Event Selection

As was described in Chapter 8, this analysis follows a cut-and-count scheme in two steps, with the goal of rejecting the majority of background events while maintaining a high acceptance of signal events. Before Steps I and II are implemented, an initial event selection is performed on the detector event stream, in the form of data acquisition triggers and filters. Next, the Step I event selection is applied, based on a small number of selection variables that originate from a set of simple reconstruction algorithms, and Step II employs a set of additional reconstruction algorithms in order to promote the selection of magnetic monopole events over the remaining astrophysical neutrino flux. A more sophisticated selection is required in Step II as many neutrino and monopole events share similar characteristics after the Step I selection scheme. This chapter covers the full event selection scheme for this analysis, from the initial data acquisition triggers to the final classification of an event as a candidate magnetic monopole or a background event.

10.1 Event Triggers and Filters

The trigger and filter algorithms that are centrally applied to the IceCube data stream are described in Chapter 3.2. In this analysis no trigger information is used to discriminate between the event streams, but events that trigger any of the active trigger algorithms are accepted. Next, events that pass the EHE filter (see Chapter 3.2.1) are selected. This filter was designed for use with the EHE analysis and rejects all events that exhibit too low a registered brightness. All events that pass the EHE filter are also subject to the EHE reconstruction scheme, which is described in Chapter 6.2.3. Assuming the magnetic monopole flux given in Chapter 8.3.1, a total of 244 magnetic monopole events with a speed above the Cherenkov threshold are expected to pass the initial trigger conditions during the 8 yr livetime of this analysis. Of these, a total of 178 events are expected to pass the EHE filter criteria. The corresponding numbers for astrophysical neutrino events with an energy above 10^5 GeV are 838 and 371 respectively. Atmospheric muons trigger data acquisition with a rate between 2.5 kHz and 2.9 kHz, and atmospheric neutrinos with an event rate a factor of ∼ 10^−6 lower. The corresponding rates at the EHE filter level are 0.8 Hz and 7.6 × 10^−6 Hz (Table A.1).

10.2 Step I

The Step I event selection, replicated from the EHE analysis (Appendix A), employs a series of simple selection criteria that efficiently reject the majority of atmospheric background events while selecting high energy astrophysical events with a high detected brightness. The cut variables originate from the EHE reconstruction scheme, and include:
• The number of registered photo-electrons, nPE, and its base-10 logarithm, log10(nPE).
• The number of detector channels (DOMs) with registered charge, nCH.
• The fit quality (the reduced χ² parameter) of the EHE track reconstruction, χ²_red,EHE.
• The zenith direction of the EHE reconstructed track, cos(θzen,EHE).
The EHE selection scheme is efficient for any search that targets events with a bright signature in IceCube, including the present search for magnetic monopoles with a speed above the Cherenkov threshold. The scheme also presents an efficient rejection of events with atmospheric origin, with a final expected number of atmospheric events lower than 0.085 over the detector seasons IC40 through IC86-VI. Below, the details of the Step I event selection scheme are described along with its performance on the simulated samples of signal and background events that are used in this analysis. The corresponding performance on the signal and background samples of the EHE analysis can be found in Table A.1.

10.2.1 The Offline EHE Cut

At this cut level, three cut criteria are defined to reject events with too little registered light. A cut criterion is applied to the three variables nPE, nCH and χ²_red,EHE separately, such that an accepted event must satisfy the following:

nPE ≥ 25000
nCH ≥ 100
χ²_red,EHE ≥ 30     (10.1)

The distributions of magnetic monopole and astrophysical neutrino events over these variables are shown in Figure 10.1. These cuts accept 50.6 % and 15.4 % of the assumed magnetic monopole and astrophysical neutrino events respectively.
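The three criteria above can be transcribed directly as a single predicate; the sketch below does exactly that (variable names are illustrative, not the IceCube software's).

```python
# A direct transcription of the three offline EHE cut criteria of
# Equation 10.1. An event must satisfy all three to be accepted.
def passes_offline_ehe_cut(n_pe, n_ch, chi2_red_ehe):
    return (n_pe >= 25000) and (n_ch >= 100) and (chi2_red_ehe >= 30)
```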

10.2.2 The Track Quality Cut

The next cut level is specifically designed to reject a population of atmospheric electron neutrino events. This is done by setting a brightness requirement that

(a) Event distributions over the registered number of photo-electrons, nPE.

(b) Event distributions over the number of channels with registered charge, nCH.

(c) Event distributions over the EHE reconstructed track fit quality, χ²_red,EHE.

Figure 10.1. Event distributions at the Step I offline EHE cut level over the three cut variables, nPE, nCH and χ²_red,EHE. The selection criteria are set to accept events that satisfy the conditions given in Equation 10.1, and are represented by vertical black lines in the figures (with the acceptance region to the right).

(a) Magnetic monopole events.

(b) Astrophysical neutrino events.

Figure 10.2. Event distributions at the Step I track quality cut level over the registered number of photo-electrons, nPE, and the track fit quality of the EHE track reconstruction, χ²_red,EHE. The selection criterion is set to accept events that satisfy the conditions given in Equation 10.2, represented by a black line (with the acceptance region above).

(a) Astrophysical electron neutrino events.

(b) Astrophysical muon neutrino events.

(c) Astrophysical tauon neutrino events.

Figure 10.3. Event distributions at the Step I track quality cut level over the registered number of photo-electrons, nPE, and the track fit quality of the EHE track reconstruction, χ²_red,EHE. The selection criterion is set to accept events that satisfy the conditions given in Equation 10.2, represented by a black line (with the acceptance region above).

depends on the track fit quality of the EHE track reconstruction. An event must satisfy the following relation in order to pass this selection criterion:

log10(nPE) ≥ 4.6                                if χ²_red,EHE < 80
log10(nPE) ≥ 4.6 + 0.015 × (χ²_red,EHE − 80)    if 80 ≤ χ²_red,EHE < 120
log10(nPE) ≥ 5.2                                if 120 ≤ χ²_red,EHE     (10.2)

The distributions of magnetic monopole and astrophysical neutrino events over these variables are shown in Figure 10.2. The corresponding distributions for each neutrino flavor in turn are shown in Figure 10.3. The selection criterion is represented by a black line. An event that falls below this line is rejected, while events falling above the line are accepted to the next level of the analysis. Figure A.1 contains the corresponding distributions for the signal and background samples of the EHE analysis. The total acceptance rate for this cut is 71.3 % for magnetic monopole events and 35.7 % for astrophysical neutrino events. A large fraction of the magnetic monopole events is accepted, as these are typically well fitted by the EHE track reconstruction.
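The piecewise threshold of Equation 10.2 can be sketched as a small helper function (names are illustrative):

```python
# Sketch of the piecewise track quality cut of Equation 10.2: the required
# log10(nPE) is flat below a reduced chi-squared of 80, rises linearly
# between 80 and 120, and is flat again above 120.
def track_quality_threshold(chi2_red_ehe):
    if chi2_red_ehe < 80:
        return 4.6
    if chi2_red_ehe < 120:
        return 4.6 + 0.015 * (chi2_red_ehe - 80)
    return 5.2

def passes_track_quality_cut(log10_npe, chi2_red_ehe):
    return log10_npe >= track_quality_threshold(chi2_red_ehe)
```

Note that the three pieces join continuously: 4.6 + 0.015 × (120 − 80) = 5.2.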

10.2.3 The Muon Bundle Cut

This cut level is designed to reject bright events induced by atmospheric muons, muon bundles or neutrinos. This is achieved by setting a brightness requirement that depends on the reconstructed incoming direction of the event, where a significantly higher brightness is required for a downwards directed event than for an event with an upwards directed trajectory. An event is accepted at this level if it satisfies the following relation:

log10(nPE) ≥ 4.6                                                  if cos(θzen,EHE) < 0.06
log10(nPE) ≥ 4.6 + 1.85 × √(1 − ((cos(θzen,EHE) − 1) / 0.94)²)    if 0.06 ≤ cos(θzen,EHE)     (10.3)

The boundary value of cos(θzen,EHE) = 0.06 corresponds to θzen,EHE = 86.6°, i.e. events with a slightly downwards directed trajectory. The distributions of magnetic monopole and astrophysical neutrino events over these variables are shown in Figure 10.4. The corresponding distributions for each neutrino flavor in turn are shown in Figure 10.5. The selection criterion is represented by a black line. An event that falls below this line is rejected, while events falling above the line are accepted to the next level of the analysis. Figure A.2 contains the corresponding distributions for the signal and background samples of the EHE analysis.
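Equation 10.3 can likewise be sketched as a threshold function. The two pieces join continuously at cos(θzen,EHE) = 0.06, where ((0.06 − 1)/0.94)² = 1 and the square root vanishes; for vertically down-going events (cos(θzen,EHE) = 1) the requirement rises to log10(nPE) ≥ 6.45. Names are illustrative.

```python
# Sketch of the zenith-dependent muon bundle cut of Equation 10.3: a flat
# brightness threshold for up-going and near-horizontal events, rising
# along an elliptical curve for down-going events.
import math

def muon_bundle_threshold(cos_zen):
    if cos_zen < 0.06:
        return 4.6
    # max(0, ...) guards against tiny negative arguments from floating-point
    # rounding near the boundary cos_zen = 0.06.
    return 4.6 + 1.85 * math.sqrt(max(0.0, 1.0 - ((cos_zen - 1.0) / 0.94) ** 2))

def passes_muon_bundle_cut(log10_npe, cos_zen):
    return log10_npe >= muon_bundle_threshold(cos_zen)
```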

The total acceptance rate for this cut is 55.4 % for magnetic monopole events and 49.6 % for astrophysical neutrino events.

10.2.4 The Surface Veto

The final event selection level of the EHE analysis consists of a veto against any event that shows a correlation with an atmospheric particle shower induced by a cosmic ray interaction. This is achieved by examining pulses that were registered in the IceTop array, and correlating them to the reconstructed particle trajectory of the event seen in the main IceCube detector. The surface veto cut is only applied to downwards directed events, here defined by θzen,EHE < 85°. An event is rejected at this level if two or more pulses are registered in IceTop within a time window of [−1000 ns; +1500 ns] around the time coordinate of the closest approach between the reconstructed particle trajectory and IceTop. Several alternative time windows were examined within the context of the EHE analysis, and the selected time window was chosen as the best compromise between background rejection and coincidental veto rate. A study was also performed within the EHE analysis to estimate the coincidental veto rate, where two or more IceTop pulses fall in the given time window of an unrelated EHE event in the main IceCube detector. A coincidental veto rate of 10.6 % was found. Since the simulated Monte Carlo samples that were used to develop the selection scheme of the present analysis do not include IceTop simulation, the surface veto cut was applied as a 10.6 % down-weighting of any event with θzen,EHE < 85° at this level. As the subsample of events with a downwards pointing direction was heavily reduced at the previous cut level, only 0.222 % of the remaining magnetic monopole events and 12.6 % of the remaining astrophysical neutrino events have θzen,EHE < 85°. This results in a fractional event rate reduction of 2.35 × 10^−4 for magnetic monopole events and 1.34 % for astrophysical neutrino events.
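The down-weighting described above amounts to a simple per-event weight factor; a minimal sketch (function name invented for illustration):

```python
# Sketch: since the MC samples contain no IceTop simulation, the surface
# veto is emulated as a 10.6 % down-weighting of every event reconstructed
# as down-going (zenith angle below 85 degrees); all other events are
# unaffected.
COINCIDENTAL_VETO_RATE = 0.106

def surface_veto_weight(theta_zen_deg):
    return 1.0 - COINCIDENTAL_VETO_RATE if theta_zen_deg < 85.0 else 1.0
```

Multiplying each event's weight by this factor reproduces the fractional rate reductions quoted in the text (e.g. 12.6 % × 10.6 % ≈ 1.34 % for neutrinos).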

10.3 Step II

The event selection of Step I was designed to search for an astrophysical flux of EHE neutrinos, and thus has an inherent acceptance for such neutrinos. Therefore, a second analysis step is required in the present analysis, to reject these neutrino events while maintaining a high efficiency for magnetic monopole events. To achieve this, several high level reconstructions are applied, targeted at different characteristic magnetic monopole event signatures. From these, a set of variables is concretized and subsequently used to train a boosted decision tree for the final classification of events.

(a) Magnetic monopole events.

(b) Astrophysical neutrino events.

Figure 10.4. Event distributions at the Step I muon bundle cut level over the registered number of photo-electrons, nPE, and the zenith angle of the EHE reconstructed track, cos(θzen,EHE). The selection criterion is set to accept events that satisfy the conditions given in Equation 10.3, represented by a black line (with the acceptance region above).

(a) Astrophysical electron neutrino events.

(b) Astrophysical muon neutrino events.

(c) Astrophysical tauon neutrino events.

Figure 10.5. Event distributions at the Step I muon bundle cut level over the registered number of photo-electrons, nPE, and the zenith angle of the EHE reconstructed track, cos(θzen,EHE). The selection criterion is set to accept events that satisfy the conditions given in Equation 10.3, represented by a black line (with the acceptance region above).

10.3.1 Additional Reconstruction

The additional reconstructions applied in Step II are performed on a noise-cleaned pulse-map. This is the same pulse-map as was used for the EHE reconstructions, labeled InIcePulsesSRTTW, which was subject to the SRT and TW cleaning methods, as well as the purging of any pulses registered with DOMs on the DeepCore detector strings. These pulses are discarded as the high DOM density of DeepCore otherwise may bias further cleaning and reconstruction algorithms. Due to the high brightness of the selected events at this level, no reconstruction accuracy is lost by this rejection.
The EHE reconstruction of Step I was designed for high speed and comparatively low accuracy. For the application of further advanced reconstructions, a more accurate reconstruction of the particle trajectory must be performed. This was the motivation for developing the BrightestMedian (BM) reconstruction algorithm specifically for the present analysis, and tailoring it to the specific event characteristics of magnetic monopole events with a speed above the Cherenkov threshold. The BM reconstruction yields a parameterized track representing the trajectory of the magnetic monopole, which serves as the default track in the subsequent reconstruction algorithms.
In addition to the BrightestMedian reconstruction, three of the CommonVariables (CV) event characterization packages were applied:
TimeCharacteristics: applied on the InIcePulsesSRTTW pulse-map.
TrackCharacteristics: applied on the InIcePulsesSRTTW pulse-map and seeded with the BrightestMedian track reconstruction.
HitStatistics: also applied on the InIcePulsesSRTTW pulse-map.
In addition to this, the Millipede track reconstruction package was used to reconstruct stochastic energy losses along the BM reconstructed track. The BM reconstructed track along with the InIcePulsesSRTTW pulse-map are used as input track and pulse-map, and the energy loss along the track is discretized with a 10 m interval.
The algorithm returns an array of reconstructed consecutive energy losses along the BM track, with space and time coordinates as well as the fitted magnitude of the energy loss. Note that these energy losses are reconstructed under the assumption that they are produced as particle cascades along a muon track, originating from muon-nucleus interactions in the ice. Magnetic monopoles in the speed range of this analysis do not collide with in-ice nuclei, and mainly produce light directly via the Cherenkov process, so the fitted series of energy losses should be homogeneous along the track.

10.3.2 Step II Variables

Nine variables are input into the BDT for the characterization of an event as signal- or background-like. The majority of the variables are chosen to represent a typical event signature of a magnetic monopole event that differentiates it from a neutrino event. These variables are labeled as signature variables.

The remaining variables are chosen to represent other features of the event that are not directly related to characteristic event signatures of a magnetic monopole event, but that could affect the appearance of the event in the signature variables. These are labeled helper variables.
There are four major magnetic monopole event signatures in the parameter space of this analysis. These are described in Chapter 4.3, and summarized below:
Brightness: The high effective charge of a magnetic monopole yields a high production of direct Cherenkov light.
Non-starting/-stopping: The magnetic monopole enters from outside of the detector and passes through it with completely negligible changes in direction and speed.
Non-stochastic: The magnetic monopole is assumed to have negligible stochastic energy losses along the track, leading to negligible stochastic cascades.
Subluminal speed: The speed is allowed in the range β ∈ [0.750; 0.995], i.e. distinctly separate from the speed of light in vacuum, which is not the case for more commonly detected particles such as electrons, muons and tauons.
The first of the monopole signatures given above, the high brightness of the event, was aggressively selected for in Step I. There, only the brightest monopole and neutrino events were selected, with the effect that the remaining monopole events are no longer significantly brighter than the other remaining events. Nonetheless, the event brightness is included among the Step II variables, as some discriminating power still remains in the differently shaped event distributions of monopoles and neutrinos.
The nine Step II variables are listed below, along with a brief description of each. A discussion of the shape of the signal and background distributions over each variable is also included.
A common feature among several of these variables is that the population of background events is divided into two sub-populations, one that is dominated by muon neutrinos and one by electron neutrinos, with tauon neutrinos populating both. The former of these populations generally takes similar values as the magnetic monopole distribution, whereas the latter is generally more dissimilar to monopoles. As muon neutrinos give rise to track-like events, electron neutrinos give cascade-like events, and tauon neutrinos give both (see Chapter 3.4.1), the two populations will be labeled the track-like and cascade-like populations, respectively, throughout the remainder of this chapter. Note that no formal event-type classification has been carried out, and the track- and cascade-like labels are only used for illustrative purposes. As was described in Chapter 5.4, the selection variables of Step I have been thoroughly validated through comparison between simulated event samples and experimental data. Additionally, the common IceCube software, as well as the monopole light yield, has been validated, which is exemplified in the same

chapter. As the Step II variables are derived from the same underlying pulse populations as the previously validated variables, they are also considered as validated. A direct comparison between MC events and experimental data is not meaningful after the application of Step I due to the low experimental event count (see Chapter 14.1.1).

Step II Variable — Speed
• Signature variable — subluminal speed
Taken as the reconstructed speed of the BrightestMedian track reconstruction, βBM. This variable is used to distinguish primary particles that propagate with the speed of light from particles that propagate slower. See Figure 10.6a for the distributions of magnetic monopole and neutrino events over βBM. Overall, events are registered with reconstructed speeds from β = 0.4 to 1.3. Of course, true speeds above β = 1.0 are disallowed by special relativity, but this does not influence the allowed parameter space of the LineFit reconstruction. The magnetic monopole events are mainly distributed between a reconstructed speed of βBM = 0.8 and 1.1, while the two neutrino event populations — track-like and cascade-like — are distributed around βBM = 1.0 and between βBM = 0.5 and 0.9 respectively. The speed variable is a powerful separator not only between magnetic monopole events and light-speed track-like events, but also between track-like events and cascade-like events.

Step II Variable — Energy Loss RSD
• Signature variable — non-stochastic
The energy loss RSD (relative standard deviation) variable, rsd(EMIL), is based on the track reconstruction of the Millipede package. The Millipede track reconstruction is used to reconstruct the stochastic energy losses along the BM reconstructed track. The energy loss RSD is calculated as the relative standard deviation of all reconstructed stochastic energy losses with origin coordinates inside of the IceCube detector volume. This variable is included to distinguish events that show a high degree of stochastic energy losses along the track from the uniform light production that is expected from a magnetic monopole. See Figure 10.6b for the distributions of magnetic monopole and neutrino events over rsd(EMIL). The distributions cover roughly the same range, from rsd(EMIL) = 1 to 12, and peak at approximately the same value, rsd(EMIL) = 3. Both the magnetic monopole and neutrino populations show a monotonic decrease towards higher values, but the monopoles have a steep decrease, i.e. a sharp peak at 3, while neutrino events have a flatter decrease and therefore a higher ratio of events in the high value tail. The relative standard deviation of a variable is unitless, as it is calculated as the ratio of the standard deviation and the mean value of the variable, which both carry the same unit.
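The quantity itself is straightforward to compute. The sketch below uses a population standard deviation over a list of in-detector reconstructed energy losses, and illustrates why a monopole-like (uniform) loss profile scores lower than a stochastic one; the loss values are invented for illustration.

```python
# Sketch of the energy-loss RSD: the relative standard deviation
# (standard deviation divided by mean) of the reconstructed energy losses.
import math

def energy_loss_rsd(losses):
    """Relative standard deviation (std/mean) of a list of energy losses."""
    mean = sum(losses) / len(losses)
    variance = sum((x - mean) ** 2 for x in losses) / len(losses)
    return math.sqrt(variance) / mean

# A near-uniform (monopole-like) loss profile versus a profile dominated by
# one large stochastic cascade (muon-like); values are illustrative only.
uniform_profile = [1.0, 1.1, 0.9, 1.0, 1.05]
stochastic_profile = [0.1, 0.1, 50.0, 0.1, 0.1]
```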

(a) The Step II speed variable, βBM.

(b) The Step II energy loss RSD variable, rsd(EMIL).

(c) The Step II average pulse distance variable, avg (dDOM,Q)CV-TrackChar.

Figure 10.6. Simulated magnetic monopole and neutrino event distributions over the Step II BDT variables.

Step II Variable — Average Pulse Distance
• Signature variable — non-stochastic
The average pulse distance variable, avg(dDOM,Q)CV-TrackChar, is identical to the AvgDomDistQTotDom parameter of the CommonVariables TrackCharacteristics reconstruction on the InIcePulsesSRTTW pulse-map, using the BM track reconstruction. The purpose of this variable is, similar to the energy loss RSD variable, to separate events that have a smooth light production along the track from events that have stochastic losses. See Figure 10.6c for the distributions of magnetic monopole and neutrino events over avg(dDOM,Q)CV-TrackChar. The two neutrino event populations (track-like and cascade-like) are not well separated over this variable. However, the variable does show discrimination power between neutrino and magnetic monopole events. Neutrino events cluster around a value of avg(dDOM,Q)CV-TrackChar = 60 m, whereas the magnetic monopole events cluster around a lower value of 40 m. Additionally, the shapes of the monopole and neutrino distributions are distinctly different, which can be beneficial for the BDT classification.

Step II Variable — Pulse-Time FWHM
• Signature variable — non-starting/-stopping
The pulse-time FWHM variable, tFWHM,CV-TimeChar, is identical to the TimeLengthFWHM parameter of the CommonVariables TimeCharacteristics reconstruction on the InIcePulsesSRTTW pulse-map. This variable is included to distinguish between through-going track events and events that are (fully or partially) contained in the detector, such as cascade and starting/stopping track events. See Figure 10.7a for the distributions of magnetic monopole and neutrino events over tFWHM,CV-TimeChar. The neutrino population shows two peaks over this variable, one at shorter times, tFWHM,CV-TimeChar = 2 µs, populated mainly by electron and tauon neutrinos, and one at longer times, slightly below 3 µs, populated by all three neutrino flavors. The magnetic monopole population is clustered around slightly longer times, centered around 3 µs. Therefore, the cascade-like population is well distinguishable from the magnetic monopole population, while the track-like population is not.
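The idea behind a pulse-time FWHM can be illustrated with a toy implementation: bin the pulse times and measure the width of the distribution at half of the maximum bin content. This is not the CommonVariables implementation, only a sketch of the concept; a through-going track illuminates the detector over a long time span (large FWHM), while a contained cascade yields a short flash (small FWHM).

```python
# Toy version of a pulse-time FWHM: histogram the pulse times and return
# the total width of the bins whose content is at least half the maximum.
def pulse_time_fwhm(times_ns, bin_width_ns=100.0):
    lo = min(times_ns)
    counts = {}
    for t in times_ns:
        b = int((t - lo) // bin_width_ns)
        counts[b] = counts.get(b, 0) + 1
    half_max = max(counts.values()) / 2.0
    above = [b for b, c in counts.items() if c >= half_max]
    return (max(above) - min(above) + 1) * bin_width_ns
```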

Step II Variable — Length Fill Ratio
• Signature variable — non-starting/-stopping
The length fill ratio variable, LFRCV-TrackChar, is constructed to represent to what extent the geometric length of the track, i.e. the distance between the track entry and exit points in the IceCube volume, corresponds to the length of the light pattern in the detector, represented by the TrackHitsSeparationLength parameter from the CommonVariables TrackCharacteristics package.

(a) The Step II pulse-time FWHM variable, tFWHM,CV-TimeChar.

(b) The Step II length fill ratio variable, LFRCV-TrackChar.

(c) The Step II relative CoG offset variable, RCOCV-HitStats.

Figure 10.7. Simulated magnetic monopole and neutrino event distributions over the Step II BDT variables.

The LFRCV-TrackChar is thus calculated through:

LFRCV-TrackChar = [TrackHitsSeparationLength] / [GeometricLength]     (10.4)

The purpose of this variable is to distinguish fully through-going track events from cascade events and starting/stopping track events. See Figure 10.7b for the distributions of magnetic monopole and neutrino events over LFRCV-TrackChar. The magnetic monopole and track-like neutrino event populations both peak around a value of LFRCV-TrackChar = 0.7, while the cascade-like neutrino event population peaks around a value of 0.3. The length fill ratio is unitless, as it is calculated as the ratio of two variables with the same dimension (length).

Step II Variable — Relative CoG Offset
• Signature variable — non-starting/-stopping
The relative CoG offset variable, RCOCV-HitStats, is constructed to identify events where the center of gravity, CoG, of pulses is separated from the mid-point of the reconstructed track. This allows the separation of tracks that exhibit a uniform light production along the full track, i.e. with a small distance between the CoG and the track mid-point, and events that have a significant clustering of pulses separate from the track mid-point. The latter is a common feature among cascade events, and is also seen among muon tracks with high stochastic losses. The variable is calculated using the centrality point and geometrical length of the BrightestMedian reconstructed track, along with the COG variable of the CommonVariables HitStatistics package. The relative CoG offset is calculated as the distance between the centrality point of the BrightestMedian reconstructed track and the HitStatistics COG position, divided by the geometric length of the track. See Figure 10.7c for the distributions of magnetic monopole and neutrino events over RCOCV-HitStats. The magnetic monopole event population peaks close to 0 and has a steep decrease towards higher values of the variable. The neutrino event populations are more flatly distributed and also extend to higher values of relative CoG offset. The relative CoG offset is unitless, as it is calculated as the ratio of two variables with the same dimension (length).
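The calculation described above reduces to a distance divided by a length; a minimal sketch (function and argument names are illustrative, not the IceCube API):

```python
# Sketch of the relative CoG offset: the distance between the charge
# center-of-gravity and the mid-point of the reconstructed track, divided
# by the geometric track length. Coordinates in meters, for illustration.
import math

def relative_cog_offset(track_midpoint, cog, geometric_length_m):
    return math.dist(track_midpoint, cog) / geometric_length_m
```

A uniformly emitting through-going track has its CoG near the track mid-point (small offset), while a large cascade far from the mid-point pulls the CoG away (large offset).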

Step II Variable — Log-Brightness
• Helper variable
The log-brightness, log10(nPE), is the base-10 logarithm of the number of photo-electrons, nPE, which is given by the initial EHE reconstructions. This variable is included as a helper variable. It shows significant correlation with other variables, e.g. the average pulse distance variable, where brighter events generally have higher values.

(a) The Step II log-brightness variable, log10 (nPE).

(b) The Step II cos-zenith variable, cos(θzen,BM).

(c) The Step II centrality variable, dC,BM.

Figure 10.8. Simulated magnetic monopole and neutrino event distributions over the Step II BDT variables.

See Figure 10.8a for the distributions of magnetic monopole and neutrino events over log10 (nPE). The magnetic monopole distribution is peaked around a value slightly below log10 (nPE) = 5.0, and decreases sharply above this value. The neutrino event distributions, on the other hand, extend to values above 6.0 — significantly higher than the magnetic monopole population.

Step II Variable — Cos-Zenith
• Helper variable
This variable, cos(θzen,BM), is the cosine of the zenith direction of the BrightestMedian reconstructed track, and is included as a helper variable. It shows significant correlations with other variables, e.g. the pulse-time FWHM variable, where diagonally directed events generally have higher values, as their in-detector path is longer than that of vertically or horizontally directed events. See Figure 10.8b for the distributions of magnetic monopole and neutrino events over cos(θzen,BM). Magnetic monopole events are only expected in the up-going region of parameter space, while neutrino events are expected in the down-going region as well. This is an effect of the muon bundle cut in Step I, where only the brightest events are allowed to be downwards directed. Additionally, high energy neutrino events interact substantially in the Earth, and are therefore expected to have a lower upwards directed flux.

Step II Variable — Centrality
• Helper variable
The centrality of a track represents its closest-approach distance to the center of the IceCube detector volume. The centrality of the event, dC,BM, is calculated on the BrightestMedian reconstructed track, and is included as a helper variable. It shows significant correlations with other variables, e.g. the pulse-time FWHM variable, where more central events have a longer in-detector path than less central events. See Figure 10.8c for the distributions of magnetic monopole and neutrino events over dC,BM. All included distributions are highly similar, but separative power is achieved in combination with other variables.

10.3.3 BDT Implementation and Performance
Before training the boosted decision tree (BDT), the simulated signal and background event samples were randomly divided into training and validation samples with a 1:3 ratio. This ratio was chosen in order to preserve the majority of the simulated events, as the training sample must be discarded from further event selection development to avoid bias. The BDT was trained with the designated training samples, and possible overtraining was evaluated by applying a Kolmogorov-Smirnov test (KS-test) [72] to the training and validation samples. The null hypothesis that is

Figure 10.9. The distributions of astrophysical neutrino and magnetic monopole events over the score of the Step II BDT.

evaluated with the KS-test is that the training and validation samples are drawn from the same underlying distribution, for which the test yielded p-values of 38 % and 84 % for signal and background respectively. This indicates that the null hypothesis could only be rejected at a significance below 1σ for both signal and background. Therefore, the BDT is labeled as not overtrained.

The event scoring scheme of the trained and validated BDT was then applied to the non-training event samples, and the resulting event distributions over the BDT score are shown in Figure 10.9 for each particle flavor. Analogous to several of the nine BDT classification variables, the neutrino events form two clusters over the BDT score variable — one dominated by electron neutrino events, labeled as cascade-like and centered around a BDT score value of −0.6, and one dominated by muon neutrino events, labeled as track-like and centered around a score of −0.05. The magnetic monopole event distribution is centered around a BDT score of +0.2.

Figure 10.10 contains the correlation matrices for the nine BDT variables and the BDT score, for the events of the signal and background samples separately as they enter the BDT. Several variables show some degree of correlation (or anticorrelation) with each other. This is expected, as they may be constructed to represent the same monopole event signature. The expected correlation is also demonstrated by the common separation of the background into two distributions among several of the variables — the track- and cascade-like populations.

(a) Signal event sample. (b) Background event sample.

Figure 10.10. The correlation matrices of the signal and background event samples with the nine BDT variables and the BDT score. The diagonal values have been removed for viewing purposes. Note that the top row and leftmost column display the correlation between each variable and the BDT score, of which the latter is not a BDT variable.

Figure 10.11. The correlation for the nine BDT variables and the BDT score for the full Step I MC sample (signal combined with background). The diagonal values have been removed for viewing purposes. Note that the top row and leftmost column display the correlation between each variable and the BDT score, of which the latter is not a BDT variable.

Figure 10.10 implies that the signal and background samples are mainly characterized by different variables. The signal sample shows a strong correlation between the speed and log-brightness variables, which is expected from the Frank-Tamm formula, Equation 4.3. Additionally, a strong anticorrelation is found between the centrality and pulse-time FWHM variables, which is expected from the geometry of a through-going track-like event. The background sample shows a low internal correlation, with the most significant correlation between the pulse-time FWHM and length-fill-ratio variables. This corresponds to the division of the background sample into its two sub-populations.

All of the variables, apart from the energy loss RSD variable, show noticeable correlation with the BDT score for either signal or background or both. For both the signal and background samples, the BDT score shows a relatively high correlation with the pulse-time FWHM. Additionally, considering the signal sample, the BDT score is mainly correlated with the reconstructed speed, while for the background sample the BDT score is mainly correlated with the length fill ratio and the log-brightness.

It is also interesting to study the correlation between the variables and the BDT score for the full sample of events, i.e. the combination of signal and background. This is shown in Figure 10.11.
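The type of matrix shown in Figures 10.10 and 10.11 is an ordinary Pearson correlation matrix over the event variables. As a minimal illustration (the variables, linear relations and all numbers below are hypothetical toys, not the analysis data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-ins for two BDT variables and the BDT score; the linear
# relations are chosen only to produce a visible correlation structure.
speed = rng.normal(1.0, 0.05, 10_000)                              # "beta_BM"
log_npe = 4.5 + 8.0 * (speed - 1.0) + rng.normal(0, 0.1, 10_000)   # "log10(nPE)"
score = 0.2 + 2.0 * (speed - 1.0) + rng.normal(0, 0.05, 10_000)    # "BDT score"

# Pearson correlation matrix; rows/columns follow the variable order above.
corr = np.corrcoef(np.vstack([speed, log_npe, score]))
```

Computing the matrix separately for the signal sample, the background sample, and their combination then yields the three matrices discussed in the text.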
The correlation of a variable with the BDT score indicates the importance of that variable in the event classification scheme. It appears that all variables, including the energy loss RSD variable, have some correlation with the BDT score, except for the centrality variable. This implies that the energy loss RSD has some importance for the classification scheme between signal and background, despite showing weak correlation with the BDT score for the signal and background samples separately. The centrality variable shows weak correlation with almost all variables, both for the signal and background samples, with the exception of the pulse-time FWHM for the signal sample, with which it is highly anticorrelated. The fact that it has such a weak correlation with the BDT score indicates that it could safely be removed from the scoring scheme without significantly reducing selection efficiency. However, allowing it to remain does not imply any reduction in efficiency.

Generally, there is weak correlation between the variables, with the strongest correlations found between the pulse-time FWHM and the length-fill-ratio, as well as between the pulse-time FWHM and the energy loss RSD variable. If two variables show a strong correlation (or anticorrelation) with each other, they provide equivalent information to the BDT, which implies that it would be redundant to include both in the classification scheme. This redundancy does not, however, imply a reduced algorithm efficiency or an increased algorithm bias.

Additionally, a strong (anti-)correlation between a variable and the BDT score indicates that this variable has high importance in the BDT scoring scheme, but the opposite — that a weak correlation indicates a low importance of the variable — is not strictly true. This is exemplified by the reconstructed speed variable, which was shown to have good separative power between the monopole population and the two neutrino populations (Figure 10.6a). It does, however, only show weak correlation with the BDT score. The explanation lies in the bimodal shape of the background sample, and the fact that the monopole population lies between the two peaks. The BDT algorithm makes full use of this separation, but it also results in a weak correlation to the BDT score.
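The overtraining check described at the start of this subsection can be sketched with SciPy's two-sample KS test. This is a toy illustration: the score samples below are synthetic and drawn from the same hypothetical parent shape, with the 1:3 training-to-validation ratio of the analysis.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Hypothetical BDT-score samples for the training (1 part) and
# validation (3 parts) sets, drawn here from a common parent.
train_scores = rng.normal(0.2, 0.1, 2_000)
valid_scores = rng.normal(0.2, 0.1, 6_000)

res = ks_2samp(train_scores, valid_scores)
# A large res.pvalue means the same-parent-distribution hypothesis is
# not rejected, i.e. no indication of overtraining.
```

In the analysis this comparison is performed once for the signal samples and once for the background samples, yielding the quoted p-values of 38 % and 84 %.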

10.3.4 Placing the Cut Criterion
The BDT score exhibits powerful discrimination between signal and background events. In order to use this in the event selection scheme, however, a cut criterion on the BDT score spectrum must be defined. The model rejection and discovery potentials, MRP and MDP respectively, are two measures of the performance of an event selection scheme (see Chapter 7.3). The optimal cut criteria according to the MRP or MDP can thus be found by scanning all possible cut values, along with the resulting nSG and nBG, to find the cut value that yields the minimal MRP or MDP, respectively. The expected nSG and nBG that result from each possible cut value can be seen in Figure 10.12, assuming the model fluxes, and a livetime of 6 yr.
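The scan can be sketched as follows. This is a schematic, not the analysis code: a simple Neyman-style average Poisson upper limit stands in for the Feldman-Cousins sensitivity µ̄90 of Equation 7.10, and the event scores and weights are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import poisson

def upper_limit(n_obs, n_bg, cl=0.9):
    """Poisson upper limit on the signal mean (simplified Neyman stand-in
    for the Feldman-Cousins construction used in the analysis)."""
    f = lambda mu: poisson.cdf(n_obs, mu + n_bg) - (1.0 - cl)
    if f(0.0) <= 0.0:
        return 0.0
    hi = 10.0
    while f(hi) > 0.0:
        hi *= 2.0
    return brentq(f, 0.0, hi)

def sensitivity(n_bg, cl=0.9):
    """Average upper limit over background-only Poisson outcomes."""
    n_max = int(poisson.ppf(0.9999, n_bg)) + 1
    return sum(poisson.pmf(n, n_bg) * upper_limit(n, n_bg, cl)
               for n in range(n_max + 1))

def mrp_scan(score_sg, w_sg, score_bg, w_bg, cuts):
    """MRP(cut) = sensitivity(n_BG(cut)) / n_SG(cut), cf. Equation 7.10."""
    return np.array([sensitivity(w_bg[score_bg > c].sum())
                     / w_sg[score_sg > c].sum() for c in cuts])
```

The preferred cut is then simply `cuts[np.argmin(mrp)]`; the MDP scan is analogous, with µLDS in place of the sensitivity in the numerator.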

Figure 10.12. The total expected numbers of signal and background events assuming the model fluxes, nSG and nBG respectively, as functions of the cut criterion on the BDT score.

The model rejection potential represents the sensitivity that is achieved per expected signal event, and is calculated with Equation 7.10 as the ratio between the F&C sensitivity of the analysis, µ̄90, and the number of expected signal events, nSG. Figure 10.13 contains µ̄90 and nSG as functions of the BDT score cut, and the resulting MRP is shown in Figure 10.14. For lower-valued BDT score cuts, the MRP is mainly affected by the decreasing sensitivity, showing the bimodal structure of the background shape (one population dominated by electron neutrinos, one by muon neutrinos), and for higher BDT score cuts by the decreasing signal rate. It is minimized at a BDT score cut value of 0.047, with a rejection potential value of 0.11. The signal and background acceptances at this cut value are 93.5 % and 2.65 % respectively.

The model discovery potential is calculated with Equation 7.11 as the ratio between the least detectable signal, µLDS, i.e. the lowest number of observed signal events that is required to claim a discovery, and the expected number of signal events, nSG. Thus, the minimum of the MDP as a function of possible cut values represents the cut value that yields the highest expected number of signal events compared to the number of signal events required to claim a discovery. Determining the value of µLDS (Equation 7.13) requires knowledge of the number of expected background events, nBG, as well as the total number of observed events that is required to claim that the observation is not compatible exclusively with the background expectation, ncrit, the latter calculated using Equation 7.12. The least detectable signal is shown in Figure 10.15 as a function of the BDT score cut, along with the expected number of signal events.

Figure 10.13. The F&C sensitivity, µ̄90, and the expected number of signal events, nSG, assuming the model fluxes over 6 yr of data, as functions of the Step II BDT score cut value.

Figure 10.14. The model rejection potential for 6 yr of data as a function of the Step II BDT score cut value. The preferred BDT score cut value of 0.047 is marked with a dark red line.

The saw-tooth shape of µLDS originates in the discrete nature of ncrit, so in order to mimic a continuous curve, a cubic spline interpolation has been computed using the saw-tooth tips as spline knots. The spline and spline knots are also shown in Figure 10.15. The MDP is shown as a function of the BDT score cut in Figure 10.16, where the MDP as calculated with the µLDS spline and the accompanying spline knots are also included.

The model discovery potential is minimized at a plateau between BDT score cut values of 0.101 and 0.309 (marked in Figures 10.16 and 10.15), where the discovery potential takes a value around 0.17, with less than a 3 % difference over the plateau. The signal and background acceptances range between 82.7 % and 16.4 % (signal) and 0.857 % and 7.13 × 10−8 (background) over the plateau, monotonically decreasing with increasing cut value. The rejection potential takes values between 0.12 and 0.59 over the width of the plateau, increasing with higher cut value.

The MDP plateau arises as the slopes of nSG and the µLDS spline are proportional in this region, as can be seen in Figure 10.15. Several different methods could be employed to select a preferred cut value within the plateau range, e.g. fitting a function (such as a parabola) to a suitable set of MDP values around the minimum range. Such a method would yield a minimum at one definitive BDT score cut value, but this minimum would be without physical meaning. The reason for this is that the MDP does not come from a probability distribution with a particular shape that should be mimicked; it is the combination of two separate distributions that both originate directly from simulated events. As no cut value is preferred solely based on the MDP, an additional criterion is imposed to select a cut value over the MDP plateau. This is taken as the cut value with the best model rejection potential.
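The spline construction described above can be sketched as follows. The ncrit and µLDS expressions below are simplified toy placeholders (not Equations 7.12 and 7.13); only the knot selection and spline fit mirror the procedure in the text.

```python
import numpy as np
from scipy.interpolate import CubicSpline

cuts = np.linspace(-0.5, 0.5, 501)
n_bg = 10.0 * np.exp(-6.0 * (cuts + 0.5))      # toy background expectation
n_crit = np.ceil(n_bg + 3.0 * np.sqrt(n_bg))   # toy n_crit (integer steps)
mu_lds = n_crit - n_bg                          # toy saw-tooth for mu_LDS

# The saw-tooth tips sit where n_crit has just stepped down, i.e. at the
# local minimum of each tooth; a cubic spline through the tips then
# mimics a continuous curve.
tip = np.r_[True, np.diff(n_crit) < 0]
spline = CubicSpline(cuts[tip], mu_lds[tip])
mu_lds_smooth = spline(cuts)
```

In the analysis, the knots are of course taken from the actual µLDS curve of Figure 10.15 rather than from this toy model.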
The lowest MRP is found at the low-score edge of the plateau, at a cut value of 0.101, where the signal and background acceptances are maximized. This cut value is therefore taken as the cut value preferred by the MDP.

The signal and background acceptances of the cut values preferred by the MRP and MDP minimization procedures are shown in Table 10.1, along with the expected numbers of signal and background events for the 8 yr analysis livetime, as well as the corresponding model rejection and discovery potential values. The MRP preferred cut has a discovery potential of 0.23 (∼ 34 % higher than the minimum), and a signal and background acceptance of 94 % and 2.6 %, respectively. The MDP preferred cut has a rejection potential of 0.12 (∼ 8.1 % higher than the minimum), and a signal and background acceptance of 83 % and 0.86 %, respectively. Both cut options predict a number of background events significantly lower than one per full analysis livetime. The two cut options offer similar rejection potentials as well as similar discovery potentials, and the signal and background acceptances are also similar, with the MDP preferred cut being slightly more aggressive.

Figure 10.15. The least detectable signal, µLDS, and the expected number of signal events, nSG, assuming the model fluxes over 6 yr of data, as functions of the Step II BDT score cut value. Included are also the µLDS spline and accompanying spline knots, as well as the range of the MDP plateau (blue shaded region).

Figure 10.16. The model discovery potential for 6 yr of data as a function of the Step II BDT score cut value. The MDP minimum plateau, between scores of 0.101 and 0.309, is marked with a blue shaded region, and the MDP spline and spline knots are also included.

Table 10.1. The minima of the model rejection and discovery potentials over the BDT score cut range, and their accompanying preferred BDT cut values. Listed are also the corresponding expected numbers of signal and background events assuming the model fluxes over 8 yr analysis livetime, as well as the signal and background acceptances relative to the Step I expectations.

        BDT score cut   MRP     MDP     nSG, 8 yr   nBG, 8 yr   Signal acc. [%]   Background acc. [%]
  MRP   0.047           0.110   0.231   33.2        0.265       93.5              2.65
  MDP   0.101           0.119   0.172   29.4        0.0856      82.7              0.857

The final result of this analysis — either a flux upper limit or a magnetic monopole discovery — will be shown as a function of the speed of the particle, β. Therefore, it is important to understand the signal efficiency as a function of the particle speed for each of the suggested cut values before making a final decision. The expected number of signal events is shown as a function of the true particle speed in Figure 10.17 for the preferred MRP and MDP cut values. Both cut options mainly reject signal in the high-β region, and higher BDT score cut values yield a higher reduction over the full speed range. The MRP preferred cut shows negligible signal rejection for speeds below 0.95, and the rejection in the highest speed region is ≲ 20 %. Correspondingly, the MDP preferred cut shows negligible rejection below 0.90, and the rejection in the highest speed region is ≳ 40 %. The signal rejection is concentrated in the high speed region because there is a strong anticorrelation between the BDT score and the true monopole speed, βMC. This is indicated in Figure 10.18, where the expected number of signal events is shown as a function of the BDT score as well as the particle speed. A clear anticorrelation can be seen, indicating that higher speed monopoles are awarded lower BDT scores, presumably since high speed monopoles appear more muon-like in the eyes of the BDT than those with low speed. Similarly, a strong anticorrelation is found between the BDT score and the reconstructed particle speed, βBM, for magnetic monopole events (see Figure 10.10a). In the context of this analysis, it is important to produce a competitive result over the full included β range. Therefore, the MDP preferred cut value is rejected in favor of the MRP preferred cut. Thus, the Step II signal and background acceptances become 93.5 % and 2.65 % respectively. The per-flavor neutrino acceptance is 0.0697 %, 6.37 % and 1.11 % for νe, νµ and ντ respectively.
The final-level expected numbers of magnetic monopole events (assuming the model flux, Chapter 8.3.1) and astrophysical neutrino events are thus 33.2 and 0.265 per analysis livetime, respectively. The expected neutrino flavor ratio at final level is 1.1 % : 91 % : 7.9 % for [νe : νµ : ντ].

Figure 10.17. The signal event rate as a function of true particle speed for the alternative BDT score cut values.

Figure 10.18. The magnetic monopole expected event rate as a function of the true particle speed and the Step II BDT score.

Figure 10.19. The model rejection potential for 6 yr of collected data and for the full 8 yr data sample as functions of the Step II BDT score cut value. The preferred BDT score cut value of 0.047 is marked with a dark red line.

Finally, as was described in Chapter 8.2.1, the event selection was developed assuming a livetime of 6 yr, while the results are produced considering a total of 8 yr of collected data. The MRP calculated using the full data sample is shown together with the 6 yr MRP in Figure 10.19. The full data MRP is minimized at the same BDT score cut value as the 6 yr MRP, 0.047, where it has a rejection potential value of 0.080.

10.4 Expected Numbers of Events
This section summarizes the outcome of the event selection scheme described above. The expected signal and background event rates over the full analysis livetime after each cut level are listed in Table 10.2. The signal and background acceptances relative to the trigger rate after each cut level are listed in Table 10.3. For the corresponding event rates of atmospheric muons and neutrinos, see Table A.1. Note that the 2.7 kHz trigger rate of atmospheric muon events corresponds to 6.3 × 10¹¹ triggered events over the 8 yr livetime of the present analysis. Similarly, the EHE filter acceptance rate of atmospheric muons, 0.8 Hz, corresponds to 1.9 × 10⁸ events over 8 yr.

Table 10.2. The expected number of events per analysis livetime (8 yr) at each level of the analysis, assuming the model signal and background fluxes.

  Analysis level            nSG     nBG     nνe       nνµ     nντ
  Trigger                   244     838     146       548     144
  EHE filter                178     371     90.1      202     78.2
  Step I:
    Offline EHE cut         89.9    57.2    23.4      20.0    13.8
    Track quality cut       64.1    20.4    6.77      9.91    3.72
    Muon bundle cut         35.5    10.1    4.39      3.83    1.90
    Surface veto            35.5    9.99    4.33      3.78    1.88
  Step II                   33.2    0.265   0.00302   0.241   0.0209

Table 10.3. The expected number of events, n, per analysis livetime after each cut level, relative to the number of expected triggering events, nTrigger, assuming the model signal and background fluxes.

  Analysis level            nSG/nSG,Trigger [%]   nBG/nBG,Trigger [%]
  Trigger                   100.0                 100.0
  EHE filter                72.9                  44.2
  Step I:
    Offline EHE cut         36.9                  6.83
    Track quality cut       26.3                  2.43
    Muon bundle cut         14.6                  1.21
    Surface veto            14.6                  1.19
  Step II                   13.6                  0.0316

The Step I and Step II acceptances of triggered magnetic monopole events are 14.6 % and 13.6 % respectively. The corresponding acceptances for astrophysical neutrino events are 1.19 % and 0.0316 %. Assuming the model fluxes, this indicates a final expected number of signal events of 35.5 and 33.2 after the Step I and Step II event selections, respectively. Correspondingly, astrophysical neutrino events are expected at levels of 9.99 and 0.265 after Step I and Step II, respectively. This can be compared with the 0.085 expected atmospheric events over the 9 yr livetime of the EHE analysis (after the EHE event selection, i.e. Step I). This is more than one order of magnitude lower than the Step I expected astrophysical background, and a factor of ∼ 3 lower than after Step II.

Figure 10.20. The (bin-wise) expected signal event rate per analysis livetime, assuming the model monopole flux. This is given as a function of the true particle speed, βMC, and for the generation, trigger, EHE filter, Step I and Step II analysis levels.

As already mentioned, it is important to understand the acceptance of magnetic monopoles over the relevant range of the true particle speed. The signal acceptance as a function of the particle speed is shown in Figure 10.20. The trigger and EHE filter have an approximately uniform acceptance of magnetic monopole events over this β range. The Step I selection scheme preserves this uniform acceptance over most of the range, barring the low-β region. A reduced acceptance is expected in this region, as the main selection variable of Step I, the event brightness, is highly correlated with the speed of the particle. The opposite trend is seen in the Step II selection, where the acceptance is only reduced for high speed monopoles. This stems from the strong anticorrelation between β and BDT score, where a high speed yields a low BDT score (see Figure 10.18). This can be attributed to the fact that higher speed monopoles appear more muon-like in the BDT classification scheme.
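As a sanity check, the acceptances of Table 10.3 follow directly from the event counts of Table 10.2 (small differences in the last digit are expected, since the tabulated counts are themselves rounded):

```python
# Expected event counts from Table 10.2 (signal and total background).
n_sg = {"Trigger": 244, "EHE filter": 178, "Offline EHE cut": 89.9,
        "Track quality cut": 64.1, "Muon bundle cut": 35.5,
        "Surface veto": 35.5, "Step II": 33.2}
n_bg = {"Trigger": 838, "EHE filter": 371, "Offline EHE cut": 57.2,
        "Track quality cut": 20.4, "Muon bundle cut": 10.1,
        "Surface veto": 9.99, "Step II": 0.265}

# Acceptance relative to the trigger level, in percent (cf. Table 10.3).
acc_sg = {lvl: 100.0 * n / n_sg["Trigger"] for lvl, n in n_sg.items()}
acc_bg = {lvl: 100.0 * n / n_bg["Trigger"] for lvl, n in n_bg.items()}
```

For example, the Step II signal acceptance evaluates to 33.2/244 ≈ 13.6 %, and the Step II background acceptance to 0.265/838 ≈ 0.0316 %, as quoted in the text.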

11. Sensitivity

Chapter 10 covered the event selection scheme that was used to search for magnetic monopoles above the Cherenkov threshold within the context of this analysis. It concluded with a listing of the expected numbers of signal and background events over the full analysis livetime, under the assumptions of the model magnetic monopole and astrophysical neutrino fluxes (Chapter 8.3). This, however, does not constitute the final result of the analysis, which will either be the discovery of a magnetic monopole or an upper limit on the cosmic magnetic monopole flux. This chapter covers the upper limit that is expected to be set given the expected final-level event rates, barring the extraordinary discovery of a magnetic monopole. The average expected upper limit is known as the sensitivity of the experiment.

11.1 Effective Area
The signal efficiency of an analysis scheme can be quantified in terms of an effective area, representing the cross-sectional area of an ideal detector recording with 100 % signal efficiency. This is calculated as the size of the generation area times the detected fraction of generated events (Equation 7.1). The effective area at the trigger, EHE filter, Step I and Step II levels of this analysis can be found in Table 11.1, averaged over the β range of the analysis. The effective area is comparable in order of magnitude to the geometrical cross-sectional area of the detector, commonly quoted as 1 km2, which is due to the high detectability of magnetic monopoles above the Cherenkov threshold. The trigger level effective area of 2.39 km2 is significantly larger than this, which is mainly attributed to the fact that bright magnetic monopoles can cause a

Table 11.1. The average effective area at the trigger, EHE filter, Step I and Step II levels of the analysis.

  Analysis level   Effective area [km2]
  Trigger          2.39
  EHE Filter       1.74
  Step I           0.348
  Step II          0.326

Figure 11.1. The effective area as a function of true particle speed at the trigger, EHE filter, Step I and Step II levels of the analysis.

trigger in IceCube even if they pass far outside of the instrumented detector volume. Since the final results of the analysis will be presented as a function of the true particle speed, it is important to understand the effective area over the full range of considered speeds. The effective area at the trigger, EHE filter, Step I and Step II levels of the analysis is shown as a function of the true particle speed in Figure 11.1. Here, the effective area is calculated using a kernel density estimator (KDE) [64; 77; 78] with the events that remain at each level of the analysis. In a KDE, each event is described by a kernel with a specific shape in the chosen variable, and the KDE curve is given as the sum of all individual single-event kernels. Thus, a KDE is used to smoothen statistical fluctuations between adjacent histogram bins. For the present effective area calculation, a Gaussian kernel shape is used over the true particle speed, with the kernel width chosen such that the statistical fluctuations observed in the total event count of Figure 10.20 are smoothened, but the overall shape of the curve is maintained. The effective area is truncated to the speed range [0.780; 0.995], which corresponds to the range containing final-level MC monopole events. As is shown in Figure 11.1, the final-level effective area increases with the true particle speed from βMC = 0.78 to approximately 0.87, where it plateaus between 0.45 km2 and 0.50 km2.
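A speed-dependent effective area of this kind can be sketched with SciPy's Gaussian KDE. This is a schematic with synthetic numbers: the generated speeds, the flat 15 % "selection", the generation area, and the bandwidth are all made up for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
beta_gen = rng.uniform(0.78, 0.995, 100_000)      # generated monopole speeds (toy)
survived = rng.random(beta_gen.size) < 0.15       # stand-in for the event selection
a_gen_km2 = 2.5                                   # assumed generation area [km^2]

# Smooth the generated and surviving speed distributions with Gaussian
# kernels, then form the surviving fraction as a ratio of densities.
kde_gen = gaussian_kde(beta_gen, bw_method=0.05)
kde_sel = gaussian_kde(beta_gen[survived], bw_method=0.05)

beta = np.linspace(0.78, 0.995, 200)
frac = survived.mean() * kde_sel(beta) / kde_gen(beta)
a_eff = a_gen_km2 * frac                          # effective area vs. speed [km^2]
```

Taking the density ratio rather than smoothing the surviving counts alone makes the kernel-induced boundary bias largely cancel between numerator and denominator.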

11.2 Sensitivity
The sensitivity is calculated using Equation 7.4. At final analysis level, the number of expected background events is nBG = 0.265 over the full 8 yr livetime, 2715 d. This yields a F&C sensitivity of µ̄90(nBG) = 2.67. The solid angle covered by the analysis is 4π, and the average effective area is Aeff = 0.326 km2. The 90 % CL sensitivity to the magnetic monopole flux thus becomes 2.78 × 10−19 cm−2 s−1 sr−1 over the full monopole speed range. The sensitivity is shown as a function of the true particle speed in Figure 11.2. The shape of the sensitivity is the inverse of the effective area shape, as the effective area is the only parameter in Equation 7.4 that varies with the particle speed. The sensitivity can thus be described as rapidly decreasing up to a speed of approximately βMC = 0.87, whereafter it plateaus around 2 × 10−19 cm−2 s−1 sr−1. Additionally, the sensitivity of this analysis is shown in Figure 11.3 along with the current best limits on the magnetic monopole flux. The results of the present analysis are thus expected to be competitive for speeds above β = 0.79, and to yield an improvement of around an order of magnitude over the previous results for speeds above 0.82.
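The quoted flux sensitivity can be reproduced directly from the numbers above (a simple unit-conversion check; the symbols follow Equation 7.4):

```python
import math

mu90 = 2.67                       # F&C sensitivity for n_BG = 0.265
a_eff_cm2 = 0.326 * (1e5) ** 2    # average effective area, 0.326 km^2 in cm^2
omega_sr = 4 * math.pi            # full solid angle covered by the analysis
t_s = 2715 * 86400                # 8 yr livetime (2715 d) in seconds

# Equation 7.4: flux sensitivity = mu90 / (A_eff * Omega * T)
phi90 = mu90 / (a_eff_cm2 * omega_sr * t_s)
print(f"{phi90:.2e} cm^-2 s^-1 sr^-1")   # ~2.78e-19
```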

Figure 11.2. The 90 % CL sensitivity of this analysis to the cosmic flux of magnetic monopoles as a function of true particle speed.

Figure 11.3. The 90 % CL sensitivity of this analysis to the cosmic flux of mag- netic monopoles as a function of true particle speed (green dashed curve, denoted by IceCube-86 8 yr, sens.). Included are also the current best upper limits on the magnetic monopole flux (see Chapter 2.6 for further description and references).

12. Uncertainty on the Magnetic Monopole Efficiency

Any scientific result carries some degree of uncertainty, and the size and nature of this uncertainty is determined by several aspects of the underlying analysis. This chapter covers the statistical and systematic uncertainties that contribute to the final uncertainty on the acceptance of signal events. In addition to this, there will be an uncertainty on the flux of background events, which is treated in the next chapter.

The statistical uncertainty in an experiment is related to the fact that we do not have infinite data. This inherently renders the result an approximation of the true, unknown, value. In addition to the statistical uncertainty, there is an uncertainty introduced by systematic effects that are due to limited intrinsic accuracies in the description of the detector performance, or in the physical models used to describe the underlying physical processes. A physics or detector model with limited accuracy can introduce a systematic shift of the final result, such that it is no longer centered around the true value of the measured parameter. The magnitude of this effect can be estimated by simulating an event sample with shifted model parameters and applying the analysis selection criteria to the new simulated events. The appropriate parameter shift is determined by the estimated uncertainties of the parameters themselves, and is commonly taken as the ±1σ interval.

Such a study has been conducted within the scope of this work, where three properties of the IceCube detector and detector medium have been varied. Parameters of the signal modeling, such as the magnetic monopole mass and charge, are not varied here. This is because they are considered model choices that define the fiducial region of the analysis, as opposed to measured parameters with testable uncertainty regions. The parameters that define the fiducial region are described in Chapter 8.3.1.
In the present chapter, the final signal efficiency uncertainty is presented in the form of the relative uncertainty corresponding to the one standard deviation range. Due to the definitions of the effective area and the number of expected signal events (Equations 7.1 and 7.3, respectively), the relative uncertainty on the signal efficiency, the effective area and the number of expected signal events will be identical.

12.1 Systematic Variation of Monte Carlo Settings
The main sources of systematic uncertainty in the present analysis are related to the propagation and detection of light in the detector. The optical properties of the ice, and the in-situ detection efficiency of the optical modules, are areas with significant uncertainties, due to the difficulty of measuring them. These properties are all described by the ice-model of the IceCube simulation software (Chapter 5.3). The ice-model determines each aspect of a photon's life in IceCube, from the production by a charged particle, via its propagation through the bulk ice, to the triggering of a photo-electron in a PMT or to being absorbed by the surrounding material. Therefore, in order to estimate the systematic uncertainties introduced by the ice-model in the event simulation, three ice-model parameter categories have been studied, one of which considers the light propagation through the bulk ice, and two that consider different aspects of the DOM photon detection efficiency:

Scattering and absorption: The characteristic scattering and absorption lengths of optical light in the ice.
DOM efficiency: The probability that a photon hitting the DOM surface is detected by the DOM.
DOM angular sensitivity: The relative sensitivity of a DOM depending on the incident zenith angle of the photon. Based on the internal geometry of a DOM, its sensitivity is assumed to be rotationally symmetric around the vertical axis.

A total of 10 specific variations, listed in Table 12.1, were selected to cover the true parameter values of the given parameter categories. For each variation, 10⁵ magnetic monopole events were simulated at generation level and brought through the full event selection scheme of the analysis. The effective area for each case was calculated, and compared to the effective area of the baseline case.

Table 12.1. The systematic variation parameter sets to be tested for each parameter category.

  Parameter category          Systematic variation
  Scattering and absorption   Scat. +5 %, abs. +5 %
                              Scat. +5 %, abs. −5 %
                              Scat. −5 %, abs. +5 %
                              Scat. −5 %, abs. −5 %
  DOM efficiency              DOM eff. +10 %
                              DOM eff. −10 %
  Angular sensitivity         Ang. sens. set 5
                              Ang. sens. set 9
                              Ang. sens. set 10
                              Ang. sens. set 14

As they are both affected by the same impurities in the detector ice, the characteristic scattering and absorption lengths are highly correlated (see Chapter 3.1.3). These are therefore varied together, in the four combinations given by changes of ±5 % on each variable. This range has been determined through systematic studies of the detector response to the calibration LEDs installed in each DOM. The variations are relative, as both the scattering and absorption lengths fluctuate strongly with the in-ice depth and the photon wavelength (see Figure 3.4). The resulting shifts in effective area from baseline are listed in Table 12.2 and Figure 12.1, and range from −4.6 % to +4.5 %.

The DOM efficiency setting was shifted by −10 % and +10 %, resulting in shifts in effective area of −6.9 % and +5.0 % with respect to the baseline value, respectively (see Table 12.2 and Figure 12.1). The range of ±10 % has been determined through a combination of several internal IceCube comparative studies of the detector response to atmospheric muon events. The variations are applied as overall shifts in the absolute DOM efficiency, which varies over the photon wavelength spectrum.

The DOM angular sensitivity is parametrized by two parameters, p0 and p1. Studies of the effect of the systematic uncertainties on the DOM angular sensitivity on the event selection are conducted by examining the effect of several sets of (p0, p1) from the region likely to contain the true (p0, p1) values. A total of 50 ice-models with varied (p0, p1) values are available for this purpose. The corresponding parameter values are shown in Figure 12.2 as green crosses, where the baseline (p0, p1) set is marked with a red rhombus. A very bright event is assumed to be less sensitive to variations in the angular sensitivity of each DOM, and more sensitive to variations in the absolute DOM efficiency.
Therefore, a simplified approach is adopted in this analysis for estimating the systematic uncertainty introduced by the uncertainty on the DOM angular sensitivity. Here, four of the available (p0, p1) sets that approximately frame the recommended distribution are selected for evaluation. These sets, numbered 5, 9, 10 and 14, are marked with yellow circles in Figure 12.2. The differences in effective area between the selected (p0, p1) sets and the baseline are listed in Table 12.2 and Figure 12.1, and range between −1.5% and less than +1%. These variations are thereby found to be significantly smaller than the differences in effective area found for the other parameter categories. Therefore, an extended study using the full sample of 50 (p0, p1) sets is omitted, and the DOM angular sensitivity systematic uncertainty is assumed to be on this level.

The final-level effective areas of the systematic variations are asymmetrically distributed around the baseline effective area, with a tendency towards lower values. At first glance this may be unexpected, as the shifted parameter settings are symmetrically distributed around the baseline settings. The asymmetry is not present at the previous step of the event selection, Step I (see Table 12.3), so it must arise in the Step II BDT classification. The signal efficiency of the BDT cut is listed for each varied event sample in Table 12.3

Figure 12.1. The relative difference between the baseline effective area and the effective area for each tested systematic variation. The points representing ang. sens. sets 10 and 14 lie at similar values, and may be difficult to distinguish. Note that there is a 1.1 % statistical uncertainty on the effective area of the varied sets. See also Table 12.2.

Figure 12.2. The (p0, p1) values of the 50 ice-models that may be used to evaluate the effect of the systematic uncertainties on the DOM angular sensitivity of an analysis (green crosses). Also shown are the (p0, p1) baseline values (red rhombus), and the four numbered sets (5, 9, 10, 14) that were used in this analysis (yellow circles).

Table 12.2. The effective area and the relative effective area difference to the baseline case for each tested systematic variation at the final level of the analysis. Note that there is a 1.1 % statistical uncertainty on the effective area of the varied sets. See also Figure 12.1.

Systematic variation    Effective area [km²]   Relative difference to baseline [%]
Baseline                0.326
Scat. +5%, abs. +5%     0.312                  −4.3
Scat. +5%, abs. −5%     0.340                  +4.5
Scat. −5%, abs. +5%     0.311                  −4.6
Scat. −5%, abs. −5%     0.334                  +2.5
DOM eff. +10%           0.342                  +5.0
DOM eff. −10%           0.303                  −6.9
Ang. sens. set 5        0.327                  +(< 1)
Ang. sens. set 9        0.321                  −1.5
Ang. sens. set 10       0.323                  −(< 1)
Ang. sens. set 14       0.323                  −(< 1)

Table 12.3. The efficiency of the BDT cut for each tested systematic variation after Step I of the analysis.

                        Step I effective   Step I relative difference   BDT cut signal
Systematic variation    area [km²]         to baseline [%]              efficiency [%]
Baseline                0.348                                           93.5
Scat. +5%, abs. +5%     0.336              −3.4                         92.7
Scat. +5%, abs. −5%     0.365              +4.8                         93.3
Scat. −5%, abs. +5%     0.337              −3.2                         92.2
Scat. −5%, abs. −5%     0.358              +2.8                         93.3
DOM eff. +10%           0.372              +6.8                         91.9
DOM eff. −10%           0.324              −6.9                         93.5
Ang. sens. set 5        0.351              +(< 1)                       93.1
Ang. sens. set 9        0.348              −(< 1)                       92.3
Ang. sens. set 10       0.345              −(< 1)                       93.5
Ang. sens. set 14       0.347              −(< 1)                       93.1

Table 12.4. The largest relative effective area difference to baseline for each tested systematic settings category.

Systematic variation category   Specific systematic variation   Magnitude of relative difference to baseline [%]
Scattering and absorption       Scat. −5%, abs. +5%             4.6
DOM efficiency                  DOM eff. −10%                   6.9
Angular sensitivity             Ang. sens. set 9                1.5
Total                                                           8.4

(final column), where the highest efficiency is achieved for the baseline set. This confirms that the BDT classification favors monopole events that were produced with the baseline settings over events produced with alternative settings, which is expected, as the BDT was trained specifically on baseline magnetic monopole events.

12.2 Total Uncertainty

The total uncertainty, σtot, on the analysis acceptance of final-level magnetic monopole events is calculated as the quadratic sum of the systematic and statistical uncertainties, σsyst and σstat, respectively:

σtot² = σsyst² + σstat²     (12.1)

The total systematic uncertainty on the final-level effective area, σsyst, is conservatively taken as a symmetric interval around the baseline value, given by the quadratic sum of the largest values of each parameter category, σScat. abs., σDOM eff. and σAng. sens. (see Table 12.4):

σsyst² = σScat. abs.² + σDOM eff.² + σAng. sens.²     (12.2)

This results in a total systematic uncertainty of 8.4 %.

The three contributions in Equation 12.2 are uncorrelated by design, which allows this calculation. The scattering and absorption characteristic lengths, being properties of the bulk ice, are naturally uncorrelated with the two DOM-related parameter categories. The two DOM-related categories are also orthogonal, as only the direction-dependent relative sensitivity is varied in the DOM angular sensitivity category (preserving the overall efficiency between variations), while only the absolute efficiency is varied in the DOM efficiency category (with no directional dependence).

The statistical uncertainty on the final-level effective area, σstat, is directly given by the total number of baseline Monte Carlo signal events that remain at final level, N_MM^final level, through:

σstat = √(N_MM^final level) / N_MM^final level     (12.3)

The present N_MM^final level = 34306 yields a total statistical uncertainty of σstat = 0.5 %, negligible in comparison to the systematic uncertainty. The total uncertainty on the acceptance of magnetic monopoles thus remains at σtot = 8.4 %.
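The quadrature combination of Equations 12.1–12.3, with the per-category values of Table 12.4 and the final-level Monte Carlo count given above, can be reproduced in a few lines (a minimal check, using only the numbers quoted in this chapter):

```python
import math

# Largest per-category systematic shifts in effective area (Table 12.4), in %.
sigma_scat_abs = 4.6   # scattering/absorption: Scat. -5%, abs. +5%
sigma_dom_eff  = 6.9   # DOM efficiency: DOM eff. -10%
sigma_ang_sens = 1.5   # angular sensitivity: ang. sens. set 9

# Equation 12.2: quadratic sum of the (uncorrelated) category uncertainties.
sigma_syst = math.hypot(sigma_scat_abs, sigma_dom_eff, sigma_ang_sens)

# Equation 12.3: Poisson uncertainty of the final-level MC signal count.
n_mm_final = 34306
sigma_stat = 100.0 / math.sqrt(n_mm_final)  # in %

# Equation 12.1: total uncertainty on the signal acceptance.
sigma_tot = math.hypot(sigma_syst, sigma_stat)

print(f"syst = {sigma_syst:.1f} %, stat = {sigma_stat:.1f} %, total = {sigma_tot:.1f} %")
```

The statistical term is so small that it does not move the total away from the 8.4 % systematic value at the quoted precision.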

13. Uncertainty on the Astrophysical Neutrino Flux

The main background to the present search for a cosmic flux of magnetic monopoles is astrophysical neutrinos. The topic of this chapter is the uncertainty on the assumed background flux measurement, as well as the uncertainties that are introduced by the assumption of this particular flux and by the limitations of the available Monte Carlo event samples.

The uncertainties in this chapter are presented both in the form of an absolute and a relative uncertainty on the expected number of background events; the distinction between the two will be clear from context.

13.1 Statistical Uncertainty

Similarly to the statistical uncertainty on the signal event efficiency, the number of expected background events has an uncertainty originating in the finite size of the employed Monte Carlo samples. The background statistical uncertainty, σstat,BG, is calculated with:

σstat,BG = √(N_ν^final level) / N_ν^final level     (13.1)

A total of 4191 neutrino events remain at final level, giving a statistical uncertainty of σstat,BG = 1.5 %. This is negligible with respect to the uncertainties presented below.

13.2 Uncertainties in the Astrophysical Flux Measurement

In the present analysis, the background flux has been assumed to be well described by the Φ^2017_DIF-νµ flux, detailed in Chapter 8.3.2. The total uncertainty (the sum of the statistical and systematic uncertainties) on the assumed flux is specified in terms of the uncertainty on the astrophysical flux normalization, φν, and on the spectral index, γν, that enter the formula for a single power law (Equation 8.4):

Φν = φν × (Eν / 100 TeV)^(−γν) × 10^−18 GeV^−1 cm^−2 s^−1 sr^−1     (13.2)

The flux normalization and spectral index of Φ^2017_DIF-νµ are:

φν = 1.01 (+0.26/−0.23) ,   γν = 2.19 ± 0.10     (13.3)

The quoted errors propagate to the final level of the monopole analysis and give an expected number of background events of 0.265 (+0.265/−0.135) over the included 8 yr of data. The relatively small uncertainties of the flux normalization and spectral index are amplified to the present factor of ∼2 uncertainty on the number of expected background events. This is attributed to the spectral index uncertainty, which enters exponentially in the calculation, resulting in a large uncertainty in energy ranges away from the normalization energy (here 100 TeV).

This effect is demonstrated in Figure 13.3, where the number of expected background events after Step I, along with its uncertainty, is shown as a function of the neutrino energy. Note the increasing relative uncertainty between the low and high energy regions of the distribution.
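The amplification of the spectral-index uncertainty away from the 100 TeV normalization energy follows directly from Equation 13.2, as sketched below with the central values of Equation 13.3 (shifting γν alone by its −1σ value; a full propagation would vary both parameters jointly):

```python
def phi_nu(e_gev, norm=1.01, gamma=2.19):
    """Single power law of Eq. 13.2, in GeV^-1 cm^-2 s^-1 sr^-1."""
    return norm * (e_gev / 1.0e5) ** (-gamma) * 1.0e-18

# Relative flux shift from a -0.10 change in the spectral index,
# at the normalization energy and two decades above it.
for e in (1.0e5, 1.0e7):  # 100 TeV and 10 PeV
    shift = phi_nu(e, gamma=2.09) / phi_nu(e) - 1.0
    print(f"E = {e:.0e} GeV: flux shift = {shift:+.1%}")
```

At 100 TeV the shift vanishes by construction, while at 10 PeV the same ±0.10 index uncertainty alone moves the flux by a factor of 100^0.1 ≈ 1.6, illustrating the growth of the uncertainty band in Figure 13.3.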

13.3 Alternative Neutrino Flux Assumptions

This section investigates what the final background expectation would be if nature is better represented by a different flux than the assumed Φ^2017_DIF-νµ. Two additional flux measurements, both performed by IceCube and presented at the International Cosmic Ray Conference in 2019 (ICRC-2019), are examined here:

Φ^2019_DIF-νµ — An update to Φ^2017_DIF-νµ, presented at the ICRC-2019 [79]
Φ^2019_HESE — Result of the high energy starting events (HESE) analysis, presented at the ICRC-2019 [75]

The Φ^2019_DIF-νµ measurement is an update to the Φ^2017_DIF-νµ measurement, where the same analysis has been applied to an additional 2 yr of collected data. The updated analysis obtains 90 % of the likelihood contribution originating in the energy range Eν ∈ [40 TeV; 3.5 PeV]. The latter flux measurement, Φ^2019_HESE, is the result of the high energy starting events (HESE) analysis, targeting events of all flavors with a total deposited energy in the detector of at least 60 TeV. As indicated by its name, this analysis only includes starting events, which implies events that are caused by a neutrino.

Both the Φ^2019_DIF-νµ and the Φ^2019_HESE flux measurements are given in the form of single power laws, i.e. the same parametrization as the Φ^2017_DIF-νµ flux. The astrophysical flux normalization and spectral index of each tested astrophysical flux are given in Table 13.1.

The simulated samples of astrophysical neutrinos were weighted to each of these fluxes in turn, and the number of remaining neutrino events was recorded at each level of the analysis. The expected number of events, nν, given each astrophysical neutrino flux assumption can be found in Table 13.1.

Table 13.1. The astrophysical flux normalization and spectral index of each tested astrophysical flux measurement, along with the expected numbers of events at Steps I and II, nStep I and nStep II, respectively.

Astrophysical    Flux normalization, φν   Spectral index, γν   nStep I               nStep II
Φ^2017_DIF-νµ    1.01 (+0.26/−0.23)       2.19 (+0.10/−0.10)   9.99 (+10.3/−5.13)    0.265 (+0.265/−0.135)
Φ^2019_DIF-νµ    1.44 (+0.25/−0.24)       2.28 (+0.09/−0.08)   9.41 (+7.32/−3.93)    0.251 (+0.192/−0.105)
Φ^2019_HESE      2.15 (+0.49/−0.15)       2.89 (+0.20/−0.19)   1.14 (+1.74/−0.618)   0.029 (+0.046/−0.016)
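Since all three measurements share the single-power-law parametrization of Equation 13.2, reweighting an existing Monte Carlo sample from one flux to another only requires a per-event weight ratio. A minimal sketch (the event energies and baseline weights below are illustrative placeholders, not analysis data):

```python
def weight_ratio(e_gev, norm_old, gamma_old, norm_new, gamma_new):
    """Ratio of two single power laws (Eq. 13.2); the common 1e-18 scale cancels."""
    return (norm_new / norm_old) * (e_gev / 1.0e5) ** (gamma_old - gamma_new)

# Reweight from the 2017 diffuse fit to the ICRC-2019 HESE fit (Table 13.1).
energies = [2.0e5, 1.0e6, 5.0e6]   # illustrative event energies [GeV]
old_weights = [1.0, 1.0, 1.0]      # illustrative baseline weights
new_weights = [w * weight_ratio(e, 1.01, 2.19, 2.15, 2.89)
               for w, e in zip(old_weights, energies)]
# The steeper HESE spectrum (gamma = 2.89) suppresses the highest-energy
# events the most, which is why it yields the smallest nStep II in Table 13.1.
```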

Table 13.2. The Step II rejection efficiency of the highest energy decade of the available simulated event samples for each neutrino flavor, along with the corresponding energy range.

Neutrino flavor   Energy range of highest available MC energy decade [GeV]   Step II rejection efficiency [%]
νe                [10^7; 10^8]                                               100.0
νµ                [10^8; 10^9]                                               98.3
ντ                [10^8; 10^9]                                               97.0

Table 13.3. The expected number of background events at Steps I and II that are outside of the energy ranges of the simulated event samples.

Analysis level   νtot                    νe                   νµ                      ντ
Step I           1.59 (+3.14/−1.06)      0.24 (+0.44/−0.16)   0.71 (+1.38/−0.47)      0.65 (+1.31/−0.43)
Step II          0.032 (+0.063/−0.021)   0                    0.012 (+0.024/−0.008)   0.019 (+0.039/−0.013)

The Φ^2017_DIF-νµ flux (assumed in this analysis) yields the highest number of expected events at Steps I and II of the analysis, indicating that this flux represents an upper limit on the astrophysical neutrino flux. Note, though, that the flux may be as low as indicated by the Φ^2019_HESE flux.

13.4 Expected Flux outside of the Simulated Energy Range

The Monte Carlo simulated samples that were used to estimate the background region for this analysis cover the energy range Eν ∈ [10^5; 10^8] GeV for electron neutrinos and Eν ∈ [10^5; 10^9] GeV for muon and tauon neutrinos. The single power law that is assumed to represent the natural flux, on the other hand, has an exponential falloff that continues to infinity. This section is dedicated to estimating the final-level background contribution of the high energy tail that lies outside of the available Monte Carlo samples.

In order to achieve this, two main ingredients are needed: an assumption on the astrophysical neutrino flux, and the effective area of the analysis above 10^8–10^9 GeV. The former is naturally the same background flux assumption that is made throughout this thesis, the Φ^2017_DIF-νµ flux, shown as a function of energy in Figure 13.1. The latter was calculated for Step I within the EHE analysis, using other samples of Monte Carlo events than in the present monopole analysis, extending the upper energy bound to 10^11 GeV. This effective area is shown as a function of energy in Figure 13.2 for each neutrino flavor. Major changes have been made both in the trigger and filter systems of IceCube and in the IceCube reconstruction software since these event samples were produced. These changes have a negligible effect on the simple variables used in the Step I event selection, but render the MC samples incompatible with data when considering the advanced reconstructions in Step II.

In order to estimate the number of neutrinos after Step I, i.e. after the EHE analysis event selection, the EHE analysis effective area to neutrinos was folded with the Φ^2017_DIF-νµ neutrino flux, along with the total livetime of the analysis, the solid angle covered by the analysis, and the energy bin width.

Using this method, the total number of astrophysical neutrino events expected after Step I, nν, is 11.1 (+12.2/−5.79).
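The folding just described (flux times effective area, times livetime, solid angle and energy bin width, summed over energy bins) can be sketched as follows. The binned effective-area values and bin edges below are hypothetical placeholders; the real per-flavor Step I effective area is the one shown in Figure 13.2.

```python
import math

def phi_nu(e_gev, norm=1.01, gamma=2.19):
    """Assumed background flux (Eq. 13.2), in GeV^-1 cm^-2 s^-1 sr^-1."""
    return norm * (e_gev / 1.0e5) ** (-gamma) * 1.0e-18

def expected_events(bin_edges_gev, a_eff_cm2, livetime_s, solid_angle_sr):
    """Fold a binned effective area with the flux:
    n = sum over bins of Phi(E_mid) * A_eff * dE * T * Omega,
    with the flux evaluated at the geometric bin center."""
    n = 0.0
    for (e_lo, e_hi), a in zip(zip(bin_edges_gev[:-1], bin_edges_gev[1:]), a_eff_cm2):
        e_mid = math.sqrt(e_lo * e_hi)
        n += phi_nu(e_mid) * a * (e_hi - e_lo) * livetime_s * solid_angle_sr
    return n

# Hypothetical numbers for illustration only (not the analysis values):
edges = [1.0e5, 1.0e6, 1.0e7, 1.0e8]   # bin edges [GeV]
a_eff = [1.0e7, 5.0e7, 1.0e8]          # effective area per bin [cm^2]
n_exp = expected_events(edges, a_eff,
                        livetime_s=8 * 3.156e7,      # ~8 yr
                        solid_angle_sr=4 * math.pi)  # assumed full sky
```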
See Figure 13.3 for the number of expected events as a function of energy. Truncating this to the energy range covered by the available simulated samples yields an expected number of 9.50 (+9.05/−4.72). This leaves a total of 1.59 (+3.14/−1.06) expected events that lie outside of the Monte Carlo sample energy ranges after Step I (see Table 13.3, Row 1 for a per-flavor listing).

This estimate of the number of Step I expected events, 9.50 (+9.05/−4.72), allows a validity check of the procedure through a comparison with the number of expected events that was obtained through direct evaluation of the Monte Carlo

Figure 13.1. The Φ^2017_DIF-νµ flux as a function of neutrino energy, including the ±1σ uncertainty region (blue band).

Figure 13.2. The Step I effective area for astrophysical neutrino events as a function of neutrino energy, shown for each neutrino flavor in turn (yellow, turquoise, purple) as well as the sum of all flavors (blue).

Figure 13.3. The number of expected astrophysical neutrino events over the 8 yr livetime of the present monopole analysis, as a function of neutrino energy. This is calculated by folding the incident neutrino flux with the neutrino effective area, the energy bin width, and the total livetime and solid angle of the analysis. Included is also the ±1σ uncertainty region (blue band).

samples, 9.99 (+10.3/−5.13). These numbers are expected to agree significantly better than their uncertainty regions indicate, as they should represent the same astrophysical flux measured with the same detector. The two numbers agree within 5 %, a small difference that may be attributed to the major differences in the IceCube data treatment between the MC samples that were used in the two estimates.

In order to estimate the expected number of Monte Carlo events outside the simulated energy range after Step II, the Step II rejection efficiency for these events must be evaluated per flavor. This is approximated as the rejection efficiency of the events in the highest energy decade available in the simulated event samples of each corresponding flavor. The Step II rejection efficiency in the highest energy decade can be found in Table 13.2 for each neutrino flavor, along with the corresponding energy range.

The result of this estimation is shown in Table 13.3. The final-level total expected number of background events outside of the simulated energy range, 0.032 (+0.063/−0.021), is small with respect to the number of expected background events within the simulated range, 0.265 (+0.265/−0.135), and its uncertainty region.
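Numerically, the extrapolation through Step II is a per-flavor multiplication of the Step I tail expectation (Table 13.3, Row 1) by the survival fraction, i.e. one minus the rejection efficiency of the highest simulated energy decade (Table 13.2). A sketch with central values only (uncertainties omitted):

```python
# Central values from Tables 13.2 and 13.3.
step1_tail = {"nu_e": 0.24, "nu_mu": 0.71, "nu_tau": 0.65}        # events after Step I
rejection_eff = {"nu_e": 1.000, "nu_mu": 0.983, "nu_tau": 0.970}  # Step II rejection

step2_tail = {fl: n * (1.0 - rejection_eff[fl]) for fl, n in step1_tail.items()}
total = sum(step2_tail.values())
# nu_e is fully rejected; nu_mu and nu_tau contribute ~0.012 and ~0.02 events,
# summing to ~0.03 events outside the simulated energy range after Step II.
```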

13.5 Expected Background at Final Analysis Level

The chosen astrophysical neutrino flux model, Φ^2017_DIF-νµ, results in an expected number of events at final level of nBG = 0.265. This number carries an uncertainty of a factor of ∼2 originating in the uncertainties on the flux normalization, φν, and spectral index, γν, yielding a background of 0.265 (+0.265/−0.135). The statistical uncertainty originating in the finite MC sample size is ∼1.5 %, and therefore negligible.

Additionally, two alternative astrophysical neutrino flux models were evaluated (Φ^2019_DIF-νµ and Φ^2019_HESE). The Φ^2019_DIF-νµ flux yielded an expectation on the number of events at final level consistent with the Φ^2017_DIF-νµ flux, with a slightly smaller uncertainty. The Φ^2019_HESE flux, on the other hand, yielded an nBG expectation ∼1 order of magnitude lower than Φ^2017_DIF-νµ: 0.029 (+0.046/−0.016).

Finally, the contribution of the high energy tail of the Φ^2017_DIF-νµ flux, outside of the energy ranges of the MC event samples, was evaluated. This region may contribute a total of 0.032 (+0.063/−0.021) events at the final level of the analysis.

This results in an expected number of astrophysical neutrino events at final level over the full analysis livetime ranging from ∼0.01 (assuming the lower 1σ bound of the Φ^2019_HESE flux) to ∼0.6 (assuming the upper 1σ bound of the Φ^2017_DIF-νµ flux, including the estimated high energy tail). This is a wide range, but its upper bound is still significantly below one event per livetime of the analysis. Considering this, in the case of a non-detection the upper limit will be evaluated using a background-free assumption, i.e. with nBG = 0.

14. Result

This chapter covers the results of the analysis that is described in this thesis, which are produced in the counting stage of a cut-and-count analysis.

14.1 Experimental Event Rate

This section covers the number of events that, after unblinding the analysis, are observed in experimental data at each level of the analysis. These are listed in Table 14.1 along with the corresponding expected numbers of signal and background events, assuming the model fluxes (compare to Table 10.2).

At the lower analysis levels, a significant flux contribution is expected from neutrinos and muons produced in the atmosphere (see Table A.1), as well as from astrophysical neutrinos with a primary neutrino energy below 10^5 GeV. These event classes are not covered by the used Monte Carlo samples, since they contribute negligibly after Step I.

After Step I of the analysis, a total of 9.99 (+10.3/−5.13) background events are expected over the full analysis livetime, while 3 events are observed in experimental data.

None of the three experimental data events is accepted by the Step II event selection, where the total number of expected background events is 0.265 (+0.265/−0.135).

14.1.1 Step I Accepted Events

The three events that were accepted by the Step I event selection scheme were observed in the IC86-IV, -VI and -VIII seasons, and are listed with their observation dates in Table 14.2. Throughout this section the events are denoted Events A, B and C.

Event views of the three observed events A, B and C are displayed in Figures 14.2, 14.3 and 14.4, respectively. The values taken by the observed events over the variables of the event selection are listed in Table 14.3. The awarded BDT scores are also displayed in Figure 14.1, along with the distributions of signal and background events over the BDT score. The events are displayed over the variables of the Step II BDT in Appendix B.

None of the Step I observed events was accepted by the selection criteria of Step II. Event A was awarded the highest BDT score of the three, −0.089, and

Table 14.1. The number of observed events, nOB, at each level of the analysis, along with the corresponding numbers of expected signal and background (astrophysical neutrino) events, nSG and nBG, assuming the model fluxes.

Analysis level           nOB          nSG    nBG
EHE filter               1.63 × 10^8  178    371
Step I: Initial cuts     3.16 × 10^4  89.9   57.2
Step I: Cascade cut      8.46 × 10^3  64.1   20.4
Step I: Down-going cut   3            35.5   10.1
Step I: Surface veto     3            35.5   9.99 (+10.3/−5.13)
Step II                  0            33.2   0.265 (+0.265/−0.135)

Table 14.2. The observation date and season of the three Step I observed events.

Event     Season      Date
Event A   IC86–IV     2014–06–11
Event B   IC86–VI     2016–12–08
Event C   IC86–VIII   2019–03–31

Table 14.3. The values taken by the three observed events at Step I over the event selection variables of this analysis.

Variable                   Event A    Event B    Event C
log10(nPE)                 5.10       5.30       5.32
nCH                        219        227        633
χ²red,EHE                  52.3       102        124
cos(θzen,EHE)              −0.209     −0.0489    −0.172
βBM                        1.127      0.628      0.942
rsd(EMIL)                  3.20       4.97       7.25
avg(dDOM,Q)CV-TrackChar    42.8 m     67.6 m     54.1 m
tFWHM,CV-TimeChar          2.76 µs    2.78 µs    2.56 µs
LFRCV-TrackChar            0.615      0.362      0.344
RCOCV-HitStats             0.0458     0.434      0.0886
log10(nPE)                 5.10       5.30       5.32
cos(θzen,BM)               −0.203     0.0205     0.391
dC,BM                      314 m      418 m      137 m
BDT score                  −0.089     −0.742     −0.626

Figure 14.1. The BDT scores of the Step I observed events A, B and C, along with the magnetic monopole and neutrino event distributions over the Step II BDT score. The cut criterion at a BDT score of 0.047 is also included (black).

Figure 14.2. Event view of the Step I observed Event A. See Chapter 3.5 for a description of how to interpret an event view. The blue line represents the BM track reconstruction.

Figure 14.3. Event view of the Step I observed Event B. See Chapter 3.5 for a description of how to interpret an event view. The blue line represents the BM track reconstruction.

Figure 14.4. Event view of the Step I observed Event C. See Chapter 3.5 for a description of how to interpret an event view. The blue line represents the BM track reconstruction.

was thus the closest to being accepted as a magnetic monopole candidate. The fraction of magnetic monopole events at this level of the analysis that are awarded a lower BDT score than −0.089 is 0.22 %. Correspondingly, 32 % of the expected muon neutrino events are scored higher than −0.089.

Note that the Step II event classification and selection procedures were developed fully blind to these events, i.e. without using any information about the experimental acceptance of the Step I selection.

14.1.2 Step II Accepted Events

The Step II event selection did not accept any experimental data events over the full 8 yr livetime of the analysis.

14.2 Final Result

As no events were observed at the final level of this analysis, the analysis cannot result in a discovery of magnetic monopoles. The result is thus an upper limit on the magnetic monopole flux.

The number of final-level expected background events over the livetime of the analysis is estimated to 0.265 (+0.265/−0.135), assuming the Φ^2017_DIF-νµ flux. An alternative flux measurement, Φ^2019_HESE, yields an expectation on the number of final-level background events of 0.029 (+0.046/−0.016), and it is currently unknown which of these represents reality best. The conservative choice is to assume an entirely background-free analysis, i.e. to calculate the final result using a final-level background expectation of zero. This is motivated by the large uncertainty on the number of background events, and is expected to differ from the true number of background events by less than 0.6 events over the full livetime of the analysis (discussed in Chapter 13.5). Therefore, the final result is reported with the background-free assumption.

The upper limit is calculated through Equation 7.2, with the effect of the nSG uncertainty σSG included according to Equation 7.9. Here, the number of observed events, nOB, and the expected number of background events, nBG, are both zero, resulting in an F&C upper limit of µ90(nOB, nBG) = 2.44. The (relative) signal efficiency uncertainty is σSG = 8.4 %. The 90 % CL upper limit on the magnetic monopole flux thus becomes 2.54 × 10^−19 cm^−2 s^−1 sr^−1, averaged over the full monopole speed range.

If the expected number of background events were instead taken as 0.265, i.e. the number that arises from the Monte Carlo estimation, with a 100 % uncertainty, the F&C upper limit would be 2.15 and the final flux upper limit would be 2.24 × 10^−19 cm^−2 s^−1 sr^−1 over the full monopole speed range. This is ∼13 % lower (better) than the present result.
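The conversion from the Feldman–Cousins event-count limit to a flux limit is a division by the exposure. A rough sketch under stated assumptions: the speed-averaged final-level effective area of ∼0.326 km² (Table 12.2), a 4π sr solid angle and calendar years are assumed here, and the small upward correction from the 8.4 % signal-efficiency uncertainty (Equation 7.9) is not included, so the result lands somewhat below the quoted 2.54 × 10^−19.

```python
import math

mu90 = 2.44                      # F&C 90% CL count limit for n_obs = n_bg = 0
a_eff_cm2 = 0.326e10             # ~0.326 km^2 (Table 12.2), in cm^2
livetime_s = 8 * 365.25 * 86400  # 8 yr of livetime (calendar years assumed)
solid_angle = 4.0 * math.pi      # assumed full-sky acceptance

# Flux limit = count limit / (effective area * livetime * solid angle).
phi90 = mu90 / (a_eff_cm2 * livetime_s * solid_angle)  # cm^-2 s^-1 sr^-1
```

This back-of-the-envelope value is of order 2 × 10^−19 cm^−2 s^−1 sr^−1, consistent with the quoted limit once the efficiency uncertainty is folded in.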
The upper limit is also shown as a function of the true particle speed in Figure 14.5 along with the sensitivity. The numerical values of the upper limit

and sensitivity are listed over the true particle speed in Table 14.4. Note that each upper limit point in Figure 14.5 represents an individual result on the flux, as if it had been produced with an analysis considering only that speed. The shape of the upper limit over the particle speed is identical to the shape of the sensitivity, as they both are shaped by the inverse of the effective area over the true particle speed. The upper limit and the sensitivity differ by:

Φ̄^MM_90 / Φ^MM_90 − 1 = µ̄90 / µ90 − 1 = 9.5 %

Analogous to the sensitivity, the upper limit can be described as sharply decreasing up to a speed of approximately βMC = 0.87, whereafter it plateaus slightly below 2 × 10^−19 cm^−2 s^−1 sr^−1.

Additionally, the upper limit is shown in Figure 14.6 in the context of the current upper limits on the cosmic flux of magnetic monopoles (compare to Figure 2.4). It is concluded that the results of the current analysis are competitive for speeds above β = 0.79, and yield an improvement of around an order of magnitude over the previous results for speeds above 0.82.

In several of the previous analyses that have been conducted in this β range, the upper limit has been extended from β = 0.995 to β = 1. This is allowed if there is no reason to believe that the event selection scheme will select against events in the range β ∈ [0.995; 1]. In the present analysis, however, this is not allowed, as the particle speed is strongly anticorrelated with the BDT score of the Step II BDT, i.e. high speed events are selected against in Step II of the analysis. Additionally, a magnetic monopole in the ultrarelativistic regime (Lorentz factor γ ≳ 100) will produce particle showers along its track, which is a characteristic signature of an ultrarelativistic muon event. Events induced by an ultrarelativistic magnetic monopole would therefore be heavily selected against by the Step II BDT, and a search for a cosmic flux of these would require a dedicated analysis strategy separate from the one described in this thesis. Considering this, it is decided that the final upper limit on the magnetic monopole flux will only cover the simulated range, and extend to β = 0.995.

139 Figure 14.5. The 90 % CL upper limit on the cosmic flux of magnetic monopoles that can be set through this analysis, as a function of the true particle speed. Included is also the sensitivity.

Figure 14.6. The 90 % CL upper limit on the cosmic flux of magnetic monopoles that can be set by employing this analysis (yellow curves, denoted by IceCube-86 8 yr), as a function of the true particle speed. Included are also the current best upper limits on the magnetic monopole flux (see Chapter 2.6 for further description and references).

Table 14.4. The 90 % CL upper limit and sensitivity on the cosmic magnetic monopole flux that can be set with this analysis, as functions of the true particle speed.

Speed [c]   Sensitivity [cm^−2 s^−1 sr^−1]   Upper limit [cm^−2 s^−1 sr^−1]
0.780       1.26 × 10^−17                    1.15 × 10^−17
0.785       5.98 × 10^−18                    5.46 × 10^−18
0.790       3.18 × 10^−18                    2.90 × 10^−18
0.795       1.88 × 10^−18                    1.71 × 10^−18
0.800       1.23 × 10^−18                    1.12 × 10^−18
0.805       8.72 × 10^−19                    7.96 × 10^−19
0.810       6.65 × 10^−19                    6.07 × 10^−19
0.815       5.34 × 10^−19                    4.88 × 10^−19
0.820       4.46 × 10^−19                    4.08 × 10^−19
0.825       3.84 × 10^−19                    3.51 × 10^−19
0.830       3.39 × 10^−19                    3.10 × 10^−19
0.835       3.07 × 10^−19                    2.80 × 10^−19
0.840       2.83 × 10^−19                    2.58 × 10^−19
0.845       2.65 × 10^−19                    2.42 × 10^−19
0.850       2.50 × 10^−19                    2.29 × 10^−19
0.855       2.38 × 10^−19                    2.17 × 10^−19
0.860       2.26 × 10^−19                    2.06 × 10^−19
0.865       2.16 × 10^−19                    1.97 × 10^−19
0.870       2.08 × 10^−19                    1.90 × 10^−19
0.875       2.02 × 10^−19                    1.84 × 10^−19
0.880       1.98 × 10^−19                    1.81 × 10^−19
0.885       1.96 × 10^−19                    1.79 × 10^−19
0.890       1.94 × 10^−19                    1.77 × 10^−19
0.895       1.92 × 10^−19                    1.75 × 10^−19
0.900       1.89 × 10^−19                    1.73 × 10^−19
0.905       1.87 × 10^−19                    1.70 × 10^−19
0.910       1.85 × 10^−19                    1.69 × 10^−19
0.915       1.84 × 10^−19                    1.68 × 10^−19
0.920       1.84 × 10^−19                    1.68 × 10^−19
0.925       1.84 × 10^−19                    1.68 × 10^−19
0.930       1.84 × 10^−19                    1.68 × 10^−19
0.935       1.84 × 10^−19                    1.68 × 10^−19
0.940       1.84 × 10^−19                    1.68 × 10^−19
0.945       1.84 × 10^−19                    1.68 × 10^−19
0.950       1.85 × 10^−19                    1.69 × 10^−19
0.955       1.86 × 10^−19                    1.70 × 10^−19
0.960       1.89 × 10^−19                    1.72 × 10^−19
0.965       1.90 × 10^−19                    1.74 × 10^−19
0.970       1.92 × 10^−19                    1.75 × 10^−19
0.975       1.92 × 10^−19                    1.76 × 10^−19
0.980       1.93 × 10^−19                    1.76 × 10^−19
0.985       1.94 × 10^−19                    1.77 × 10^−19
0.990       1.95 × 10^−19                    1.79 × 10^−19
0.995       1.97 × 10^−19                    1.80 × 10^−19

15. Summary and Outlook

The purpose of the analysis described in this thesis was to discover a cosmic flux of magnetic monopoles with a speed within the range β ∈ [0.750; 0.995], where β = v/c is the magnetic monopole speed. In the case of a non-detection, the aim was to determine a competitive upper limit on the cosmic monopole flux.

The cosmic flux search is conducted using data collected during 8 yr of operation of the IceCube detector (Chapter 3), which instruments ∼1 km^3 of the deep Antarctic ice with 5160 optical modules. For the development of the analysis strategy, the (background) flux of astrophysical neutrinos is assumed to be described by a power law as given by a previous IceCube measurement of the diffuse flux of upwards directed muon neutrino events (Chapter 8.3.2).

The analysis strategy of the present search is divided into two steps (Chapter 8.1): Step I and Step II.

In Step I, the event selection scheme of a previous IceCube analysis, the EHE analysis (Appendix A), is employed (Chapter 10.2). This analysis was selected as it was designed to search for neutrino events that deposit a large amount of light in the detector, similar to magnetic monopoles with a speed above the Cherenkov threshold in ice. Additionally, the EHE event selection consists of a small number of simple cuts, thus enabling a highly general selection of bright events that imposes minimal constraints on the shape of the light distribution. The EHE analysis was conducted with the aim of discovering a flux of astrophysical neutrinos, and is therefore also made to reject atmospheric events with a high efficiency. At the final level of the EHE analysis, the atmospheric contribution is expected to be lower than 0.085 events over the full 9 yr livetime.

Since the aim of the Step I event selection is to select high energy neutrino events, and is not limited to magnetic monopole events, an additional step of the analysis, Step II, was developed (Chapter 10.3).
The aim of Step II is to efficiently reject astrophysical neutrino events, while maintaining a high acceptance for magnetic monopole events. To achieve this, a dedicated sample of magnetic monopole Monte Carlo events was produced (Chapter 9.1), and examined together with several simulated samples representing the astrophysical neutrino events (Chapter 9.2), in order to determine event signatures that distinguish magnetic monopole events from neutrinos. The astrophysical neutrino Monte Carlo samples were truncated at Eν = 10^5 GeV, as events with a lower energy than this would not contribute to the final distributions. A selection of nine variables was used to train a boosted decision tree (BDT) to

identify the primary particle of an event (Chapters 10.3.2 and 10.3.3). The BDT awards a score to each event that represents its likeness to a monopole event, and a cut criterion over the BDT score was determined by finding the optimal model rejection potential (Chapter 10.3.4).

With the flux assumptions given above, the basic IceCube trigger conditions would accept a total of 244 magnetic monopole events over the full 8 yr analysis livetime, as well as 838 astrophysical neutrino events with an energy above 10^5 GeV (Chapter 10.4). Over the full analysis livetime, the Step I event selection would accept 35.5 magnetic monopole events and 9.99 astrophysical neutrino events. The Step II selection would accept 33.2 monopole events, corresponding to 93.5 % of the Step I monopoles, and only 0.265 astrophysical neutrino events, i.e. 2.65 % of those from Step I.

The sensitivity to the cosmic magnetic monopole flux of this analysis thus becomes 2.78 × 10^−19 cm^−2 s^−1 sr^−1, averaged over the considered speed range (Chapter 11.2). See Table 14.4 and Figure 11.2 for the sensitivity as a function of speed.

A study of the systematic uncertainty on the signal detection efficiency of this analysis was also conducted (Chapter 12). The total uncertainty was found to be 8.4 %, with a negligible statistical contribution.

After the analysis procedure had been approved by the IceCube collaboration, it was applied to experimental data. The Step I event selection accepted a total of three events in experimental data, all of which were rejected in Step II (Chapter 14.1). An upper limit on the magnetic monopole flux of 2.54 × 10^−19 cm^−2 s^−1 sr^−1, averaged over the full β range, was obtained (Chapter 14.2). See Table 14.4 and Figure 14.5 for the upper limit as a function of speed. This result constitutes an improvement of about one order of magnitude over the previously published results in the relevant β range, most recently by the ANTARES collaboration [14] (Figure 14.6).
This result is valid if the magnetic monopole mass, mMM, lies in the range mMM ∈ [10^8, 10^15] GeV (Chapter 8.3.1). Here, the upper constraint arises from the upper bound on the magnetic monopole kinetic energy (Chapter 2.4), and the lower constraint comes from the requirement that the monopole loses a negligible fraction of its energy while traversing the length of the IceCube detector (Chapter 4.1).

Future magnetic monopole analyses with IceCube may benefit from improved reconstruction and particle identification techniques, such as an event type classifying neural network [65], and from a perpetually increasing data volume. However, once the magnetic monopole efficiency approaches unity, and provided the background is kept low, the IceCube monopole sensitivity will only improve with the inverse of the collected livetime (Equation 7.4). This means that adding one additional year of data to an analysis covering 8 yr would at best produce a 12.5 % improvement of the sensitivity. Similarly, an improvement by a factor of 10 would require a total of 80 yr of observation.
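The livetime scaling quoted above is simple arithmetic; in the background-free regime the flux sensitivity scales as 1/T, so the at-best improvement factor is just the ratio of livetimes (a sketch, not thesis code):

```python
def sensitivity_gain(t_old_yr: float, t_new_yr: float) -> float:
    """At-best relative sensitivity improvement from additional livetime.

    With negligible background the flux sensitivity scales as 1/T,
    so the gain factor is simply T_new / T_old.
    """
    return t_new_yr / t_old_yr

print(sensitivity_gain(8, 9))   # 9/8 = 1.125, i.e. a 12.5 % improvement
print(sensitivity_gain(8, 80))  # a factor-10 gain requires 80 yr of data
```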

Therefore, in order to continue to make significant progress in the search for a cosmic flux of magnetic monopoles, a larger detector is required, such as the proposed IceCube-Gen2 detector [80]. IceCube-Gen2 is proposed to have an effective volume Veff one order of magnitude larger than that of IceCube, corresponding to an effective area a factor of ∼ 4.6 larger, as Aeff ∝ Veff^(2/3). The sensitivity to a magnetic monopole flux would thus improve with the collected livetime ∼ 4.6 times faster than for IceCube, but the same limitation would eventually be reached, where each added year of data yields a smaller relative sensitivity improvement.

This may seem discouraging, as no detector larger than IceCube-Gen2 is currently planned, but the search for a cosmic flux of magnetic monopoles must continue. The discovery of magnetic monopoles would constitute a monumental feat in physics, and the flux may lie just beneath the upper limit presented in this thesis.
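The factor of ∼ 4.6 quoted above is just 10^(2/3), since an area scales as the 2/3 power of a volume; a quick check (a sketch, not thesis code):

```python
# Effective area scales roughly as the 2/3 power of effective volume
# (area ~ length^2, volume ~ length^3).
volume_ratio = 10.0                  # Gen2 / IceCube effective volume
area_ratio = volume_ratio ** (2 / 3)
print(round(area_ratio, 2))          # ≈ 4.64
```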

144 16. Swedish Summary — Svensk Sammanfattning

Thesis title: Ljusstarka Nålar i en Höstack — En Sökning efter Magnetiska Monopoler med Neutrino-Observatoriet IceCube (Bright Needles in a Haystack — A Search for Magnetic Monopoles Using the IceCube Neutrino Observatory)

The goal of the research project, the analysis, described in this thesis was to discover a type of exotic particle that carries magnetic charge, the so-called magnetic monopole. The search for monopoles was performed on data collected with the IceCube detector over the course of eight years. The project was carried out at Uppsala University, within the framework of the IceCube collaboration.

16.1 What is a Magnetic Monopole?
A magnetic monopole is a hypothetical particle, that is, a particle that has not (yet) been observed. A monopole carries only one magnetic charge, either a north or a south pole. This can be compared with, for example, an electrically charged particle, which carries either positive or negative electric charge.

In this analysis we have searched for magnetic monopoles arriving from space with a speed close to the speed of light, between 75 % and 99.5 % of the speed of light. These speeds are reached when the monopoles are strongly accelerated by the powerful magnetic fields that exist in the Universe, for example around active galactic nuclei. The monopoles we search for also have a very high mass, which in combination with the high speed allows them to pass through the entire Earth.

Within this speed range a magnetic monopole would emit Cherenkov light when passing through certain materials, among them ice and water. Cherenkov light is produced at many different wavelengths, including ordinary optical light. The light is emitted uniformly along the particle's track, at an angle determined by the particle's speed. The speed also determines the amount of light produced, with the fastest monopoles yielding the most light.

It is currently not known whether magnetic monopoles exist, but a large class of fundamental theories, that is, theories of how the world works at its most basic level, state that magnetic monopoles should exist. Magnetic monopoles were described in 1931 by Paul Dirac, a description that was refined in 1974 by Gerard 't Hooft and Aleksandr Polyakov, independently of each other. Dirac showed that free magnetic charge may only exist if it is quantized, which means that any such charge is built up of a number of smallest-possible charges. As a consequence, the electric charge must also be quantized. These requirements are known as the Dirac quantization condition. It has been established that electric charge is quantized in reality, but it has so far not been possible to confirm why. The discovery of a magnetic monopole would therefore establish why the electric charge is quantized, and thus solve one of the great unsolved mysteries of modern fundamental physics.

16.2 What is IceCube?
The IceCube detector is located at the geographic South Pole and was completed in 2011. The detector is operated and maintained by the IceCube collaboration, an international partnership of nearly 300 researchers at more than 50 universities in twelve countries, among them around 15 researchers at Stockholm University and Uppsala University. This analysis treats data collected with the IceCube detector between 2011 and 2019.

The IceCube detector can be used for research in several different areas, and was developed to register neutrino particles produced in space. The registered neutrinos can be analyzed, for example, to draw conclusions about various cosmic objects, to search for dark matter, or to study the neutrinos themselves. In addition, the detector can be used for studies of the glacial ice in which it is embedded.

The IceCube detector is built from three principal components, each with a specialized purpose. These are the main detector, DeepCore and IceTop, together with a control unit, the IceCube Laboratory, where all data is collected and categorized. DeepCore and IceTop are sub-detectors focused on low-energy neutrinos and on particles created in the atmosphere, respectively, and are located at the center of the main detector and on the surface of the glacier, respectively. The data treated in this analysis was collected with the main detector. An illustration of the IceCube detector is shown in Figure 16.1.

The main detector consists of nearly 5000 digital optical modules (DOMs), distributed in groups of 60 along a total of 78 cables lowered deep into the Antarctic glacial ice. A DOM is an instrument that measures the amount of light with such precision that it can register individual photons (light particles). The detector cables are deployed in a hexagonal pattern with a distance of 125 meters between neighboring cables. On each cable, the 60 DOMs are placed 17 meters apart along the lowermost kilometer, between 1450 and 2450 meters depth. In total, one cubic kilometer of the deep Antarctic glacial ice has thus been instrumented with light-detecting DOMs.

Data is collected with the IceCube detector continuously, day and night, summer and winter. The collected data is divided into so-called events, where each event contains the data attributed to one and the same primary particle, for example an atmospheric muon or neutrino.


Figure 16.1. An illustration of the IceCube detector, including the IceCube Laboratory and the sub-detectors. Credit: the IceCube collaboration. An illustration of the Globen arena in Stockholm (diameter 110 meters) is included to demonstrate the scale of the detector.

The particles most commonly detected with IceCube, around 2700 times per second, are atmospheric muons. These are created when particles in the cosmic radiation collide with atoms in the atmosphere, and then travel down through the air and the ice until they pass through the detector. Muons emit Cherenkov light when passing through the ice, in the same way that magnetic monopoles would, and this light is subsequently detected by the IceCube DOMs. The amount of Cherenkov light emitted by a muon is, however, much smaller than for a monopole, but this is compensated for muons with high energy, as they induce light-producing particle showers along their path through the ice. Since the muons can travel far through the ice, and emit light along their entire track, muons appear as elongated streaks in the detector.

Second most common in the detector are atmospheric neutrinos, which are also created when cosmic radiation collides with the atmosphere. Atmospheric neutrinos are detected roughly a million times less often than atmospheric muons. These also travel down through the atmosphere and the ice, but interact so weakly with their surroundings that they are not themselves visible in the detector. A small fraction of these neutrinos do, however, collide with an atom in the ice, giving rise to either a muon or a particle shower. The muons emit light along their tracks, like the atmospheric muons, while the particle showers only emit light close to their starting point. The emitted light is then detected by the detector.

Beyond the atmospheric muons and neutrinos, astrophysical neutrinos are also detected, that is, neutrinos created far from the Earth. In the detector these look identical to the atmospheric neutrinos, and must be distinguished on a statistical level (based on their energy and arrival direction).

(Figure 16.2 panel labels: magnetic monopole, through-going muon, particle shower, faint muon)

Figure 16.2. Illustrated examples of events in IceCube: a magnetic monopole event and three other events with distinctly different signatures. The blue region represents the IceCube detector in the glacial ice, and the red-to-green colored region represents the light detected in the event. The color scale from red to green represents the time at which the light was detected, with red indicating early in the event and green late. The arrows represent the paths of the light-producing particles through the detector.

16.3 Searching for Magnetic Monopoles with IceCube
The first step in a search for an unknown particle is to determine precisely what one is looking for, that is, what such an event should look like in the detector, and how this differs from events induced by other particles. This is done by constructing a model of the detector in a computer, and simulating a number of events with the particle in the model.

For this analysis we simulated 400 000 magnetic monopole events, and compared these with around 500 million simulated astrophysical neutrino events. From this comparison we derived five signatures for magnetic monopoles, that is, specific properties that distinguish magnetic monopole events from neutrino events. Illustrated examples of events with different signatures are shown in Figure 16.2.

In the next stage of the analysis, all simulated events are analyzed and a selection is developed that singles out events that appear to originate from magnetic monopoles. This selection is called an event selection, and its criteria are based on the signatures derived earlier.

One characteristic signature of magnetic monopoles is that they produce a great deal of light, so that magnetic monopole events contain a very large amount of detected light. For an earlier IceCube analysis, the so-called EHE analysis (Extremely High Energy), an event selection was developed that rejects all events that do not exhibit a very large amount of detected light in the detector. The goal of the EHE analysis was to find a certain type of astrophysical neutrinos, so its selection also rejected atmospheric neutrinos and muons very efficiently. In our analysis we reused the event selection from the EHE analysis in what we call Step I of the analysis. This helped us reject atmospheric events while retaining a large fraction of the monopole events.

Since the EHE event selection was designed to retain astrophysical neutrinos, while we are searching for magnetic monopoles, we must add a further step to the analysis: Step II. In Step II we want to keep as large a fraction of the magnetic monopole events as possible, while discarding as many neutrino events as possible. We achieve this by using the magnetic monopole signatures we have derived and training a BDT (Boosted Decision Tree) to distinguish magnetic monopole events from neutrino events. A BDT is a machine learning tool that is trained to recognize a number of aspects of an event and, based on these, assigns each event a score on a continuous scale between −1 and +1. In our analysis a higher score indicates that an event is monopole-like, while a lower score indicates that it is neutrino-like. It was decided to discard all events with a score below 0.047, the value that gives the analysis the best sensitivity to the lowest possible monopole flux.

16.4 Results
This analysis was performed blindly, meaning that the analysis method was designed in full before being applied to experimental data collected with the detector. In this way we can develop the analysis as impartially as possible, without shaping the event selection around the individual events that were detected. This also means that we do not know in advance what result the analysis will give; we only learn this afterwards, once the analysis may no longer be modified. We can, however, compare in advance with the results of earlier searches for magnetic monopoles in the same speed range.

Several searches for monopoles with speeds within the range considered here have been performed before. As is well known, no magnetic monopoles have been found; instead, each search has resulted in an upper limit on the flux. The most recent result in this range set an upper limit on the flux of fewer than 3.46 × 10^−18 monopoles per square centimeter per second per steradian (the unit of solid angle). That limit implies that our event selection should see fewer than 33.2 detected monopoles during the eight years covered by the analysis.

In addition, we calculated how efficient our event selection is at rejecting astrophysical neutrino events. The event selection should register on average 0.265 neutrinos during the eight years of the analysis, meaning that we would need to observe for almost four times as long to detect a single neutrino.

When we subsequently applied our analysis method to the experimental data, it turned out that no experimental event fulfilled the selection criteria, that is, no event was judged to be monopole-like. The upper limit we can then set is that we have observed fewer than 2.44 magnetic monopoles during the full eight years of the analysis, corresponding to an upper limit on the flux of 2.54 × 10^−19 monopoles per square centimeter per second per steradian. Our result is thus more than 10 times better than the most recent comparable result.
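The flux limit above follows from a simple rescaling: if a reference flux Φ_ref would yield N_exp events in the selection, and at most μ90 events are compatible with observing zero, then Φ90 = Φ_ref × μ90 / N_exp. A quick check with the numbers quoted in this summary (a sketch; 2.44 is the 90 % C.L. upper limit on the event count for zero observed events and negligible background):

```python
phi_ref = 3.46e-18   # previous upper limit [cm^-2 s^-1 sr^-1]
n_exp   = 33.2       # events this selection would see at that flux, over 8 yr
mu90    = 2.44       # 90% C.L. upper limit on the event count for 0 observed

phi_90 = phi_ref * mu90 / n_exp
print(f"{phi_90:.2e}")   # ≈ 2.54e-19 cm^-2 s^-1 sr^-1
```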

17. Acknowledgements

This thesis is dedicated to my children Olivia and Victor, who are all that is best in me.

The five years that have led up to the writing of this thesis have been world-altering for me. Not only was I introduced to the role of a modern-day researcher, and allowed to immerse myself in the international collaboration that is IceCube, but I also got married and had two children, as well as bought and sold two apartments and bought my first house. In addition to this, the final year of my Ph.D. period has been afflicted by a global pandemic that has affected every aspect of our lives. In this section I would like to highlight and thank some of the people who have contributed to my endeavor over the last five years.

To my main supervisor — Carlos Pérez de los Heros — thank you for your guidance over the last five years. Thank you for indulging my unconventional ideas and discussions about all aspects of our work, from the scientific method to the nature of particle physics, from the state of Swedish particle physics to career advice, and personal topics such as vacations, children, Spain, etc. Your engagement and (figuratively) always open door are an inspiration to me.

To my former main supervisor, now co-supervisor — Allan Hallgren — thank you for bringing me into the group and for your generous support over these years. I have met few people with such creativity in physics as you, and many a problem has been solved by your characteristic "but have you tried this way?" or "have you checked this aspect?". This boundless creativity is something that I continue to try to internalize.

To my co-supervisor — Olga Botner — thank you for your inspiration and support over these years. Never have I met someone with such piercing thought processes as yours. No matter what the issue is, you always find its core and twist and turn it until a solution can be found. You are the epitome of a physicist, with an unmatched physics intuition that I will always strive for.
To all of my supervisors, thank you for allowing me the freedom to try out different topics within our group before finally settling on beyond-the-standard-model astroparticle physics.

Thank you to Rickard and Henric for welcoming me to the group. And thank you to all of my previous and current colleagues at Uppsala University — Jim, Max, Mikael, Jan, Maja, Myrto, Olga S. G., Thomas, Venu, Walter, Elisabetta, Johan, Erin, Nora, Bo, and all others — for having made my time here very enjoyable through all of the lunches, fika and discussions. And thank you to Arnaud for your thorough feedback on this thesis. An additional thank

you to my colleagues in the Stockholm University IceCube group — Samuel, Jon, Martin, Marcel, Kunal, Matti, Maryon, Chad, Klas, Christian.

In the wider IceCube collaboration I would like to thank Anna P., who has grown into a close friend, for your mentorship and your patience with my endless questions about monopoles, about simulations, about weighting and upper limits, and for your support both in work and in life. And to Sophie — welcome to the world! Additionally, I want to extend thanks to several colleagues within IceCube — Morten, Joakim, Mike, Liz, Ward, Christoph, Shivesh, Elim, Anna N., Michael, Justin, Frederik — for the good times we had at collaboration meetings and bootcamps.

Of all my colleagues I most want to thank Lisa. You welcomed me with open arms at my very first job interview with the group, and the time we shared as office mates will always shine bright in my memory. When you were there the office was large and warm and welcoming, and when you were out it was cold and small and unforgiving. Countless hours have we spent discussing, troubleshooting and laughing, and you always provided support when needed. It was impossible to be unhappy when you were around. And thank you to Jerome for your endless patience, and to Jeli just for existing.

Thank you to my father, my mother and my brother — Douglas, Anne and Christian — for having been there through the good times and the bad, and for always believing in me. And thank you to my mother-in-law — Annette — for the numerous times you have helped us solve our life-puzzle over the last few years. Also thank you to my remaining family — Anneli, Pierre, Orvokki & Jari, Christian & Jenny, Linus, Hampus, Pontus, Linnea & Ted, Simon & Elin, Boel & Bosse, Janne & Lena, Anne-Lie, Kicki — for cheering me on and for providing support when needed.
A special thanks to my closest friends — Niklas, Camilla, Jonas — as well as Edda and Mira — for always being there, and for your patience with my work. An additional thank you to my friends — Martin, Marcus, Daniel, Filip, Douglas, Chuck, Alexandra, Veronica, Angelica & Marcus, Sofi & Simon.

I want to thank my children — Olivia and Victor — for bringing the greatest possible joy to my life. You are the reasons that I get up in the morning, and the ones I think about before falling asleep. Thank you for existing, for surprising me with your new capabilities, for your humor and for providing me with unconditional love and kindness.

Last but not least, as the cliché goes, I want to thank my loving wife — Angelica. Thank you for having put up with all of this. Thank you for taking on all of the roles that I needed in my Ph.D. endeavor: for cheering me on when I was out, for pushing me when I was lazy, for being my sounding board when I needed to vent and break things down, for strengthening me when I needed to stand up for myself, and for pulling me back up when I was down. Without your partnership and your support the last five years would not have been possible; thank you for making them the best years so far. I love you.

151 A. The IceCube EHE Analysis

Step I of the analysis presented in this thesis is formed by the event selection of another IceCube analysis: the Extremely High Energy (EHE) analysis. The purpose of the EHE analysis [46] is to discover a flux of cosmogenic neutrinos with high energy (typically Eν ≳ 10^8 GeV), formed through the Greisen-Zatsepin-Kuzmin (GZK) mechanism [81]. These neutrinos are produced through the interaction between ultra-high-energy cosmic ray particles (protons or nuclei) and the cosmic microwave background, and are expected to have a different energy distribution than the diffuse astrophysical neutrino flux that is the background in the analysis presented in this thesis.

The EHE event selection is designed to accept as many neutrinos as possible with a high incident energy, while rejecting the majority of events with an atmospheric origin, and is described in Chapter 10.2. The event selection mainly selects on the registered brightness of an event, which is highly correlated with the deposited energy of the event, with a cut value determined by the reconstructed direction of the incident neutrino and its track fit quality. The selection variables in the EHE analysis are:
• The number of registered photo-electrons, nPE, and its base-10 logarithm, log10(nPE).
• The number of detector channels (DOMs) with registered charge, nCH.
• The fit quality (the reduced χ² parameter) of the EHE track reconstruction, χ²red,EHE.
• The cosine of the zenith direction of the EHE reconstructed track, cos(θzen,EHE).
The selection criteria per analysis level are summarized below:

The EHE filter Data reduction by selection on nPE:

nPE ≥ 1000 (A.1)

The offline EHE cut Quality and data reduction cuts on nPE, nCH and χ²red,EHE separately:

nPE ≥ 25000 (A.2)

nCH ≥ 100
χ²red,EHE ≥ 30

Figure A.1. Distributions of the signal (cosmogenic neutrinos) and background (atmospheric muons, conventional atmospheric neutrinos, prompt atmospheric neutrinos) of the EHE analysis, over event brightness, denoted by NPE, and fit quality, denoted by χ²/ndf, along with the track quality cut criterion (Equation A.3). Credit: Figure 1 from reference [46].

Figure A.2. Distributions of the signal (cosmogenic neutrinos) and background (atmospheric muons, conventional atmospheric neutrinos, prompt atmospheric neutrinos) of the EHE analysis, over event brightness, denoted by NPE, and reconstructed zenith direction, denoted by cos(θLF), along with the muon bundle cut criterion (Equation A.4). Credit: Figure 2 from reference [46].

The track quality cut Rejection of prompt atmospheric electron neutrinos:

log10(nPE) ≥
    4.6                                if χ²red,EHE < 80
    4.6 + 0.015 × (χ²red,EHE − 80)     if 80 ≤ χ²red,EHE < 120     (A.3)
    5.2                                if 120 ≤ χ²red,EHE

log10(nPE) ≥
    4.6                                                 if cos(θzen,EHE) < 0.06
    4.6 + 1.85 × √(1 − ((cos(θzen,EHE) − 1)/0.94)²)     if 0.06 ≤ cos(θzen,EHE)     (A.4)

The muon bundle cut Rejection of atmospheric muon and muon neutrino events from above.

The surface veto Rejection of downwards directed events (θzen,EHE < 85°) in coincidence with two or more registered photons in the IceTop surface array.

Simulated event distributions of the EHE analysis signal (cosmogenic neutrinos) and background (atmospheric muons, conventional atmospheric neutrinos, prompt atmospheric neutrinos) are shown in Figures A.1 and A.2. Figure A.1 shows the distributions over event brightness, here denoted by NPE, and fit quality, denoted by χ²/ndf, along with the applied selection criterion of the track quality cut. Correspondingly, Figure A.2 shows the distributions over event brightness and reconstructed zenith direction, here denoted by cos(θLF), along with the applied selection criterion of the muon bundle cut. This results in a total rejection of all events with a brightness of log10(nPE) < 4.6 (i.e. nPE ≲ 4.0 × 10^4) and full acceptance of all events with log10(nPE) ≥ 6.45 (i.e. nPE ≳ 2.8 × 10^6). Between these values, the acceptance depends on the direction and fit quality of the event. The atmospheric muon and neutrino event rates at each analysis level are displayed in Table A.1, along with the corresponding acceptance of cosmogenic neutrinos relative to the EHE filter level.

Table A.1. The expected event rate of atmospheric muons and neutrinos, along with the acceptance of cosmogenic neutrinos relative to the EHE filter level, for the EHE analysis levels [46].

Analysis level      | Atmospheric muon  | Atmospheric neutrino | Cosmogenic neutrino
                    | event rate [Hz]   | event rate [Hz]      | relative acceptance
EHE filter          | 0.8               | 7.6 × 10^−6          | 1.00
Offline EHE cut     | 6.7 × 10^−4       | 1.0 × 10^−8          | 0.74
Track quality cut   | 1.6 × 10^−4       | 6.1 × 10^−10         | 0.61
Muon bundle cut     | 3.0 × 10^−10      | 3.6 × 10^−10         | 0.43
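Equations (A.3) and (A.4) are straightforward to express as code. The sketch below (hypothetical helper names, not IceCube software) implements the two brightness thresholds; note that both reduce to log10(nPE) ≥ 4.6 at their lower boundaries, and that the muon bundle threshold reaches 6.45 for vertically down-going tracks (cos θ = 1), matching the full-acceptance value quoted in the text.

```python
import math

def passes_track_quality_cut(log_npe: float, chi2_red: float) -> bool:
    """Track quality cut, Eq. (A.3): brightness threshold vs. fit quality."""
    if chi2_red < 80:
        threshold = 4.6
    elif chi2_red < 120:
        threshold = 4.6 + 0.015 * (chi2_red - 80)
    else:
        threshold = 5.2
    return log_npe >= threshold

def passes_muon_bundle_cut(log_npe: float, cos_zen: float) -> bool:
    """Muon bundle cut, Eq. (A.4): brightness threshold vs. zenith direction."""
    if cos_zen < 0.06:
        threshold = 4.6
    else:
        threshold = 4.6 + 1.85 * math.sqrt(1.0 - ((cos_zen - 1.0) / 0.94) ** 2)
    return log_npe >= threshold

# A bright, well-reconstructed, near-horizontal track passes both cuts:
print(passes_track_quality_cut(5.3, 40.0))   # True (threshold 4.6)
print(passes_muon_bundle_cut(5.3, 0.02))     # True (threshold 4.6)
```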

Before applying the selection criteria, the selection variables in this analysis were validated against experimental data, as described in Chapter 5.4.1. The EHE analysis has been applied to 9 yr of experimental data, comprising the detector seasons IceCube-40, -59, -79 and IceCube-86 I–VI. The EHE analysis was initially developed for the first four of these seasons, and the following seasons were added incrementally. One effect of this was that the full final season, IC86-VI, was designated as a physics sample, as opposed to being divided into physics and burn samples. Over the full 9 yr period, the EHE event selection was expected to accept an average of fewer than 0.085 events of atmospheric origin. The neutrino effective area of the event selection using the full detector configuration (IceCube-86) is shown in Figure 13.2.

The accepted neutrino events are expected to originate both from the diffuse astrophysical neutrino flux, measured by IceCube up to ∼ PeV energies [73; 75; 79], and from the GZK neutrino flux, which has not yet been experimentally discovered. The diffuse astrophysical neutrino flux thus forms the dominant background for the EHE analysis. In the EHE analysis, the astrophysical flux is distinguished from the GZK neutrinos through statistical methods, as well as an event-by-event consideration. The result is an upper limit on the abundance of cosmogenic neutrinos.

B. Step I Observed Events over BDT Variables

The Step II BDT score and variable values of the Step I observed events A, B and C are displayed in Figures B.1, B.2, B.3 and B.4, and listed in Table B.1.

Figure B.1. The Step I observed events A, B and C over the Step II BDT score, along with the simulated magnetic monopole and astrophysical neutrino event distributions.

Table B.1. The values taken by the three Step I observed events A, B and C in the Step II BDT variables, as well as the BDT score.

Variable                    | Event A  | Event B  | Event C
BDT score                   | −0.089   | −0.742   | −0.626
βBM                         | 1.127    | 0.628    | 0.942
rsd(EMIL)                   | 3.20     | 4.97     | 7.25
avg(dDOM,Q)CV-TrackChar     | 42.8 m   | 67.6 m   | 54.1 m
tFWHM,CV-TimeChar           | 2.76 µs  | 2.78 µs  | 2.56 µs
LFRCV-TrackChar             | 0.615    | 0.362    | 0.344
RCOCV-HitStats              | 0.0458   | 0.434    | 0.0886
log10(nPE)                  | 5.10     | 5.30     | 5.32
cos(θzen,BM)                | 0.203    | 0.0205   | −0.391
dC,BM                       | 314 m    | 418 m    | 137 m

(a) The Step II speed variable, βBM.

(b) The Step II energy loss RSD variable, rsd(EMIL).

(c) The Step II average pulse distance variable, avg (dDOM,Q)CV-TrackChar.

Figure B.2. The Step I observed events A, B and C over the Step II BDT variables, along with the simulated magnetic monopole and astrophysical neutrino event distri- butions.

(a) The Step II pulse-time FWHM variable, tFWHM,CV-TimeChar.

(b) The Step II length fill ratio variable, LFRCV-TrackChar.

(c) The Step II relative CoG offset variable, RCOCV-HitStats.

Figure B.3. The Step I observed events A, B and C over the Step II BDT variables, along with the simulated magnetic monopole and astrophysical neutrino event distri- butions.

(a) The Step II log-brightness variable, log10 (nPE).

(b) The Step II cos-zenith variable, cos(θzen,BM).

(c) The Step II centrality variable, dC,BM.

Figure B.4. The Step I observed events A, B and C over the Step II BDT variables, along with the simulated magnetic monopole and astrophysical neutrino event distri- butions.

References

[1] H. A. Baer and A. Belyaev, editors. Proceedings of the Dirac Centennial Symposium. World Scientific Publishing Company, 2003. ISBN 978-981-238-412-6. doi: 10.1142/5310.

[2] A. Rajantie. Introduction to Magnetic Monopoles. Contemp. Phys., 53: 195–211, 2012. doi: 10.1080/00107514.2012.685693.

[3] P. A. M. Dirac. Quantised Singularities in the Electromagnetic Field. Proc. R. Soc. Lond. A, 133(821):60–72, 1931. doi: 10.1098/rspa.1931.0130.

[4] G. L. Kane. Modern Elementary Particle Physics — the Fundamental Particles and Forces? Westview Press, 2nd edition, 1993. ISBN 9780201624601.

[5] M. Tanabashi et al. Review of particle physics. Phys. Rev. D, 98:030001, 2018. doi: 10.1103/PhysRevD.98.030001.

[6] G. ’t Hooft. Magnetic Monopoles in Unified Gauge Theories. Nucl. Phys. B, 79:276–284, 1974. doi: 10.1016/0550-3213(74)90486-6.

[7] A. M. Polyakov. Particle Spectrum in Quantum Field Theory. J. Exp. Theor. Phys. Lett., 20:194–195, 1974.

[8] J. Preskill. Magnetic Monopoles. Annu. Rev. Nucl. Part. Sci., 34:461–530, 1984. doi: 10.1146/annurev.ns.34.120184.002333.

[9] M. Spurio. Searches for Magnetic Monopoles and other Massive Particles. In C. Pérez de los Heros, editor, Probing Particle Physics with Neutrino Telescopes. WSP, 2020. ISBN 978-981-327-501-0. doi: 10.1142/11122.

[10] T. W. B. Kibble. Topology of Cosmic Domains and Strings. J. Phys. A, 9:1387–1398, 1976. doi: 10.1088/0305-4470/9/8/029.

[11] S. D. Wick, T. W. Kephart, T. J. Weiler, and P. L. Biermann. Signatures for a cosmic flux of magnetic monopoles. Astropart. Phys., 18:663–687, 2003. doi: 10.1016/S0927-6505(02)00200-1.

[12] M. G. Aartsen et al. Search for non-relativistic Magnetic Monopoles with IceCube. Eur. Phys. J. C, 74(7):2938, 2014. doi: 10.1140/epjc/s10052-014-2938-8. [Erratum: Eur. Phys. J. C 79, 124 (2019)].

[13] M. G. Aartsen et al. Searches for Relativistic Magnetic Monopoles in IceCube. Eur. Phys. J. C, 76(3):133, 2016. doi: 10.1140/epjc/s10052-016-3953-8.

[14] A. Albert et al. Search for relativistic magnetic monopoles with five years of the ANTARES detector data. J. High Energy Phys., 07:054, 2017. doi: 10.1007/JHEP07(2017)054.

[15] K. Antipin et al. Search for relativistic magnetic monopoles with the Baikal Neutrino Telescope. Astropart. Phys., 29:366–372, 2008. doi: 10.1016/j.astropartphys.2008.03.006.

[16] A. Aab et al. Search for ultrarelativistic magnetic monopoles with the Pierre Auger Observatory. Phys. Rev. D, 94(8):082002, 2016. doi: 10.1103/PhysRevD.94.082002.

[17] D. P. Hogan, D. Z. Besson, J. P. Ralston, I. Kravchenko, and D. Seckel. Relativistic Magnetic Monopole Flux Constraints from RICE. Phys. Rev. D, 78: 075031, 2008. doi: 10.1103/PhysRevD.78.075031.

[18] M. Detrixhe et al. Ultra-Relativistic Magnetic Monopole Search with the ANITA-II Balloon-borne Radio Interferometer. Phys. Rev. D, 83:023513, 2011. doi: 10.1103/PhysRevD.83.023513.

[19] A. Pollmann. Private communication, 2020.

[20] M. Ambrosio et al. Final results of magnetic monopole searches with the MACRO experiment. Eur. Phys. J. C, 25:511–522, 2002. doi: 10.1140/epjc/s2002-01046-9.

[21] E. N. Parker. The Origin of Magnetic Fields. Astrophys. J., 160:383, 1970. doi: 10.1086/150442.

[22] M. S. Turner, E. N. Parker, and T. J. Bogdan. Magnetic Monopoles and the Survival of Galactic Magnetic Fields. Phys. Rev. D, 26:1296, 1982. doi: 10.1103/PhysRevD.26.1296.

[23] B. Cabrera. First Results from a Superconductive Detector for Moving Magnetic Monopoles. Phys. Rev. Lett., 48:1378–1380, 1982. doi: 10.1103/PhysRevLett.48.1378.

[24] S. Burdin et al. Non-collider searches for stable massive particles. Phys. Rep., 582:1–52, 2015. doi: 10.1016/j.physrep.2015.03.004.

[25] K. Bendtz et al. Search for magnetic monopoles in polar volcanic rocks. Phys. Rev. Lett., 110(12):121803, 2013. doi: 10.1103/PhysRevLett.110.121803.

[26] H. Jeon and M. J. Longo. Search for magnetic monopoles trapped in matter. Phys. Rev. Lett., 75:1443–1446, 1995. doi: 10.1103/PhysRevLett.75.1443. [Erratum: Phys. Rev. Lett. 76, 159 (1996)].

[27] A. Aktas et al. A Direct search for stable magnetic monopoles produced in positron-proton collisions at HERA. Eur. Phys. J. C, 41:133–141, 2005. doi: 10.1140/epjc/s2005-02201-6.

[28] G. Aad et al. Search for Magnetic Monopoles and Stable High-Electric-Charge Objects in 13 TeV Proton-Proton Collisions with the ATLAS Detector. Phys. Rev. Lett., 124(3):031802, 2020. doi: 10.1103/PhysRevLett.124.031802.

[29] B. Acharya et al. Magnetic Monopole Search with the Full MoEDAL Trapping Detector in 13 TeV pp Collisions Interpreted in Photon-Fusion and Drell-Yan Production. Phys. Rev. Lett., 123(2):021802, 2019. doi: 10.1103/PhysRevLett.123.021802.

[30] M. G. Aartsen et al. Evidence for High-Energy Extraterrestrial Neutrinos at the IceCube Detector. Science, 342:1242856, 2013. doi: 10.1126/science.1242856.

[31] T. Carver. Ten years of All-sky Neutrino Point-Source Searches. In Proceedings of 36th International Cosmic Ray Conference — PoS(ICRC2019), volume 358, page 851, 2019. doi: 10.22323/1.358.0851.

[32] M. G. Aartsen et al. Multimessenger observations of a flaring blazar coincident with high-energy neutrino IceCube-170922A. Science, 361(6398):eaat1378, 2018. doi: 10.1126/science.aat1378.

[33] L. Köpke. Improved Detection of Supernovae with the IceCube Observatory. J. Phys. Conf. Ser., 1029(1):012001, 2018. doi: 10.1088/1742-6596/1029/1/012001.

[34] M. G. Aartsen et al. Measurement of Atmospheric Neutrino Oscillations at 6–56 GeV with IceCube DeepCore. Phys. Rev. Lett., 120(7):071801, 2018. doi: 10.1103/PhysRevLett.120.071801.

[35] A. Albert et al. Combined search for neutrinos from dark matter self-annihilation in the Galactic Centre with ANTARES and IceCube. Phys. Rev. D, 102(8):082002, 2020. doi: 10.1103/PhysRevD.102.082002.

[36] M. G. Aartsen et al. Search for annihilating dark matter in the Sun with 3 years of IceCube data. Eur. Phys. J. C, 77(3):146, 2017. doi: 10.1140/epjc/s10052-017-4689-9. [Erratum: Eur. Phys. J. C 79, 214 (2019)].

[37] M. G. Aartsen et al. First search for dark matter annihilations in the Earth with the IceCube Detector. Eur. Phys. J. C, 77(2):82, 2017. doi: 10.1140/epjc/s10052-016-4582-y.

[38] M. G. Aartsen et al. Search for Nonstandard Neutrino Interactions with IceCube DeepCore. Phys. Rev. D, 97(7):072009, 2018. doi: 10.1103/PhysRevD.97.072009.

[39] M. G. Aartsen et al. Search for sterile neutrino mixing using three years of IceCube DeepCore data. Phys. Rev. D, 95(11):112002, 2017. doi: 10.1103/PhysRevD.95.112002.

[40] M. G. Aartsen et al. The IceCube Neutrino Observatory: Instrumentation and Online Systems. J. Instrum., 12(03):P03012, 2017. doi: 10.1088/1748-0221/12/03/P03012.

[41] R. Abbasi et al. The Design and Performance of IceCube DeepCore. Astropart. Phys., 35:615–624, 2012. doi: 10.1016/j.astropartphys.2012.01.004.

[42] R. Abbasi et al. The IceCube Data Acquisition System: Signal Capture, Digitization, and Timestamping. Nucl. Instrum. Methods Phys. Res. A, 601: 294–316, 2009. doi: 10.1016/j.nima.2009.01.001.

[43] M. Ackermann et al. Optical properties of deep glacial ice at the South Pole. J. Geophys. Res., 111(D13):D13203, 2006. doi: 10.1029/2005JD006687.

[44] P. B. Price, K. Woschnagg, and D. Chirkin. Age vs depth of glacial ice at South Pole. Geophys. Res. Lett., 27:2129–2132, 2000. doi: 10.1029/2000GL011351.

[45] P. B. Price and K. Woschnagg. Role of group and phase velocity in high-energy neutrino observatories. Astropart. Phys., 15:97–100, 2001. doi: 10.1016/S0927-6505(00)00142-0.

[46] M. G. Aartsen et al. Differential limit on the extremely-high-energy cosmic neutrino flux in the presence of astrophysical background from nine years of IceCube data. Phys. Rev. D, 98(6):062003, 2018. doi: 10.1103/PhysRevD.98.062003.

[47] J. A. Formaggio and G. P. Zeller. From eV to EeV: Neutrino Cross Sections Across Energy Scales. Rev. Mod. Phys., 84:1307–1341, 2012. doi: 10.1103/RevModPhys.84.1307.

[48] S. L. Glashow. Resonant Scattering of Antineutrinos. Phys. Rev., 118:316–317, 1960. doi: 10.1103/PhysRev.118.316.

[49] S. P. Ahlen. Monopole Track Characteristics in Plastic Detectors. Phys. Rev. D, 14:2935–2940, 1976. doi: 10.1103/PhysRevD.14.2935.

[50] E. Bauer. The energy loss of free magnetic poles in passing through matter. Math. Proc. Camb. Philos. Soc., 47(4):777–789, 1951. doi: 10.1017/S0305004100027225.

[51] S. P. Ahlen. Stopping Power Formula for Magnetic Monopoles. Phys. Rev. D, 17:229–233, 1978. doi: 10.1103/PhysRevD.17.229.

[52] B. A. P. van Rens. Detection of Magnetic Monopoles Below the Cherenkov Limit. PhD thesis, Amsterdam U., 2006.

[53] Y. Kazama, C. N. Yang, and A. S. Goldhaber. Scattering of a Dirac Particle with Charge Ze by a Fixed Magnetic Monopole. Phys. Rev. D, 15:2287–2299, 1977. doi: 10.1103/PhysRevD.15.2287.

[54] R. M. Sternheimer. Density effect for the ionization loss in various materials. Phys. Rev., 103:511–515, 1956. doi: 10.1103/PhysRev.103.511.

[55] A. Obertacke Pollmann. Luminescence of water or ice as a new detection method for magnetic monopoles. Eur. Phys. J. Web Conf., 164:07019, 2017. doi: 10.1051/epjconf/201716407019.

[56] D. R. Tompkins. Total energy loss and Čerenkov emission from monopoles. Phys. Rev., 138:B248–B250, 1965. doi: 10.1103/PhysRev.138.B248.

[57] P. A. Cherenkov. Visible luminescence of pure liquids under the influence of γ-radiation. Dokl. Akad. Nauk SSSR, 2(8):451–454, 1934. doi: 10.3367/UFNr.0093.196710n.0385.

[58] I. M. Frank and I. E. Tamm. Coherent visible radiation of fast electrons passing through matter. Compt. Rend. Acad. Sci. URSS, 14(3):109–114, 1937. doi: 10.3367/UFNr.0093.196710o.0388.

[59] A. Pollmann. Search for mildly relativistic magnetic monopoles with IceCube. PhD thesis, Bergische Universität Wuppertal, 2015.

[60] M. G. Aartsen et al. Probing the origin of cosmic rays with extremely high energy neutrinos using the IceCube Observatory. Phys. Rev. D, 88:112008, 2013. doi: 10.1103/PhysRevD.88.112008.

[61] M. G. Aartsen et al. Evidence for Astrophysical Muon Neutrinos from the Northern Sky with IceCube. Phys. Rev. Lett., 115(8):081102, 2015. doi: 10.1103/PhysRevLett.115.081102.

[62] F. Lauber. Private communication, 2020.

[63] M. G. Aartsen et al. Energy Reconstruction Methods in the IceCube Neutrino Telescope. J. Instrum., 9:P03009, 2014. doi: 10.1088/1748-0221/9/03/P03009.

[64] W. K. Härdle and L. Simar. Applied Multivariate Statistical Analysis. Springer-Verlag Berlin Heidelberg, 4th edition, 2015. ISBN 978-3-662-45170-0. doi: 10.1007/978-3-662-45171-7.

[65] M. Kronmueller and T. Glauch. Application of Deep Neural Networks to Event Type Classification in IceCube. In Proceedings of 36th International Cosmic Ray Conference — PoS(ICRC2019), volume 358, page 937, 2019. doi: 10.22323/1.358.0937.

[66] J. R. Klein and A. Roodman. Blind analysis in nuclear and particle physics. Annu. Rev. Nucl. Part. Sci., 55:141–163, 2005. doi: 10.1146/annurev.nucl.55.090704.151521.

[67] G. J. Feldman and R. D. Cousins. A Unified approach to the classical statistical analysis of small signals. Phys. Rev. D, 57:3873–3889, 1998. doi: 10.1103/PhysRevD.57.3873.

[68] G. C. Hill and K. Rawlins. Unbiased cut selection for optimal upper limits in neutrino detectors: The Model rejection potential technique. Astropart. Phys., 19:393–402, 2003. doi: 10.1016/S0927-6505(02)00240-2.

[69] G. C. Hill, J. Hodges, B. Hughey, A. Karle, and M. Stamatikos. Examining the balance between optimising an analysis for best limit setting and best discovery potential. In Statistical Problems in Particle Physics, Astrophysics and Cosmology, pages 108–111, 2005. doi: 10.1142/9781860948985_0025.

[70] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55:119–139, 1997. doi: 10.1006/jcss.1997.1504.

[71] M. D. Richman. A Search For Muon Neutrinos Coincident With Gamma-Ray Bursts Using IceCube. PhD thesis, University of Maryland, 2015.

[72] F. J. Massey. The Kolmogorov-Smirnov Test for Goodness of Fit. J. Am. Stat. Assoc., 46(253):68–78, 1951. doi: 10.2307/2280095.

[73] C. Haack and C. Wiebusch. A measurement of the diffuse astrophysical muon neutrino flux using eight years of IceCube data. In Proceedings of 35th International Cosmic Ray Conference — PoS(ICRC2017), volume 301, page 1005, 2017. doi: 10.22323/1.301.1005.

[74] J. Stachurska. First double cascade neutrino candidates in IceCube and a new measurement of the flavor composition. In Proceedings of 36th International Cosmic Ray Conference — PoS(ICRC2019), volume 358, page 1015, 2019. doi: 10.22323/1.358.1015.

[75] A. Schneider. Characterization of the astrophysical diffuse neutrino flux with IceCube high-energy starting events. In Proceedings of 36th International Cosmic Ray Conference — PoS(ICRC2019), volume 358, page 1004, 2019. doi: 10.22323/1.358.1004.

[76] C. A. Argüelles, T. Katori, and J. Salvado. New Physics in Astrophysical Neutrino Flavor. Phys. Rev. Lett., 115:161303, 2015. doi: 10.1103/PhysRevLett.115.161303.

[77] M. Rosenblatt. Remarks on some nonparametric estimates of a density function. Ann. Math. Stat., 27(3):832–837, 1956. doi: 10.1214/aoms/1177728190.

[78] E. Parzen. On estimation of a probability density function and mode. Ann. Math. Stat., 33(3):1065–1076, 1962. doi: 10.1214/aoms/1177704472.

[79] J. Stettner. Measurement of the diffuse astrophysical muon-neutrino spectrum with ten years of IceCube data. In Proceedings of 36th International Cosmic Ray Conference — PoS(ICRC2019), volume 358, page 1017, 2019. doi: 10.22323/1.358.1017.

[80] J. van Santen. IceCube-Gen2: the next-generation neutrino observatory for the South Pole. In Proceedings of 35th International Cosmic Ray Conference — PoS(ICRC2017), volume 301, page 991, 2017. doi: 10.22323/1.301.0991.

[81] V. S. Berezinsky and G. T. Zatsepin. Cosmic rays at ultrahigh-energies (neutrino?). Phys. Lett. B, 28:423–424, 1969. doi: 10.1016/0370-2693(69)90341-4.

Acta Universitatis Upsaliensis
Uppsala Dissertations from the Faculty of Science
Editor: The Dean of the Faculty of Science

1–11: 1970–1975
12. Lars Thofelt: Studies on leaf temperature recorded by direct measurement and by thermography. 1975.
13. Monica Henricsson: Nutritional studies on Chara globularis Thuill., Chara zeylanica Willd., and Chara haitensis Turpin. 1976.
14. Göran Kloow: Studies on Regenerated Cellulose by the Fluorescence Depolarization Technique. 1976.
15. Carl-Magnus Backman: A High Pressure Study of the Photolytic Decomposition of Azoethane and Propionyl Peroxide. 1976.
16. Lennart Källströmer: The significance of biotin and certain monosaccharides for the growth of Aspergillus niger on rhamnose medium at elevated temperature. 1977.
17. Staffan Renlund: Identification of Oxytocin and Vasopressin in the Bovine Adenohypophysis. 1978.
18. Bengt Finnström: Effects of pH, Ionic Strength and Light Intensity on the Flash Photolysis of L-tryptophan. 1978.
19. Thomas C. Amu: Diffusion in Dilute Solutions: An Experimental Study with Special Reference to the Effect of Size and Shape of Solute and Solvent Molecules. 1978.
20. Lars Tegnér: A Flash Photolysis Study of the Thermal Cis-Trans Isomerization of Some Aromatic Schiff Bases in Solution. 1979.
21. Stig Tormod: A High-Speed Stopped Flow Laser Light Scattering Apparatus and its Application in a Study of Conformational Changes in Bovine Serum Albumin. 1985.
22. Björn Varnestig: Coulomb Excitation of Rotational Nuclei. 1987.
23. Frans Lettenström: A study of nuclear effects in deep inelastic muon scattering. 1988.
24. Göran Ericsson: Production of Heavy Hypernuclei in Annihilation. Study of their decay in the fission channel. 1988.
25. Fang Peng: The Geopotential: Modelling Techniques and Physical Implications with Case Studies in the South and East China Sea and Fennoscandia. 1989.
26. Md. Anowar Hossain: Seismic Refraction Studies in the Baltic Shield along the Fennolora Profile. 1989.
27. Lars Erik Svensson: Coulomb Excitation of Vibrational Nuclei. 1989.
28. Bengt Carlsson: Digital differentiating filters and model based fault detection. 1989.
29. Alexander Edgar Kavka: Coulomb Excitation. Analytical Methods and Experimental Results on even Selenium Nuclei. 1989.
30. Christopher Juhlin: Seismic Attenuation, Shear Wave Anisotropy and Some Aspects of Fracturing in the Crystalline Rock of the Siljan Ring Area, Central Sweden. 1990.
31. Torbjörn Wigren: Recursive Identification Based on the Nonlinear Wiener Model. 1990.
32. Kjell Janson: Experimental investigations of the proton and deuteron structure functions. 1991.
33. Suzanne W. Harris: Positive Muons in Crystalline and Amorphous Solids. 1991.
34. Jan Blomgren: Experimental Studies of Giant Resonances in Medium-Weight Spherical Nuclei. 1991.
35. Jonas Lindgren: Waveform Inversion of Seismic Reflection Data through Local Optimisation Methods. 1992.
36. Liqi Fang: Dynamic Light Scattering from Polymer Gels and Semidilute Solutions. 1992.
37. Raymond Munier: Segmentation, Fragmentation and Jostling of the Baltic Shield with Time. 1993.

Prior to January 1994, the series was called Uppsala Dissertations from the Faculty of Science.

Acta Universitatis Upsaliensis
Uppsala Dissertations from the Faculty of Science and Technology
Editor: The Dean of the Faculty of Science

1–14: 1994–1997. 15–21: 1998–1999. 22–35: 2000–2001. 36–51: 2002–2003.
52. Erik Larsson: Identification of Stochastic Continuous-time Systems. Algorithms, Irregular Sampling and Cramér-Rao Bounds. 2004.
53. Per Åhgren: On System Identification and Acoustic Echo Cancellation. 2004.
54. Felix Wehrmann: On Modelling Nonlinear Variation in Discrete Appearances of Objects. 2004.
55. Peter S. Hammerstein: Stochastic Resonance and Noise-Assisted Signal Transfer. On Coupling-Effects of Stochastic Resonators and Spectral Optimization of Fluctuations in Random Network Switches. 2004.
56. Esteban Damián Avendaño Soto: Electrochromism in Nickel-based Oxides. Coloration Mechanisms and Optimization of Sputter-deposited Thin Films. 2004.
57. Jenny Öhman Persson: The Obvious & The Essential. Interpreting Software Development & Organizational Change. 2004.
58. Chariklia Rouki: Experimental Studies of the Synthesis and the Survival Probability of Transactinides. 2004.
59. Emad Abd-Elrady: Nonlinear Approaches to Periodic Signal Modeling. 2005.
60. Marcus Nilsson: Regular Model Checking. 2005.
61. Pritha Mahata: Model Checking Parameterized Timed Systems. 2005.
62. Anders Berglund: Learning computer systems in a distributed project course: The what, why, how and where. 2005.
63. Barbara Piechocinska: Physics from Wholeness. Dynamical Totality as a Conceptual Foundation for Physical Theories. 2005.
64. Pär Samuelsson: Control of Nitrogen Removal in Activated Sludge Processes. 2005.
65. Mats Ekman: Modeling and Control of Bilinear Systems. Application to the Activated Sludge Process. 2005.
66. Milena Ivanova: Scalable Scientific Stream Query Processing. 2005.
67. Zoran Radović: Software Techniques for Distributed Shared Memory. 2005.
68. Richard Abrahamsson: Estimation Problems in Array Signal Processing, System Identification, and Radar Imagery. 2006.
69. Fredrik Robelius: Giant Oil Fields – The Highway to Oil. Giant Oil Fields and their Importance for Future Oil Production. 2007.
70. Anna Davour: Search for low mass WIMPs with the AMANDA neutrino telescope. 2007.
71. Magnus Ågren: Set Constraints for Local Search. 2007.
72. Ahmed Rezine: Parameterized Systems: Generalizing and Simplifying Automatic Verification. 2008.
73. Linda Brus: Nonlinear Identification and Control with Solar Energy Applications. 2008.
74. Peter Nauclér: Estimation and Control of Resonant Systems with Stochastic Disturbances. 2008.
75. Johan Petrini: Querying RDF Schema Views of Relational Databases. 2008.
76. Noomene Ben Henda: Infinite-state Stochastic and Parameterized Systems. 2008.
77. Samson Keleta: Double Pion Production in dd→αππ Reaction. 2008.
78. Mei Hong: Analysis of Some Methods for Identifying Dynamic Errors-in-variables Systems. 2008.
79. Robin Strand: Distance Functions and Image Processing on Point-Lattices With Focus on the 3D Face- and Body-centered Cubic Grids. 2008.
80. Ruslan Fomkin: Optimization and Execution of Complex Scientific Queries. 2009.
81. John Airey: Science, Language and Literacy. Case Studies of Learning in Swedish University Physics. 2009.
82. Arvid Pohl: Search for Subrelativistic Particles with the AMANDA Neutrino Telescope. 2009.
83. Anna Danielsson: Doing Physics – Doing Gender. An Exploration of Physics Students’ Identity Constitution in the Context of Laboratory Work. 2009.
84. Karin Schönning: Production in pd Collisions. 2009.
85. Henrik Petrén: η Meson Production in Proton-Proton Collisions at Excess Energies of 40 and 72 MeV. 2009.
86. Jan Nyström: Analysing Fault Tolerance for ERLANG Applications. 2009.
87. John Håkansson: Design and Verification of Component Based Real-Time Systems. 2009.
88. Sophie Grape: Studies of PWO Crystals and Simulations of the p̄p → Λ̄Λ, Λ̄Σ0 Reactions for the PANDA Experiment. 2009.
90. Agnes Rensfelt. Viscoelastic Materials. Identification and Experiment Design. 2010.
91. Erik Gudmundson. Signal Processing for Spectroscopic Applications. 2010.
92. Björn Halvarsson. Interaction Analysis in Multivariable Control Systems. Applications to Bioreactors for Nitrogen Removal. 2010.
93. Jesper Bengtson. Formalising process calculi. 2010.
94. Magnus Johansson. Psi-calculi: a Framework for Mobile Process Calculi. Cook your own correct process calculus – just add data and logic. 2010.
95. Karin Rathsman. Modeling of Electron Cooling. Theory, Data and Applications. 2010.
96. Liselott Dominicus van den Bussche. Getting the Picture of University Physics. 2010.
97. Olle Engdegård. A Search for Dark Matter in the Sun with AMANDA and IceCube. 2011.
98. Matthias Hudl. Magnetic materials with tunable thermal, electrical, and dynamic properties. An experimental study of magnetocaloric, multiferroic, and spin-glass materials. 2012.
99. Marcio Costa. First-principles Studies of Local Structure Effects in Magnetic Materials. 2012.
100. Patrik Adlarson. Studies of the Decay η→π+π-π0 with WASA-at-COSY. 2012.
101. Erik Thomé. Multi-Strange and Charmed Antihyperon-Hyperon Physics for PANDA. 2012.
102. Anette Löfström. Implementing a Vision. Studying Leaders’ Strategic Use of an Intranet while Exploring Ethnography within HCI. 2014.
103. Martin Stigge. Real-Time Workload Models: Expressiveness vs. Analysis Efficiency. 2014.
104. Linda Åmand. Ammonium Feedback Control in Wastewater Treatment Plants. 2014.
105. Mikael Laaksoharju. Designing for Autonomy. 2014.
106. Soma Tayamon. Nonlinear System Identification and Control Applied to Selective Catalytic Reduction Systems. 2014.
107. Adrian Bahne. Multichannel Audio Signal Processing. Room Correction and Sound Perception. 2014.
108. Mojtaba Soltanalian. Signal Design for Active Sensing and Communications. 2014.
109. Håkan Selg. Researching the Use of the Internet — A Beginner’s Guide. 2014.
110. Andrzej Pyszniak. Development and Applications of Tracking of Pellet Streams. 2014.
111. Olov Rosén. Parallel Stochastic Estimation on Multicore Platforms. 2015.
112. Yajun Wei. Ferromagnetic Resonance as a Probe of Magnetization Dynamics. A Study of FeCo Thin Films and Trilayers. 2015.
113. Marcus Björk. Contributions to Signal Processing for MRI. 2015.
114. Alexander Madsen. Hunting the Charged Higgs with Lepton Signatures in the ATLAS Experiment. 2015.
115. Daniel Jansson. Identification Techniques for Mathematical Modeling of the Human Smooth Pursuit System. 2015.
116. Henric Taavola. Dark Matter in the Galactic Halo. A Search Using Neutrino Induced Cascades in the DeepCore Extension of IceCube. 2015.
117. Rickard Ström. Exploring the Universe Using Neutrinos. A Search for Point Sources in the Southern Hemisphere Using the IceCube Neutrino Observatory. 2015.
118. Li Caldeira Balkeståhl. Measurement of the Dalitz Plot Distribution for η→π+π−π0 with KLOE. 2015.
119. Johannes Nygren. Input-Output Stability Analysis of Networked Control Systems. 2016.
120. Joseph Scott. Other Things Besides Number. Abstraction, Constraint Propagation, and String Variable Types. 2016.
121. Andrej Andrejev. Semantic Web Queries over Scientific Data. 2016.
122. Johan Blom. Model-Based Protocol Testing in an Erlang Environment. 2016.
123. Liang Dai. Identification using Convexification and Recursion. 2016.
124. Adriaan Larmuseau. Protecting Functional Programs From Low-Level Attackers. 2016.
125. Lena Heijkenskjöld. Hadronic Decays of the ω Meson. 2016.
126. Delphine Misao Lebrun. Photonic crystals and photocatalysis. Study of titania inverse opals. 2016.
127. Per Mattsson. Modeling and identification of nonlinear and impulsive systems. 2016.
128. Lars Melander. Integrating Visual Data Flow Programming with Data Stream Management. 2016.
129. Kristofer Severinsson. Samarbete = Samverkan? En fallstudie av AIMday vid Uppsala universitet. 2016.
130. Nina Fowler. Walking the Plank of the Entrepreneurial University. The little spin-out that could? 2017.
131. Kaj Jansson. Measurements of Neutron-induced Nuclear Reactions for More Precise Standard Cross Sections and Correlated Fission Properties. 2017.
132. Petter Bertilsson Forsberg. Collaboration in practice. A multiple case study on collaboration between small enterprises and university researchers. 2018.
133. Andreas Löscher. Targeted Property-Based Testing with Applications in Sensor Networks. 2018.
134. Simon Widmark. Causal MMSE Filters for Personal Audio. A Polynomial Matrix Approach. 2018.
135. Damian Pszczel. Search for a new light boson in meson decays. 2018.
136. Joachim Pettersson. From Strange to Charm. Meson production in electron-positron collisions. 2018.
137. Elisabeth Unger. The Extremes of Neutrino Astronomy. From Fermi Bubbles with IceCube to Ice Studies with ARIANNA. 2019.
138. Monica Norberg. Engagerat ledarskap för att skapa förutsättningar för allas delaktighet. Utgångspunkter i kvalitetsarbetet. 2019.
139. Peter Backeman. Quantifiers and Theories. A Lazy Approach. 2019.
140. Walter Ikegami Andersson. Exploring the Merits and Challenges of Hyperon Physics with PANDA at FAIR. 2020.
141. Petar Bokan. Pair production of Higgs bosons in the final state with bottom quarks and τ leptons in the ATLAS experiment. Search results using LHC Run 2 data and prospect studies at the HL-LHC. 2020.
142. Carl Kronlid. Engineered temporary networks. Effects of control and temporality on inter-organizational interaction. 2020.
143. Alexander Burgman. Bright Needles in a Haystack. A Search for Magnetic Monopoles Using the IceCube Neutrino Observatory. 2020.