


WARNING. Access to the contents of this doctoral thesis is conditioned on the acceptance of the use conditions set by the following Creative Commons license: https://creativecommons.org/licenses/?lang=en

Doctoral Thesis

MAGIC observations with bright Moon and their application to measuring the VHE gamma-ray spectral cut-off of the PeVatron candidate Cassiopeia A

Daniel Alberto Guberman

Departament de Física, Universitat Autònoma de Barcelona
Programa de doctorat en Física

Director: Dr. Abelardo Moralejo Olaizola
Director: Dr. Juan Cortina Blanco
Tutor: Dr. Enrique Fernández Sánchez

A thesis submitted for the degree of Philosophiæ Doctor

Submitted in 2018

Director: Dr. Abelardo Moralejo Olaizola, IFAE, Edifici Cn, UAB, 08193 Bellaterra (Barcelona), Spain, [email protected]
Director: Dr. Juan Cortina Blanco, IFAE, Edifici Cn, UAB, 08193 Bellaterra (Barcelona), Spain, [email protected]

Tutor: Dr. Enrique Fernández Sánchez, IFAE & UAB, Edifici Cn, UAB, 08193 Bellaterra (Barcelona), Spain, [email protected]

I herewith declare that I have produced this thesis without the prohibited assistance of third parties and without making use of aids other than those specified; notions taken over directly or indirectly from other sources have been identified as such. This paper has not previously been presented in identical or similar form to any other Spanish or foreign examination board.

The thesis work was conducted from January 2014 to September 2018 under the supervision of Abelardo Moralejo and Juan Cortina in Barcelona.

Barcelona, June 28, 2018.

Contents

Introduction

1 Cosmic Rays
  1.1 Galactic and extragalactic Cosmic Rays
  1.2 The puzzle of the galactic cosmic-ray origin
  1.3 Cosmic-ray electrons

2 Searching for sources of cosmic rays
  2.1 Direct and indirect detection of cosmic rays
  2.2 Signatures of cosmic-ray sources in gamma rays
    2.2.1 Gamma-ray production from cosmic rays and electrons
  2.3 Gamma-ray detection techniques
    2.3.1 Gamma-ray direct detection with Fermi-LAT
    2.3.2 The Imaging Atmospheric Cherenkov Technique
  2.4 Signatures of cosmic-ray sources in neutrinos

3 SNRs as cosmic ray accelerators
  3.1 Diffusive Shock Acceleration
  3.2 Evidence of particle acceleration in SNRs
  3.3 Cassiopeia A
  3.4 Alternatives to the SNR hypothesis

4 Adapting MAGIC for moonlight observations
  4.1 The MAGIC telescopes
    4.1.1 The MAGIC cameras
    4.1.2 Readout and trigger system
    4.1.3 Data taking
    4.1.4 Analysis chain
    4.1.5 Monte Carlo Simulations
    4.1.6 High level analysis products
  4.2 Moonlight observations
    4.2.1 Hardware settings for moonlight observations
    4.2.2 UV-pass Filters
    4.2.3 Data sample
    4.2.4 Moonlight adapted analysis
    4.2.5 Performance
    4.2.6 Towards increasing the duty cycle


5 Cosmic-ray acceleration in Cas A
  5.1 Cas A observations
    5.1.1 Observations with MAGIC
    5.1.2 Observations with Fermi-LAT
  5.2 Results
    5.2.1 Spectrum
  5.3 Statistical and systematic uncertainties
    5.3.1 Individual spectra
    5.3.2 Statistical and systematic uncertainties in the spectrum
    5.3.3 Fit Results
    5.3.4 Safety check with Crab Nebula samples
  5.4 Interpretation
    5.4.1 Leptonic model
    5.4.2 Hadronic model

Conclusions

A Performance under moonlight: auxiliary analysis
  A.1 Hillas distributions
  A.2 15-30 × NSB_Dark UV-pass filter spectrum

B The Light-Trap detector
  B.1 Silicon photomultipliers: a new era begins?
  B.2 Silicon photomultipliers: a brief technical review
    B.2.1 Operational principle
    B.2.2 Pulse shape
    B.2.3 Over-voltage, gain and PDE
    B.2.4 Noise
    B.2.5 Temperature dependence
  B.3 The Light-Trap pixel principles
  B.4 Proof-of-Concept Pixel
    B.4.1 SiPM
    B.4.2 Disc
    B.4.3 Optical coupling
    B.4.4 Reflective walls
    B.4.5 Pixel holder and readout electronics
    B.4.6 Laboratory measurements

C Moon shadow observations with MAGIC
  C.1 Feasibility study
    C.1.1 Energy range and camera acceptance
    C.1.2 Background and electron rates
    C.1.3 Observation time needed to detect the electron shadow

D The naima package

Introduction

In 1962 Bruno Rossi wrote the last pages of a book in which he described how high-energy astrophysics emerged and developed over the 50 years that followed what we now consider as its birth. This book was called Cosmic Rays [174] and commenced as follows:

“A steady rain of charged particles, moving at nearly the speed of light, falls upon our planet at all times and from all directions.”

More than fifty years after Rossi's book, high-energy astrophysics has evolved into a very mature and productive field. But many of the questions being asked at that time still remain unanswered. The origin of this steady rain of charged particles, the cosmic rays, is probably one of the biggest mysteries that we have not yet been able to solve. For years we hoped, and we still do, that gamma-ray instruments would point us to the sources in which the bulk of these cosmic rays acquire their energy. And for a long time it was believed, and it is still believed, that young supernova remnants would be those sources. But even if we have found evidence of particle acceleration in these systems, we still have not found one remnant capable of accelerating protons up to energies high enough to explain the observed cosmic-ray flux. This thesis aims to shed some light on this matter. In the following pages I describe a study of particle acceleration in the young supernova remnant Cassiopeia A, one of the most promising candidates to explain the origin of galactic cosmic rays. It reports on my work during the last four years, based on observations performed with the MAGIC gamma-ray telescopes, which I carried out with the invaluable assistance of many colleagues. I also hope that by the end of the thesis I will have conveyed the message that behind a main result there is a technical development that played an essential role, and, of course, to acknowledge the work of many others, on the theoretical and the experimental side, that motivated the observations and the analysis I will describe. In Chapter 1 I briefly introduce the problem of the origin of cosmic rays. Chapter 2 describes different experimental approaches to detect cosmic rays and to try to identify their potential sources. Chapter 3 is devoted to supernova remnants, the most popular candidates to be the sources that accelerate galactic cosmic rays up to the highest observed energies. I will focus on Cassiopeia A, the observational target of this thesis. In Chapter 4 I describe the instrument used to perform the observations, the MAGIC telescopes. Our strategy to accumulate enough observation time to have good statistics was to extend the telescopes' duty cycle by operating them under moonlight. I will then also describe the hardware and software modifications I had to implement to safely operate the telescopes under such conditions, to analyze the data taken and to evaluate the resulting performance.

Finally, in Chapter 5 I present the observation campaign on Cassiopeia A, the results obtained and their interpretation. Appendix B hosts the work I did on the development and characterization of a new pixel-detector concept, initially meant for Cherenkov astronomy but with potential applications beyond the field. I decided to keep it outside the main body of the text because it does not directly fit in the discussion of the cosmic-ray origin. But since I devoted a considerable amount of time to this task, and I consider it an essential part of my training, I thought it deserved its place in this thesis.

Chapter 1

Cosmic Rays

The history of cosmic-ray astrophysics can be traced back to 1912. Aiming to explain a mysterious intrinsic radiation that was observed in electrometers, Victor Hess performed seven balloon flights to measure the radiation in the atmosphere at different heights. He found that the radiation increased at high altitudes (above 2000 m), from which he concluded that it could not originate in the Earth as he had initially thought. In his article he wrote: “The results of the present observations seem to be most readily explained by the assumption that a radiation of very high penetrating power enters our atmosphere from above” [93]. At that time the most penetrating radiation known were the gamma rays from radioactive materials, so it was natural to think that this new unknown radiation could also be of electromagnetic origin. Robert Millikan later called this radiation cosmic rays [174]. Nowadays we know that cosmic rays are charged particles originating outside the solar system. Most of them are protons (∼90%), about 9% are helium nuclei and the remaining fraction is composed of heavier nuclei. The incoming cosmic radiation also contains other species, like electrons or gamma rays, which arrive in much lower fluxes but provide unique information about the nature of cosmic rays. These other forms of radiation are sometimes also considered part of the cosmic rays; in this work, instead, “cosmic rays” will refer only to the nuclei. One hundred years after Hess's flights, we still do not know where these cosmic nuclei come from. This chapter briefly goes through our current understanding of the nature and origin of the cosmic rays.

1.1 Galactic and extragalactic Cosmic Rays

The measured cosmic-ray spectrum in the vicinity of the Earth spans at least 13 decades in energy (Figure 1.1). Above $\sim 3\times 10^{10}$ eV and at least up to $\sim 4\times 10^{19}$ eV the cosmic-ray spectrum can be described by a broken power law with two breaks: the slope changes from $\propto E^{-2.7}$ to $\propto E^{-3.1}$ at the knee energy $E_{knee} \sim 3\times 10^{15}$ eV, and flattens again to $\propto E^{-2.7}$ at the ankle energy $E_{ankle} \sim 3\times 10^{18}$ eV. There is also evidence of a cut-off at $4-6\times 10^{19}$ eV [4, 10, 171]. Above those energies the flux falls to very low values, making measurements extremely difficult.

It is widely accepted that the bulk of the cosmic rays observed up to the knee are of galactic origin. Several arguments support this hypothesis. The existence of a break in the spectrum itself already suggests that there is some source effect. Besides, supposing that the Galaxy is immersed in a sea of cosmic rays that permeates the entire universe already appears as an idea hard to support [91].


Figure 1.1: Cosmic-ray spectrum at the Earth, combining measurements from different experiments (original figure from [75]).

The spatial distribution of cosmic rays at ∼1 GeV per nucleon can be inferred from the observation of the diffuse gamma-ray emission of the Galaxy and its dwarf satellites¹ [7], combined with the knowledge of the gas distribution in these systems. The results show that these mildly relativistic particles are smoothly distributed throughout the Galactic disc, with a radial gradient that is higher in the inner Galaxy and lower towards the outer disc, which suggests that particles are being produced inside the Galaxy and not diffusing in from outside. In addition, the measured cosmic-ray density in the Magellanic Clouds is significantly lower than the one observed in the Galaxy [8, 14], proving that the Milky Way and its satellite galaxies cannot be immersed in a universal cosmic-ray sea.

¹ Gamma rays provide a signature of the presence of relativistic protons, since they are produced in the decay of π⁰ mesons that arise from proton-proton collisions (see Section 2.2).

This does not mean that all the cosmic rays are produced within the Galaxy. In fact, there is enough evidence to support that those with energies higher than the ankle are of extragalactic origin. Particles at those energies, known as Ultra High Energy Cosmic Rays (UHECRs), cannot be confined in the Galaxy because their Larmor radii in typical galactic magnetic fields of ∼0.3 nT would be of the order of the Galaxy size or larger. Then, if they were produced inside the Galaxy they would arrive at the Earth almost without being deflected, pointing back to their sources. Measurements show, however, that the distribution of cosmic rays with energies higher than $E_{ankle}$ is nearly isotropic [157]. Actually, the hardening of the spectrum at the ankle already suggests the presence of a new component.

The situation at energies between the knee and the ankle is more controversial, although the general trend also points to a galactic origin of these particles. The chemical composition changes at those energies: beyond the knee the relative abundance of heavier nuclei becomes higher. This could be explained if the cosmic-ray acceleration mechanism were rigidity dependent and able to accelerate protons only up to energies of $E_{knee}$. Iron nuclei would then be accelerated up to $E \sim 26\times E_{knee} \sim 10^{17}$ eV. A naive conclusion would then be that the galactic cosmic-ray spectrum should end at $\sim 10^{17}$ eV. But there could perfectly well exist a rare type of source capable of accelerating particles to higher energies while leaving the knee interpretation unaffected [75].

In this thesis I will focus only on the galactic cosmic rays, and in the next section I will briefly describe the requirements that their hypothetical sources should fulfill. But before leaving the UHECRs out of the discussion, it is fair to point out that the question about the cosmic-ray origin is unavoidably connected with the nature of the transition from galactic cosmic rays to UHECRs [75].

1.2 The puzzle of the galactic cosmic-ray origin

In [91], Drury points out that the question about the origin of cosmic rays actually asks three different questions at the same time: what is the source of the matter that ends up as the nuclei that eventually reach the Earth, what is the source of the energy that powers their acceleration, and where does this acceleration take place.

Source of matter

The chemical and isotopic composition of the cosmic rays could be the key to identifying the source that provides the particles that end up as cosmic rays. Disappointingly, the observed chemical composition does not differ much from the standard solar-system abundances. This “normality” naturally does not point to any particular source, but can still be used to rule out some exotic cosmic-ray origin models in which specific species would be enhanced. Perhaps the most interesting feature in the galactic cosmic-ray composition is the measured isotopic ratio ²²Ne/²⁰Ne, which is ∼5 times larger than in the solar wind. The measured ratio can be reproduced if the source material of cosmic rays consists of ∼20% material from Wolf-Rayet (WR) stars and ∼80% from the Interstellar Medium (ISM) [73, 159]. The most massive stars present in OB associations are believed to evolve into WR stars that inject their ²²Ne-rich wind material into the local ISM, forming regions known as superbubbles.

Source of energy

From our understanding of cosmic-ray propagation in the Galaxy, the power required to maintain the observed galactic cosmic-ray population is derived to be $\sim 10^{41}$ erg s$^{-1}$. This is supported by recent calculations using the Galprop model [186]. Supernova explosions seem to be the only plausible sources that could supply that amount of power in a form capable of driving particle acceleration. In Chapter 3 it will be shown that the requirement would be met if ∼10% of the energy released in the explosion went into accelerating particles. There are other sources that could in principle provide the required power, like the bulk luminosity of all the stars in the Galaxy ($\sim 10^{44}$ erg s$^{-1}$) or the energy released in core-collapse supernovae that goes into neutrinos and gravitational waves. However, there are no known acceleration mechanisms that could use that stored energy to accelerate particles. Pulsars, on the other hand, can accelerate particles, but they are about one order of magnitude less powerful than supernovae and cannot be the dominant type of source of cosmic rays. The same applies to winds from OB stars [91].

Acceleration site and mechanism

The Diffusive Shock Acceleration (DSA) mechanism in the vicinity of shock waves is the most popular process considered by cosmic-ray origin models. It relies on robust and rather simple physics and naturally produces power-law spectra of accelerated particles. Since it is so central to these discussions, it will be discussed in detail in its dedicated Section 3.1.

From the discussion above, Supernova Remnants (SNRs) arise as the most promising candidates to be the accelerators of galactic cosmic rays and seem to be the key element to solve the mystery of their origin. SNRs not only release enough energy to power the acceleration: their forward shocks appear as a plausible scenario for DSA to occur efficiently, and acceleration in SNRs would be compatible with any of the possible sources of matter discussed above. Perhaps the main argument against this hypothesis is that no SNR has yet been found that acts as a PeVatron, a system capable of accelerating particles up to the PeV energies of the knee.

1.3 Cosmic-ray electrons

The fluxes of cosmic-ray electrons (e⁻) and positrons (e⁺) reaching the Earth are orders of magnitude lower than the fluxes of protons and other nuclei, as can be seen in Figure 1.1. The mean free path of electrons in the Galaxy is also much smaller. Conventional cosmic-ray propagation models suggest that most of these electrons (and all the positrons) are of secondary origin, resulting from the interaction of protons and other nuclei with the interstellar medium. However, recent measurements from different experiments have questioned the validity of those models. Figure 1.2 shows the electron and positron spectra measured by different experiments. The spectral index of the positron flux increases above 10 GeV, while the spectral index of the electron spectrum decreases. As a result, the ratio of positrons to electrons (or to all electrons, e⁻ + e⁺) increases, which is not consistent with a secondary origin, at least within the frame of the conventional cosmic-ray propagation models [179]. Many new theories have been proposed to explain this anomaly. Still, some of them hold to the idea that electrons are mainly of secondary origin and that the anomaly can be explained without adding a primary source of positrons [76]. But more commonly it is explained by invoking a new primary source of both electrons and positrons. Pulsars [180] and dark matter annihilation/decay [190] are the most popular candidates for those sources. A precise measurement of the positron spectrum between hundreds of GeV and a few TeV could be essential to favour (or disfavour) one of these hypotheses.

Figure 1.2: Electron (top) and positron (bottom) spectra reported by AMS-02, compared with measurements from other instruments. Figure from [109].

The central discussion of this thesis is focused only on the question of the origin of the (hadronic) cosmic rays, and in the following chapters I will “forget” about the electrons. A rather unique and creative, but unfortunately also very challenging, strategy to measure this spectrum, which was studied as part of this thesis, is described in Appendix C.

Chapter 2

Searching for sources of cosmic rays

On their journey from their hypothetical sources to the Earth, galactic cosmic rays diffuse through the ISM, changing their trajectories as dictated by the magnetic fields they encounter. Hence, when they reach the Earth their arrival direction has lost all information about their original source. Actually, at least up to energies close to the knee, the incoming cosmic-ray flux is nearly isotropic¹. Furthermore, cosmic rays cannot reach the Earth's surface, as they interact with the atmosphere well above the ground, which makes their detection a challenging task. In this chapter I will briefly discuss the different detection methods of cosmic rays that were used to obtain the well-known spectrum presented in Figure 1.1. Then I will show how gamma rays represent the best opportunity to identify cosmic-ray sources. I will also discuss the connection between accelerated charged particles and gamma rays, and how the latter can be detected. But since gamma rays are not the only cosmic-ray messengers, the last section discusses the possibilities that neutrino astronomy might provide to help unveil the mystery of the origin of cosmic rays.

2.1 Direct and indirect detection of cosmic rays

As cosmic rays interact high in the atmosphere, their direct detection is only possible near its top or in outer space. With balloons like BESS [30], CREAM [25] or the more recent SuperTIGER [72], or with space-borne instruments like PAMELA [164], AMS-02 [19] or CALET [16], it is possible to measure the chemical and isotopic composition of cosmic rays up to a few tens of TeV. A good understanding of the spectra of the different species can be essential to understand the nature of the acceleration processes. Different detection techniques are used in these experiments to infer the energy and charge of the incoming particle. The main drawback of this kind of measurement is the limited detection area and, in the case of balloons, the limited flight duration. This becomes critical at energies starting at ∼100 TeV, where the cosmic-ray event rate is ∼5 m⁻² sr⁻¹ day⁻¹ [103]. Thus, to access the energies close to the knee we are interested in, different strategies are needed. All cosmic-ray measurements at the highest energies come from ground-based experiments that indirectly detect cosmic rays from the products of their interaction with the atmosphere.

¹ There is a small but still measurable spatial anisotropy. The relative amplitude of this anisotropy is $10^{-4}-10^{-3}$ (see, for instance, [9, 189]).

The incoming ultra-relativistic cosmic ray first interacts with an atmospheric nucleus, giving rise to several particles, including neutral and charged pions. The products of the interaction in turn create more particles, and so on. A single primary cosmic ray can produce several thousand secondary particles that move downwards in the atmosphere, developing a cascade known as an Extensive Air Shower (EAS, Figure 2.1). An EAS develops over hundreds of meters in width and several kilometers in length, and reaches its maximum, in terms of number of particles, at heights of 8 to 12 kilometers above sea level. Charged pions may interact with other nuclei or decay into muons and neutrinos. As will be shown in Section 2.2.1, neutral pions decay quickly into two gamma rays that can initiate a special type of particle cascade known as an electromagnetic shower (left panel in Figure 2.1). The interaction between a gamma ray and the atmosphere creates an electron-positron pair. These new particles emit new gamma rays through bremsstrahlung (see Section 2.2.1) that will produce more electron-positron pairs, and so on. The created particles are less energetic at every step. When their energy is such that the bremsstrahlung probability is lower than the energy losses through ionization, the shower stops producing new particles. When the primary particles are gamma rays, electrons or positrons, they develop this kind of shower.

There is one extra key feature of EAS. As a large fraction of the created particles travel faster than the speed of light in the atmosphere, a flash of Cherenkov light is produced in the medium. Indirect cosmic-ray detection techniques work by detecting from the ground some of the secondary particles produced in the showers and/or by collecting this Cherenkov light. Typically, from the number of detected electrons $N_e$ and muons $N_\mu$ it is possible to reconstruct the energy of the primary cosmic ray, while from the ratio $N_e/N_\mu$ one can estimate its mass [131]. Experiments like KASCADE [52], KASCADE-Grande [53] or EAS-TOP [92] use shielded and unshielded scintillators to discriminate between electrons and muons. The Pierre Auger Observatory [166], which targets UHECRs, uses water Cherenkov detectors: relativistic particles that reach the Auger water tanks produce Cherenkov light that is then collected with a photomultiplier tube (PMT). In that case the discrimination between muons and electrons comes from the number of Cherenkov pulses produced within the same event and their intensity. IceTop [3] uses ice instead of water and measures the electromagnetic component of the showers in coincidence with muon detectors in IceCube. A combination of surface and underground detectors is also used by EAS-TOP to identify the nature of the detected particles. Other techniques aim to measure the lateral (HEGRA [123], CASA-BLANCA [100]) and longitudinal (the fluorescence detectors of Pierre Auger [165], Telescope Array [187], HiRes [5]) development of the showers.

In any case, in indirect detection methods the ability to deduce the nuclear composition relies on the theoretical understanding of the shower development and the hadronic interactions that occur within the cascade [103]. The measured observables are interpreted in terms of a primary mass and energy by comparison to air shower Monte Carlo simulations.
The energy estimation of the primary particle thus depends on the models of hadronic interactions considered in the simulations. Considering this, it is remarkable that all the experiments exhibit a similar spectral shape near the knee, despite the different absolute normalizations and detection methods employed by each of them [129]. Thanks to indirect cosmic-ray detectors, the existence of the knee has been firmly confirmed and its energy has been rather well established.

Figure 2.1: Left: Sketch of an electromagnetic shower initiated by a gamma ray. Right: Air shower initiated by a cosmic ray. Image obtained from [35].

More accurate spectral measurements of the different nuclei at energies between $\sim 10^{12}$ and $\sim 10^{18}$ eV would undoubtedly contribute to a better understanding of the nature of the cosmic-ray acceleration processes and of the transition between galactic and extragalactic cosmic rays. But since the measured overall cosmic-ray composition by itself does not unequivocally direct us to any known astrophysical source, the question of where those PeV nuclei come from should be addressed with other types of messengers: messengers that let us identify the presence of cosmic rays while pointing to the source in which they were created. Gamma rays (and maybe neutrinos) could be those messengers that deliver the information we need to finally solve the puzzle of the cosmic-ray origin.

2.2 Signatures of cosmic ray sources in gamma rays

Gamma rays are the most energetic form of electromagnetic radiation (energies above ∼500 keV). They are produced by different processes, some of which will be discussed in this section. Unlike charged particles, gamma rays can travel large distances within the Galaxy without being deflected or absorbed, pointing directly back to their sources. This is the essential feature that has turned gamma-ray astronomy into one of the most active and productive fields in high-energy astrophysics. The birth of gamma-ray astronomy was strongly connected to the discovery of cosmic rays. It was long believed that it would solve the mystery of the cosmic-ray origin by proving that they were accelerated in SNRs. Today the mystery remains unsolved, but a large variety of sources have been discovered to emit gamma rays through different mechanisms, which has opened a new era in high-energy astronomy.

2.2.1 Gamma-ray production from cosmic rays and electrons

The cosmic gamma rays we observe are produced by the interaction of relativistic particles with matter or radiation fields. By reconstructing their arrival direction it is possible to locate the particles responsible for such radiation. Relativistic protons colliding with the ambient gas produce neutral pions, which quickly decay into two gamma rays. If cosmic rays are accelerated to PeV energies, we should observe gamma-ray photons of ∼100 TeV [22]. The detection of those photons would then indicate that we have found a PeVatron. But hadrons are not the only particles that produce gamma rays: high-energy electrons also do. The most important leptonic radiation mechanisms at high energies are non-thermal Bremsstrahlung and Inverse Compton scattering. Then, to be able to prove that we have found a source of cosmic rays, we first need to identify whether the parent population responsible for the observed radiation consists of hadrons or electrons. Since the spectral shape of the emission depends on the associated radiation mechanism, precise measurements of the spectrum at different wavelengths play a key role in this identification process.

Pion decay

Gamma-ray emission through pion decay is one of the most common radiation mechanisms at high energies. It is, for instance, responsible for the galactic diffuse emission, which was used to map the cosmic-ray spatial distribution in the Milky Way and its vicinity. This is in fact a clear example of how we can use gamma rays to find the sources of cosmic rays. Relativistic protons and nuclei (like those we find among the cosmic rays) interact with the ambient gas (thermal protons and other nuclei) through inelastic collisions. In such collisions a set of secondary particles is produced, including neutral and charged pions. The π⁰ mesons quickly decay into two gamma rays. Charged pions have a longer lifetime and may either interact with other ambient particles or decay, producing a muon and a neutrino [22]. The proton energy threshold for the production of π⁰ mesons is ∼280 MeV [22]. In the case of a power-law distribution of primary protons, the spectrum of the gamma-ray emission from pion decay has a very distinct feature close to this energy threshold, often known as the “pion bump”. Figure 2.2 shows the expected gamma-ray emission from a proton population that follows a power law with an exponential cut-off:

$$\frac{dN_p(E)}{dE} = N_0 \left(\frac{E}{E_0}\right)^{-a} \exp\left(-\frac{E}{E_c}\right) \qquad (2.1)$$

where $N_0$ is a normalization factor, $E_0$ a reference energy and $E_c$ the cut-off energy (in other words, the maximum energy to which the protons are accelerated). The curves were obtained using the naima python package ([199], see Appendix D), which computes the non-thermal radiation produced by a given population of relativistic particles. In these calculations the normalization was fixed at $N_0 = 10^{50}$ TeV$^{-1}$ at a reference energy of $E_0 = 100$ GeV. Different spectral indices $a$, cut-off energies and target proton densities of the ambient gas $n_h$ were considered. In all cases the gamma-ray spectra rise quickly at low energies until they reach their maximum at energies close to 1 GeV and then fall slowly towards higher energies. The slope depends on the spectral index of the parent population (Figure 2.2a). The target proton density $n_h$ does not affect the shape of the curve, but the overall intensity: the higher the density, the higher the probability of collisions that produce neutral pions (Figure 2.2b). Finally, the highest energies of the gamma-ray spectrum provide information about the maximum energy of the parent particles, as can be seen in Figure 2.2c.

Figure 2.2: Gamma-ray emission from π⁰ decay, computed for proton populations that follow Eq. 2.1. Panels: (a) $n_h = 10$ cm$^{-3}$, $E_c = 3$ PeV; (b) $a = 2.2$, $E_c = 3$ PeV; (c) $n_h = 10$ cm$^{-3}$, $a = 2.2$.
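A calculation along these lines can be reproduced in a few lines of naima. The sketch below uses the parameters quoted above ($N_0 = 10^{50}$ TeV$^{-1}$ at $E_0 = 100$ GeV, $a = 2.2$, $E_c = 3$ PeV, $n_h = 10$ cm$^{-3}$); the photon energy grid and the source distance are illustrative choices, not values taken from this chapter.

```python
import numpy as np
import astropy.units as u
from naima.models import ExponentialCutoffPowerLaw, PionDecay

# Proton population following Eq. 2.1, with the parameters quoted in the text
protons = ExponentialCutoffPowerLaw(
    amplitude=1e50 / u.TeV,   # N0
    e_0=100 * u.GeV,          # reference energy E0
    alpha=2.2,                # spectral index a
    e_cutoff=3 * u.PeV,       # cut-off energy Ec
)

# Pion-decay emission from collisions with ambient gas of density n_h
pion = PionDecay(protons, nh=10 * u.cm**-3)

# Photon energy grid and source distance are illustrative assumptions
photon_energy = np.logspace(-1, 5, 200) * u.GeV
sed = pion.sed(photon_energy, distance=1 * u.kpc)  # E^2 dN/dE
```

Varying `alpha`, `nh` or `e_cutoff` in this sketch reproduces the trends shown in panels (a), (b) and (c) of Figure 2.2.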

Inverse Compton scattering

Inverse Compton emission is the production of gamma rays by the scattering of soft photons off relativistic electrons. Typical photon fields are the Cosmic Microwave Background (CMB) or the Near and Far Infrared fields (NIR and FIR, respectively). It is the main leptonic radiation process at energies close to TeV, where it is especially effective as the Compton cooling time decreases linearly with energy. Inverse Compton scattering by protons of mass $m_p$ is in principle also possible, but compared to scattering by electrons (of mass $m_e$) it is suppressed by a factor $(m_e/m_p)^4$ [22]. We can assume again a population of relativistic particles that follows a distribution like the one in Eq. 2.1, but this time composed of electrons, and compute its expected gamma-ray emission (Figure 2.3). The intensity of the radiation grows towards higher energies, while the slope of the spectrum depends on the spectral index of the electron population. After reaching its maximum, the flux drops quickly. The steepening of the gamma-ray spectrum reflects the transition from the Thomson regime, in which the energy loss rate of an electron is proportional to the square of its energy, to the Klein-Nishina regime, in which the loss rate is almost energy independent. The shape of the gamma-ray spectrum at the highest energies, near the cut-off, provides the essential information to determine the maximum energy of the accelerated electrons.

Figure 2.3: Gamma-ray emission from Inverse Compton scattering, computed for an electron population that follows Eq. 2.1, assuming different spectral indices $a$ and cut-off energies $E_c$.

Bremsstrahlung

Bremsstrahlung radiation occurs when a relativistic electron is accelerated by the electric field of a nucleus. Heavier charged particles of mass $M$ can also emit Bremsstrahlung, but this is typically neglected since its intensity is proportional to $(M/m_e)^{-2}$. It typically peaks at lower wavelengths compared to Inverse Compton scattering and strongly depends on the ambient gas density, as can be seen in Figure 2.4. At intermediate energies, if the electron parent population is a power law, the Bremsstrahlung gamma-ray spectrum also follows a power law with the same spectral index [22].

Synchrotron radiation

Charged particles that are forced to follow a curved trajectory by a magnetic field emit synchrotron radiation. Except in some rare scenarios, this emission typically does not extend beyond the X-ray band [22]. However, it is a mechanism that is always at work when a relativistic electron population is present, and it provides invaluable information on the nature of such a population. As can be seen in Figure 2.5, it can be used to constrain the spectral index or the cut-off energy of the electrons, or to set limits on the magnetic field $B$ that drives them.

Figure 2.4: Gamma-ray emission from Bremsstrahlung, computed for an electron population that follows Eq. 2.1.

In most scenarios the same electron population will emit radiation through the last three processes, and all of them should be considered when trying to model the broadband electromagnetic distribution (see Figure 2.6). For instance, the normalization factor $N_0$ could be tuned to match the observed emission at TeV energies assuming Inverse Compton, but then the computed Synchrotron emission could contradict measurements in the radio and X-ray bands. That is why multiwavelength measurements of the gamma-ray spectrum can be used to prove, discard or constrain parameters of a hypothetical scenario in which the radiation has a leptonic origin. At the same time they can be used to favour a hypothetical hadronic origin of the radiation, if the measurements at GeV and TeV energies are compatible with the distinct spectral shape that results from pion decay emission. But reality is usually not so simple, and on many occasions more than one particle population might be needed to explain the observed spectrum in all bands.
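As an illustration of this kind of multi-process modelling, the sketch below evaluates the three leptonic components for a single electron population with naima, using the spectral parameters quoted in the caption of Figure 2.6 ($a = 2.4$, $E_c = 100$ TeV, $n_h = 10$ cm$^{-3}$, $B = 300$ µG); the normalization, the seed photon field (CMB only), the photon energy grid and the source distance are illustrative assumptions.

```python
import numpy as np
import astropy.units as u
from naima.models import (ExponentialCutoffPowerLaw, InverseCompton,
                          Bremsstrahlung, Synchrotron)

# Electron population following Eq. 2.1 (a and Ec as in Figure 2.6; N0 assumed)
electrons = ExponentialCutoffPowerLaw(
    amplitude=1e36 / u.eV, e_0=1 * u.TeV, alpha=2.4, e_cutoff=100 * u.TeV
)

ic = InverseCompton(electrons, seed_photon_fields=["CMB"])
brems = Bremsstrahlung(electrons, n0=10 * u.cm**-3)
sync = Synchrotron(electrons, B=300 * u.uG)

# The same electrons radiate through all three channels: sum their SEDs
photon_energy = np.logspace(-6, 14, 300) * u.eV   # from radio/IR up to 100 TeV
distance = 1 * u.kpc                              # illustrative
total_sed = (ic.sed(photon_energy, distance)
             + brems.sed(photon_energy, distance)
             + sync.sed(photon_energy, distance))
```

Tuning the electron normalization to match a TeV measurement and then checking the synchrotron component against radio and X-ray data is exactly the kind of multiwavelength consistency test described above.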

2.3 Gamma-ray detection techniques

As explained in Section 2.1, gamma rays interact with nuclei in the atmosphere, starting an electromagnetic shower. Then, as happens with cosmic rays, direct detection of gamma rays is only possible with balloons flying at the top of the atmosphere or with satellites. Only with space-borne experiments is it possible to reach the lower energies (below a few tens of GeV) of the gamma-ray domain. Two satellite experiments represented major breakthroughs in the field and set the stage for what today we may consider modern gamma-ray astronomy.

Figure 2.5: Synchrotron radiation computed for an electron population that follows Eq. 2.1

Figure 2.6: Electromagnetic radiation computed for an electron population that emits through Inverse Compton, Bremsstrahlung and Synchrotron, and follows Eq. 2.1 with $a = 2.4$, $E_c = 100$ TeV, $n_h = 10$ cm$^{-3}$ and $B = 300$ µG.

Figure 2.7: Left: The Fermi Gamma-ray Space Telescope. Right: The converter/tracker, calorimeter and anticoincidence shield in the LAT detector. Image Credit: LAT collaboration

The first one was the Energetic Gamma-Ray Experiment Telescope (EGRET, 1991-2000), which discovered more than 270 sources emitting radiation between 100 MeV and 10 GeV [122]. The other one is the Large Area Telescope (LAT), on board the Fermi Gamma-ray Space Telescope [58], successor of EGRET since 2008 and currently the most advanced and productive direct-detection instrument. Besides being capable of directly detecting gamma rays, satellites profit from a long duty cycle and a low energy threshold, and can reject background with very high efficiency. Undoubtedly their greatest disadvantage is their low collection area. This becomes critical at very high energies (VHE, E > 50 GeV), where the fluxes are much lower. In this domain, ground-based telescopes achieve collection areas that are orders of magnitude larger than those of satellites. These telescopes indirectly detect photons using the Imaging Atmospheric Cherenkov Technique (IACT). IACT telescopes are strongly affected by background (mainly cosmic rays), have much lower duty cycles and have an energy threshold of at least some tens of GeV.

2.3.1 Gamma-ray direct detection with Fermi-LAT

The LAT (Figure 2.7) uses a converter consisting of 16 planes of high-Z material in which gamma rays interact, producing electron-positron pairs. The converter planes are interleaved with position-sensitive detectors that measure the tracks of the particles resulting from the pair conversion, from which the direction of the incoming gamma ray can be inferred. The created secondary particles develop an electromagnetic shower inside a calorimeter based on CsI(Tl) crystals, from which the energy of the gamma ray can be reconstructed. An anti-coincidence detector covering the top and lateral sides of the instrument is used to reject 99.97% of the hadronic background and, in general, to identify events triggered by electrons, positrons or any charged particle. Fermi-LAT has a large field of view that covers ∼20% of the sky at any time. It operates in the energy range from ∼20 MeV to ∼300 GeV, which is particularly interesting for cosmic-ray acceleration studies because it is sensitive to the energies of the ‘pion bump’ (Section 2.2.1). In fact, the first evidence of cosmic-ray acceleration in SNRs was obtained from Fermi observations, as will be further discussed in Section 3.2.

Figure 2.8: Sketch describing the IACT during stereo observations. Reflectors located within the light pool can focus a fraction of the Cherenkov light onto their telescopes' cameras to image the shower. Figure from [144].

2.3.2 The Imaging Atmospheric Cherenkov Technique

As already anticipated in Section 2.1, a gamma ray interacts with the atmosphere initiating an electromagnetic shower. The positrons and electrons produced in this shower travel faster than the speed of light in the medium and a Cherenkov flash is produced. This flash, which lasts only a few nanoseconds, produces on the ground a compact and homogeneous distribution of photons usually called the light pool. For a vertical electromagnetic shower observed at ∼2000 m a.s.l., the Cherenkov light density is almost constant within a circle of ∼120 m radius, centered on the shower core (Figure 2.8). The IACT uses one or several optical telescopes equipped with fast photodetectors that collect a fraction of the Cherenkov light to image the showers (Figure 2.8). With image-reconstruction algorithms it is possible to identify the energy and incoming direction of the gamma ray that initiated the shower. The identification is more precise if more than one telescope images the same shower. The principal source of background in this technique are the showers initiated by cosmic rays, which occur at a $\sim 10^4$ times higher rate. Fortunately, the typical morphology of hadron-induced showers differs from that of showers initiated by gamma rays, as can be seen in Figure 2.9. Then, with proper algorithms it is possible to reject more than 99% of this hadronic background. As electrons and positrons can also induce electromagnetic showers similar to those produced by gamma rays, they form an additional source of background that cannot be suppressed. The same applies to the diffuse gamma-ray emission. Fortunately, both sources of background are nearly isotropic and have rather low fluxes, which limits their impact on the observations. Muons produced in hadronic showers can radiate Cherenkov photons that in some cases mimic the images produced by low-energy electromagnetic showers.

Figure 2.9: Vertical (top) and horizontal (bottom) cross sections of showers initiated by a 100 GeV gamma ray (left) and a 100 GeV proton (right), simulated with CORSIKA. Taken from https://www.ikp.kit.edu/corsika/index.php

Probably the main drawback of this technique is that it is largely affected by noise, which raises the energy threshold and limits the duty cycle. The IACT has to deal with two sources of noise: the electronic noise and the ambient light. Since IACT telescopes are operated only during night time, from now on I will use the term Night Sky Background (NSB) to refer to the ambient light affecting this technique. As both sources of noise can produce random triggers in the telescopes, a proper trigger threshold has to be set to minimize these unwanted events. In this sense the NSB is particularly disturbing: the higher it is, the higher the trigger threshold has to be set (and, as will be shown in Section 4.2, the higher the resulting energy threshold). But the NSB does not only affect the energy threshold: IACT telescopes typically use photomultiplier tubes (PMTs) as photodetectors, which can get damaged in a too bright environment. That is why they are normally not operated during nights of moderate and strong moonlight, even if for some observations they could achieve a reasonable energy threshold. As a result their duty cycle is limited to only ∼18%. How observations can be extended into moonlight time, and the impact of both sources of noise on the analysis and on the performance of the technique, will be discussed in detail in Section 4.2.

The first detection of a VHE gamma-ray source by a Cherenkov telescope was announced in 1989. The instrument was the Whipple 10-meter telescope and the source was the Crab Nebula [196], of which we will read more in the next chapter. Later instruments such as HEGRA [87] or CAT [62] were essential to prove the possibilities that the technique could offer. In the last decade the three most sensitive currently operating instruments, VERITAS [128], H.E.S.S. [21] and MAGIC [41], have discovered more than 150 sources, comprising a large variety of astronomical objects (see [88] for a recent review). The number of entries in the VHE source catalogue is expected to increase to ∼1000 with the future Cherenkov Telescope Array (CTA) [15], which will mark a new turn in the field. The results presented in this work are derived from data taken with the MAGIC telescopes. Section 4.1 describes in detail how the IACT is implemented in these telescopes, how the background is treated and how the energy and arrival direction of the gamma rays are inferred.

2.4 Signatures of cosmic-ray sources in neutrinos

When relativistic protons or other cosmic-ray nuclei interact with the ambient gas they not only produce π⁰ mesons, but also charged pions that may eventually decay into muons and neutrinos. Like gamma rays, those neutrinos travel long distances in straight lines, pointing back to the place where they were produced. The detection of neutrinos could then be used to study the cosmic-ray acceleration mechanisms and to identify eventual PeVatrons. Neutrinos can interact near the Earth's surface producing charged particles. The neutrinos can be detected by observing the Cherenkov light produced in the medium in which those particles travel: ice in the case of the IceCube detector, located at the South Pole [1], and water in the case of the Antares telescope, immersed near the French coast in the Mediterranean Sea [18], or of the future KM3NeT [17]. The problem with neutrinos is that they are very hard to detect, due to their low interaction cross section. Detection rates are then orders of magnitude lower than in the case of gamma rays, and only about ∼20% of them can be reconstructed with a reasonable angular resolution (≲ 1°).

Neutrino astronomy is still in an early phase and not a single individual neutrino source has been detected yet. A promising future is augured for the field, especially after the IceCube collaboration presented the first evidence for a high-energy neutrino flux of extraterrestrial origin [2].

Chapter 3

Supernova Remnants as galactic cosmic-ray accelerators

A supernova is an event that occurs at the very end of stellar evolution, when a star that is thermally and mechanically unstable explodes, releasing a kinetic energy of typically $\sim 10^{51}$ erg. After the explosion a blast wave, known as the forward shock, moves outward, propagating into the surrounding medium and accelerating the ISM material away from the supernova (Figure 3.1). The forward shock heats the ambient gas, which then radiates, for instance, in the optical band [173]. As the material ejected from the supernova transfers momentum to the ambient medium it is slowed down, producing a second shock, the reverse shock, that moves inwards through the expanding ejecta.

Supernovae are classified into two types, SN I and SN II, depending on whether they lack or exhibit hydrogen lines, respectively. SN I can be sub-divided into SN Ia, where the spectrum is dominated by Fe II lines, and SN Ib, where it is dominated by O I emission. SN Ib and SN II are found in spiral galaxies and star-forming regions, suggesting that their progenitors are young and massive stars: Wolf-Rayet stars in the case of SN Ib, O or early B stars with $M > 6-8\,M_\odot$ for SN II. In these explosions a compact object, a black hole or a neutron star, is produced. This is the case of the core-collapse SNe introduced in Section 1.1. SN Ia supernovae, instead, occur in the outskirts of elliptical galaxies and have their origin in old stars of $\sim 1\,M_\odot$. It is believed that they occur in binary systems in which one of the stars is a white dwarf. As a result of the accretion onto such a compact star, all the fuel is burned, completely disrupting the star [173].

After a supernova explosion a fraction of the stellar mass is ejected at an almost constant and highly supersonic velocity. The ejecta forms a shell that expands freely while its density is higher than that of the surrounding medium (ejecta-dominated phase). When the mass of the swept-up circumstellar medium becomes comparable to that in the ejecta, the shock velocity starts to decrease, entering what is known as the Sedov-Taylor phase. This stage is typically reached 50-200 years after the explosion [157]. A third phase occurs when about half of the energy of the supernova has been radiated. The matter behind the shock cools fast and the expanding shell moves at constant radial momentum. The final phase of the SNR happens when the shock velocity reaches the level of the random velocities in the ambient gas and the SNR dissipates itself into the interstellar medium. This can occur between hundreds and thousands of years after the explosion.


Figure 3.1: Left: Scheme representing the forward and the reverse shocks in a SNR. Right: A particle gaining energy by crossing the shock.

Particle acceleration is expected to be efficient only during the first phase, while the shock velocity is constant. The connection between SNRs and cosmic rays was first proposed by Baade and Zwicky in 1934 [60], using simple energetic arguments. The power needed to account for the observed galactic cosmic-ray flux, considering losses due to escape, is [157]

$$P_{CR} \sim \frac{U_{CR}\, V_{CR}}{\tau_{esc}} \simeq 10^{40}\ \mathrm{erg\ s^{-1}} \qquad (3.1)$$

where $U_{CR} \simeq 0.5$ eV cm$^{-3}$ is the cosmic-ray energy density measured at the Earth, $V_{CR} \simeq 400$ kpc$^3$ is the volume of the galactic halo in which cosmic rays are confined, and $\tau_{esc} \simeq 5$ Myr is the time spent by a cosmic ray in the Galaxy before escaping. On the other side, the energy released by a supernova explosion is $E_{SN} \sim 10^{51}$ erg. With a supernova explosion rate in the Galaxy of $R_{SN} \sim 0.03$ yr$^{-1}$, the total power available in SNRs would be

$$P_{SNR} = R_{SN}\, E_{SN} \simeq 3\times 10^{41}\ \mathrm{erg\ s^{-1}} \qquad (3.2)$$

If we take into account the uncertainties of the different parameters, the observed energy density of the galactic cosmic rays could be explained if between 3 and 30% of the energy released in a SN explosion were transferred to non-thermal particles. Baade and Zwicky showed that SNRs have the energy budget needed to accelerate cosmic rays up to the knee. But until the 1970's a mechanism able to convert that available energy into kinetic energy of the particles that would eventually end up as PeV cosmic rays was still missing. The next section is devoted to Diffusive Shock Acceleration, the acceleration process that appeared to fill this gap and that has since been an essential component of almost any cosmic-ray origin model.
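As a quick cross-check of this energy budget, the minimal sketch below combines the quantities quoted above using astropy units; like Eqs. 3.1 and 3.2 themselves, the outputs are order-of-magnitude estimates.

```python
import astropy.units as u

# Values quoted in the text for Eqs. 3.1 and 3.2
U_CR = 0.5 * u.eV / u.cm**3   # local cosmic-ray energy density
V_CR = 400 * u.kpc**3         # volume of the galactic halo confining cosmic rays
tau_esc = 5 * u.Myr           # time spent in the Galaxy before escaping
E_SN = 1e51 * u.erg           # kinetic energy released per supernova
R_SN = 0.03 / u.yr            # galactic supernova rate

P_CR = (U_CR * V_CR / tau_esc).to(u.erg / u.s)   # power needed to sustain cosmic rays
P_SNR = (R_SN * E_SN).to(u.erg / u.s)            # power available from supernovae

# Required acceleration efficiency: a few per cent, within the 3-30% range above
print(P_CR, P_SNR, (P_CR / P_SNR).to(u.percent))
```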

3.1 Diffusive Shock Acceleration

In the 1950's Fermi proposed that cosmic rays could be accelerated by repeatedly and randomly scattering off a turbulent magnetic field [95, 96]. The scattering centers were magnetized clouds moving around the Galaxy with random velocities. This theory, nowadays known as second-order Fermi acceleration, was unable to explain the spectral shape and the maximum energy reached by galactic cosmic rays, but it set the baseline for what today is the most invoked acceleration mechanism in astrophysics. The key of the Diffusive Shock Acceleration (DSA) mechanism (also known as first-order Fermi acceleration) was the realization that particles gain energy more efficiently when Fermi's idea is applied in the vicinity of a shock wave [59, 66, 67, 74, 136, 183, 184]. In the DSA mechanism particles increase their energy by bouncing between the upstream and downstream regions of a flow near a shock wave [151]. The magnetic turbulence acts as a set of scattering centers that confine the particles in the vicinity of the shock, allowing them to cross it repeatedly. Each time they cross the shock, they collide with these scattering centers and gain energy, which is extracted from the bulk motion of the plasma [157]. One of the most appealing features of DSA is that it naturally produces a power-law spectrum of the accelerated particles, something we would expect to happen in a cosmic-ray accelerator judging from the measured cosmic-ray spectrum. This spectral shape arises from the balance between the energy gain $\Delta E$, which is proportional to the energy $E$, and the escape probability $P_{esc}$, which is nearly energy independent (although not completely so for particles at the maximum energy). If we consider a plane shock moving with velocity $u_{sh}$, like the one in Figure 3.1, the energy gain depends on the difference between the velocity $u_1$ of the upstream flow and the velocity $u_2$ of the downstream flow as [157]:

$$\frac{\Delta E}{E} = \frac{4}{3}\,\frac{u_1 - u_2}{c} \qquad (3.3)$$

If $\epsilon \equiv \Delta E/E$ is the energy gain in a full cycle (crossing the shock twice, from downstream to upstream and back), then a particle that initially had an energy $E_0$ would have, after $k$ cycles:

$$E = E_0\,(1 + \epsilon)^k \qquad (3.4)$$

The number of particles with energy greater than $E$ after $k$ cycles would be

$$N(>E) \propto \sum_{i=k}^{\infty} (1 - P_{esc})^i = \frac{(1 - P_{esc})^k}{P_{esc}} = \frac{1}{P_{esc}}\left(\frac{E}{E_0}\right)^{-\delta} \qquad (3.5)$$

with

$$\delta = -\frac{\ln(1 - P_{esc})}{\ln(1 + \epsilon)} \qquad (3.6)$$

f(E) = dN/dE ∝ E−1+δ (3.7)

The maximum energy the particles can achieve depends on the acceleration time, and the minimum between the energy loss time and the age of the accelerator. Energy 32 CHAPTER 3. SNRS AS COSMIC RAY ACCELERATORS losses are actually only relevant if the accelerated particles are electrons which lose energy through Inverse Compton and Synchrotron radiation. The maximum energy that nuclei can reach results from the balance between the acceleration time tacc, the time it takes for a particle of energy E to increase its energy by ∆E, and the age of the SNR tSNR. tacc depends on the properties of the scattering process, which determines how long it takes for a particle to cross the shock each time. It depends on the upstream and downstream velocities and on the diffusion coefficients at each side of the shock, D1 and D2. If we assume these two to be spatially constant, a well known result of linear shock acceleration theory is [91]   3 D1 D2 tacc(E) = + (3.8) u1 − u2 u1 u2

The acceleration time is then dominated by particle diffusion in the region with less scattering (larger diffusion coefficient), which typically occurs upstream[75]. To order of magnitude, we have: 2 tacc ' 10D/ush (3.9) For ultra-relativistic particles and in a weak-turbulence regime, the diffusion coefficient can be estimated as r c D ' L (3.10) 3z

Where rL = γmc/(qB0) is the Larmor radius of the particles of charge q = Ze, Lorenz factor γ and mass m in a magnetic field of intensity B0 and z is defined in [157] as the normalized energy density per unit logarithmic bandwidth of magnetic per- turbations, but for our purpose it can be understood as a coefficient that characterizes how turbulent is the regime. A typical turbulence in the ISM yields z  1, while if z = 1 the Bohm limit (DBohm ≡ rLc/3) is reached. In principle, from the discussion above the maximum energy would depend on how much time the particle can spend crossing the shock without escaping. The upper bound for that time would be obtained when tacc(EMAX) = tSNR. But since the accel- eration mechanism is efficient only during the ejecta-dominated phase, the maximum energy that any particle could ever reach all over the SNR evolution would be actually constrained by the time it takes the SNR to reach the Sedov-Taylor phase tST. Then, equating tacc and tST and inserting Eq. 3.9 and 3.10 it is possible to estimate the maximum energy EMAX 2 EMAX ∼ tST z ush Ze B0 (3.11) The maximum energy is proportional to the atomic charge Z, as required to explain the knee. But for a typical shock velocity of 3000 km/s, a standard ISM magnetic field of 0.3 nT [91] and tST ∼ 100 yr we would have that protons could reach an EMAX ∼ 13 10 z eV. Then, the diffusion coefficient would have to be  1 for EMAX to reach PeV energies. This means that the magnetic turbulence would have to be much larger than the average field, i. e., δB  B0. Naturally, Eq. 3.10 would not hold anymore (and hence neither Eq. 3.11) but still this linear approach is useful to understand the basics of DSA and to show that if there is no mechanism capable of amplifying the magnetic turbulence then SNRs would be able to accelerate protons only to GeV energies. What is more, EMAX can only be increased if the magnetic field amplification occurs both upstream and downstream, as otherwise particles would escape. The magnetic field 3.2. EVIDENCE OF PARTICLE ACCELERATION IN SNRS 33 downstream is typically highly turbulent, but there are a priori no obvious reasons to assume that the plasma over which the shock expands could be turbulent [157]. An apparent explanation came in the late 1970’s [183, 184, 66, 67, 139, 140]: the same accelerated particles can amplify the magnetic field upstream while they are diffusing away from the shock. With this idea and with a formal treatment of the DSA theory in the non-linear regime it is possible to achieve the desired energies of Eknee. Finally, the accelerated particles should be able at some point to leave the SNR while the acceleration is still ongoing and some of them should eventually reach the Earth. How particles escape the remnant is still not fully understood, basically because of the uncertainties on how particles reach the maximum energy. What is certain is that once they are injected into the ISM they travel diffusely, scattered by irregularities in the Galactic magnetic field. The diffusion coefficient in the Galaxy affects the time τesc that particles can spend in the Milky Way, is also energy dependent and can be written δ as D(E) = D0E . Then, if the cosmic rays are injected in the ISM with an spectral index γ then the measured cosmic-ray spectrum at Earth would be N(E) ∝ E−γ−δ[157]. 
Hence, to infer γ we first need to measure δ, which can be estimated from the ratio of secondary to primary cosmic rays [75], where 'primary' refers to the cosmic rays that were accelerated in the SNR and 'secondary' to the particles created by the interaction of primary cosmic rays with the ISM.
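Before moving on, it is worth checking the arithmetic behind the estimate quoted above (a back-of-the-envelope evaluation of Eq. 3.11 with the same fiducial values, ush = 3000 km/s, B0 = 0.3 nT, tST ∼ 100 yr ≈ 3×10⁹ s and Z = 1; prefactors of order unity are ignored at this level):

EMAX ∼ tST z ush² e B0
     ≈ (3×10⁹ s) × (3×10⁶ m/s)² × (1.6×10⁻¹⁹ C) × (3×10⁻¹⁰ T) × z
     ≈ 1.3×10⁻⁶ z J ≈ 8×10¹² z eV ∼ 10¹³ z eV,

so pushing EMAX to ∼1 PeV would indeed require z of order 100, i.e. δB ≫ B0, well beyond the validity of the linear treatment.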

3.2 Evidence of particle acceleration in SNRs

Following the arguments exposed in Chapter 2, gamma-ray observations provide the best opportunity to identify the cosmic-ray acceleration sites. A detection of gamma-ray emission from SNRs would prove that particles are being accelerated to ultra-relativistic energies. But since the particles emitting those gamma rays could be either nuclei or electrons, to test whether SNRs are cosmic-ray accelerators we should first be able to discriminate between a hadronic and a leptonic origin of such emission. Actually, we have ample evidence of acceleration of both hadrons and electrons in SNRs. Multiwavelength observations of SNRs from radio to X-rays exhibit non-thermal emission that can only be explained as synchrotron radiation from electrons accelerated up to GeV or TeV energies. Clear evidence of proton acceleration in SNRs comes from Fermi-LAT spectral measurements of the SNRs IC443 and W44¹ [13]. As can be seen in Figure 3.2, the spectra show the characteristic "pion bump" that can only be associated with a hadronic population. The problem with these remnants concerning cosmic-ray origin studies is that they are relatively old: their age is estimated to be ∼10⁴ years. They both reached the Sedov-Taylor phase a long time ago, which results in a cut-off at GeV energies in their gamma-ray spectra. In fact, at those ages the TeV particles cannot be confined in the shell of the remnant and the gamma-ray emission is likely to come from particles that have already escaped and interacted with the surrounding gas [22]. The Fermi-LAT measurements of IC443 and W44 were essential to prove that there is proton acceleration in SNRs, but to identify SNRs that may accelerate CRs up to the knee the focus should be set on young SNRs with ages not so far from tST. Cassiopeia A [28, 137] and Tycho [56] in the northern hemisphere and SN 1006 [11], RX J1713.7-3946

¹ Previous AGILE observations had also claimed evidence of pion emission in W44 [107].

Figure 3.2: Pion Decay signature in the spectra of W44 and IC443 measured by Fermi-LAT. Figure extracted from [13]

[117], RX J0852-4622 [118] and RCW 86 [119] in the southern hemisphere are currently the only known young (or presumably young) remnants that emit at TeV energies. The gamma-ray spectra of RX J0852-4622 and RCW 86 exhibit a cut-off close to 5 TeV, discarding them as potential candidates in the PeVatron quest. The bulk of the emission in SN 1006 is very likely of leptonic origin, although a hadronic contribution cannot be discarded. This hadronic contribution could in principle produce gamma rays of ∼100 TeV, which turns this remnant into an interesting candidate. But its flux at VHE is very weak, which makes it not the most suitable target for current gamma-ray instruments. On the contrary, RX J1713.7-3946 has the largest surface brightness of them all, which makes it a very interesting object for studying its morphology or for identifying different gamma-ray emission regions. Tycho is another promising candidate: its spectrum, measured up to ∼10 TeV, does not show evidence of a cut-off, although recent observations indicate that it could be softer than previously expected. Finally, Cas A emerges as probably one of the best studied non-thermal objects in our Galaxy [22] and as one of the best candidates for PeVatron studies. The arguments supporting this statement are discussed in the next section.

3.3 Cassiopeia A

Cassiopeia A (also known as Cas A) is a relatively young supernova remnant. Its associated supernova explosion is estimated to have occurred ∼300 yr ago and it is commonly linked to a star that was catalogued by Flamsteed in 1680 [57]. There is evidence suggesting that it was a type IIb supernova, originated from the collapse of the helium core of a red supergiant that had lost most of its hydrogen before exploding [135]. Located at a distance of 3.4 (+0.3/−0.1) kpc and with an angular diameter of 5' [169], it is the brightest radio source outside our solar system. It is in fact bright all over the electromagnetic spectrum. That, plus the precise knowledge of its age and the fact that it has been deeply observed at all wavelengths (which allows constraining otherwise free parameters in radiation and acceleration models), turns Cas A into one of the best candidates for studying particle acceleration. Cas A has been extensively observed at radio wavelengths [46, 78, 121, 142, 153, 161]. Most of the emission comes from a bright radio ring of ∼1.7 pc radius and a faint outer plateau of ∼2.5 pc radius [201].

Figure 3.3: Left: radio false-color image of Cas A taken with the VLA telescope. Images at three different frequencies are overlaid: 1.4 GHz (L band), 5.0 GHz (C band), and 8.4 GHz (X band). Credits: http://images.nrao.edu. Right: X-ray false-color image taken with the Chandra X-ray observatory. Credits: http://chandra.harvard.edu

From high-resolution VLA radio synchrotron maps [51], the inner ring is believed to be a trace of the reverse shock, while the outer one is normally associated with the forward shock (see left panel in Figure 3.3). Several compact and bright radio knots have also been identified [50]. Overall, the spectral index of the radio flux varies from ∼0.6 to ∼0.9 over the whole remnant. X-ray observations exhibit a line-emitting shell that coincides with the bright radio ring (see right panel in Figure 3.3). Chandra X-ray images [108] show a thin outer edge to the SNR that has been interpreted as the forward shock, where the blast wave encounters the circumstellar medium [89]. High-resolution observations [108, 124, 158, 162] also show a reverse shock formed well behind the forward shock that decelerates the impinging ejecta. Several faint X-ray filaments mark the position of the forward shock. By following the motion of those filaments in Chandra images it was possible to calculate the average velocity of the forward shock, estimated to be ≃4900 km/s [163]. The observation of year-scale variability in those and in many more inner filaments suggests that the magnetic field in those regions is very high, of the order of ∼1 mG [191]. Such a high magnetic field would be consistent with the emission observed in the bright radio knots, associated with synchrotron radiation from relativistic electrons. In [191] it was suggested that such emission originated at the reverse shock and that the inner X-ray filaments were therefore located there. But in [163] it was argued that, since the inner and outer filaments exhibit similar non-thermal spectra and inferred magnetic field intensities, the inner filaments could also belong to the forward shock, seen in projection. In the same work, Patnaude and Fesen were unable to model the shock size and velocity without resorting to shock modification due to cosmic-ray acceleration. When included, they found that the models that best fitted the observational evidence assumed a high cosmic-ray acceleration efficiency at the forward shock, where ≃30% of the supernova explosion energy was transferred to the accelerated particles. In the gamma-ray domain, Fermi-LAT detected the source at GeV energies [6] and later derived a spectrum that displays a low-energy spectral break at 1.72±1.35 GeV [198]. In the TeV energy range, Cas A was first detected by HEGRA [20] and later confirmed by MAGIC [33]. VERITAS has recently reported a spectrum extending well above 1 TeV [137, 127], where the measured spectral index is larger than the Fermi-LAT index of 2.17±0.09. The spectrum seems to steepen from the Fermi-LAT energy range to the TeV band, according to all IACT measurements. Still, the statistical and systematic errors are too large for a final conclusion. While the synchrotron emission observed in radio and X-rays is strong evidence of the presence of relativistic electrons, the bulk of the observed gamma-ray emission could be produced either by the same electrons, by accelerated cosmic rays, or by a more complex mix involving different populations.
Due to the limited angular resolution of the gamma-ray instruments it is impossible, at least for now, to determine from which part of the remnant the gamma-ray emission is coming, which would be very helpful if, for instance, it could be associated with some of the knots and filaments observed at other wavelengths. The standard approach used to try to identify the origin of the gamma-ray emission is to attempt to model the observed spectrum considering the different radiation mechanisms explained in Section 2.2.1. Figure 3.4 pictures the context in the gamma-ray band before the observations performed for this thesis. Current models have not yet resulted in a clear discrimination between hadronic and/or leptonic origin of the observed radiation in the GeV to TeV energy range (e.g. [71, 175, 194, 198, 201]). However, the break in the Fermi-LAT spectrum at ∼1 GeV combined with the observations at TeV energies practically discards the possibility of a purely leptonic scenario and suggests that the observed gamma-ray flux has either a pure, or at least a highly dominant, hadronic origin, or that more than one population would be needed to explain the measured flux. The eventual presence of different populations of relativistic particles is supported by the fact that several plausible acceleration regions with amplified magnetic fields have been found, associated both with the forward and the reverse shock. The parameters that characterize each shock can be significantly different, enhancing different dominant radiation mechanisms in each zone. For instance, the inverse Compton (IC) contribution, up-scattering the large FIR photon field of Cas A itself (with an energy density of ∼2 eV/cm³ and a temperature of 97 K [154]), is more significant in a region of lower magnetic field like the reverse shock, as otherwise it would be suppressed due to the fast cooling of the electrons.

Considering the mentioned limitations in the angular resolution of gamma-ray instruments, measuring the spectrum at energies below 1 GeV may be the best chance to fully discriminate between hadronic and leptonic models. In a hypothetical hadronic scenario, precise measurements at TeV energies (ideally up to ∼100 TeV) would be needed to favour or disfavour Cas A as an eventual PeVatron. Current hadronic models [175, 201] either assume or derive a cut-off energy for the parent proton population of the order of ∼100 TeV, one order of magnitude lower than the energy of the knee. In [68], Bell suggests that to accelerate cosmic rays to a few PeV a shock velocity of ∼10⁴ km/s would be required, and that historical SNRs such as Cas A would then only be able to accelerate protons up to 100−200 TeV. But most of the assumptions and speculations on what may happen at the highest energies suffer from the lack of precise measurements at TeV energies. Quoting Aharonian [22], "the large statistical uncertainties of TeV gamma-ray fluxes, leaves open the question whether Cas A accelerate particles to PeV energies".


Figure 3.4: Cas A spectrum in the gamma-ray band before the observations performed with MAGIC for this thesis. Spectral points were measured by Fermi-LAT [198], VERITAS [137] and MAGIC [33]. Some of the models that aimed to describe the emission observed by that time are included. Three are from [175]: a pure leptonic model (L), a pure hadronic one (H) and a hybrid model including electrons and protons (LH). The other two are from [201]: a pure leptonic model (L) and a pure hadronic one (H1).

Precise measurements of the gamma-ray spectrum at VHE are the key to understanding the maximum energy of the accelerated particles in Cas A. This is the starting point that motivated the deep campaign performed with the MAGIC telescopes on Cas A that is described in Chapter 5.1.

3.4 Alternatives to the SNR hypothesis

As mentioned before, SNRs have been for the last eighty years the favourite candidates to be the PeVatrons that could explain the observed flux of Galactic cosmic rays. But before discussing the observations and results of this thesis in that direction, I want to briefly introduce other plausible explanations that, although not as popular as the SNR scenario, deserve to be explored. My intention is just to present them and to refer the reader to the corresponding references for a wider overview.

HESS observations of the Galactic Centre provided the first (and so far only) evidence of a PeV accelerator [126]. The supermassive black hole Sgr A* is suggested to be the supplier of the accelerated particles. However, its current rate of particle acceleration is not sufficient to significantly contribute to the observed cosmic-ray flux. It is argued that it may have been more active in the past. In that case, an average acceleration rate of 10³⁹ erg s⁻¹ over 10⁶-10⁷ yr would be enough to explain the knee.

The interacting winds of massive, luminous stars were proposed as potential cosmic-ray accelerators in the 1980s [81]. The acceleration could take place near massive stars or in the previously introduced superbubbles (see Section 1.1). There is evidence of cosmic-ray injection and diffusion into the ISM in the vicinity of a few galactic clusters, and the analysis and modelling of the observed gamma-ray emission points to a parent proton population that reaches ∼1 PeV [23]. Besides, stellar winds can convert kinetic energy with an efficiency that can be as high as 10%, which would be enough to explain the observed galactic cosmic-ray flux.

Colliding winds in massive binary systems generate shock fronts that provide an alternative scenario for particle acceleration and have also been proposed as potential candidates to explain the origin of galactic cosmic rays [65]. The hypothesis is supported, for instance, by the finding of evidence of hadron acceleration in η Carinae [94]. These systems also offer dense wind scenarios for particle interaction, producing gamma rays and neutrinos. Whether these systems can produce and host nuclei of PeV energies seems to be a task for future gamma-ray and neutrino experiments.

Chapter 4

Adapting MAGIC for moonlight observations

In the previous chapters I explained why and how supernova remnants could be the key to finally solving the mystery of the galactic cosmic-ray origin. It would be essential to identify one remnant that acts as a PeVatron. I hope I have also convinced the reader that gamma-ray observations offer, at least for now, the best opportunity to achieve such an identification, and that Cas A is one of the few known promising candidates for this PeVatron quest. I argued that the success of this quest requires precise measurements of the gamma-ray spectrum at TeV energies and that IACT telescopes are, at least for now, the most suitable instruments for this task. Since the fluxes at the highest energies are very low, measuring the spectrum up to several tens of TeV with current IACT instruments may require several hundreds or even thousands of observation hours. Those numbers are not easy to achieve with Cherenkov telescopes that only operate during dark time and where the available observation time has to be shared among several different projects. The strategy followed here to overcome this issue was to extend the MAGIC duty cycle by tuning the telescopes to operate also under moonlight. Only by profiting from this additional time was it possible to perform the deep observation campaign on Cas A with the MAGIC telescopes that is described in Chapter 5.

This chapter starts with a general overview of the MAGIC telescopes and a description of the standard analysis chain. Then it is explained in detail how the hardware and the analysis are adapted for moonlight observations, followed by a detailed study of how the performance of the telescopes is affected.

4.1 The MAGIC telescopes

MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) is a system of two 17 m-diameter imaging atmospheric Cherenkov telescopes located at the Roque de los Muchachos Observatory on the Canary Island of La Palma, Spain, at an altitude of 2200 m a.s.l. (Figure 4.1). The telescopes were designed to reach the lowest possible energy threshold, which is achieved with a large mirror area, finely-pixelated cameras equipped with photomultiplier tubes (PMTs) and fast sampling electronics. Their lightweight (<70 t) structure, made out of reinforced carbon fiber tubes, allows for fast repositioning to rapidly follow up transient events [41].


Figure 4.1: The MAGIC telescopes. Credits: Daniel López

Despite a few technical differences, the two telescopes, MAGIC 1 and MAGIC 2, can be considered nearly identical for practical purposes. In a very simplified scheme, each of these alt-azimuthal mount telescopes can be decomposed into three parts: a structure, a reflector and a camera. The structure holds the drive system that moves the telescope, the reflector dish and the camera. During observations it may experience slight deformations due to the gravitational load. This effect is corrected with an active control of the mirrors, which adjusts their focusing depending on the zenith angle of the pointing position, and with the use of a CCD camera (Starguider camera) installed in the middle of the reflector dish that simultaneously observes the position of the PMT camera and background stars that it uses as reference [170]. The telescope can track sources with an accuracy of 0.02°. The reflector is a parabolic dish of 17 m diameter and focal length. Composed of 1 m × 1 m panels, the resulting active reflective mirror surface of each telescope is ∼236 m². In MAGIC 2 each panel is occupied by a single 1 m × 1 m mirror element. In MAGIC 1, ∼70% of the panels consist of four 0.5 m × 0.5 m mirror elements, while the remaining ones are similar to the MAGIC 2 panels. In both telescopes there is a combination of glass and aluminum mirrors. Thanks to the parabolic shape of the reflector, the time spread of synchronous light signals is negligible compared to the typical time spread of 1-2 ns in Cherenkov pulses [37]. This means that the reflector does not introduce any significant broadening to the observed pulses. The Point Spread Function (PSF) of each mirror, r39, defined as the 39% containment radius of the reflected spot of a point-like source on the focal plane of the mirror, is less than 10 mm wide. For further details on the structure and mirrors of the telescopes see [63, 146]. The cameras deserve a special treatment since they play a central role in this thesis.

4.1.1 The MAGIC cameras

The cameras of both MAGIC telescopes consist of 1039 pixels, arranged in a circle of roughly 1 m diameter, giving each camera a field of view of 3.5° (Figure 4.2). The photosensors are 25.4 mm diameter PMTs made by Hamamatsu (type R10408) with a hemispherical cathode and 6 dynodes. A hexagonal Winston cone is mounted on top of each PMT. The PMT bases include a compact DC-DC converter providing the bias voltage for the PMT and circuit boards for the front-end analog signal processing. The pixels are grouped in clusters of 7 units for easier installation and access for maintenance (Figure 4.3). A single cluster weighs ∼1 kg, is ∼50 cm long and has a maximum width of ∼9 cm. The distance between pixel centers is ∼3 cm. The nominal High Voltage (HV) supplied to the PMTs during standard MAGIC observations is on average ∼900 V. The electrical signals are amplified (AC coupled, ∼25 dB amplification) and transmitted through individual optical fibers by means of vertical cavity surface emitting lasers (VCSELs) [41] to the Counting House, where the readout is performed (see next section). MAGIC was designed from the beginning to operate also under low and moderate moonlight. The PMTs are operated at a relatively low gain, typically 3-4 × 10⁴, to decrease the amount of charge that hits the last dynode (anode) during bright-sky observations, preventing fast aging. With the same criterion, there are established safety limits for the current generated in the PMTs. Individual PMT pixels are automatically switched off if their anode currents (DCs) are higher than 47 µA, and the telescopes are typically not operated if the median current in one of the cameras is above 15 µA (as a reference, during dark time the median current is about 1 µA). A detailed study of the gain drop of the MAGIC PMTs when exposed to high illumination levels was reported in [34], which shows that, as long as the detectors are operated at low gain and within the imposed safety limits, no significant degradation is expected during the lifetime of MAGIC [29]. The cameras are constantly being calibrated with a passively Q-switched Nd:YAG laser (third harmonic, wavelength of 355 nm) that produces pulses of 0.4 ns width. This laser is inside the so-called calibration box, placed at the centre of the mirror dish, near the Starguider camera, about 17 meters away from the camera plane. The light intensity can be adjusted by means of calibrated optical filters placed right after the laser output. Before exiting the calibration box, the light is diffused by making it go through an Ulbricht sphere, so that the PMT illumination is nearly uniform.

4.1.2 Readout and trigger system

The signals produced by the VCSELs are transmitted via optical fibers from the telescope cameras to the Counting House, a building located ∼100 meters away from the telescopes where the readout, trigger and data acquisition systems are placed. The optical signals arrive at the so-called receiver boards and are converted back into electric signals using photodiodes. Each signal is then split into two branches, one for the trigger and another one for the readout.

Figure 4.2: Left: One of the MAGIC cameras. Right: PMTs and Winston cones arranged in the camera. Pictures from https://magic.mpp.mpg.de.

Figure 4.3: Top: A pixel module consisting of a PMT, a High Voltage (HV) supply unit, a pre-amplifier and a VCSEL. Bottom: An assembled 7-pixel cluster. Images from [41].

Trigger
The standard MAGIC trigger has three levels:
1. The L0 trigger, an amplitude discriminator that operates individually on each pixel of the camera trigger area.
2. The L1 trigger, a digital system that operates independently on each telescope, looking for time-coincident L0 triggers in a minimum number of neighboring pixels (typically three).
3. The L3 trigger, which looks for a time coincidence of the two L1 triggers.
There are two alternative trigger options that have been installed with the aim of improving the performance at the lowest energies: the Sum Trigger-II [106] and the ToPo-Trigger [145]. Since they were not used in this thesis, I will not describe them. Further information can be found in their corresponding references.

The trigger rates depend on the discriminator threshold (DT) set on each PMT at the L0 level. The DTs are controlled by the Individual Pixel Rate Control (IPRC) routine, which aims to keep the L0 rate of every pixel between two preset values. These values are optimized to provide the lowest possible energy threshold while keeping accidental rates at a level that can be handled by the data acquisition system (DAQ) without incurring a significant dead time. Accidental L0 triggers are dominated by NSB fluctuations. As they can vary significantly during observations, the DTs are constantly adjusted by the IPRC. If the L0 rate of one pixel moves temporarily outside the imposed limits, as can happen if, e.g., a bright star is in the FOV, the IPRC adjusts its DT until the rate is back within the desired levels (for more details see Section 5.3.4 of [41]). Noise fluctuations are higher in regions with a high density of bright stars, like the galactic plane, than in an extragalactic field. During relatively bright moonlight observations the main contribution to the NSB comes from the Moon itself. Unlike stars, which only affect a few pixels, moonlight scattered by the atmosphere affects the whole camera almost uniformly (with the exception of a region within a few degrees of the Moon). The induced noise depends on the zenith angle of the observations, the angular distance between the pointing direction and the Moon, its phase, its position in the sky and its distance to the Earth [79]. Essentially, accidental L0 rates get higher during moonlight observations and the IPRC reacts by increasing the DTs, resulting in a higher trigger-level energy threshold. This effect will be further discussed in Section 4.2.
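The logic of the rate control can be pictured as a simple feedback loop. The sketch below is illustrative Python pseudocode only, not the actual IPRC implementation, and the rate limits and step size are placeholders:

def iprc_step(pixel_rate_hz, dt, rate_min=1.0e3, rate_max=1.0e4, step=1):
    """One IPRC-like iteration: raise the discriminator threshold (DT) of a pixel
    if its L0 rate is too high, lower it if the rate is too low."""
    if pixel_rate_hz > rate_max:
        dt += step      # e.g. moonlight or a bright star raised the accidental rate
    elif pixel_rate_hz < rate_min:
        dt -= step      # quiet pixel: the threshold can be lowered again
    return dt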

Readout
The readout is based on a Domino Ring Sampler version 4 (DRS4) analog memory chip [188]. The analog signal coming from a receiver channel is connected to an array of 1024 capacitors. Each of the capacitors is charged by the signal for a time proportional to the period of the clock controlling the switching (the so-called Domino wave). The capacitors are overwritten after 1024 clock cycles. When a trigger is received, the Domino wave stops and the charge values of a few of those capacitors are digitized by an Analog-to-Digital Converter (ADC) and stored. From the installation of the DRS4 chips during the major upgrade of the telescopes in 2011-2012 and until November 2015, the readout sampling rate was 2 Gsample/s and 60 capacitors were stored by the DAQ [41].


Figure 4.4: A 30 ns DAQ window storing the ADC counts recorded in 50 slices of one PMT of the MAGIC 1 camera during a calibration event. The main pulse is followed by an overshoot [182].

Since November 2015 the sampling frequency is 1.66 Gsample/s and 50 capacitors are stored in each pixel for each event. Those capacitors are read at 33 MHz, producing a dead time of 27 µs [144]. For a detailed characterization of the DRS4 see [181]. Essentially, when the trigger conditions are fulfilled, the signal of each pixel is recorded into a 30 ns waveform (DAQ window) storing the raw ADC counts, like the one in Figure 4.4. Before the down-sampling, the resolution was 0.5 ns (the 30 ns window was split into 60 slices, one corresponding to each stored capacitor). Since the down-sampling was introduced, the window resolution is 0.6 ns (50 slices).
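Since every recorded event blocks the DAQ for a fixed 27 µs, the resulting dead-time fraction is simply the product of trigger rate and dead time; for instance (the rate below is an assumed, illustrative value, not a measurement):

dead_time_s = 27e-6          # dead time per recorded event [s], quoted above
rate_hz = 300.0              # assumed stereo trigger rate [Hz] (placeholder)
fraction = rate_hz * dead_time_s
print(f"dead-time fraction ~ {100 * fraction:.2f} %")   # ~0.8 % at 300 Hz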

4.1.3 Data taking
A typical MAGIC observation starts with a pedestal and a calibration run and is followed by several data runs, typically 20 minutes long each. A pedestal run consists of several events (typically 2000) taken with the camera open and triggered by an internal clock. It is used to evaluate the background level (from the NSB and from the electronic noise), which is later subtracted during the data calibration. A calibration run consists of several events triggered by the calibration system in which the pulses from the calibration box are recorded. It is used to convert the stored charge values into photoelectrons (see Section 4.1.4.1). During data runs the events are triggered following the three stages described in Section 4.1.2. A trigger can be produced by cosmic rays, gamma rays, muons or just by noise. Interleaved pedestal and calibration events are taken during data runs to constantly calibrate the different channels. Almost all MAGIC observations are taken in the so-called wobble mode, in which the source position (ON region) in the camera has a fixed offset angle φ in a given direction with respect to the centre of the field of view (FOV). The background can then be estimated from other positions (OFF regions) in the sky that are at the same offset φ, as shown in Figure 4.5. Details about this observational method can be found in [99]. Typical MAGIC observations use an offset angle φ = 0.4° and three simultaneous background regions.

4.1.4 Analysis chain
The waveforms recorded at the readout level are processed using MARS (MAGIC Analysis and Reconstruction Software), the standard software for MAGIC data analysis [200]. This software uses C++ routines combined with ROOT¹ libraries to extract the charge and timing information of each pixel, to later reconstruct the nature of the primary particle that initiated the shower and to determine the direction and energy of the potential gamma-ray candidate. This section briefly describes the different stages of the standard MAGIC analysis chain, from the calibration of the raw data to their most advanced state, from which it is later possible to extract high-level products such as spectra or light curves.

4.1.4.1 Signal extraction and calibration
The first stage of the analysis chain consists of extracting the signal from the DAQ window and converting it from ADC counts into number of photoelectrons (phe). This is performed with a program called Sorcerer (Simple, Outright Raw Calibration; Easy, Reliable Extraction Routines). Basically, it receives as input the 30 ns DAQ waveform and outputs the charge (in phe) and arrival time, for each pixel, for every triggered event. The first step in the calibration process is the baseline estimation². It is obtained as the mean value of a Gaussian fit to the charge distribution of all slices recorded during pedestal events. The next step is the charge extraction: an algorithm searches the waveform for the largest integrated charge in a sliding window of 3 ns width. The left panel in Figure 4.6 shows the charge extraction process for a calibration pulse, which is rather similar to what would be seen in the case of an event triggered by a Cherenkov pulse produced during an air shower. In the absence of signal, the sliding window picks up the largest noise fluctuation of the waveform, as can be seen in the right panel of Figure 4.6. Once the integration window is fixed, the signal arrival time is obtained from the center of that window. The conversion from ADC counts to number of phe is performed using the so-called F-factor method over the calibration runs [156]. For calibration pulses it is assumed that the number of phe follows a Poissonian distribution with mean N and standard deviation √N. If the distribution of the observed pulses measured in ADC counts has a charge ⟨Q⟩ and a deviation σQ, both distributions are connected by an F-factor F that can be measured in the lab and that is different for each PMT:

F/√N = σQ/⟨Q⟩    (4.1)

¹ https://root.cern.ch/
² Before this, the DRS4 readout system requires some special calibration, which is automatically performed during data taking and is discussed in [182].

Figure 4.5: Schematic view of the wobble pointing mode. A region placed 0.4° away from the source is tracked, so that the source is always situated at this distance from the center of the camera (green circle). In the case of 3 OFF regions, the background is taken from regions separated by the same distance from one another on the circle. Over consecutive runs (W1, W2, W3, W4) the source appears at different positions in the camera (1, 2, 3, 4). After four runs the source has appeared in the four selected positions and all four positions have been used for the background estimation. In this way the effect of inhomogeneities in the PMT response and in the FOV is minimized.

The conversion factor C between ADC counts and phe is then given by:

C = N/⟨Q⟩ = F² ⟨Q⟩/σQ²    (4.2)

As the gain of the VCSELs is not constant in time, C must be updated using the interleaved calibration events.
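As a concrete illustration of Eqs. 4.1-4.2 (a minimal sketch, not the Sorcerer implementation; the numbers are placeholders, not real MAGIC values):

def adc_to_phe(mean_q, sigma_q, f_factor):
    """Conversion factor C [phe/ADC count] from calibration-pulse statistics
    using the F-factor method (Eq. 4.2), plus the mean number of phe per pulse."""
    c = f_factor**2 * mean_q / sigma_q**2      # C = F^2 <Q> / sigma_Q^2
    n_phe = c * mean_q                         # N = C <Q>
    return c, n_phe

# Toy calibration pulses: <Q> = 2000 ADC counts, sigma_Q = 250 counts, F = 1.15
c, n_phe = adc_to_phe(2000.0, 250.0, 1.15)
print(f"C = {c:.4f} phe/count, ~{n_phe:.0f} phe per calibration pulse")
# a Cherenkov pulse of 600 ADC counts in the same pixel would then correspond to
print(f"{600 * c:.1f} phe")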

Figure 4.6: Charge extraction from a calibration event. Dashed green area is the baseline level calculated from the pedestal events. Blue dashed lines delimit the five consecutive slices that give the maximum integrated charge (filled area).

4.1.4.2 Image cleaning and parametrization
After the calibration of the acquired data, we have an image in which the charge and timing information of each pixel is recorded. Most pixel signals contain only noise. The so-called sum-image cleaning [42] is then performed to remove those pixels, using a program called Star. In this procedure we search for groups of 4, 3 and 2 neighboring (4NN, 3NN, 2NN) pixels with a summed charge above a given level, within a given time window. The charge thresholds and time windows defined for each case are shown in Table 4.1. Charge thresholds are expressed in terms of a global factor Lvl1 that can be adapted to the noise level of the observations. Pixels belonging to those groups are identified as core pixels. Then all the pixels neighboring a core pixel that have a charge higher than a given threshold (Lvl2) and an arrival time within 1.5 ns with respect to that core pixel (boundary pixels) are also included in the image. A comparison of the images before and after the image cleaning is shown in Figure 4.7. In the MAGIC standard analysis [42] the cleaning levels are set to Lvl1 = 6 phe and Lvl2 = 3.5 phe, which provide good image cleaning for any moonless-night observation.

Configuration   Qt [phe]          tW [ns]
4NN             4 × 1.0 × Lvl1    1.1
3NN             3 × 1.3 × Lvl1    0.7
2NN             2 × 1.8 × Lvl1    0.5

Table 4.1: Charge threshold (Qt) and time window (tW ) defined for the different neighboring pixel configurations used to identify core pixels. Lvl1 is a global factor adapted to the noise level of the observations (6 phe for standard observations).

Higher cleaning levels would result in a higher energy threshold at the analysis level. In contrast, lower cleaning levels can also be used for ultra dark observations (as in some extragalactic regions of the sky) to push the analysis threshold as low as possible [26]. The standard-analysis cleaning levels are then a compromise between robustness and performance, optimized to be used for any FOV, galactic or extragalactic, under dark and dim moonlight conditions.
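To make the procedure more concrete, the core/boundary selection can be sketched in a few lines of Python. This is a deliberately simplified illustration, not the Star implementation: only the 2NN condition of Table 4.1 is checked for the core search, while the real sum cleaning also evaluates 3NN and 4NN groups, and the neighbor map of the camera is assumed to be given.

import numpy as np

def clean_image(q, t, neighbors, lvl1=6.0, lvl2=3.5, t_core=0.5, t_bound=1.5):
    """Very simplified image cleaning: 2NN core search plus boundary pixels.
    q, t: charge [phe] and arrival time [ns] per pixel;
    neighbors: list of neighbor-pixel indices for each pixel."""
    n_pix = len(q)
    core = np.zeros(n_pix, dtype=bool)
    for i in range(n_pix):
        for j in neighbors[i]:
            # 2NN condition of Table 4.1: summed charge above 2 x 1.8 x Lvl1,
            # arrival times within the 0.5 ns time window
            if q[i] + q[j] > 2 * 1.8 * lvl1 and abs(t[i] - t[j]) < t_core:
                core[i] = core[j] = True
    survived = core.copy()
    for i in range(n_pix):
        # boundary pixel: above Lvl2 and within 1.5 ns of a neighboring core pixel
        if not core[i] and q[i] > lvl2:
            if any(core[j] and abs(t[i] - t[j]) < t_bound for j in neighbors[i]):
                survived[i] = True
    return survived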

Once the core and boundary pixels are selected, the images are parametrized. The obtained parameters are later used to reject background events and to infer the energy and arrival direction of the incoming particle in the case of gamma-ray induced showers (Section 4.1.4.5). Some of the image parameters used in the MAGIC analysis are the following:

Hillas parameters. The image obtained after cleaning is approximated by an ellipse. Moments up to second order of the light distribution on the camera are used to parametrize the image [148] (a minimal numerical sketch is given after this list). Probably the most relevant ones are:

• Size: Total charge (in phe) contained in the image. It is closely related to the energy of the primary gamma ray.

• Width: Second moment along the minor axis of the image (Figure 4.8). It is a measurement of the lateral development of the shower.

• Length: Second moment along the major axis of the image (Figure 4.8). It is a measurement of the longitudinal development of the shower.

• Center of Gravity (CoG): Center of gravity of the image, computed as the charge-weighted mean of the pixel positions along the camera X and Y coordinates.

• Conc(N): Fraction of the image concentrated in the N brightest pixels. It mea- sures how compact the shower is and it is generally larger in gamma-ray initiated showers than in hadronic ones.
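Since the width and length are simply the square roots of the eigenvalues of the charge-weighted second-moment matrix, the core of the parametrization fits in a few lines. The following is a simplified, illustrative sketch (not the MARS implementation), taking as input the pixel coordinates and the charges surviving the cleaning:

import numpy as np

def hillas(x, y, q):
    """Simplified Hillas parametrization from pixel coordinates (x, y)
    and extracted charges q [phe] of the pixels surviving the image cleaning."""
    size = q.sum()                                               # total charge in the image
    cog = (np.average(x, weights=q), np.average(y, weights=q))   # center of gravity
    # charge-weighted second central moments of the light distribution
    cov = np.cov(np.vstack([x, y]), aweights=q, bias=True)
    eigvals = np.linalg.eigvalsh(cov)                            # ascending order
    width, length = np.sqrt(eigvals[0]), np.sqrt(eigvals[1])
    return {"size": size, "cog": cog, "width": width, "length": length}

# Toy image: a few pixels roughly aligned along the x axis
x = np.array([0.0, 1.0, 2.0, 3.0, 1.5, 1.5])
y = np.array([0.0, 0.1, -0.1, 0.0, 0.4, -0.4])
q = np.array([10.0, 40.0, 35.0, 8.0, 12.0, 11.0])
print(hillas(x, y, q))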

Timing parameters. They characterize timing properties of the shower.

• Time RMS: The RMS of the arrival time of all the pixels that survived the image cleaning. Typically smaller in gamma-ray events.

• Time Gradient: Slope of the linear fit to the arrival time along the major axis of the ellipse. 4.1. THE MAGIC TELESCOPES 49

Figure 4.7: Real events as seen in MAGIC II camera before (left) and after (right) the image cleaning for a gamma-ray like event (top), a hadron like event (middle) and a muon like event (bottom). The color scale labels the charge extracted in each pixel in arbitrary units. Figure from [35].


Figure 4.8: Width and length parameters defined from the ellipse fitting the image. Image Credits: Alba Fernández Barral.

Image quality parameters. They are used to evaluate how noisy the image is and how well contained it is within the camera.

• LeakageN : Fraction of the total charge contained in the N (typically 1 or 2) outermost pixel rings of the camera. It is used to estimate which fraction of the shower image is contained in the camera (typically events with large LeakageN are badly reconstructed).

• Number of islands: Number of isolated groups of pixels that survived the image cleaning. Typically gamma-ray showers produce only one island while hadron- initiated showers may contain two or more.

4.1.4.3 Stereo reconstruction After the image cleaning and parametrization the information concerning single pixels is no longer recorded. Each event has its associated (cleaned) image in each telescope with its calculated image parameters. A program named Superstar combines the information from both telescopes to produce stereo parameters that are later used for the energy and direction reconstruction. Some of them are the following:

• Shower axis: The direction of the shower is obtained from the intersection of the major axes of the two images. This information, combined with knowledge of the position and pointing direction of the telescopes gives the impact point, which finally determines the shower axis (see Figure 4.9).

• Impact parameter: Distance between the shower axis and the center of the tele- scope mirror (an individual impact parameter is computed for each telescope).

• Height of shower maximum (HMAX): The height at which the maximum of the shower (in terms of number of particles) occurs. It is obtained from the shower axis and the CoG.

All the analysis steps up to the stereoscopic reconstruction are performed automatically by an On Site Analysis team that takes care of the calibration and cleaning of every night's data.

Figure 4.9: Top: Shower axis reconstruction in stereoscopic view. Bottom left: Shower axis reconstruction from the superimposed images. Bottom right: Estimation of the impact point. Images from [144].

Most MAGIC analyses begin at this point, by directly downloading data at the Superstar level. An individual analysis may start at earlier stages if non-standard actions have to be taken in the low-level data processing, like applying non-standard cleaning levels.

4.1.4.4 Data quality selection
The data taking procedure can be affected by bad weather conditions, technical problems or unexpected sources of noise (such as a car flash, for instance). Most of the data recorded under such situations must be discarded. On some occasions, data affected by clouds or dust can be corrected. Several indicators can be used to decide if a given data set must be corrected or removed. A technical problem can be easily spotted by looking at the trigger rates or at the rate of events above a given size (of 100 phe, for instance). Clouds and dust absorb and scatter Cherenkov light. This reduces the collected light for a given shower with respect to the light that would be collected under normal conditions, which could in the end result in a wrong energy estimation. If the Cherenkov light absorption is considerable, this also appears as a decrease in the event rates: not only because the images have a smaller size, but also because the noise trigger rates can increase due to scattering of the NSB light by clouds or dust, which makes the IPRC increase the trigger thresholds (see Section 4.1.1). The best way to estimate how the Cherenkov light transmission in the atmosphere is being affected is to use a Laser Imaging Detection And Ranging (LIDAR) system. It shoots a light pulse into the sky and measures the light backscattered by atoms and molecules in the atmosphere at different heights. This gives a direct measure of the amount of clouds and dust, which can be used to calculate the atmospheric transmission for Cherenkov light. As long as the transmission is not dramatically affected, the data can be corrected following the method described in [101]. Data with low transmission are discarded. Atmospheric corrections using LIDAR information have been optimized for observations under dark conditions. Their application to data taken under moonlight has not been studied yet. Besides, the LIDAR cannot be operated if the background light intensity is too high (e.g. pointing close to a nearly full Moon). Measuring the so-called cloudiness is another way to evaluate how transparent the sky is during observations. The cloudiness is measured with a pyrometer installed on the MAGIC 1 reflector that points parallel to the observations. The pyrometer measures infrared radiation in the 8-14 µm range and fits it to a black-body spectrum to infer the temperature of the sky. If the sky is cloudy, it reflects thermal radiation from the surface and the pyrometer measures a higher temperature. The cloudiness computation is based on the comparison of the measured temperature to a reference temperature obtained under optimal observation conditions. The number of stars visible to the Starguider camera provides complementary information to evaluate the quality of the observation conditions. Normally, all the information described here is taken into account during the data quality selection process.

4.1.4.5 Event characterization
After image cleaning and data quality selection most of the noise has been rejected. At this stage hadronic showers are the main source of background, and their event rates are still orders of magnitude higher than those from gamma rays. A program called Coach builds look-up tables and multivariate decision trees (random forests). The look-up tables are used for the energy reconstruction, while the random forests are used for the gamma/hadron separation and the direction reconstruction. A program named Melibea then passes each event in the original data sample through the trained random forests and confronts it with the look-up tables. After that, each event has an estimated energy Eest, an arrival direction and a tag indicating how likely it is to have been initiated by a gamma ray. The random forest algorithm [31] uses training samples representing gamma-ray and hadronic events. The gamma-ray training sample is obtained from Monte Carlo simulations (see Section 4.1.5). The hadronic sample is obtained from real data: since the hadronic event rate is far higher than the gamma-ray event rate, any sample containing no detected (or only weak) sources is suitable for this task. Both training samples should match the zenith angle (Zd) distribution of the real data being analyzed. The algorithm "grows" a certain number of decision trees (typically 100) for a certain set of parameters that could potentially be used as discriminators between gamma-ray and hadronic showers (like the image width or the time RMS, for instance). Once an event has gone through all the grown trees it is assigned a number between 0 and 1. This number is called hadronness and it is closer to 0 for gamma-ray-like events. The final background rejection is performed later by applying cuts on this hadronness parameter.
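To illustrate the idea (this is not the MARS/Coach implementation, which uses its own random-forest code and the actual image parameters; the features, sample sizes and numbers below are made-up placeholders), the gamma/hadron separation can be sketched with scikit-learn:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Placeholder training samples: rows = events, columns = image parameters
# (e.g. width, length, time RMS); gammas would come from MC, hadrons from real off-source data.
gamma_train = rng.normal(loc=[0.08, 0.20, 1.0], scale=0.03, size=(5000, 3))
hadron_train = rng.normal(loc=[0.14, 0.30, 2.0], scale=0.08, size=(5000, 3))

X = np.vstack([gamma_train, hadron_train])
y = np.hstack([np.zeros(5000), np.ones(5000)])        # 0 = gamma, 1 = hadron

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# "Hadronness" of new events: fraction of trees voting hadron (close to 0 for gammas)
new_events = rng.normal(loc=[0.09, 0.21, 1.1], scale=0.03, size=(5, 3))
hadronness = clf.predict_proba(new_events)[:, 1]
print(hadronness)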


Figure 4.10: Scheme of the DispRF method for the estimation of the arrival direction of an event. Left: two possible reconstructed source positions exist along the shower axis for a given disp value. Right: illustration of the arrival direction reconstruction after merging the images from the two telescopes. Images from [97].

The look-up tables for the energy estimation are filled with the true energy (ETrue) and the RMS of Monte Carlo simulated gamma rays. They are binned in size and in impact parameter/rC, where rC is the Cherenkov radius defined in Section 4.1.4.2. The estimated energy of each event is the weighted average of the estimates from both telescopes, where the weight is given by the RMS of the corresponding bin.

The arrival direction can be directly reconstructed once the shower axis has been calculated as described in Section 4.1.4.2. This is a good estimation, although it fails for parallel and small-size images. The direction reconstruction is improved by introducing a new parameter, disp, defined as the estimated distance along the major axis from the CoG to the reconstructed source position. MAGIC uses the DispRF method, which feeds all those parameters that may influence the disp estimation into a random forest algorithm trained with a Monte Carlo gamma-ray sample (as in the hadronness estimation). For each shower image in each telescope there are two possible reconstructed source positions along the shower axis for a given disp value (left panel in Figure 4.10). When the images from both telescopes are merged, the distances between all possible combinations of positions are calculated. The closest pair is kept and the arrival direction is calculated as the weighted average of the two chosen positions plus the crossing point of the main axes of the images (right panel in Figure 4.10).

4.1.5 Monte Carlo Simulations
MC simulations are essential for IACT analysis, since there is no gamma-ray source that could be used for calibration. They have mainly two functions in the MAGIC data analysis chain. A first sample (train sample) is used to build look-up tables and multivariate decision trees (random forests), which are employed for the energy and direction reconstruction and the gamma/hadron separation, as explained in Section 4.1.4.5. A second, independent sample (test sample) is used for the telescope response estimation during the source flux/spectrum reconstruction (Section 4.1.6.5). The MC simulations are performed outside the MARS framework and are composed of three stages, associated with three different programs (for further information, see [150]):

1. Corsika. It stands for COsmic Ray SImulations for KAscade (it was developed at the Forschungszentrum Karlsruhe for the KASCADE experiment) and it is a program that simulates the cascades initiated by primary particles in the atmosphere. MAGIC uses a customized version of CORSIKA that stores the direction and position on the ground of the Cherenkov photons produced in the showers. CORSIKA simulates the transport of particles in the atmosphere and their interactions with air nuclei as well as their decays. All secondary particles produced are tracked until their energy is low enough that they produce no more Cherenkov light. There are several atmospheric models implemented in CORSIKA. The one used in MAGIC (MagicWinter atmosphere) contains a mixture of N2 (78.1%), O2 (21.0%) and Ar (0.9%). The density variation of the atmosphere with altitude is also taken into account and optimized for the MAGIC site [77].

2. Reflector. This program simulates the atmospheric absorption of the Cherenkov photons (which is not performed in the modified version of Corsika that MAGIC uses) and their reflection by the mirrors onto the camera plane. The absorption of the photons is computed considering Rayleigh and Mie scattering and ozone absorption, taking into account the emission height, arrival angle and wavelength of the photons. The program then traces the photons that are reflected by the mirrors to the camera plane. The output file stores the arrival time and position on the camera plane of these photons.

3. Camera. This program simulates the camera, trigger and readout electronics of the telescopes. It basically checks whether the photons arriving at the camera are detected by the PMTs and whether the charge generated by the event is enough to produce a trigger. Among the main inputs of the program are the (wavelength-dependent) quantum efficiency of the PMTs and trigger parameters such as the charge discriminator threshold. It also simulates and includes the effect of electronic noise and of the night sky background. Calibration and pedestal runs can also be simulated with this program.

The output from Camera can then be analyzed following the standard MAGIC analysis chain, which starts with the calibration and charge extraction process using Sorcerer (Section 4.1.4.1). For normal data analysis, only simulations of gamma rays are necessary. Still, with CORSIKA it is possible to simulate several types of particles, like protons or electrons, which might be needed for some highly non-standard analyses or for performance studies based only on MC simulations. For the (point-like) sources analyzed in this work, gamma rays were simulated in a ring of 0.4° radius centered on the camera center. MC simulations should accurately reproduce the telescope response. Each time there is a major hardware intervention, some parameters in Reflector and/or Camera must be adjusted, giving rise to a new set of MC simulations that are used for the analysis of the data taken after such intervention. The data analyzed in this thesis correspond to four different periods, each of them associated with a different MC sample:

• ST.03.03: Data taken before August 2014, when there was a mirror exchange and a re-alignment of the reflector.

• ST.03.05: Data taken between August 2014 and November 2014, when the downsampling mentioned in Section 4.1.2 was introduced.

• ST.03.06: Data taken between November 2014 and April 2016, when there was a major re-alignment of the mirrors.

• ST.03.07: Data taken after April 2016.

4.1.6 High level analysis products
At this point, each event that initially was represented by (2 × 1039) 30 ns waveforms (one for each pixel on each camera) is essentially reduced to three indicators: an arrival direction, an estimated energy and a hadronness. With these three indicators, and by applying cuts on some of the other image parameters that are still stored, we can obtain different plots that describe the VHE emission of the source being observed.

4.1.6.1 Collection area
The effective collection area can be understood as the area of an equivalent detector that would detect, with 100% efficiency, the same rate of gamma rays as the real instrument. In other words, if out of Nsim simulated gamma-ray events falling within an area Asim, Ndet events are detected, the effective area Aeff depends on the energy E as

Aeff(E) = Asim Ndet(E) / Nsim(E)    (4.3)

It can be evaluated at different levels in the analysis chain: at the trigger level, after the cleaning and image reconstruction (reconstruction level) or after all the analysis cuts (analysis level). It depends on several parameters, but mainly on the energy and incoming direction of the gamma ray. The higher the energy of the primary gamma ray, the greater the number of Cherenkov photons produced in the shower and thus the higher the probability of detecting the event. At a fixed energy (and above a given energy threshold) the effective area increases with the zenith angle: the atmospheric depth is higher, the shower develops farther from the telescopes and therefore the size of the Cherenkov light pool is larger. There is also a dependence on the azimuth angle due to the variation of the distance between the two telescopes (measured on the plane orthogonal to the gamma-ray direction) and the effect of the geomagnetic field, which separates positive and negative charges, distorting the showers and their associated images. The dependence on the azimuth angle is much smaller than that coming from the zenith angle or the energy of the gamma ray and can often be averaged over the whole azimuth distribution of the observations.
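In code, Eq. 4.3 reduces to a ratio of histograms per true-energy bin. A minimal sketch with made-up inputs (the detection probability below is a toy model, not the MAGIC response):

import numpy as np

def effective_area(e_sim, e_det, bins, a_sim):
    """Effective area per true-energy bin (Eq. 4.3).
    e_sim: true energies of all simulated gammas thrown over the area a_sim [m^2];
    e_det: true energies of the subset surviving the trigger (or analysis) cuts."""
    n_sim, _ = np.histogram(e_sim, bins=bins)
    n_det, _ = np.histogram(e_det, bins=bins)
    return a_sim * n_det / np.maximum(n_sim, 1)

# Toy example: 10^5 gammas thrown on a 10^6 m^2 area, flat in log10(E) for illustration
rng = np.random.default_rng(0)
e_sim = 10 ** rng.uniform(1.5, 4.5, 100_000)          # GeV
p_det = 1 / (1 + (80.0 / e_sim) ** 2)                 # toy detection probability rising with E
e_det = e_sim[rng.random(e_sim.size) < p_det]
bins = np.logspace(1.5, 4.5, 16)
print(effective_area(e_sim, e_det, bins, 1e6))        # m^2 per bin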

4.1.6.2 Energy threshold
The energy threshold of IACT telescopes is commonly defined as the peak of the differential event rate distribution as a function of energy.


Figure 4.11: Rate of MC gamma-ray events surviving at the trigger level for a hypothetical source with a spectral index of −2.6, observed at zenith angles below 30° (solid line) and from 30° to 45° (dotted line). Figure from [42].

The threshold is estimated from the effective collection area as a function of energy, obtained from gamma-ray MC simulations, multiplied by the expected gamma-ray spectrum, which is typically assumed to be a power law with a spectral index of −2.6. Differential rate plots like those in Figure 4.11 are built and the energy threshold is estimated by fitting a Gaussian distribution in a narrow range around the peak of these distributions. Note that the peak of those distributions is broad, which means that it is possible to obtain scientific results with the telescopes below the defined threshold. The energy threshold can be evaluated at different stages of the analysis. The lowest threshold corresponds to the trigger level, which reaches ∼50 GeV during MAGIC observations on moonless nights at zenith angles below 30° [42]. Like the collection area, it can also be evaluated at the reconstruction or at the analysis level (where it is naturally higher). The energy threshold increases with the zenith angle, as there is more absorption of the Cherenkov light from the showers.
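Following this definition, the threshold can be estimated numerically by weighting a (toy) effective-area curve with the assumed E^−2.6 spectrum and locating the peak of the resulting differential rate; the parametrization below is purely illustrative, not MAGIC's actual response:

import numpy as np

e = np.logspace(1, 3, 300)                 # energy grid [GeV]
aeff = 1e5 / (1 + (60.0 / e) ** 3)         # toy effective area [m^2], rising and saturating
rate = aeff * e ** (-2.6)                  # differential rate for an E^-2.6 spectrum (arbitrary units)

peak = e[np.argmax(rate)]
# a Gaussian fit in a narrow range around the peak would refine this estimate,
# as done for Figure 4.11; here we simply quote the binned maximum
print(f"energy threshold ~ {peak:.0f} GeV")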

4.1.6.3 Signal extraction and integral sensitivity
Once each event has an assigned hadronness and a reconstructed energy and direction, we can evaluate if there is a signal in our data. In the MAGIC standard analysis this is performed with a program called Odie, which builds the so-called θ² histograms (an example is shown in Figure 4.12). θ is the angular distance between the reconstructed and the expected source position for each event. The histograms are built with events that survived a predefined set of cuts in hadronness, size and estimated energy. These cuts were optimized to maximize the sensitivity for Crab Nebula observations in three different energy ranges (low, medium to high, and high energies). The signal region is defined by applying a cut in θ². For point-like sources this cut has also been optimized with Crab Nebula observations. The signal (ON) region is then defined as a circle of radius θ around the expected source position. The background (OFF) region is also a circle of the same radius, centered on a mirror position of the source in the camera. Several OFF regions can be used to estimate the background, as previously explained in Section 4.1.3 and in Figure 4.5.

Figure 4.12: Example of a θ² histogram. Bins represented with black dots correspond to the ON histogram, which is filled with events found at a distance θ from the expected source position. The gray-filled distribution represents the OFF histogram. The vertical dashed line marks the θ² cut that defines the ON and OFF regions and that is applied to compute the significance of the signal.

If NON is the number of events in the signal region and NOFF the number of events in a background region, the number of excess events Nex in the signal region is:

Nex = NON − αNOFF (4.4)

where α is one over the number of background regions considered. The significance of a signal in VHE astronomy is usually computed using Eq. 17 from Li & Ma [143]. The integral sensitivity is defined in this work as the integral flux above an energy threshold giving Nexcess/√Nbgd = 5 after 50 hours of observation, where Nexcess is the number of excess events and Nbgd the number of background events. Two additional constraints are imposed: Nexcess > 10 and Nexcess > 0.05 Nbgd. The first condition ensures that the Poissonian statistics of the number of events can be approximated by a Gaussian distribution. The second condition protects against small systematic discrepancies between the ON and OFF distributions, which may mimic a statistically significant signal if the residual background rate is large [42]. Figure 4.13 shows the MAGIC integral sensitivity as a function of the energy threshold, for dark observations (no moonlight or twilight) at low (Zd < 30◦) and medium (30◦ < Zd < 45◦) zenith angles. The curves are produced from MC simulations and real data taken during Crab Nebula observations; the Crab Nebula, owing to the high intensity and stability of its flux, is sometimes referred to as the “standard candle” of gamma-ray astronomy and is often used for instrument calibration. Each point in those curves is built as follows:

1. Minimum cuts in size and estimated energy are first applied to the data sample, which sets the energy threshold of the point. This defines a subsample from the overall sample.


Figure 4.13: MAGIC integral sensitivity (5σ in 50 h, in % of the Crab Nebula flux) as a function of the energy threshold for low and medium zenith angles.

2. Once the subsample is defined, we look for the cuts in θ2 and hadronness that maximize the sensitivity of the subsample.

3. The cuts in size, Eest, θ2 and hadronness are then applied to the MC sample to obtain the energy threshold. The MC is produced and/or weighted with a given spectral shape, so the sensitivity is obtained for sources with that spectrum. In the case of Figure 4.13, the curves are built for a hypothetical source that follows a power law with a spectral index of −2.6.

At lower energies the sensitivity is better for lower zenith angles: at those energies, where showers produce less light, the effect of Cherenkov light absorption in the atmosphere is more relevant. Besides, the collected light density decreases with increasing zenith angle, as showers develop farther from the telescopes. At higher energies, the sensitivity improves for higher zenith angles, essentially because the collection area is larger, as explained in Section 4.1.6.1.
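As an illustration of the significance and sensitivity criteria used above (a minimal sketch with hypothetical inputs; the actual cut optimization done in MARS is the scan described in the list above), Eq. 17 of Li & Ma and the three sensitivity conditions can be written as:

import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    # Eq. 17 of Li & Ma (1983)
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

def is_detectable(n_excess, n_bgd):
    # Sensitivity conditions used in this work: 5 sigma in 50 h,
    # at least 10 excess events, and excess above 5% of the background
    return (n_excess / np.sqrt(n_bgd) >= 5.0
            and n_excess > 10
            and n_excess > 0.05 * n_bgd)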

4.1.6.4 Skymaps

Skymaps are two-dimensional histograms containing the arrival directions of all the gamma-ray candidates in celestial coordinates. They are normally built with events surviving the same set of cuts that are defined for the θ2 histograms. The main challenge for building a skymap is the background estimation. The background is affected by inhomogeneities in the PMT response, by the presence of stars in the FOV and by the zenith and azimuth angles of the observations. This last effect is reduced when observing in wobble mode, as the background is estimated from the same data sample (and the same FOV) from which the signal is extracted.

4.1.6.5 Flux and spectrum

The gamma-ray differential spectrum per unit of energy dE, area dA and time dt is defined as:

dφ/dE (E) = d3Nγ/(dE dA dt) (4.5)

where φ is the gamma-ray flux and Nγ is the number of observed gamma rays, which is just the observed excess of events over the background in a region around the source, for a given (estimated) energy bin, after applying cuts in θ2 and hadronness. Those cuts are the values of hadronness and θ2 that give 90% and 75% efficiency, respectively, in selecting MC gamma rays. With the applied cuts, the background is subtracted following the same procedure described in Section 4.1.6.3, typically using three OFF regions. The other parameters needed to calculate the flux are the collection area and the effective time. The latter is defined as the total time elapsed during observations minus the time the detector has been unavailable for recording events (the dead time, a period typically following the recording of an event, during which the data acquisition system is busy and cannot accept any new events). The program that computes the spectrum in MARS is called Flute. The program calculates the collection area assuming an intrinsic spectral shape in the MC sample, which in principle should not be too different from the real spectrum of the source (see Section 4.1.6.1). Often, for a better identification of spectral features such as breaks or cut-offs, instead of the differential spectrum the spectral energy distribution (SED, E2 × dφ/dE) is computed.
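For illustration, the binned form of Eq. 4.5 that a program like Flute effectively evaluates can be sketched as follows (all numbers below are placeholders, not MAGIC results):

import numpy as np

def differential_flux(n_excess, a_eff, t_eff, e_lo, e_hi):
    # dphi/dE per estimated-energy bin: excess counts divided by
    # collection area, effective time and bin width
    return n_excess / (a_eff * t_eff * (e_hi - e_lo))

# Example SED point at the bin centre (hypothetical numbers)
e_lo, e_hi = 100.0, 158.0                       # GeV
flux = differential_flux(n_excess=250.0,
                         a_eff=4.0e4,           # m^2
                         t_eff=10 * 3600.0,     # s
                         e_lo=e_lo, e_hi=e_hi)
e_c = np.sqrt(e_lo * e_hi)
sed = e_c**2 * flux                             # E^2 dphi/dE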

Light curve

A light curve is a plot showing the integral flux in a given energy range as a function of time. In each time bin the differential energy spectrum is calculated and then integrated between a minimum energy E0 and a maximum energy E1 (which is often set as E1 → ∞).

Upper limits

Upper Limits (ULs) on the flux are computed for those energy bins where no significant gamma-ray signal has been found. They are computed following the method described in [172] at the 95% confidence level, assuming a systematic uncertainty in the overall signal efficiency of 30%.

Spectra merging

The proper estimation of a spectrum relies on how well the MC simulations used to compute the collection area represent the real data. When a major hardware intervention occurs many parameters may change (mirror reflectivity, sampling frequency, PMT quantum efficiency...) and typically the MC simulations must be adjusted to match the new performance of the telescopes. Then, if one wants to combine data taken under different hardware/observation conditions that are associated with different sets of MC simulations, it is not possible to run Flute blindly and calculate a single spectrum, basically because the collection area would not be properly estimated. Instead, the full data sample should be divided into subsamples, each analyzed with its proper set of MC simulations. At this point one ends up with as many spectra obtained with Flute as subsamples into which the data set was divided. All these spectra, each of them with an a priori properly estimated collection area, are then combined and merged to produce a unified spectrum. The program performing this task in MARS is called Foam.

Spectra unfolding

Note that while Nγ is computed in bins of Eest, the collection area is calculated in bins of Etrue (most of the simulated gamma rays do not even trigger the telescopes and therefore do not have a reconstructed Eest). Then, if we divide the excess rates in a bin of Eest by the collection area in the same bin of Etrue, the resulting flux would be affected by an imperfect estimation of the energy. Some events with E1 < Etrue < E2 may be reconstructed with an Eest outside the interval (E1, E2). Also, events with Etrue outside (E1, E2) may be reconstructed with E1 < Eest < E2. This effect is known as “spillover” and is what demands the application of an unfolding procedure. The method of unfolding transforms the space of observables to the space of sought parameters, taking into account the migration effects due to the limited acceptance and resolution of the detector. In the simplest case, the observable is a single good estimator of the energy (Eest) but normally several observables that correlate with the true energy (e.g. size, impact parameter...) are used. The final outcome is what is known as a migration matrix, which is built from MC gamma-ray simulations and where each matrix element Mij represents the probability that an event with an Etrue belonging to bin j ends up with an Eest that belongs to bin i. There are several methods to perform the unfolding. Those commonly used in MAGIC are explained in [32]. The procedure used in this work is a forward folding method performed with a MARS program called Fold. For this method a given spectral model (for instance, a power law) with parameters θ = {θ1, θ2, ..., θn} has to be assumed. The θi parameters are obtained via a maximum likelihood approach, where the data inputs are the number of events (in bins of estimated energy) in the ON and OFF regions, as given by Flute or Foam. An additional set of nuisance parameters µi for modelling the background is also optimized in the likelihood calculation. In each step of the maximization procedure the expected number of gamma rays in a given bin of Eest is calculated by folding the gamma-ray spectrum with the MAGIC telescope response (energy-dependent effective area and energy migration matrix), also taken from the Flute or Foam output. The background nuisance parameters and the statistical uncertainties in the telescope response are treated as explained in [172]. The result of the likelihood maximization is not actually a set of discrete spectral points, but the most likely set of parameters θi given the data and the assumed spectral model, together with an estimation of the goodness of the fit (the χ2/ndf or the fit probability). This information can be used to test how well a model describes the data or to compare two different models through a likelihood ratio test. The forward folding method performed by Fold is particularly useful (and statistically more accurate) when dealing with bins with low statistics (less than 20 events in the ON or OFF regions) because it uses Poisson event statistics. Besides, as it takes as input the number of events, it also uses the information from energy bins without significant signal.
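The forward-folding logic can be made concrete with the following sketch (a simplified illustration, not the Fold implementation: a plain power law, a Poisson likelihood without background nuisance parameters, and generic array names are all assumptions):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def expected_on(theta, e_lo, e_hi, a_eff, t_eff, migration, n_off, alpha):
    # Fold a power law dphi/dE = f0 * E**index through the response.
    # migration[i, j]: probability that true-energy bin j is reconstructed
    # in estimated-energy bin i; e_lo, e_hi, a_eff are per true-energy bin.
    f0, index = theta
    e_c = np.sqrt(e_lo * e_hi)
    n_true = f0 * e_c**index * (e_hi - e_lo) * a_eff * t_eff   # gammas per Etrue bin
    return migration @ n_true + alpha * n_off                  # plus estimated background

def fit_forward_folding(n_on, n_off, alpha, e_lo, e_hi, a_eff, t_eff, migration):
    nll = lambda th: -np.sum(poisson.logpmf(
        n_on, expected_on(th, e_lo, e_hi, a_eff, t_eff, migration, n_off, alpha)))
    # Starting values for (normalisation, index) are arbitrary placeholders
    return minimize(nll, x0=[3e-11, -2.6], method="Nelder-Mead")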

4.2 Moonlight observations

IACT telescope arrays are usually optimized for observations during dark nights. As PMTs age quickly in a too bright environment3, observations are normally restricted to relatively dark conditions. When IACT instruments operate only during moonless astronomical nights, their duty cycle is limited to 18% (∼1500 h/year4). Every month around the full Moon, the observations are generally stopped for several nights in a row, in what is known as the full Moon break. Operating IACT telescopes during moonlight and twilight time allows increasing the duty cycle up to ∼40%. This is interesting for many science programs: a larger amount of data can be obtained and a better time coverage, without full-Moon breaks, can be achieved. It may also be crucial for the study of transient events (flares of active galactic nuclei, gamma-ray bursts, cosmic neutrino or gravitational wave detection follow-ups, etc.) that occur during moonlight time. With moonlight observations enabled, the IACT can be more reactive to the variable and unpredictable gamma-ray sky. Moreover, operation under bright background light offers the possibility to observe very close to the Moon to study for instance the cosmic-ray Moon shadow to probe the antiproton and positron fractions [85, 193] or the lunar occultation of a bright gamma-ray source, as e.g. in hard X-ray for source morphology studies [102]. Different hardware approaches have been developed by IACT experiments to extend their duty cycle into moonlight time. One possibility is to restrict the camera sensitivity to wavelengths below 350 nm, where the moonlight is absorbed by the ozone layer. This idea was applied to the Whipple 10 m telescope, which was equipped with the dedicated UV-sensitive camera ARTEMIS [192], or with a simple UV-pass filter in front of the standard camera [82]. The drawback of this technique is the dramatic increase of the energy threshold (a factor ∼4) due to the reduction of the collected Cherenkov light. The CLUE experiment [64] was a similar attempt with an array of 1.8 m telescopes sensitive in the background-free UV range 190-230 nm. More recently, the VERITAS collaboration also developed UV-pass filters to extend the operation during moonlight time [110]. Another approach, developed first by the HEGRA collaboration [134], is to reduce the HV applied to the PMTs (reducing the gain) to limit the anode current, protecting the detectors from being damaged. This, however, only allows observations at large angular distances from a partially illuminated Moon. An alternative way to safely operate IACT arrays under moonlight would be to use, instead of PMTs, silicon photomultiplier detectors (SiPMs), which are robust devices that can be exposed to high illumination levels without risk of damages. This was successfully demonstrated with the FACT camera [133], which can operate with the full Moon inside its FOV. The use of a silicon photomultiplier camera is actually under consideration for the new generation of IACT instruments [125, 160, 168, 185]. A new type of pixel, called Light- Trap [113, 114], was designed at IFAE also with the aim of using SiPM technology in Cherenkov telescopes. This pixel is described in detail in Appendix B.

3The idea of ageing in PMTs refers to the degradation of their gain with time. This degradation is dominated by the damage produced in the anode and is proportional to the amount of (integrated) charge hitting the anode (see Section 4.1.1). If PMTs are exposed, on average, to a constant illumination intensity the degradation increases with time. Under brighter environments, the ageing speed increases. 4This does not account for observation time lost to bad weather or technical issues.

Figure 4.14: The MAGIC telescopes performing observations under bright moonlight. Credits: Adiv González Muñoz.

The cameras of the MAGIC telescopes were designed to allow observations during moderate moonlight, operating the PMTs at a rather low gain (Section 4.1.1). The use of reduced HV [84] and UV-pass filters [112] was introduced later to extend the observations to all the possible Night Sky Background (NSB) levels, up to a few degrees from a full Moon. IACT observations under moonlight are becoming more and more standard, and are routinely performed with the MAGIC and VERITAS telescopes. The performance of VERITAS under moonlight with different hardware settings at a given (fixed) NSB level has recently been reported [55]. In this section I present a more complete study on how the performance of an IACT instrument is affected by moonlight and how it degrades as a function of the NSB intensity. The study is based on extensive observations of the Crab Nebula, using an adapted analysis chain and tuned Monte Carlo (MC) simulations, and was first presented in [29]. The observations, carried out from October 2013 to March 2016 by MAGIC with nominal HV, reduced HV and UV-pass filters, cover the full range of NSB levels that are typically encountered during moonlight nights.

4.2.1 Hardware settings for moonlight observations

In this work, the performance of MAGIC is studied for different NSB conditions. During the observations we do not measure the NSB spectrum directly, but only monitor the PMT anode current (DC, as defined in Section 4.1.1) in every camera pixel. We infer the NSB level by comparing the measured DC in the camera of one of the telescopes, MAGIC 1, with a reference average median DC that is obtained in a well-defined set of observation conditions. Here we use as reference the telescopes pointing towards the

Crab Nebula at low zenith angle during astronomical night, with no Moon in the sky or near the horizon, and good weather. We shall refer to these conditions as NSBDark. The median DC in MAGIC 1 during Crab Nebula dark observations is affected by hardware interventions: it depends on the PMT HV and so it changes after a camera flat-fielding. For the whole studied period, the median DC during dark observations of the Crab Nebula with nominal HV lies between 1.1 and 1.3 µA. As the Crab Nebula is in the galactic plane, the NSB is lower by 30-40% for a large fraction of MAGIC observations, when the telescopes point to extragalactic regions of the sky. During reduced HV and UV-pass filter observations the measured DC is lower than what would be obtained if observing under the same NSB conditions and nominal HV. Correction factors are applied to properly convert from DC to NSB level based on the gain reduction factor of the PMTs and on the moonlight transmission of the filters. Due to the constraints imposed by the DC safety limits described in Section 4.1.2, observations are possible up to a brightness of about 12 × NSBDark using the standard HV settings (nominal HV). Observations can be extended up to about 20 × NSBDark by reducing the gain of the PMTs by a factor ∼1.7 (reduced HV settings). When the HV is reduced there is less amplification in the dynodes, so fewer electrons hit the anode. However, the PMT gains cannot be reduced by an arbitrarily large factor because the performance would significantly degrade, resulting in lower collection efficiency5, slower time response, larger pulse-to-pulse gain fluctuations and an intrinsically worse signal-to-noise ratio [98]. Even when the telescopes are operated with reduced HV, observations are severely limited or cannot be performed if the Moon phase is above 90%. Observations can, however, be extended up to about 100 × NSBDark with the use of UV-pass filters. This limit is achievable if the filters are installed and, at the same time, the PMTs are operated with reduced HV. This is done only in extreme situations (>50 × NSBDark). All the UV-pass filter data included in this work were taken with nominal PMT gain. In practice, observations can be performed in conditions that are safe for the PMTs as close as a few degrees away from a full Moon. The telescopes can be pointed to almost any position in the sky, regardless of the Moon phase, and, as a result, they can be operated continuously without full Moon breaks [112]. The characteristics of the filters are explained in Section 4.2.2. As a first approximation, the brightness of the whole sky depends on the Moon phase and its zenith angle. Figure 4.15 shows the brightness of a Crab-like FOV, seen by MAGIC, as a function of the angular distance to the Moon for different Moon phases. The brightness values were simulated with the code described in [79], for a Moon zenith angle of 45◦. When the Moon phase is below 50%, the brightness is below 5 × NSBDark in at least 80% of the visible sky and in general operations can then be safely performed with nominal HV. For phases larger than 80%, the brightness is typically above 10 × NSBDark in most of the sky when the Moon is well above the horizon, and the observations are usually only possible with reduced HV. When the Moon phase is close to 100%, observations are practically impossible without the use of UV-pass filters. Combining nominal HV, reduced HV and UV-pass filter observations, MAGIC could increase its duty cycle to ∼40%.
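A rough sketch of the DC-to-NSB conversion described above is given below (illustrative only: the reference current and correction factors are the approximate values quoted in the text, whereas the actual procedure uses properly calibrated, condition-dependent factors):

def nsb_level(median_dc_m1, dc_ref_dark=1.2, gain_correction=1.0, filter_correction=1.0):
    # Rough NSB estimate in units of NSB_Dark from the median MAGIC 1 anode current.
    # dc_ref_dark       : reference median DC (uA) for dark Crab observations (1.1-1.3 uA)
    # gain_correction   : ~1.7 for reduced-HV data (PMT gain lowered by that factor)
    # filter_correction : ~1/0.25 for UV-pass filter data (mean NSB transmission ~25%)
    return median_dc_m1 * gain_correction * filter_correction / dc_ref_dark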

5In MAGIC the HV divider chain is fixed for all dynodes and the voltage is also reduced at the first dynode.


Figure 4.15: Crab FOV brightness, simulated with the code described in [79], as a function of the angular distance to the Moon for different Moon phases (gray solid lines). Moon zenith angle was fixed at 45◦. In blue, green and red the maximum NSB levels that can be reached using nominal HV, reduced HV and UV-pass filters are shown, respectively.

4.2.2 UV-pass Filters

The idea of using UV-pass filters during MAGIC observations emerged as an alternative to try to solve a problem that the Moon Shadow observations, described in Appendix C, had to face. This project required accumulating a considerable amount of observation hours pointing just a few degrees away from the Moon. It also demanded that the observations were taken at a rather low zenith angle so that the energy threshold was not too high. Such conditions are achievable only ∼20 hours per year under background light levels below the 20 × NSBDark limit for reduced HV observations. With the use of UV-pass filters it was not only possible to increase the maximum allowed NSB level for safe observations, but also to extend the duty cycle of the telescopes to include the three to five nights of the full Moon break. The newly opened window had a clear advantage for observations of the shadowing of cosmic rays by the Moon. But even with the filters and contrary to initial calculations, it turned out that it would take several years to collect the observation time needed to achieve the minimum initial goals. But there were other projects that could profit from the increased duty cycle. Among them, the Cassiopeia A campaign, which was granted more than 30 hours of UV-pass filter data. In this section I describe the mechanical characteristics and the optical transmission of the filters. Their performance is discussed in Section 4.2.5.

4.2.2.1 Filter selection and optical properties

Camera filters are used to strongly reduce the NSB light, while preserving a large fraction of the Cherenkov light, which peaks at ∼330 nm. Their transmission must be high in the UV and cut the longer wavelengths. They were selected to maximize the signal-to-noise ratio, which scales as TCher/√TMoon, with TCher and TMoon the Cherenkov-light and moonlight transmissions of the filters, respectively.


Figure 4.16: The blue curve shows the typical Cherenkov light spectrum for a vertical shower initiated by a 1 TeV gamma ray, detected at 2200 m a.s.l. [90]. In green, the emission spectrum of the NSB in the absence of moonlight measured in La Palma [69]. The dotted curves show the shape of the direct moonlight spectrum (black) and of Rayleigh-scattered diffuse moonlight (grey) [115, 116]. The four curves are scaled by arbitrary normalization factors. The filter transmission curve is plotted in red. As a reference, the quantum efficiency of a MAGIC PMT is plotted in orange (using the right-hand axis).

An additional constraint was imposed by the MAGIC calibration laser, which has a wavelength of 355 nm. TMoon depends on the spectral shape of the scattered moonlight, which depends on the angular distance to the Moon. Far from it (tens of degrees away) the NSB is dominated by Rayleigh-scattered moonlight that peaks at ∼470 nm. Close to the Moon, Mie scattering of moonlight dominates; its spectrum peaks at higher wavelengths and resembles more the spectrum of the light coming directly from the Moon (“direct moonlight”). The spectral shape of the NSB is also affected by the aerosol content and distribution, and by the zenith angle of the Moon. Typical spectra for Rayleigh-scattered and direct moonlight were computed using the code SMARTS to simulate the solar spectrum [115, 116] and adding the effect of the Moon albedo. They are shown in Figure 4.16, together with the spectrum of the Cherenkov light from a vertical shower initiated by a 1 TeV gamma ray, at 2200 m a.s.l. [90]. Taking the spectral information of Cherenkov light and diffuse moonlight into account, we selected inexpensive commercial UV-pass filters produced by Subei6 (model ZWB3) with a thickness of 3 mm and a wavelength cut (50% transmission) at 390 nm. The filter transmission curve, which was measured in the laboratory, is also shown in Figure 4.16. The transmission of the filters for Cherenkov light from air showers was measured by installing a filter in only one of the two telescopes, selecting images of showers with similar impact parameters (defined as the distance of the shower axis to the telescope center) for both telescopes, and comparing the integrated charge in both images. The transmission for air-shower light at 30◦ from zenith was measured to be TCher = (47±5)%. The transmission for the NSB goes from ∼20%, when pointing close to the Moon, to ∼33%, when the background light is dominated by either Rayleigh-scattered moonlight or dark NSB. Other parameters such as the Moon phase and zenith angle also affect the NSB transmission. The conversion from DC to NSB level could then be different depending on the observation conditions. For the performance study in this work we adopted a “mean scenario”, corresponding to an NSB transmission of 25%.
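With the measured values quoted above, the figure of merit used for the filter choice can be evaluated directly (a back-of-the-envelope check, not part of the MAGIC analysis chain):

import numpy as np

t_cher = 0.47   # measured Cherenkov-light transmission of the filters
t_moon = 0.25   # adopted "mean scenario" NSB transmission

# Figure of merit for the filter choice: S/N scales as T_Cher / sqrt(T_Moon)
figure_of_merit = t_cher / np.sqrt(t_moon)
print(f"Relative S/N with filters: {figure_of_merit:.2f} of the no-filter value")
# ~0.94: the per-image signal-to-noise ratio is nearly preserved even though
# less than half of the Cherenkov light is transmitted.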

4.2.2.2 Mechanical design and mounting in the camera

The filters were bought in tiles of 20 cm × 30 cm, which for the selected filters was the largest size commercially available. They were mounted on a light-weight frame designed and built at IFAE. This frame consists of an outer aluminum ring that is screwed to the camera and steel ribs with a 6 mm × 6 mm cross section that are placed between the filter tiles (see Figures 4.17 and 4.18). The filter tiles are fixed to the ribs by plastic pieces and the space between tiles and ribs is filled with silicone. This gives mechanical stability to the system and prevents light leaks. Two people can mount, or dismount, the UV-pass filter on a MAGIC camera in about 15 minutes (Figure 4.19). The camera front is accessed from a bridge near the camera tower and the filter is secured to the camera with 32 screws. The total cost of the UV-pass filters for both telescopes, including delivery, was $4700 in 2014.

6http://www.globalsources.com/sbgx.co

Figure 4.17: On the left, the UV-pass filters installed on the camera of one of the MAGIC telescopes. On the right, design of the frame that holds the filters. The outer aluminium ring is screwed to the camera.

Figure 4.18: Filter frame assembled before mounting in the camera. 68 CHAPTER 4. ADAPTING MAGIC FOR MOONLIGHT OBSERVATIONS

Figure 4.19: The filters are installed from the front of the camera, which can be accessed from a bridge that is part of the camera tower. The mounting is a task for two people, who carry the filter frame to the bridge and screw it to the camera.

4.2.2.3 Pixel shadowing by the filter frame

The steel ribs of the filter frame partially shadow some of the camera pixels. The effect of this shadowing can be easily seen, for instance, in the camera display of the mean number of detected photoelectrons during calibration events, as in Figure 4.20. This shadowing must be taken into account in the Monte Carlo simulations that are used to analyze the data taken with UV-pass filters (see Section 4.2.4.3). As explained in Section 4.1.1, during calibration runs the camera is flashed with short pulses coming from the center of the dish. We can approximate that in such a situation the camera is being illuminated by a point-like source. In a Cherenkov flash, however, the light focused onto the camera comes from the whole dish. The shadowing of the pixels by the frame changes from one scenario to the other. To see how significant this change is I performed a series of simple simulations that compared the shadowing produced on each pixel in the two cases. I modelled the light coming from the calibration box as a point-like source located at a 17 m distance from the camera centre, on the axis perpendicular to the camera plane. The light reflected by the mirrors during Cherenkov pulses was modelled by 384 point-like sources homogeneously distributed in a disk of 17 m diameter. The distance between the frame and the camera was set to be 2 cm. The width of the ribs was considered to be 1 cm, slightly wider than the real steel ribs to account for the additional shadowing produced by the silicone sealing. This width was optimized to achieve the best matching in the RMS of the number of phe recorded during calibration pulses for data taken with and without UV-pass filters. The results from these simulations are presented in Figure 4.21. About 7% of the pixels are significantly shadowed (obscured by the ribs by more than 40%).

Figure 4.20: Output plots from Sorcerer for a calibration run during UV-pass filter observations. Upper left: mean number of photoelectrons from calibration pulses as a function of the pixel index. Upper right: the same, but in a camera representation. Lower left: distribution of the mean number of photoelectrons. The pixels partially shadowed by the frame exhibit a lower value.

Figure 4.21: Simulations of the shadowing of the pixels by the filter frame. The three images show the shadowing produced when the camera is illuminated by a point-like source (top left), by a disk-like source (top right), and the residuals between the two upper plots (bottom).

The difference between the shadowing of each pixel when illuminated by a point-like source or by a disk-like source such as the mirror dish is nearly negligible (less than 3% in 99% of the pixels).
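A toy version of this shadowing simulation is sketched below (simplifying assumptions: a square rib grid with a single 30 cm pitch, the 1 cm effective rib width and 2 cm frame-camera distance quoted above, and purely geometric ray tracing):

import numpy as np

RIB_PITCH = 0.30      # m, assumed rib spacing (filter tiles are 20 cm x 30 cm)
RIB_WIDTH = 0.01      # m, effective rib width used in the text
FRAME_DIST = 0.02     # m, frame-to-camera distance
DISH_DIST = 17.0      # m, distance of the dish (and calibration box) to the camera

def shadow_fraction(pixel_xy, source_xy):
    # Fraction of rays from the given source points that are blocked by the ribs
    # before reaching a pixel at pixel_xy on the camera plane (2D toy model).
    frac = FRAME_DIST / DISH_DIST
    cross = pixel_xy + frac * (source_xy - pixel_xy)      # crossing of the frame plane
    # Distance of the crossing point to the nearest rib line, in x and y
    dist = np.abs((cross + RIB_PITCH / 2) % RIB_PITCH - RIB_PITCH / 2)
    blocked = (dist < RIB_WIDTH / 2).any(axis=1)
    return blocked.mean()

# Calibration pulse: a single point-like source on the optical axis
calib = np.zeros((1, 2))
# Cherenkov flash: light from the whole dish, modelled as 384 points on a 17 m disk
phi = np.random.uniform(0, 2 * np.pi, 384)
r = 8.5 * np.sqrt(np.random.uniform(0, 1, 384))
dish = np.column_stack([r * np.cos(phi), r * np.sin(phi)])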

4.2.3 Data sample

To characterize the performance of MAGIC under moonlight we used 174 hours of Crab Nebula observations taken between October 2013 and January 2016, under NSB conditions going from 1 (dark) up to 30 × NSBDark. Observations are possible at higher illumination levels, but it is difficult to collect a significant sample of Crab Nebula data under such conditions. In fact, only in rare situations are MAGIC targets found under higher NSB levels than the ones analyzed in this work. All the data correspond to zenith angles between 5◦ and 50◦. The selected samples were recorded during clear nights, for which the application of the atmospheric corrections described in [101] and mentioned in Section 4.1.4.4 is not required. Data were divided into different samples according to their NSB level and the hardware settings in which observations were performed (nominal HV, reduced HV or UV-pass filters), as summarized in Table 4.2. When dividing the data the aim was to have rather narrow NSB bins while keeping sufficient statistics in each of them (∼ 10 hours per bin). Bins are slightly wider in the case of the UV-pass filter data to fulfill that requirement.

Sky Brightness [NSBDark]   Hardware Settings   Time [h]
1 (Dark)                   nominal HV          53.5
1-2                        nominal HV          18.9
2-3                        nominal HV          13.2
3-5                        nominal HV          17.0
5-8                        nominal HV           9.8
5-8                        reduced HV          10.8
8-12                       reduced HV          13.3
12-18                      reduced HV          19.4
8-15                       UV-pass filters      9.5
15-30                      UV-pass filters      8.3

Table 4.2: Effective observation time of the Crab Nebula subsamples in each of the NSB/hardware bins.

4.2.4 Moonlight adapted analysis

The hardware settings of the telescopes have to be adapted for moonlight observations. The trigger thresholds are automatically raised when the NSB level increases to keep the trigger rates at a reasonable value. If the background light intensity is too high, the gain of the PMTs has to be reduced or the UV-pass filters must be mounted. In any case, having non-standard hardware settings plus recording data with additional noise requires an analysis adapted to such conditions. This section describes how moonlight affects the MAGIC data and how the analysis chain and MC simulations have been modified. Overall, the analysis chain is the same as the one described in Section 4.1.4, apart from a few modifications introduced to account for the different observation conditions. The moonlight-adapted analysis has basically three main differences with respect to the standard one: a set of cleaning levels, size cuts and tuned MC simulations, all of them optimized for each NSB/hardware bin. Probably one of the most remarkable results from this study on moonlight observations is how a rather simple analysis can produce results with reasonable performance and accuracy, as will be shown in Section 4.2.5.

4.2.4.1 Charge extraction

As explained in Section 4.1.4.1, during the charge extraction process the sliding window method picks up, in the absence of signal, the largest noise fluctuation of the recorded waveform. The main sources of noise are the statistical fluctuations due to NSB photons, the PMT afterpulses and the electronic noise. The noise due to background light fluctuations scales as the square root of the NSB (Poisson statistics). The afterpulse rate is proportional to the PMT current, which increases linearly with the NSB. When the PMTs are operated under nominal HV, the electronic noise has a similar level to the NSB fluctuation induced by a dark extragalactic FOV, which has no bright stars [41]. For Crab dark observations (NSBDark), the brightness of the FOV is about 60% higher than under a dark extragalactic FOV, and the NSB-related noise already dominates. Figure 4.22 shows the distribution of extracted charge in photoelectrons (phe) for pedestal events (triggered randomly without signal) under four different observation conditions. During observations of the Crab Nebula under dark conditions the pedestal distribution has an RMS of ∼1 phe and a mean bias of ∼ 2 phe. The distribution is asymmetric with a larger probability of upward fluctuation (induced by the sliding window method) and an extra tail at large signals (>8 phe) produced by the PMT afterpulses. During moonlight observations, the noise induced by the NSB increases while the electronic noise remains constant (as long as the hardware settings remain unchanged). In fact, the electronic noise in terms of photoelectrons is proportional to the calibration constant, which depends on the hardware configuration of the observations. With reduced HV, all gains are lower, and hence the calibration constants increase, resulting in a higher electronic noise level in phe (∼1.7 phe) and, as a consequence, a worse signal-to-noise ratio of integrated pulses. The transit time in PMTs also increases when the gain is lowered, but, as can be seen in Figure 4.23, the delay in arrival time of pulses is of the order of ∼1 ns. This means that the signal pulse is always well within the 30 ns window and that the peak search method is not affected. During UV-pass filter observations PMTs are operated with nominal HV but some pixels are partially shadowed by the filter frame. The camera flat-fielding, which makes all pixels respond similarly to the same sky light input, gives higher calibration constants to the shadowed pixels. Thus, the electronic noise in those pixels is larger, while in contrast the NSB noise is strongly reduced by the filters. The relative contribution of the electronic noise to the total noise is then also higher during UV-pass filter observations. The mean noise and its RMS increase with the sky brightness level, as can be seen in Figure 4.24. The values of the typical pedestal distribution mean and RMS for all the NSB/hardware bins are shown in Table 4.3. The broader pedestal charge distribution has a double impact on the extraction of a real signal (Cherenkov light). If the signal is weak, the maximum waveform fluctuation may be larger than the Cherenkov pulse and the sliding window could select the wrong section. Then, the reconstructed pulse time is random and the signal is lost. If the signal is strong enough, the sliding window selects the correct region, and the time and amplitude of the signal are just less precise (the NSB does not induce a significant bias).
Strong signals are almost not affected as their charge resolution is dominated by close to Poissonian fluctuations of the number of recorded phe.

4.2.4.2 Image cleaning

During moonlight observations the background fluctuations are higher than under dark conditions, as already shown in Figure 4.24. The cleaning levels must be increased accordingly. We modified those levels to ensure that the fraction of pedestal events that contain only noise and survive the image cleaning is lower than 10%. They were optimized for every NSB/hardware bin independently to get the lowest possible analysis threshold for every bin. The optimized cleaning levels for each bin are shown in Table 4.3. The time window widths were not modified for reduced HV observations, because the variations in the PMT response are expected to be very small. We do not use variable cleaning levels that would automatically scale as a function of the noise because, as explained in Section 4.1.4, the MAGIC data reconstruction is based on comparison with MC simulations, which must have exactly the same cleaning levels as the data. During moonlight observations, the noise level is continuously changing, so it is not feasible to fine-tune our MC for every observation. Instead

Figure 4.22: Distributions of the pixel charge extracted with a sliding window for pedestal events (i.e., without signal) for different NSB/hardware conditions.


Figure 4.23: Arrival time of calibration pulses in one pixel for a selection of calibration runs taken under nominal and reduced high voltage. For these distributions the arrival time was computed from a Gaussian fit to each recorded calibration pulse. Dashed lines show a Gaussian fit to the arrival time distributions. The distance between the two peaks of the Gaussian fits is ∼0.6 ns. The spreads of both distributions agree within 1%.


we create a set of MC simulations for every NSB/hardware bin with fixed noise and cleaning levels.

Figure 4.24: Mean bias (left) and RMS (right) of the pedestal distribution as a function of the sky brightness and the hardware configuration of the observations.

4.2.4.3 Monte Carlo simulations

There are essentially two main effects that moonlight causes and that Monte Carlo simulations have to deal with: the increase in the NSB level (and in the overall noise in phe) and the trigger variations. In the case of UV-pass filter observations, the filter transmission and the filter frame must also be considered. MC samples adapted for every NSB/hardware bin were prepared. For nominal and reduced HV settings, we used the standard MAGIC MC simulation chain with additional noise to mimic the effect of moonlight (and reduced HV). The noise is injected after the calibration at the pixel signal level, before the image cleaning and parametrization (Section 4.1.4.2). The first step is to model the noise distribution in a given integration window of 3 ns that would produce the same pedestal charge distribution as the one obtained during observations (see Figure 4.22). Then a random value is extracted from the modelled noise distribution and is added to the extracted signal of the MC event. If the modified signal is larger than a random number following the pedestal charge distribution, this modified signal becomes the new charge and a random jitter is added to the arrival time (depending on the new signal/noise ratio). If the random pedestal signal is larger, it means that the sliding window would have caught a spurious bump coming from the NSB instead of the signal itself. Then the pixel charge is replaced by this fake signal purely obtained from the simulated pedestal distribution and the arrival time is chosen randomly according to the pedestal time distribution. This method allows us to adapt our MC to any given NSB without reprocessing the full telescope simulation and data calibration. In the case of the UV-pass filter observations, additional modifications to the simulation chain were implemented to include the filter transmission and the shadowing produced by the frame ribs. These modifications were implemented at the Camera level

Sky Brightness [NSBDark]   Hardware Settings   Pedestal Distr mean / rms [phe]   Cleaning Level factors Lvl1 / Lvl2 [phe]   Size Cut [phe]
1 (Dark)                   nominal HV          2.0 / 1.0                          6.0 / 3.5                                   50
1-2                        nominal HV          2.5 / 1.2                          6.0 / 3.5                                   60
2-3                        nominal HV          3.0 / 1.3                          7.0 / 4.5                                   80
3-5                        nominal HV          3.6 / 1.5                          8.0 / 5.0                                  110
5-8                        nominal HV          4.2 / 1.7                          9.0 / 5.5                                  150
5-8                        reduced HV          4.8 / 2.0                         11.0 / 7.0                                  135
8-12                       reduced HV          5.8 / 2.3                         13.0 / 8.0                                  170
12-18                      reduced HV          6.6 / 2.6                         14.0 / 9.0                                  220
8-15                       UV-pass filters     3.7 / 1.6                          8.0 / 5.0                                  100
15-30                      UV-pass filters     4.3 / 1.8                          9.0 / 5.5                                  135

Table 4.3: Noise levels of the Crab Nebula subsamples, adapted image cleaning levels and size cuts used for their analysis.

(Section 4.1.5): the quantum efficiency of each PMT is scaled by the measured transmission curve of the filters and by the shadowing produced by the frame on each pixel during Cherenkov flashes, according to the frame model described in Section 4.2.2. In Camera the NSB is directly introduced in phe as an average additional noise that affects the whole camera rather smoothly: it is not scaled by the quantum efficiency of the PMTs. The noise in the UV-pass filter simulations is treated in the same way as for nominal and reduced HV. This means that the shadowing of the NSB by the frame is not included and that the noise level in the partially shadowed pixels is overestimated. We did not simulate the effect of the moonlight on the trigger because it is very difficult to reproduce the behavior of the IPRC, which controls the pixel DTs (see Section 4.1.2). Instead, simulations were performed using the standard dark DTs and we later applied cuts on the image size of the events surviving the image cleaning in each telescope. This size cut acts as a software threshold and it is optimized bin-wise as the minimal size for which the data and MC distributions match. Even in the absence of moonlight a minimum cut in the total charge of the images is applied, as potential gamma-ray events with lower sizes are either harder to reconstruct or harder to distinguish from hadron-induced showers [42]. The size cuts used are given in Table 4.3. Figures 4.25 and 4.26 compare the size distributions of MC gamma-ray events (simulated with the spectrum of the Crab Nebula reported in [42]) with those of the observed excess events within a 0.14◦ circle around the Crab Nebula. Comparisons of the distributions of other Hillas parameters can be found in Appendix A.1.
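The noise-injection step described above can be summarized with the following sketch (a simplified stand-in for the actual implementation: the Gaussian extra-noise model, the jitter scaling and the generic pedestal samples are assumptions):

import numpy as np

def add_moon_noise(mc_charge, mc_time, ped_charge, ped_time, extra_noise_sigma):
    # Inject moonlight-level noise into a calibrated dark-MC pixel signal.
    # ped_charge, ped_time: samples drawn from the target pedestal charge/time
    # distributions measured in the data (cf. Figure 4.22)
    noisy = mc_charge + np.random.normal(0.0, extra_noise_sigma)
    fake = np.random.choice(ped_charge)
    if noisy >= fake:
        # Signal still wins: keep it, smear the time according to the new S/N
        jitter = extra_noise_sigma / max(noisy, 1e-3)
        return noisy, mc_time + np.random.normal(0.0, jitter)
    # Noise fluctuation larger than the signal: the sliding window would have
    # picked a spurious bump instead of the Cherenkov pulse
    return fake, np.random.choice(ped_time)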

4.2.5 Performance

In this section I discuss how moonlight and the use of different hardware configurations affect the main performance parameters of the MAGIC telescopes. 76 CHAPTER 4. ADAPTING MAGIC FOR MOONLIGHT OBSERVATIONS


Figure 4.25: Comparison between MAGIC 1 data (red) and MC gamma-ray (blue) image size distributions for four NSB/hardware bins (nominal HV in dark conditions, nominal HV at NSB 3-5, reduced HV at NSB 5-8 and UV-pass filters at NSB 8-15). Data distributions are composed of excess events within a 0.14◦ circle around the Crab Nebula position. MC distributions were simulated with the same energy distribution as the Crab Nebula spectrum reported in [42]. Dashed and solid lines show the distributions before and after applying the optimized size cuts. Distributions with and without size cuts were normalized to different values for better visualization. Lower panels show the ratio of the data distributions to the MC ones.


Figure 4.26: Same as Figure 4.25, but for MAGIC 2.


Figure 4.27: Effective collection area at reconstruction level for zenith angles below 30◦ for four different observation conditions: Dark conditions with nominal HV (black), 3- 5 ×NSBDark with nominal HV (blue), 5-8 × NSBDark with reduced HV (green) and 8- 15 ×NSBDark with UV-pass filters (red). The optimized cleaning levels and size cuts from Table 4.3 were used to produce these plots.

4.2.5.1 Energy threshold

As explained in Section 4.1.6.2, the energy threshold of an IACT array is obtained from MC simulations. It naturally increases during moonlight observations, as the DTs are automatically raised by the IPRC (see Section 4.1.2). As explained in Section 4.2.4.3, our MC simulations do not reproduce the complex behavior of the trigger during such observations. Here the energy threshold is evaluated at the reconstruction level, after the image cleaning and size cuts are applied, where a good matching between real data and MC is achieved. The energy threshold depends on the effective area of the instrument. The effective collection area for gamma rays at the reconstruction level as a function of the energy for four different NSB/hardware situations is shown in Figure 4.27. In all four curves two regimes can be identified: one, at low energies, in which the collection area is rapidly increasing with the energy and another, towards high energies, where it reaches a plateau. As expected, the dark-sample analysis presents the largest effective area along the full energy range. The degradation due to moonlight is more important at the lowest energies, where the Cherenkov images are small and dim. The higher the size cuts and cleaning levels, the higher the energy at which the plateau is reached. In the case of UV-pass filter observations, the cleaning levels and size cuts used are lower (in units of phe) than the ones applied during reduced HV data analysis, but due to the filter transmission, the plateau is reached at even higher energies. Above ∼1 TeV the effective area is almost flat for the four studied samples and the effect of the Moon analysis is very small (below ∼10%). The degradation of the effective area at low energies is directly translated into an increase of the energy threshold, as can be seen in Figure 4.28, where the differential rate plots for the same four NSB/hardware cases are shown. In Figure 4.29 we show


Figure 4.28: Rate of MC gamma-ray events that survived the image cleaning and a given quality size cut for a hypothetical source with a spectral index of −2.6 observed at zenith angles below 30◦. The four curves correspond to different observation conditions: dark conditions with nominal HV (black), 3-5 × NSBDark with nominal HV (blue), 5-8 × NSBDark with reduced HV (green) and 8-15 × NSBDark with UV-pass filters (red). Dashed lines show the Gaussian fit applied to calculate the energy threshold for each sample.

the obtained energy threshold as a function of the sky brightness for different hardware configurations at low (< 30◦) and medium (30◦ − 45◦) zenith angles7. For low zenith angles it goes from ∼70 GeV in the absence of moonlight to ∼300 GeV in the brightest scenario considered. For medium zenith angles, the degradation is similar, from ∼110 GeV to ∼500 GeV. The degradation of the energy threshold Eth as a function of the NSB level can be roughly approximated, for nominal HV and reduced HV data, by

Eth(NSB) = EthDark × (NSB/NSBDark)^0.4 (4.6)

where EthDark is the energy threshold during dark Crab Nebula observations. At the same NSB level, reduced HV data have a slightly higher energy threshold than nominal HV data due to the higher electronic noise in phe units, while the UV-pass-filter energy threshold is significantly higher (∼40%) than that of reduced HV data without filters. The energy threshold increase with filters is due to the lower photon statistics (the same shower produces fewer phe). This degradation is reduced at higher NSBs (i.e. higher energies), where larger image sizes make the photon statistics less important than the signal-to-noise ratio in the energy threshold determination.
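As a quick check of Eq. 4.6 (illustrative only; the dark thresholds are the approximate values quoted above):

def energy_threshold_estimate(nsb, e_th_dark):
    # Approximate threshold scaling of Eq. 4.6 (nominal and reduced HV data)
    return e_th_dark * nsb**0.4

# e.g. dark thresholds of ~70 GeV (low zenith) and ~110 GeV (medium zenith)
for e_dark in (70.0, 110.0):
    print([round(energy_threshold_estimate(n, e_dark)) for n in (1, 5, 12, 18)])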

7Here we compute an average over a relatively wide zenith range, but the energy threshold dependence on the zenith angle is stronger for medium zenith angles (see Figure 6 in [42]).


Figure 4.29: Energy threshold at the event reconstruction level as a function of the sky brightness for observations with nominal HV (black), reduced HV (green) and UV-pass filters (red) at zenith angles below 30◦ (filled circles, solid lines) and between 30◦ and 45◦ (empty squares, dashed lines). Gray lines represent the approximation given by equation 4.6 for zenith angles below 30◦ (solid) and between 30◦ and 45◦ (dashed).

4.2.5.2 Reconstruction of the Crab Nebula spectrum

Standard cleaning

MAGIC data are automatically calibrated with the standard analysis chain optimized for dark observations. The image cleaning, parametrization and stereo reconstruction are also automatically performed and stored. Most of the analyses start at this point, just before the data quality selection process (Section 4.1.4.4). When dealing with moonlight data an adapted analysis is in principle required, as described in Section 4.2.4. This involves, for instance, applying different cleaning levels, which can only be done at earlier stages of the analysis chain. However, when moonlight is weak its effect can be almost negligible and the data can be processed following the standard chain. Actually, as explained in Section 4.1.4.2, the standard cleaning levels were intentionally set higher than what could be chosen for dark extragalactic observations so that they could also be used for brighter FOVs, including weak moonlight. This strategy is consistent with the decision of operating the PMTs at a rather low gain during standard observations (Section 4.1.1). The question that arises is what is the highest NSB level for which the standard analysis provides consistent results, within reasonable systematic uncertainties, with respect to those obtained with the dark reference sample. Figure 4.30 tries to answer this question. It shows the Crab Nebula SEDs obtained when the standard analysis, including standard dark MC for the train and test samples, is applied to the moonlight data samples that were taken with nominal HV (Table 4.2). To minimize systematic uncertainties, typical selection cuts with 90%


Figure 4.30: Spectral energy distribution of the Crab Nebula obtained for different NSB levels (given in units of NSBDark) using the standard analysis, compared to the result obtained previously by MAGIC (best log-parabola fit in red solid line, [42]). The lower panel shows the ratio of the fluxes measured under moonlight to the ones measured in dark conditions.

gamma-ray efficiency for the gamma-ray/hadron separation and sky signal region radius were used [42]. The image size cuts described in Section 4.2.4.3 were applied to produce these spectra. Those cuts can be applied when running Flute, so they do not “slow down” the analysis process, which would be the main reason to go for a standard analysis instead of the optimized one. The SED obtained using data with 1-2 × NSBDark is compatible, within errors, with the one obtained with dark data. This shows that the standard analysis is perfectly suitable for this illumination level. For brighter NSB conditions the reconstructed spectra are underestimated. With 2-3 × NSBDark, the data-point errors above ∼130 GeV are below ∼20%, while with 5-8 × NSBDark the reconstructed flux falls below ∼50% at all energies. Thus, the standard analysis chain can still be used for weak moonlight at the price of an additional systematic bias (10% for 1-2 × NSBDark and 20% for 2-3 × NSBDark), but for higher NSB levels a dedicated Moon analysis is mandatory.

Custom analysis

Figure 4.31 shows the spectra of the Crab Nebula obtained after applying the dedicated Moon analysis (dedicated MC, cleaning levels and size cuts) described in Section 4.2.4 to each data set. In almost all cases the fluxes obtained are consistent within ±20% with the one obtained under dark conditions, at least up to 4 TeV. The only exception is the brightest NSB bin (UV-pass filter data up to 30 × NSBDark), where the deviation of the flux from the dark flux gets slightly above ∼30% at energies between about 400 and 800 GeV. Possible causes of this overestimation of the flux in this particular NSB bin are further discussed in Appendix A.2. It is also interesting to notice how the spectrum reconstruction improves when the dedicated Moon analysis is performed, by comparing the spectra obtained for the nominal HV samples in Figures 4.30 and 4.31.

4.2.5.3 Angular resolution

The reconstruction of the gamma-ray arrival direction could be affected in two ways by moonlight. Firstly, as already discussed, it induces more background noise that affects the quality of the recorded images. Secondly, as already introduced in Section 4.1, the tracking monitor of the telescopes relies on the recognition of stars in the field of view by the Starguider camera. When the background light is brighter the star recognition process is harder and sometimes it can even fail. A possible mispointing due to this effect is ruled out by Figure 4.32. The plot on the left was done by Pierre Colin and shows the difference between the reconstructed and the expected source position obtained for a few observation hours in each NSB/hardware bin. The samples are also divided by the period in which those data were taken, so that it would be noticeable in case there was an effect coming from some hardware intervention of the telescopes8. The plot on the right is Figure 22 from [42] and corresponds to dark data only. The comparison shows that there is no additional systematic uncertainty on the reconstructed source position. Apart from one particular sample that falls outside (which could be affected by the particular observation conditions of the nights in which it was taken), all the rest are well within a 0.02◦ circle around the expected Crab Nebula position. To study the possible degradation of the point spread function (PSF), we compare the θ2 distribution obtained for Crab data taken under moonlight and under dark conditions, θ being the angular distance between the Crab Nebula position and the reconstructed event arrival direction. As explained in [42], this distribution can be well fitted by a double exponential function. Figure 4.33 shows the θ2 distribution of events with estimated energy above 300 GeV and a gamma-ray/hadron separation cut corresponding to 90% gamma-ray efficiency for four representative NSB/hardware bins. For all the NSB/hardware bins the θ2 distribution above the energy threshold is in good agreement with the PSF obtained under dark conditions. The angular resolution does not seem to be significantly affected by moonlight.

4.2.5.4 Sensitivity

The previous subsections show that moonlight observations are perfectly suitable for bright gamma-ray sources such as the Crab Nebula, as its spectrum and direction can be well reconstructed, the only drawback being a higher energy threshold with respect to the one obtained in dark observations. However, one may wonder how the performance for the detection of weak sources is affected by moonlight, which may degrade the gamma-ray/hadron separation power. To study this potential effect, the integral sensitivity as a function of the energy threshold was computed for all the NSB/hardware bins analyzed, following the procedure (and the definition of sensitivity) described in Section 4.1.6.3. To accumulate enough data in every NSB/hardware bin, data from a large zenith angle range, going from 5◦ to 45◦, are used. As explained in Section 4.1.6.3, the sensitivity and energy threshold depend strongly on the zenith angle.

8For instance, after a mirror alignment a better match between the nominal and the reconstructed source position is expected.

Figure 4.31: Spectral energy distribution of the Crab Nebula obtained for different NSB levels (given in units of NSBDark, coloured dots) using the dedicated Moon analysis for nominal HV (top), reduced HV (centre) and UV-pass filters (bottom) data. For comparison the result obtained with the dark sample using standard analysis in this work (black dots) and previously published by MAGIC (red solid line, [42]) are shown in every panel. The bottom sub-panels show the ratio of the fluxes measured under moonlight to the flux measured under dark conditions.


Figure 4.32: Left: 2D-skymap, in camera coordinates, showing the reconstructed direction of the excess events for data taken under different hardware and NSB conditions. Data were also divided by the epoch in which they were taken (see Section 4.1.5): ST.03.03 corresponds to data taken before August 2014, when there was a mirror exchange and a re-alignment of the reflector, ST.03.06 to data taken after November 2014, and ST.03.05 to the data taken in between these two periods. Plot credits: Pierre Colin. Right: The same plot, but taken from Figure 22 in [42], where all data correspond to the ST.03.03 period. In both plots the two circles show a distance of 0.02◦ and 0.03◦ and the data correspond to zenith angles between 5◦ and 45◦.

As the data subsamples have different zenith angle distributions, the performances are corrected to correspond to the same reference zenith-angle distribution (the average of all the data). To visualize the degradation caused by moonlight, the integral sensitivity computed for each NSB/hardware bin is divided by the one obtained under dark conditions at the same analysis-level energy threshold. The obtained sensitivity ratios are shown in Figure 4.34 as a function of the energy threshold. The Moon data taken with nominal HV provide a sensitivity only slightly worse than the one obtained using dark data. The sensitivity degradation is constrained to be less than 10% below 1 TeV and all the curves are compatible within error bars above ∼300 GeV. Error bars increase with the energy because the event statistics decrease dramatically. These error bars are not independent, as the data corresponding to a given energy threshold are included in the lower-energy analyses. The only visible degradation is near the reconstruction-level energy threshold (<200 GeV), where the sensitivity is 5-10% worse. For Moon data taken with reduced HV, the sensitivity degradation lies between 15% and 30%. It seems to increase with the NSB level, although above 400 GeV the three curves are compatible within statistical errors. This degradation is caused by a combination of a higher extracted-signal noise (see Section 4.2.4) and a smaller effective area. The degradation is even clearer in the UV-pass filter data, where the sensitivity is 60-80% worse than the standard one. Such a degradation is expected, especially given that the filters reject more than 50% of the Cherenkov light.


Figure 4.33: θ2 distribution of excess events (gamma-ray events) with an estimated energy above 300 GeV for the usual four cases studied: Dark (NSB = 1), nominal HV NSB: 3-5, reduced HV NSB: 5-8, UV-pass filters NSB: 8-15 (NSB in NSBDark units). The solid black lines show the PSF fit (double exponential) obtained with the dark sample.

Besides, the sensitivity could also be affected by a poorer reconstruction of the images, especially in the pixels that are partially obscured by the filter frame ribs. At the highest energies (>2 TeV) the sensitivity seems to improve. This could be expected for bright images, which are less affected by noise, but higher statistics at those energies would be needed to draw further conclusions.
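For orientation only, the following is a minimal sketch of a sensitivity-ratio estimate using the simplified criterion Nex/√Nbkg = 5 extrapolated to 50 h; the full prescription of Section 4.1.6.3 (Li & Ma significance plus additional conditions) differs, and all numbers below are hypothetical.

```python
import numpy as np

def integral_sensitivity_crab_units(n_excess, n_bkg, t_obs_h):
    # Minimum flux, in Crab units, giving N_ex / sqrt(N_bkg) = 5 in 50 h,
    # assuming the input counts come from t_obs_h hours of Crab observations.
    scale = 50.0 / t_obs_h
    sigma_at_crab_flux = n_excess * scale / np.sqrt(n_bkg * scale)
    return 5.0 / sigma_at_crab_flux

# hypothetical excess/background counts above some threshold, 10 h each
dark = integral_sensitivity_crab_units(n_excess=900.0, n_bkg=400.0, t_obs_h=10.0)
moon = integral_sensitivity_crab_units(n_excess=700.0, n_bkg=380.0, t_obs_h=10.0)
print("sensitivity ratio (moon/dark): %.2f" % (moon / dark))
```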

4.2.5.5 Systematic uncertainties

During moonlight observations many instrumental parameters are more variable than during dark observations, in particular the trigger DTs and the noise contained in the signal. These variations induce larger MC/data mismatches and thus larger systematic uncertainties. As shown in Section 4.2.5.2, the Crab Nebula spectrum can be well reconstructed in every NSB/hardware bin. The reconstructed flux above the energy threshold of every NSB bin is within a 10%, 15% and 30% error band around the flux obtained under dark conditions for nominal HV, reduced HV and UV-pass filter observations, respectively. The spectral shape is particularly well reproduced in all hardware configurations. The dark-to-Moon flux ratios vary by less than 10% over an order of magnitude in energy, corresponding to an additional systematic uncertainty on the power-law spectral index below 0.05. The overall flux agreement may, however, mask large day-to-day fluctuations. These fluctuations must be understood and quantified if moonlight observations are going to be used, for instance, to study the variability of a source.


Figure 4.34: Ratio of the integral sensitivity under moonlight to the dark sensitivity as a function of the analysis energy threshold, for nominal HV (top), reduced HV (middle) and UV-pass filter (bottom) data. The NSB levels are given in units of NSBDark.

Sky Brightness          Hardware Settings    Day-to-day Systematics
Dark (NSBDark = 1)      nominal HV           (7.6 ± 1.2)%
1-8 NSBDark             nominal HV           (9.6 ± 1.2)%
5-18 NSBDark            reduced HV           (15.4 ± 3.2)%
8-30 NSBDark            UV-pass filters      (13.2 ± 3.4)%

Table 4.4: Additional systematic uncertainties that must be added to the errors of the light curve shown in Figure 4.35 to obtain a constant-fit χ2 equal to the number of degrees of freedom. In the UV-pass filter case, the computed day-to-day systematic errors are valid for energies above 500 GeV.

This can be done by studying the daily flux of a source that is assumed to be stable, such as the Crab Nebula9. Figure 4.35 shows the daily light curve of the Crab Nebula flux above 300 GeV from October 2013 to March 2016 for every NSB level observed without UV-pass filters, and the light curve above 500 GeV from January to October 2015 for the two NSB bins with UV-pass filters10. A higher energy cut is used for the UV-pass filter light curve because the last bin (NSB: 15-30 × NSBDark) has an energy threshold above 300 GeV at the observed zenith angles. Solid horizontal lines show the result of a constant fit to each data sample. Taking into account only statistical fluctuations, the χ2 test rejects a constant flux for every light curve, even for dark observations. If we assume that the Crab Nebula flux is constant and that the fluctuations are therefore only due to systematic uncertainties, we can estimate the magnitude of those uncertainties by adding errors in quadrature (on top of the statistical errors) until the constant-fit χ2 equals the number of degrees of freedom k, plus or minus √(2k), which corresponds to the standard deviation of the χ2 distribution. Instead of doing this for every NSB/hardware bin, the systematic uncertainties are estimated here from constant fits applied to three larger (and thus more constraining) samples: one of moderate moonlight with nominal HV (1-8 × NSBDark), one of moonlight with reduced HV (5-18 × NSBDark) and one of strong moonlight with UV-pass filters (8-30 × NSBDark). Table 4.4 gives the day-to-day systematic errors obtained for these three hardware/NSB conditions as well as for dark observations with nominal HV. For dark observations, the obtained day-to-day systematic uncertainty is (7.6 ± 1.2)%. This is below the values from previous studies based on the Crab Nebula light curve, which report a day-to-day systematic uncertainty of ∼12% for the periods from November 2009 to January 2011 [37] and from October 2009 to April 2011 [43]. It is consistent with the result after the telescope upgrade reported in [42], which claims a day-to-day systematic uncertainty below 11%. For observations under moonlight with nominal HV (NSB < 8 × NSBDark), the obtained day-to-day systematic is (9.6 ± 1.2)%, still below the 11%. The additional systematic due to the moonlight is marginal and can only be constrained to be below 9%. For brighter moonlight that requires hardware modifications, the systematic errors get larger. A few data points show a flux much lower than expected (down to ∼50%). The overall day-to-day systematic is estimated at (15.4 ± 3.2)% for reduced HV and (13.2 ± 3.4)% for UV-pass filters, corresponding to an additional systematic, on top of the dark nominal-HV systematic errors, of between 6% and 18%.

For every hardware configuration, the additional day-to-day systematic error is of the same order as, or below, the systematic errors found for the overall flux.

To summarize, the additional systematic uncertainties of MAGIC during Moon time depend on the hardware configuration and the NSB level. For moderate moonlight (NSB < 8 × NSBDark) observations with nominal HV, the additional systematic errors on the flux are below 10%, raising the flux-normalization uncertainty (at a few hundred GeV) from 11% [42] to 15%. For observations with reduced HV (NSB < 18 × NSBDark) the additional systematic error on the flux is ∼15%, corresponding to a full flux-normalization uncertainty of 19% after quadratic addition. For UV-pass filter observations, the flux-normalization uncertainty increases to 30%. The additional systematic on the reconstructed spectral index is negligible (±0.04). The overall uncertainty is still ±0.15 for nominal HV observations, probably dominated by the uncertainty on the energy scale, as in dark observations. The uncertainty on the energy scale may increase for reduced HV and UV-pass filter observations, but this effect is included in the flux-normalization uncertainty increase, as it is actually difficult to determine whether a flux shift is due to a wrong energy calibration or to a wrong effective-area calculation. Concerning the pointing accuracy, as discussed in Section 4.2.5.3, no additional systematic uncertainties have been found.

9 Strictly, the Crab Nebula is not absolutely stable in gamma rays, as the detection of exceptional flares by the LAT has shown [80, 152]. But no variability has ever been found at VHE, at least within the systematic uncertainties of the IACT instruments.
10 UV-pass filter observations started only in January 2015.
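As an illustration of the procedure described above, the following is a minimal sketch, with purely hypothetical daily fluxes, of finding the relative systematic error that, added in quadrature to the statistical errors, makes the constant-fit χ2 equal the number of degrees of freedom (the ±√(2k) band would be obtained by solving for k ± √(2k) instead).

```python
import numpy as np
from scipy.optimize import brentq

def chi2_constant_fit(flux, stat_err, sys_frac):
    # chi^2 of a constant-flux fit after adding a relative systematic error
    # sys_frac in quadrature to the statistical errors.
    err = np.sqrt(stat_err**2 + (sys_frac * flux)**2)
    weighted_mean = np.sum(flux / err**2) / np.sum(1.0 / err**2)
    return np.sum(((flux - weighted_mean) / err)**2)

def day_to_day_systematic(flux, stat_err):
    # Relative systematic error that brings chi^2 down to ndf = N - 1.
    ndf = len(flux) - 1
    return brentq(lambda s: chi2_constant_fit(flux, stat_err, s) - ndf, 0.0, 1.0)

# hypothetical daily Crab fluxes (arbitrary units): ~9% true scatter, 5% statistical errors
rng = np.random.default_rng(1)
flux = 1.0 + rng.normal(0.0, 0.09, size=40)
stat_err = np.full_like(flux, 0.05)
print("estimated day-to-day systematic: %.1f%%" % (100 * day_to_day_systematic(flux, stat_err)))
```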

4.2.6 Towards increasing the duty cycle

The next chapter will show how, in the case of Cas A, all the effort invested in tuning the telescopes for moonlight observations, developing a specific analysis and characterizing the performance finally pays off. But it is important to remark that the possibilities offered by moonlight observations have a much wider scope. The results presented here, which represent the first detailed study of the performance of an IACT system under moonlight using an analysis dedicated to such observations, show that, except for the energy threshold, the performance is only moderately affected by moonlight. The fact that the hardware modifications performed to tolerate stronger sky brightness (reduced HV, UV-pass filters) seem to have more effect than the increase in the noise itself can be seen as a vote of confidence for the use of SiPMs in Cherenkov astronomy. Or at least it suggests that those robust photodetectors have the potential to improve the performance under bright conditions. The main benefit of operating the telescopes under moonlight is that the duty cycle can be doubled, suppressing the need to stop observations around full Moon. Depending on the needed energy threshold, many projects can profit from this additional time. Moderate moonlight observations have already led to the discovery of VHE emission from several active galactic nuclei, such as PKS 1222+21 [45], 1ES 1727+502 [44, 54] and B3 2247+381 [38]. They are also used to study light curves of variable sources with better sampling, for instance the binary systems LS I +61 303 [39] and HESS J0632+057 [36] and the active galactic nucleus PG 1553+113 [40], or to accumulate large amounts of data, as for the deep observations of the Perseus cluster [27]. Bright moonlight observations are particularly useful for projects in which the relevant physics lies above a few hundred GeV, such as long monitoring campaigns of VHE sources with hard spectra. The possible loss in sensitivity can be compensated by the possibility of much longer observation times in a less demanded observation period (currently often even used for technical work). The amount of observation hours taken under moonlight by MAGIC has increased considerably during the last few years. This tendency is expected to continue, now that the relatively simple dedicated analysis developed here and the results derived from the performance study encourage other projects to profit from the additional available time.


Figure 4.35: Daily light curve of the Crab Nebula above 300 GeV for observations under different sky brightness with nominal HV (top) and reduced HV (middle), and above 500 GeV for UV-pass filters (bottom). Horizontal lines correspond to the constant flux fit of the different NSB bins. For comparison, the light curve and constant fit of the dark observations are reproduced in every panel.

Chapter 5

Cosmic-ray acceleration in Cas A

At the beginning of the previous chapter I mentioned that a precise measurement of the Cas A gamma-ray spectrum at TeV energies with current gamma-ray instruments was unthinkable without conducting a deep observation campaign. This campaign was possible in MAGIC thanks to the new time window opened by moonlight observations, which allowed us to collect almost 200 hours in less than two years, more than 70% of them gathered while the Moon was above the horizon. Even more remarkable is the fact that at least 50% of the full data sample was obtained during time slots in which MAGIC would otherwise simply not have observed. Since the degradation of the performance is considerable only at the lowest energies, bright NSB conditions do not represent a major obstacle for PeVatron searches. In fact, the Cas A observations are the perfect example of a type of project that can fully exploit the benefits of the developed moonlight analysis. This chapter starts with a description of the observation campaign and of how the moonlight-adapted analysis described in the previous chapter is used to produce the most accurate VHE spectrum of Cas A to date, presented for the first time in [28]. It ends with an interpretation of the results that aims to answer two questions: What is the origin of the gamma-ray radiation, leptonic or hadronic? And, if hadronic, is Cas A a PeVatron at its present age?

5.1 Cas A observations

5.1.1 Observations with MAGIC

Cas A observations with MAGIC were performed between December 2014 and October 2016, for a total observation time of 191 hours after data quality cuts. They were carried out following the standard data-taking routine described in Section 4.1.3 and most of them (∼78%) occurred during moonlight time (see Table 5.1). The data correspond to zenith angles between 28◦ and 50◦. The lower limit of 28◦ corresponds to the lowest zenith angle at which Cas A is seen from the MAGIC site. One could argue that observations should have been scheduled for zenith angles above 50◦, angles at which the collection area is larger and hence the sensitivity at the highest energies improves (see Section 4.1.6.1). There were two reasons not to do so. First, the spectrum in the energy range between ∼100 GeV and ∼1 TeV also contains relevant information when trying to model the origin of the gamma-ray emission. This part


Sky Brightness [NSBDark]      Hardware Settings    Time [h]
0-2 (including Dark)          nominal HV           42.2
2-3                           nominal HV           20.6
3-5                           nominal HV           23.0
5-8                           nominal HV           21.9
8-12                          nominal HV           12.2
0-3                           reduced HV            3.8
3-5                           reduced HV            9.2
5-8                           reduced HV           11.4
8-12                          reduced HV           13.7
0-8                           UV-pass filters       9.3
8-15                          UV-pass filters      12.9
15-30                         UV-pass filters      10.9
Total (all configurations)                        191.1

Table 5.1: Effective observation time under the different hardware and sky brightness conditions in which the Cas A samples were taken. Most of the NSB/hardware bins include data from both ST.03.06 and ST.03.07.

of the spectrum had not been fully explored by Fermi-LAT and VERITAS, as can be seen in Figure 3.4, and it is also the energy range in which MAGIC is most sensitive. The second reason has to do with systematic uncertainties. At high zenith angles the light produced in the showers must traverse a larger atmospheric depth, its absorption increases and hence the analysis is more sensitive to the differences in the atmospheric conditions between data and MC. Both high zenith angle and moonlight observations carry additional systematic uncertainties. The systematic errors resulting from the combination of both are unknown, but certainly higher than if we restrict ourselves to the zenith range in which the performance under moonlight was studied. A precise measurement of the spectrum at energies below 10 TeV requires that systematic errors are kept under control, especially if one wants to claim or exclude a possible curvature or any other spectral feature.

The data have been analyzed following the optimised moonlight analysis described in Section 4.2.4. The full data sample was divided into subsamples according to the hardware configuration of the observations, the NSB level under which they were taken and the period in which they were performed (ST.03.06/ST.03.07, periods defined in Section 4.1.5). The 19 resulting subsamples were analyzed independently with their corresponding sets of MC simulations and specifically trained Random Forests. The 19 resulting spectra were merged using Foam to derive a unified spectrum (see Section 4.1.6.5). For the spectral reconstruction a point-like source was assumed and typical selection cuts, with 90% and 75% gamma-ray efficiency for the gamma-ray/hadron separation and sky signal region radius, respectively, were used [42]. Three OFF regions were considered for the background estimation.

5.1.2 Observations with Fermi-LAT

Daniel Galindo and Emma de Oña Wilhelmi, coauthors of [28], analyzed 8.3 yr of Fermi-LAT data (up to December 6, 2016) on a 15◦ × 15◦ region around the position of Cas A. The goal was to update and improve the existing spectrum and to combine it with the one obtained with MAGIC. The details of the analysis can be found in the mentioned publication. Here I will just stress that events were selected with energies between 60 MeV and 500 GeV and that the usual filters and corrections recommended by the Fermi-LAT collaboration were applied. The Cas A spectrum was fitted with a smoothly broken power law, dN/dE = N0 (E/E0)^{Γ1} [1 + (E/Eb)^{(Γ1−Γ2)/β}]^{−β}, with the parameter β fixed to 1 and the energy break fixed to Eb = 1.7 GeV, as had been done in [198]. E0 is the normalisation energy, fixed to 1 GeV. The SED was obtained by fitting the source normalisation factor in each energy bin independently, using a power-law spectrum with a fixed spectral index of 2. Spectral points were drawn if the Test Statistic (TS) value of the bin was at least 4; otherwise, upper limits at the 95% confidence level were computed.
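A minimal sketch of this smoothly broken power law follows. The indices are entered as signed exponents (negative for a falling spectrum), which is an assumption about the sign convention; the best-fit values used are those quoted in Section 5.2.1, with the normalisation in the Fermi-LAT units of MeV^{-1} cm^{-2} s^{-1}.

```python
import numpy as np

def smoothly_broken_power_law(E_GeV, N0, gamma1, gamma2, Eb_GeV=1.7, E0_GeV=1.0, beta=1.0):
    # dN/dE = N0 (E/E0)^g1 [1 + (E/Eb)^((g1-g2)/beta)]^(-beta)
    # gamma1, gamma2 are the signed exponents below/above the break energy Eb.
    return (N0 * (E_GeV / E0_GeV) ** gamma1
            * (1.0 + (E_GeV / Eb_GeV) ** ((gamma1 - gamma2) / beta)) ** (-beta))

# Best-fit values quoted in Section 5.2.1 (indices quoted there as magnitudes,
# hence the minus signs here); energies in GeV, N0 per MeV at E0 = 1 GeV.
E = np.logspace(np.log10(0.06), np.log10(500.0), 60)   # 60 MeV .. 500 GeV
dnde = smoothly_broken_power_law(E, N0=8.0e-12, gamma1=-0.90, gamma2=-2.37)
```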

5.2 Results

5.2.1 Spectrum

A reader who has reached this point and compares Table 5.1 with Table 1 in [28], the publication that came out of the results presented in this chapter, will find that the UV-pass filter data were not included in the published article. The main reason is simple: the UV-pass filter data were not ready to be analyzed by the time the spectrum that appears in the mentioned publication was obtained. A fraction of the data had been wrongly calibrated by the standard On Site Analysis Team (see Section 4.1.4.3) and had to be re-processed and re-analyzed. Besides, the MC sample and the Random Forest needed to analyze the UV-pass filter data corresponding to the ST.03.07 period were not ready by that time. As will be shown, the results obtained with 158 hours of “non-filter” data were significant enough not only to go ahead without the ∼30 remaining hours, but also to stop performing new observations. Now that the UV-pass filter data have been analyzed, there is a probably more important reason to still present the data as was done in the publication: UV-pass filter data come at the price of higher additional systematic uncertainties in the flux normalization, as shown in Section 4.2.5.5. Here I will first present the spectral measurements as they appear in the publication and later on compare them with those obtained after including the UV-pass filter data. The individual spectra of each subsample, along with other sanity checks performed to evaluate the robustness of the results, are shown in Section 5.3. Figure 5.1 shows the reconstructed SED obtained with Fermi (blue squares) and with the MAGIC telescopes (black dots), after applying the Fold forward folding method described in Section 4.1.6.5, using nominal and reduced-HV data only. The red solid line is the curve that best fits the MAGIC data assuming a power law with an exponential cut-off (EPWL):


Figure 5.1: SED measured by the MAGIC telescopes (black dots) and Fermi (blue squares). The red solid line shows the result of fitting the MAGIC spectrum with Eq. 5.1. The black solid line is the broken power-law fit applied to the Fermi spectrum.

dN/dE = N0 (E/E0)^{−Γ} exp(−E/Ec)     (5.1)

with a normalisation constant N0 = (1.1 ± 0.1stat ± 0.2sys) × 10^{−11} TeV^{−1} cm^{−2} s^{−1} at a normalisation energy E0 = 433 GeV, a spectral index Γ = 2.4 ± 0.1stat ± 0.2sys and a cut-off energy Ec = 3.5 (+1.6/−1.0)stat (+0.8/−0.9)sys TeV. The computation of the systematic errors of the spectral parameters and the construction of the statistical and systematic bands of the MAGIC spectrum in Figure 5.1 are discussed in Section 5.3.2. The probability of the EPWL fit is 0.42. The model was tested against the null hypothesis of no cut-off, which is described by a pure power law (PWL). The probability of the PWL fit is 6 × 10^{−4}. A likelihood ratio test between the two tested models favours the one that includes a cut-off at ∼3.5 TeV with 4.6σ significance. Figure 5.2 compares the fit residuals for the two tested models, PWL and EPWL. The residuals are here defined as N_ON^obs / N_ON^exp − 1, where N_ON^obs is the number of observed events (including background) in the ON region and N_ON^exp is the number of events predicted by the fit in the same region. All the bins in estimated energy which contain events are used in the fits, but only those with a 2σ-significance gamma-ray excess are shown as SED points in the upper panel of Figure 5.1. Another way to visualize the significance of the cut-off is shown in Figure 5.3. It shows the χ2 profile obtained when fitting the spectrum with Equation 5.1 while fixing Ec to different values. The minimum is obtained at ∼3.5 TeV.
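As a quick cross-check of the quoted significance, under the common Wilks'-theorem approximation the χ2 difference between the two nested fits (29.0/9 for the PWL and 8.1/8 for the EPWL, see Table 5.2), with one extra free parameter, translates directly into the ∼4.6σ preference for the cut-off:

```python
import numpy as np
from scipy.stats import chi2, norm

# chi^2 / ndf of the two nested fits (Table 5.2, sample without UV-pass filters)
chi2_pwl, ndf_pwl = 29.0, 9      # pure power law (null hypothesis)
chi2_epwl, ndf_epwl = 8.1, 8     # power law with exponential cut-off

delta = chi2_pwl - chi2_epwl                      # ~20.9, one extra parameter
p_value = chi2.sf(delta, df=ndf_pwl - ndf_epwl)   # Wilks' theorem
significance = norm.isf(p_value / 2.0)            # convert to Gaussian sigmas
print(f"Delta chi2 = {delta:.1f} -> {significance:.1f} sigma")   # ~4.6 sigma
```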


Figure 5.2: Relative fit residuals for the two tested models fitting the MAGIC spectrum: power-law with exponential cut-off (EPWL, upper panel) and power law (PWL, lower panel). The error bars are calculated such that they correspond to the total contribution of each estimated energy bin to the final likelihood of the fit.

Figure 5.3: Obtained χ2 when fitting the spectrum of Figure 5.1 using Equation 5.1 for different fixed values of the cut-off energy.


Figure 5.4: SED measured by the MAGIC telescopes before and after including UV-pass filter data. Dashed lines show the result of fitting the MAGIC spectrum with Eq. 5.1 including (red) and without including (black) UV-pass filter data. Light and dark gray shaded areas are the statistical and systematic bands of Figure 5.1, respectively.

For the Fermi-LAT analysis, a broken power-law function with normalisation N0 = (8.0 ± 0.4) × 10^{−12} MeV^{−1} cm^{−2} s^{−1} and indices Γ1 = 0.90 ± 0.08 and Γ2 = 2.37 ± 0.04 is obtained and shown in Figure 5.1 (blue solid squares). The dominant systematic uncertainties at low energies come from the modelling of the Galactic diffuse emission [28]. The spectral index of ∼2.37 measured with Fermi-LAT above a few GeV is compatible within errors with the one measured using the MAGIC telescopes.

Adding UV-pass filter data

Figure 5.4 shows the comparison between the MAGIC spectra obtained before and after adding the UV-pass filter samples. Both spectra are perfectly compatible. The new optimized parameters from the EPWL fit (Equation 5.1) to the full data sample are N0 = (1.1 ± 0.1stat ± 0.3sys) × 10^{−11} TeV^{−1} cm^{−2} s^{−1}, Γ = 2.3 ± 0.1stat ± 0.2sys and Ec = 2.9 (+1.3/−0.8)stat (+0.4/−0.7)sys TeV, always for a normalisation energy E0 = 433 GeV. The fit probability is 0.33. For a PWL, the fit probability is 1 × 10^{−4} and the likelihood ratio test favours the EPWL model with 4.9σ significance. The comparison of the fit residuals for the two tested models is shown in Figure 5.5.

5.3 Statistical and systematic uncertainties

Here I describe how the final MAGIC spectrum of Cas A, with its error bands, was obtained. I also include the results of several checks that were performed to test the robustness of the reported cut-off and the reliability of the measured spectrum.

5.3.1 Individual spectra

The spectrum of Figure 5.1 is the result of applying a forward folding method to a spectrum that is obtained after merging the different individual spectra corresponding to each of the NSB/hardware bins detailed in Table 5.1. Actually, those subsamples are also divided according to the epoch in which they were taken, since data from ST.03.06 and ST.03.07 are combined. Figures 5.6 and 5.7 show all these individual spectra, one for each NSB/hardware/MC-period bin. The result of merging all these spectra using Foam, before applying the Fold forward-folding algorithm, with and without the UV-pass filter samples, is shown in Figure 5.8. Both spectra are perfectly compatible.


Figure 5.5: Relative fit residuals for the two tested models fitting the MAGIC spectrum: power-law with exponential cut-off (EPWL, upper panel) and power law (PWL, lower panel) for the full sample (including UV-pass filter data).

5.3.2 Statistical and systematic uncertainties in the spectrum

The statistical band in Figure 5.1 is built as follows. We re-write Equation 5.1 as

dN/dE = N0' (E/E0)^{−Γ} exp(E0/Ec) exp(−E/Ec)     (5.2)

with N0' = N0 exp(−E0/Ec). After fitting the spectrum with this function, the limits of the statistical band at the normalization energy E0 are obtained from the statistical (MINOS) errors of the parameter N0'. The complete band is built by repeating the fit to the SED at different energies E0.

As already said in Appendix A.2, the systematic uncertainty due to a possible mismatch in the absolute energy scale between MAGIC data and MC simulations is constrained to be below 15%. By conservatively modifying the absolute calibration of the telescopes by ±15% and re-doing the whole analysis, it is possible to evaluate the effect of this systematic uncertainty on the estimated source spectrum. This does not produce a simple shift of the spectrum along the energy axis, but also changes its hardness. Even in the unlikely scenario in which, throughout the 158 h of observations, the average Cherenkov light yield was overestimated by 15% relative to the MC, by applying the corresponding correction the resulting spectrum is still better fit by an EPWL at the level of 3.1σ. In the equally unlikely scenario in which the light yield was underestimated, the EPWL is preferred over the PWL at the 6.5σ level.
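A minimal numerical check, using the best-fit values quoted in Section 5.2.1 (energies in GeV), that the reparametrisation of Equation 5.2 is identical to Equation 5.1, so that the fitted N0' is directly the differential flux at the chosen E0:

```python
import numpy as np

def epwl(E, N0, E0, gamma, Ec):
    # Eq. 5.1: power law with exponential cut-off
    return N0 * (E / E0) ** (-gamma) * np.exp(-E / Ec)

def epwl_reparam(E, N0p, E0, gamma, Ec):
    # Eq. 5.2: same function with N0' = N0 * exp(-E0/Ec) as the free parameter
    return N0p * (E / E0) ** (-gamma) * np.exp(E0 / Ec) * np.exp(-E / Ec)

E = np.logspace(2, 4, 25)                          # 100 GeV .. 10 TeV
N0, E0, gamma, Ec = 1.1e-11, 433.0, 2.4, 3500.0    # best-fit values (TeV^-1 cm^-2 s^-1, GeV)
assert np.allclose(epwl(E, N0, E0, gamma, Ec),
                   epwl_reparam(E, N0 * np.exp(-E0 / Ec), E0, gamma, Ec))
```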


Figure 5.6: Individual Cas A spectra obtained with flute for each subsample of Table 5.1 (nominal HV only). Samples were also divided according to the epoch in which they were taken to derive these spectra (different epochs require different MC samples for the analysis). As a reference, the fit result from Equation 5.1 is shown (red dashed line).


Figure 5.7: Individual Cas A spectra obtained with flute for each subsample of Table 5.1 (reduced HV and UV-pass filters only). Samples were also divided according to the epoch in which they were taken to derive these spectra. As a reference, the fit result from Equation 5.1 is shown (red dashed line).


Figure 5.8: SED obtained with foam after merging all the individual spectra of Figures 5.6 and 5.7, including (red squares) and without including (black dots) the UV-pass filter samples.

The systematic band is delimited by the best-fit functions obtained, using Equation 5.1, in these two extreme cases in which the absolute calibration of the telescopes was modified by ±15%. The systematic uncertainties in the flux normalization N0 and spectral index Γ are retrieved from the study described in Section 4.2.5.5. The systematic errors in the cut-off energy were estimated from the best-fit values of Ec obtained when modifying the absolute light scale by ±15%.

5.3.3 Fit Results

The significance of the reported cut-off in the TeV gamma-ray spectrum of Cas A is the result of a likelihood ratio test between the two models employed to fit the spectrum: a model including a cut-off (EPWL) and the null hypothesis of no cut-off (PWL). The fit results before and after including the UV-pass filter data are shown in Tables 5.2 and 5.3, respectively. Tables 5.4 and 5.5 show the results of fitting the spectra obtained after redoing the analysis with the absolute light scale modified by ±15%. Finally, Table 5.6 shows the results of the fits applied to the original data sample (without the UV-pass filter data) when using 5 OFF regions for the background estimation (instead of 3, as was done for every other spectrum shown in this thesis).

5.3.4 Safety check with Crab Nebula samples

If the Random Forest is not properly trained (for instance, if the OFF data sample is not accurately chosen), or if the MC sample used for the collection-area computation is not correctly selected, the obtained spectrum would be unreliable. A test that is commonly performed in any IACT analysis consists of obtaining the spectrum of a rather stable and well-known source, such as the Crab Nebula, using data taken contemporaneously with the original observations and obtained under the same conditions (zenith range, sky brightness, atmospheric transmission, ...), analyzed with the same Random Forest and using the same MC test sample.

Fit Parameter              PWL                    EPWL
N0 [TeV^-1 cm^-2 s^-1]     (9.2 +0.3/-0.3) e-12   (1.1 +0.1/-0.1) e-11
Γ                          -2.67 +0.03/-0.03      -2.38 +0.09/-0.08
Ec [TeV]                   -                      3.5 +1.6/-1.0
χ2/ndf                     29.0/9                 8.1/8
Prob.                      6 e-4                  0.42

Table 5.2: Fit results for the EPWL and PWL models after fitting the spectrum obtained without including the UV-pass filter data. Uncertainties from MINOS.

Fit Parameter              PWL                    EPWL
N0 [TeV^-1 cm^-2 s^-1]     (8.7 +0.3/-0.3) e-12   (1.1 +0.1/-0.1) e-11
Γ                          -2.66 +0.03/-0.03      -2.33 +0.10/-0.09
Ec [TeV]                   -                      2.9 +1.3/-0.8
χ2/ndf                     33.5/9                 9.2/8
Prob.                      1 e-4                  0.33

Table 5.3: Fit results for the EPWL and PWL models after fitting the spectrum obtained including the UV-pass filter data. Uncertainties from MINOS.

Fit Parameter              PWL                EPWL
N0 [TeV^-1 cm^-2 s^-1]     (7.2 ± 0.2) e-12   (1.0 ± 0.1) e-11
Γ                          -2.80 ± 0.03       -2.60 ± 0.09
Ec [TeV]                   -                  4.3 ± 2.0
χ2/ndf                     15.3/9             5.6/8
Prob.                      0.08               0.69

Table 5.4: Fit results for the EPWL and PWL models after fitting the spectrum obtained without including the UV-pass filter data, after scaling the absolute calibration in the MC by 0.85. The EPWL model is preferred at the 3.1σ level. Uncertainties from Migrad.

Fit Parameter              PWL                EPWL
N0 [TeV^-1 cm^-2 s^-1]     (8.1 ± 0.3) e-12   (1.2 ± 0.1) e-11
Γ                          -2.55 ± 0.03       -2.13 ± 0.09
Ec [TeV]                   -                  2.6 ± 0.7
χ2/ndf                     47.3/9             4.9/8
Prob.                      3.5 e-7            0.69

Table 5.5: Fit results for the EPWL and PWL models after fitting the spectrum obtained without including the UV-pass filter data, after scaling the absolute calibration in the MC by 1.15. The EPWL model is preferred at the 6.5σ level. Uncertainties from Migrad.

Fit Parameter              PWL                EPWL
N0 [TeV^-1 cm^-2 s^-1]     (7.5 ± 0.2) e-12   (0.9 ± 0.1) e-11
Γ                          -2.64 ± 0.03       -2.34 ± 0.08
Ec [TeV]                   -                  3.5 ± 1.1
χ2/ndf                     32.5/9             7.3/8
Prob.                      2 e-4              0.50

Table 5.6: Fit results for the EPWL and PWL models after fitting the spectrum obtained without including the UV-pass filter data, using 5 OFF regions for the background estimation. The EPWL model is preferred at the 5.0σ level. Uncertainties from Migrad.

Sky Brightness [NSBDark]      Hardware Settings    Time [h]
0-2 (including Dark)          nominal HV           3.7
2-3                           nominal HV           2.2
3-5                           nominal HV           2.2
5-8                           nominal HV           2.0
8-12                          nominal HV           -
0-3                           reduced HV           -
3-5                           reduced HV           0.7
5-8                           reduced HV           1.1
8-12                          reduced HV           0.9
Total (all configurations)                         12.8

Table 5.7: Crab Nebula sample used for the tests: effective observation time under the different hardware and sky brightness conditions of the different subsamples. The relative weight of each subsample should be compared with that of Table 5.1 (without the UV-pass filter data).

I collected data from several Crab Nebula observations taken under different NSB/hardware conditions and built a sample that has nearly the same relative contribution from each NSB/hardware bin as the original Cas A sample. This sample is shown in Table 5.7 and its effective observation times should be compared to those in Table 5.1. The obtained individual spectra corresponding to each NSB/hardware/MC-period bin are shown in Figure 5.9, and the result of merging these spectra using Foam in Figure 5.10. The obtained spectrum is compatible with the reference Crab Nebula spectrum measured by MAGIC and reported in [42]. No artificial cut-off appears as a result of this analysis.


Figure 5.9: Individual Crab Nebula spectra obtained with flute for each subsample of Table 5.7. Samples were also divided according to the MC period in which they were taken to derive these spectra. The Crab Nebula spectrum measured by MAGIC in [42] is given as reference (red dashed line).


Figure 5.10: SED obtained with foam after merging all the individual spectra of Figure 5.9. The Crab Nebula spectrum measured by MAGIC in [42] is given as reference (red dashed line).

5.4 Interpretation

Figure 5.1 holds the most precise spectrum of Cas A to date in the gamma-ray band. At VHE, the measurements were extended up to ∼10 TeV. MAGIC and Fermi also connect smoothly at ∼100 GeV, filling the previously “unexplored” range between ∼100 and ∼400 GeV (see Figure 3.4). These new measurements, obtained over a wider energy range and with lower statistical uncertainties, demand an update of the models that aim to explain the observed emission. Needless to say, the picture we are facing appears completely different after having found evidence of a cut-off at ∼3.5 TeV. We attempted to model the emission using naima [199], assuming that the population of particles producing such radiation was made of either electrons or protons, in both cases described by a power law with an exponential cut-off as in Equation 2.1 (see Section 2.2.1 and Appendix D). For all the calculations the distance to the source was fixed at 3.4 kpc.

5.4.1 Leptonic model

We first considered the possibility that the gamma-ray emission originates from an electron population, producing Bremsstrahlung and Inverse Compton radiation in the gamma-ray range, and synchrotron radiation at lower energies. The aim was to see if a pure leptonic model could explain the MAGIC observations while being compatible with the observations at other wavelengths. The photon fields that contribute to the Inverse Compton component are the ubiquitous 2.7 K cosmic microwave background (CMB) and the far infrared (FIR) field, which in Cas A is particularly large, ∼2 eV/cm3 at 100 K. Once the photon fields are fixed, we can obtain the highest possible density of electrons N0 allowed by the VHE flux. For the obtained value of N0 we can then constrain the maximum magnetic field B for which the synchrotron radiation produced by the derived population does not exceed the radio and X-ray measurements. This constraint is due to the fact that, as reported in Section 3.3, several emission regions, likely associated with different particle populations, were identified at those wavelengths. The population that mainly contributes to the observed gamma-ray spectrum might not be the same one that produces the bulk of the radio emission, but it certainly cannot produce a higher radio flux than what has already been measured. The spectral measurements at radio, X-ray and VHE also constrain the spectral index and the cut-off energy of the electron population. The multi-wavelength SED is shown in Fig. 5.11. The MAGIC points can be described by an electron population with an amplitude at 1 TeV of 2 × 10^34 eV^-1, a spectral index of 2.4 and a cut-off energy at 8 TeV, up-scattering the FIR (brown dash-dotted line) and the CMB photons (green dashed line). The comparison with the low-energy part of the SED constrains the magnetic field to B ≲ 180 µG. The resulting emission from the leptonic model is shown in Fig. 5.11. The same population of electrons would unavoidably produce Bremsstrahlung radiation below a few GeV (see the green dotted line in Fig. 5.11)1. Still within the frame of the leptonic model, the emission observed with Fermi-LAT at the lowest energies constrains the target proton density to be below n ∼ 1 cm^-3. This value is lower than the 10 cm^-3 used in [198] and [175], which is the estimated density behind the blast wave according to [141].

But it is still compatible with the values for the smooth ejecta density estimated and observed over the remnant, ranging from 0.1 to 10 cm^-3 [155], and with the stellar wind densities considered in [201]. A relatively low magnetic field and a large photon field could be fulfilled in a reverse shock evolving in a thin and clumpy ejecta medium, which provides a moderate amplification of the magnetic field and large photon fields in the clumps that are observed as optical knots. The model is generally compatible with the X-ray points and with the MAGIC spectrum above a few TeV, and it is consistent with the radio measurements, but it fails to reproduce the gamma-ray spectrum between 1 GeV and 1 TeV, being a factor 2-3 below the measured LAT spectrum. In addition, to accommodate a magnetic field of the order of ∼1 mG, as reported in [191], the amplitude of the electron spectrum would need to be decreased by at least a factor of 100, rendering a negligible Inverse Compton contribution at the highest energies. A pure leptonic model cannot explain the multiwavelength measurements assuming only one population of electrons, a possibility that had already been practically discarded in [175, 201].

1 Note that the structure in the spectral shape around 2 MeV is due to the transition between the two asymptotic regimes described in [61], used in the naima code.
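A minimal naima sketch of the leptonic model described above, using the parameters quoted in the text (electron amplitude 2 × 10^34 eV^-1 at 1 TeV, index 2.4, cut-off at 8 TeV, FIR field of 2 eV/cm3 at 100 K, B = 180 µG, distance 3.4 kpc); the exact setup used in [28] (e.g. the Bremsstrahlung target density or the energy range of the particle distribution) may differ.

```python
import numpy as np
import astropy.units as u
from naima.models import ExponentialCutoffPowerLaw, InverseCompton, Synchrotron

# Electron spectrum with the parameters quoted in the text
electrons = ExponentialCutoffPowerLaw(
    amplitude=2e34 / u.eV, e_0=1 * u.TeV, alpha=2.4, e_cutoff=8 * u.TeV)

# Inverse Compton on the CMB and on the strong FIR field of Cas A
ic = InverseCompton(
    electrons,
    seed_photon_fields=["CMB", ["FIR", 100 * u.K, 2 * u.eV / u.cm**3]])

# Synchrotron emission for the maximum magnetic field allowed by the radio/X-ray data
sync = Synchrotron(electrons, B=180 * u.uG)

# Spectral energy distributions at the distance of Cas A
photon_energies = np.logspace(-7, 14, 150) * u.eV
distance = 3.4 * u.kpc
sed_ic = ic.sed(photon_energies, distance=distance)
sed_sync = sync.sed(photon_energies, distance=distance)
```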

5.4.2 Hadronic model

The next goal was to test whether a pure hadronic model could explain the GeV-TeV emission and, if so, what the maximum energy of the accelerated protons would be. To compute the expected emission from pion decay we assumed a target density of 10 cm^-3, as estimated in [141]2. The proton spectrum is best fit with a hard index of 2.21 and an exponential cut-off energy of 12 TeV, which implies a modest acceleration of cosmic rays to VHE, well below the energy needed to explain the knee. The cut-off energy found is indeed several tens of TeV lower than what was assumed in the previous models of [175] and [201]. The proton energy above 1 TeV is 5.1 × 10^48 erg, which is only ∼0.2% of the estimated explosion kinetic energy of Esn = 2 × 10^51 erg [141]. The total energy stored in protons above 100 MeV amounts to 9.9 × 10^49 erg. The flat spectral index is in agreement with the standard theory of diffusive shock acceleration, but the low cut-off energy implies that Cas A is extremely inefficient in the acceleration of cosmic rays at the present moment. The characteristic maximum energy of these accelerated protons can be expressed, for standard parallel shock acceleration efficiency (see e.g. [140]), as:

E^p_max ≃ 450 (B / 1 mG) (t0 / 100 yr) (us / 3000 km/s)^2 η^{-1} TeV,     (5.3)

where us ∼ 5 × 10^3 km/s is the speed of the forward shock, t0 ∼ 330 yr is the age of Cas A and η ≥ 1 is the acceleration efficiency (the ratio of the mean free path of a particle to its gyroradius, ∝ z^{-1}, defined in Section 3.1), which is ∼1 in the Bohm diffusion regime. Even assuming a magnetic field as low as a few tens of µG, a poor acceleration efficiency η ≳ 10 has to be invoked to accommodate the low cut-off energy found. Alternatively, Cas A may also be located in a very diffusive region of the Galaxy, resulting in a very fast escape of protons of TeV and higher energies.

2 As can be seen in Figure 2.2, the target density shifts the radiation curve up and down, affecting the normalization N0, but it has no effect on the maximum energy of the relativistic protons, which only depends on the properties of the accelerator.
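A back-of-the-envelope evaluation of Eq. 5.3 with the numbers discussed above (the 30 µG field is an assumed illustrative value for an unamplified field of "a few tens of µG"):

```python
# Evaluate Eq. 5.3 for Cas A with illustrative values from the text
B_mG = 0.03                       # assumed magnetic field of ~30 microgauss, in mG
t0_in_100yr = 3.3                 # age of Cas A, ~330 yr
us_in_3000kms = 5000.0 / 3000.0   # forward-shock speed ~5e3 km/s
eta = 10.0                        # acceleration efficiency (eta = 1 is the Bohm limit)

E_max_TeV = 450.0 * B_mG * t0_in_100yr * us_in_3000kms**2 / eta
print(f"E_max ~ {E_max_TeV:.0f} TeV")   # ~12 TeV, of the order of the proton cut-off found
```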


Figure 5.11: Multi-wavelength SED of Cas A. The different lines show the result of fitting the measured energy fluxes using naima and assuming a leptonic or a hadronic origin of the GeV and TeV emission. The radio emission is displayed as purple dots [142, 153, 46, 161, 78, 121, 167], soft SUZAKU X-rays are marked in red [149] and hard INTEGRAL X-rays in blue [195]. The LAT points are shown in cyan and the MAGIC ones in green.

Conclusions

From Bruno Rossi’s book I got the first lines of this thesis and the inspiration I needed to write the next ones. Now that we are approaching the end, I wanted to somehow show my gratitude by summoning him again. The last chapter of his book starts as follows [174]:

“Half a century after the discovery of cosmic rays the problem of their origin is still unsolved. We do not know for certain where cosmic rays come from. We do not know for certain how they acquire tremendous energies.”

Another half century after Rossi wrote those words, the problem remains unsolved. But although the last two sentences are still perfectly up to date, there has been some progress in the field since then. Especially since the 1980s, the SNR hypothesis gained popularity, injecting a boost of optimism into the community: gamma-ray observations were going to reveal which remnants were responsible for the acceleration of the galactic cosmic rays. Until now, Cas A was one of the most promising PeVatron candidates among the SNRs known so far. The work I have done during my thesis has provided us with the most precise measurement of the Cas A spectrum at VHE to date. This measurement allowed us to constrain the characteristics of the particle acceleration in this remnant, adding one more piece to the puzzle that, once completed, we expect will finally solve the mystery of the cosmic-ray origin. The results obtained are the outcome of a long process that started with the optimization of moonlight observations with MAGIC. Without that, it would have been very difficult to obtain the amount of observation hours needed to accurately evaluate the Cas A spectrum at TeV energies. And at the same time, the results obtained for Cas A can be seen as proof of the importance of exploiting the capabilities of the telescopes to the maximum by increasing their duty cycle. The optimized moonlight analysis that Pierre Colin and I developed is being actively used by the rest of the MAGIC collaboration for different projects. The methods and the performance evaluation that I described in Chapter 4 and that were published in [29] will be a reference not only for ongoing and future MAGIC campaigns, but also for the times of CTA. For the first time, the performance under moonlight of an IACT system was studied in detail with an analysis dedicated to such observations, including moonlight-adapted MC simulations. This dedicated analysis differs from the standard analysis chain in basically three aspects: the use of higher cleaning levels, the use of size cuts, and the modification of the MAGIC MC simulations by introducing noise to mimic the effect of the NSB in the images (or the characteristics of the filters in the case of UV-pass filter observations). Probably one of the most remarkable characteristics of this analysis is that, despite its simplicity, the accuracy that can be reached is very reasonable.

109 110 CONCLUSIONS

The main effect of moonlight on the telescope performance is an increase in the energy threshold, mainly caused by the different hardware and software modifications that have to be adopted to deal with the increased noise (higher trigger thresholds, image cleaning levels...). The energy threshold increases with the NSB level: for zenith angles below 30◦ it goes from ∼70 GeV (at the reconstruction level) under dark conditions up to ∼300 GeV in the brightest scenario studied (15-30 × NSBDark). No significant worsening of the angular resolution above 300 GeV was observed. Any degradation of the sensitivity is constrained to be below 10% when observing with nominal HV under illumination levels < 8 × NSBDark. The sensitivity degrades by 15 to 30% when observing with reduced HV and by 60 to 80% when observing with UV-pass filters. This can be seen as an additional motivation to equip future IACT cameras with SiPMs, which in principle do not require any hardware intervention to operate under moonlight. With the dedicated moonlight analysis we could reconstruct the Crab Nebula spectrum in all the NSB/hardware bins considered, obtaining a flux that is compatible within 10%, 15% and 30% with the one obtained under dark conditions for nominal HV, reduced HV and UV-pass filter observations, respectively. The systematic uncertainty on the flux normalization, 11% for standard dark observations, increases to 15% for nominal HV moonlight observations with NSB < 8 × NSBDark, 19% for reduced HV observations between 5 and 18 × NSBDark and 30% for UV-pass filter observations between 8 and 30 × NSBDark. No significant additional systematic error on the spectral slope was found, and the overall uncertainty is still ±0.15 as reported in [42]. These results are particularly important in the context of this thesis because they prove that the spectral characteristics of presumably steady sources such as the Crab Nebula and Cas A can be safely studied with moonlight observations, keeping the systematic uncertainties under control. With almost 200 hours of observations with the MAGIC telescopes, almost 80% of them during moonlight time, we were able to find for the first time 4.9σ evidence of a cut-off at ∼3.5 TeV in the Cas A spectrum. This measurement questioned the expectations from previous works that aimed to model the gamma-ray emission in the remnant. Although some works had already suggested that this remnant was not efficient enough to accelerate protons up to PeV energies [68], none of the proposed models, including the most recent ones at the time we performed the observations [175, 201], predicted a cut-off at such a low energy. Actually, all the speculation concerning the acceleration of the highest-energy particles in Cas A was limited by the statistical and systematic uncertainties of the previous spectral measurements at TeV energies. The results reported here offer a new scenario for those aiming to study particle acceleration in Cas A. Together with Daniel Galindo, Emma de Oña Wilhelmi, Juan Cortina and Abelardo Moralejo, we used naima to model the observed electromagnetic emission, assuming it was produced either by electrons or by protons. We found that, as expected, a purely leptonic model cannot explain the broadband spectrum of Cas A. A leptonic population is undoubtedly necessary to explain the emission at radio and X-ray energies, but the same population cannot account for the observed flux at HE and VHE.
Actually, a leptonic population could only contribute non-negligibly (and even then only partially) to the TeV emission if it were located in a region of low magnetic field, such as the reverse shock. However, the models we tested strongly suggest that the bulk of the HE and VHE gamma rays must be of hadronic origin. Cas A is most likely accelerating cosmic rays, as expected, but to a rather low energy of a few TeV. Even if there were a non-negligible leptonic contribution to the TeV emission, the main conclusion of this work would remain unchanged: at its present age Cas A is not a PeVatron, as it cannot accelerate protons up to the energies of the knee. A detailed study of the cut-off shape, and especially of the morphology of the source, can be the key to understanding the reason behind this low acceleration efficiency. Despite their limited angular resolution and sensitivity, current IACT telescopes may still be able to provide a non-negligible contribution. Unfortunately, the size of the source is comparable to the PSF of these instruments. However, they could still be useful to constrain the extension of the gamma-ray emission or to exclude some of the acceleration regions identified in the X-ray band as being responsible for such emission. Of course, with the better angular resolution of CTA we will be able to identify with much better precision where in the remnant the accelerated particles emitting gamma rays are located. Having found evidence of a cut-off in the VHE spectrum of Cas A, another promising candidate is discarded from the PeVatron quest. With only a few candidates still remaining in the race, identifying new young SNRs is essential if we want to prove that these systems are the accelerators of the galactic cosmic rays. At the same time, as the known SNRs fail to pass the PeVatron test, other hitherto less popular accelerators, such as the interacting winds of massive stars or binary systems, may gain strength. Perhaps with CTA, or with the improvement of the neutrino observatories, the mystery of the cosmic-ray origin will finally be revealed. But at least for a few more years Rossi’s words will still sound up to date.

Appendix A

Performance under moonlight: auxiliary analysis

Some auxiliary tests and plots related to the study of the performance of MAGIC under moonlight are shown here.

A.1 Hillas distributions

Figures A.1 to A.4 show the comparison between data and MC image length and width distributions, before and after applying the optimized size cuts of Table 4.3. The effect of moonlight and of using different hardware configurations is not as pronounced as in the image size distributions shown in Figures 4.25 and 4.26.

A.2 15-30 ×NSBDark UV-pass filter spectrum

In Section 4.2.5.2 it was pointed out that in the Crab spectrum obtained with the brightest NSB bin (15-30 ×NSBDark with UV-pass filters) of Figure 4.31, the ratio of the flux to the dark one went slightly above ∼30% at energies between about 400 and 800 GeV. Since this was the only bin in which such a large difference was found, I carried out a few tests to try to find the origin of this disagreement. I did not arrive at any strong conclusion from these tests, but I present them here as they could be of interest for future studies. If this apparent shift in the spectrum is properly understood and corrected, it might be possible to reduce the systematic uncertainties in the flux normalization of UV-pass filter data.

One possibility is a miscalibration of the energy scale between data and MC. In MAGIC the uncertainty in the energy scale is constrained to be within 15% [42]. Figure A.5 shows the reconstructed spectra for the NSB bin under study when artificially shifting the light scale in the MC test sample by -10%, -5%, 5% and 10%. The effect of this variation is important especially close to the energy threshold, and maybe also at the highest energies, but it fails to explain the overestimation of the flux at intermediate energies.

In Section 4.1.6.1 it was explained that the collection area has a small dependence on the azimuth angle of the observations that can often be averaged in a single azimuth bin, as was done in all the spectra shown so far in this thesis. To investigate whether


[Figure A.1 panels (MAGIC 1, image length distributions): Nominal HV, Dark (NSB=1); Nominal HV, NSB 3-5; Reduced HV, NSB 5-8; UV-pass Filters, NSB 8-15.]

Figure A.1: Comparison between MAGIC 1 data (red) and MC gamma-ray (blue) image length distributions for different NSB/hardware bins. Data distributions are composed of excess events within a 0.14◦ circle around the Crab Nebula position. MC distributions were simulated with the same energy distribution as the Crab Nebula spectrum reported in [42]. The dashed and solid lines show the distributions before and after applying the optimized size cuts of Table 4.3, respectively. Distributions with and without size cuts were normalized to different values for better visualization.


Figure A.2: Same as Figure A.1, but for MAGIC 2.

[Figure A.3 panels (MAGIC 1, image width distributions): Nominal HV, Dark (NSB=1); Nominal HV, NSB 3-5; Reduced HV, NSB 5-8; UV-pass Filters, NSB 8-15.]

Figure A.3: Comparison between MAGIC 1 data (red) and MC gamma-ray (blue) image width distributions for different NSB/hardware bins. Data distributions are composed of excess events within a 0.14◦ circle around the Crab Nebula position. MC distributions were simulated with the same energy distribution as the Crab Nebula spectrum reported in [42]. The dashed and solid lines show the distributions before and after applying the optimized size cuts of Table 4.3, respectively. Distributions with and without size cuts were normalized to different values for better visualization.


Figure A.4: Same as Figure A.3, but for MAGIC 2.


Figure A.5: Spectral energy distribution of the Crab Nebula obtained for UV-pass filter data with 15-30 ×NSBDark, modifying the light scale in the MC sample. For comparison, the result obtained with the dark sample using the standard analysis in this work (black dots) and the one previously published by MAGIC (red solid line, [42]) are shown in every panel. The bottom sub-panels show the ratio of the fluxes measured under moonlight to the flux measured under dark conditions.

this effect could be appreciable in the brightest NSB bin, I obtained spectra using different numbers of azimuth bins. The results are shown in Figure A.6. The spectrum obtained with 4 azimuth bins is closer to the dark one than the others, but the fact that the reconstructed flux is again higher when, for instance, 6 azimuth bins are considered has to be understood before drawing any conclusion.


Figure A.6: Spectral energy distribution of the Crab Nebula obtained for UV-pass filter data with 15-30 ×NSBDark, using different numbers of azimuth bins in the estimation of the collection area. For comparison, the result obtained with the dark sample using the standard analysis in this work (black dots) and the one previously published by MAGIC (red solid line, [42]) are shown in every panel. The bottom sub-panels show the ratio of the fluxes measured under moonlight to the flux measured under dark conditions.

Appendix B

The Light-Trap detector

B.1 Silicon photomultipliers: a new era begins?

During this thesis I insisted a few times on how gamma-ray observations at multi-TeV energies can be the key to finally solving the puzzle of the galactic cosmic-ray origin. The future of PeVatron searches will require large IACT arrays involving tens of telescopes covering a large detection area to achieve a much better sensitivity at the highest energies. It will also need dedicated surveys to identify new candidates for accelerating cosmic rays to the energies of the knee, especially if we find that the few we know today are running short of power, as in the case of Cas A. Future IACT instruments require the construction of many more telescopes [12]. Large cameras with a wide field of view will also be needed, especially for surveys [86]. Of course, PeVatron searches are not the only goal of future VHE astronomy. The capability to react to and understand transient phenomena, or the sensitivity for dark matter searches, for instance, should also experience a significant boost. However, the cost of building more and larger cameras, needing many thousands of pixels, rapidly becomes prohibitive if PMTs at over 100 €/pixel are used.

Future IACT telescopes may resort to cameras equipped with SiPMs: the trend of recent years shows their cost decreasing while their performance improves. In the IACT context, SiPMs have the advantage of being robust devices that do not suffer ageing when exposed to bright environments, as during bright-Moon observations. One of the main conclusions of Chapter 4 was that hardware interventions such as reducing the gain of the PMTs or using UV-pass filters seemed to have a larger impact on the sensitivity of the instruments than the noise itself. The use of SiPMs in Cherenkov astronomy therefore offers the possibility to reduce the cost of building telescopes and to significantly increase the duty cycle, allowing operations under any Moon phase. Other advantages of SiPMs with respect to PMTs are that they do not require high-voltage operation (typical operational bias voltages go from ∼25 to ∼100 V), that they are not affected by magnetic fields and that they can be easily calibrated thanks to their excellent single-photoelectron resolution.

However, SiPMs are still not perfectly suitable for VHE astronomy. Typical commercial devices are not available in sizes larger than 6×6 mm2, which can be a problem when trying to build large cameras, not only due to the cost, but also because of the complexity of the readout. Building larger detectors does not seem to be the best solution: impedance and dark count rate (thermal noise, see Section B.2) significantly

increase with size. A different approach is to build pixels made of several SiPMs (∼10) tiled together, where the signal output is the sum of the individual signals of each SiPM [120]. Some drawbacks of this approach are that the gain of the individual SiPMs must be kept very well under control (ideally all the tiled devices should have the same gain) and that the noise of all the devices adds up too, which can be particularly disturbing during the calibration process.

Typical SiPMs also have the disadvantage of offering a photo-detection efficiency (PDE) that is not so good below 400 nm, where most of the Cherenkov light is emitted, but too good at longer wavelengths, where the contribution from the NSB is much larger. This may however change with the current development of SiPMs with enhanced sensitivity in the UV band [48].

B.2 Silicon photomultipliers: a brief technical review

B.2.1 Operational principle

A photon travelling in a semiconductor like silicon may interact with the medium through the photoelectric effect, transferring its energy to a valence electron. That electron can then jump to the conduction band, creating an electron-hole pair. Semiconductor detectors rely on the existence of a depletion region, a region free of charge carriers. In the case of silicon detectors, this region appears around a p-n junction. If an electric field is applied within the depletion region, the electron and hole created by the incoming photon will be accelerated by the field and will travel in opposite directions. If the field intensity is high enough, the charge carriers will have enough energy to produce secondary electron-hole pairs that will also be accelerated by the electric field and produce new pairs. A single photon interacting in these detectors can then trigger a self-perpetuating ionization cascade. This type of detector is known as an avalanche photodiode (APD): a photodiode supplied with a reverse bias that generates the electric field necessary to draw the charges away from the depletion region, resulting in a net current of electrons through the n-side and a current of holes through the p-side of the device (Figure B.1).

The detection process in silicon detectors is similar to what happens in gas detectors: a photon frees some charge inside the detector, the charge is driven by an electric field and an output pulse or current is then measured. Depending on the intensity of the field, the cascade effect that produces secondary charges (charge amplification) may or may not develop. A gas detector in which the electric field is so strong that a single ionization is enough to produce a discharge that ends with the gas completely ionized is known as a Geiger-Müller tube. The output signal of such detectors is always the same, no matter how many photons are interacting within the detector. The silicon detectors we are considering work in the same regime (Geiger mode). Above a given breakdown voltage Vbr, silicon becomes conductive and is capable of amplifying the initial charge produced by a photoelectron into a macroscopic charge. Once an electron-hole pair is created, a cascade is produced which would develop indefinitely unless it is artificially quenched, which is achieved with a series resistor that limits the current during the breakdown of the diode. A single silicon photodiode can be schematized as a diode

Figure B.1: Left: Representation of an APD operating in Geiger mode. Figure from [178]. Right: Equivalent circuit of a single avalanche photodiode.

followed by a resistor RQ, as in Figure B.1. If no avalanche occurs within the detector, there is no current. An avalanche occurs if the potential across the diode, Vd, is higher than Vbr. If the circuit is supplied with a fixed bias voltage Vbias:

Vbias = Vd + RQ · I    (B.1)

But a current I will flow only while Vd > Vbr which means

I > 0 ⇔ Vbias − RQ · I > Vbr    (B.2)
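For illustration (with assumed, order-of-magnitude values not taken from any datasheet): with Vbias = 24.5 V, Vbr = 22.5 V and RQ = 200 kΩ, the breakdown condition can only be satisfied while I < (Vbias − Vbr)/RQ = 10 µA; as the avalanche current approaches this value, the voltage across the diode drops to Vbr and the avalanche is quenched.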

When a trigger occurs, the avalanche starts developing and the current going through RQ sharply rises until it is high enough that the breakdown condition is no longer fulfilled. The avalanche is then quenched and the current slowly drops until the silicon recovers its neutral state. As a result, a pulse like the one shown in Figure B.2 is produced. This pulse is independent of the number of photons interacting within the detector, and hence a single silicon photodiode cannot be used to measure an instantaneous flux. A SiPM is built as a dense array of identical, small, electrically and optically isolated Geiger-mode photodiodes (Figure B.3). Each array element is known as a "microcell" and typical SiPMs have between 100 and 1000 of them per mm2. All microcells produce nearly the same pulse and are built small enough to minimize the probability of two photons reaching the same cell. If we neglect this effect, photons arriving simultaneously at the detector will interact in different cells. The output from each cell is summed, giving rise to a pulse whose intensity is proportional to the number of triggered cells. In Figure B.4 we can see how events of 1 to 5 photoelectrons appear on an oscilloscope. Thanks to this almost quantized output, SiPMs have a unique single-photoelectron resolution, a feature that makes them extremely suitable for measuring low-level fluxes. An example of a single-photoelectron spectrum is shown in Figure B.5.

B.2.2 Pulse shape

Typical SiPM pulses are shown in Figures B.2 and B.4. The rise time depends on the drift time of the charge carriers during the avalanche process [177], on the total


Figure B.2: Example of a pulse detected with a KETEK PM3375T SiPM recorded at the laboratory.

Figure B.3: A SiPM is an array of several APDs (microcells) where the output is summed. Figure from [178].

Figure B.4: Image from an oscilloscope showing pulses corresponding to 1 to 5 photoelectron events. Figure from [178].


Figure B.5: Example of a single-photoelectron plot for a Hamamatsu SiPM, obtained from measurements performed at the laboratory.


Figure B.6: Dark current as a function of the bias voltage for a KETEK PM3375T SiPM, as measured at the laboratory.

area of the device and on the capacitance that results from the lines that connect all the microcells [178]. Rise times from 0.1 to 1 ns can be achieved with modern sensors. The decay or recovery time depends on the microcell reset period, which is determined by the equivalent microcell capacitance C and the quenching resistor RQ as τdecay = C · RQ. As C increases with the cell size, shorter decay times are obtained by building devices with smaller microcells. The quenching resistor can be tuned to produce decay times ranging from ∼1 to ∼100 ns [177]. The full width at half maximum (FWHM) of the pulses is a very important parameter for Cherenkov astronomy applications. As happens with the rise time, it mainly depends on the size of the device and on the length of the lines connecting the microcells. For 1×1 mm2 SiPMs the FWHM can be smaller than 1 ns. In 6×6 mm2 devices it can reach ∼4 ns. The time resolution is typically below 100 ps, which comfortably fulfills the requirements of Cherenkov astronomy.
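As an illustrative, assumed example (typical orders of magnitude, not the values of any specific device): a microcell with an equivalent capacitance of ∼50 fF and a quenching resistor of ∼200 kΩ gives τdecay = C · RQ ≈ 10 ns, within the range quoted above.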

B.2.3 Over-voltage, gain and PDE

The breakdown voltage Vbr of a SiPM can be easily spotted as a sudden increase of the current in an I-V plot, as in Figure B.6. SiPMs are typically operated at a few volts above the breakdown voltage. The difference between the operational bias voltage Vbias and Vbr is known as the "over-voltage" (∆V) and is an essential concept in these detectors because it affects parameters like the gain, the detection efficiency or the noise. The gain G is defined as the ratio of the output charge produced during a single-photoelectron event to the charge of an electron [177]. It is related to the equivalent

Figure B.7: PDE relative to the peak PDE at ∼420 nm for a KETEK PM3375T SiPM operated at 15% ∆V. The absolute PDE at 420 nm is above 62% for this device at this over-voltage, according to the manufacturer.

microcell capacitance C as

\[ G = \frac{C \cdot \Delta V}{q_e} \qquad \mathrm{(B.3)} \]

It depends linearly on the applied over-voltage: the amount of charge created in an avalanche is proportional to the intensity of the electric field. C depends, for instance, on the microcell size. Typical gains of these detectors are of the order of 10^6-10^7. The gain can be measured from a single-photoelectron plot like the one in Figure B.5. Formally it should be done with the charge distribution instead of the amplitude (which can be obtained using an ADC); the separation between peaks, which is constant, corresponds to the charge integrated in a single Geiger discharge. Still, the amplitude distribution is useful to check the stability of the gain, as will be shown later.

The PDE of a SiPM is the probability that an incident photon produces a pulse. It depends on the wavelength (the absorption probability of photons in silicon is wavelength dependent), on the applied over-voltage and on the microcell design:

PDE(λ, ∆V) = η(λ) · ε(∆V) · F    (B.4)

where η(λ) is the quantum efficiency of silicon and ε(∆V) is the probability that an avalanche is produced after the electron-hole pair is created; the higher the electric field (given by ∆V), the higher this probability. F is known as the fill factor and is the fraction of the total SiPM area that is used for detecting photons (active area). F is basically determined by the size of the microcells and the gap between neighbouring cells. Figure B.7 shows the relative PDE of a KETEK PM3375T SiPM operated at 15% over-voltage.
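A quick numerical check of Eq. B.3 (a minimal sketch; the capacitance and over-voltage are assumed, illustrative values, not measured properties of the KETEK device):

# Gain of a SiPM microcell, G = C * dV / q_e (Eq. B.3)
Q_E = 1.602e-19  # electron charge [C]

def sipm_gain(capacitance_farad, over_voltage_volt):
    """Return the gain for an equivalent microcell capacitance [F] and an over-voltage [V]."""
    return capacitance_farad * over_voltage_volt / Q_E

# Illustrative values: C ~ 160 fF, dV ~ 3 V  ->  G ~ 3e6, within the typical 1e6-1e7 range
print(sipm_gain(160e-15, 3.0))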

B.2.4 Noise

The main source of noise in SiPMs is the dark count rate (DCR). Electron-hole pairs can be generated spontaneously and start an avalanche; the output pulse is identical to one initiated by a photon. The DCR increases with the cell size and, naturally, with the SiPM size. It also increases with the over-voltage: the higher the electric field, the higher the probability of initiating an avalanche.

The other important source of noise in SiPMs is optical cross-talk. The charge carriers generated during an avalanche initiated by a photon (or by a dark count event) are accelerated by the electric field and emit photons in the near-infrared band [138]. These photons can reach the depletion region of neighbouring cells and immediately start a new avalanche (prompt cross-talk). This process is extremely fast, and the result is that an event initially triggered by a single photon can generate a pulse equivalent to a 2 (or more) photoelectron event. The cross-talk probability increases with the microcell size and with the over-voltage. It can be roughly estimated by the ratio of the dark count rate at the second photoelectron level to the rate at the single photoelectron level. The step-like curve of Figure B.9 was obtained by Clara Fernández, a summer student I had the pleasure to supervise in July 2016. It shows the event rate as a function of the trigger threshold of a KETEK SiPM in the absence of light (all events triggered by noise). The measurements were performed in a dark box and, if we assume there is no light leakage, all the events recorded were initiated by a thermal electron-hole pair. The first step, or flat region, gives the dark count frequency at the single photoelectron level. The next step corresponds to the rate at the second photoelectron level and is mainly composed of events initiated by a dark count that produced cross-talk in one neighbouring cell (the probability of having two dark counts at the same time is negligible compared to the cross-talk probability).
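A minimal sketch of this estimate (the rates are assumed, illustrative numbers, not the values measured in Figure B.9):

# Rough cross-talk estimate from a dark-count "staircase" curve:
# p_XT ~ (dark rate above the 1.5 phe threshold) / (dark rate above the 0.5 phe threshold)
rate_above_0p5_phe = 4.0e5  # [Hz] illustrative dark count rate above 0.5 phe
rate_above_1p5_phe = 8.0e4  # [Hz] illustrative dark count rate above 1.5 phe

p_xt = rate_above_1p5_phe / rate_above_0p5_phe
print(f"estimated cross-talk probability: {p_xt:.0%}")  # ~20%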

There is a second form of cross-talk known as delayed cross-talk. It can occur when a photon produced during an avalanche creates a charge carrier in a neighbouring microcell, close to its depletion region. This charge carrier can then diffuse into the avalanche region, triggering a second avalanche delayed by several tens of ns. In the output signal this appears similar to an afterpulse.

Afterpulsing in SiPMs typically occurs when charge carriers are trapped by impurities in the microcell. The trapped carriers can be released after quenching has started, initiating a new avalanche. If the afterpulse occurs within the recovery time of the system, the waveform produced by this delayed avalanche partially overlaps with the original one, as can be seen in the example in Figure B.8.

The effect of dark counts is not so critical in Cherenkov astronomy because the NSB produces an unavoidable and higher rate of background events. Afterpulsing and delayed cross-talk could introduce some bias, but their presence is easier to spot and their effect can be minimized by using SiPMs with a fast recovery time. But (prompt) optical cross-talk is a real issue: due to this effect, applying a calibration similar to the F-factor method described in Section 4.1.4.1 is not straightforward in SiPMs. The combination of dark counts and cross-talk has, however, a positive effect: the conversion from ADC counts to photoelectrons during the calibration process can be performed in SiPMs without using any light source. A single-photoelectron spectrum like the one in Figure B.5 can be built using only events triggered by dark counts. This can be used, for instance, to detect gain variations, as will be shown later.


Figure B.8: Example of an event containing an afterpulse. The estimation of the baseline and the charge extraction (see Section B.4.6 and Figure B.17) are affected by the afterpulse.

Figure B.9: Dark count rate as a function of the trigger threshold for a KETEK PM3375T SiPM operated at 9% ∆V. Plot made by Clara Fernández.

B.2.5 Temperature dependence

One of the main concerns when considering the use of SiPMs for any application is the temperature dependence of most of the SiPM parameters. The breakdown voltage increases with temperature, which means that, if Vbias remains unchanged, the gain, PDE and cross-talk probability also change. The problem could be solved by regulating the temperature of the sensors. But since the gain, PDE and cross-talk probability depend on the over-voltage and not on Vbr directly, they remain unchanged if Vbias is adjusted to maintain a constant over-voltage. In the Cherenkov astronomy domain, FACT successfully implemented a feedback system that corrects the bias voltage to keep the over-voltage constant [49]. Dark count rates, however, increase with temperature even at constant over-voltage.
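A minimal sketch of such a bias correction (the breakdown voltage, its temperature coefficient and the target over-voltage are assumed, illustrative values, not the FACT or KETEK specifications):

# Keep the over-voltage constant by tracking the temperature dependence of the
# breakdown voltage, V_br(T) ~ V_br(T0) + k * (T - T0).
V_BR_REF = 22.3   # [V] breakdown voltage at the reference temperature (assumed)
T_REF = 25.0      # [deg C] reference temperature
DVBR_DT = 0.022   # [V/deg C] temperature coefficient (assumed, typical order)
TARGET_OV = 2.0   # [V] desired over-voltage

def corrected_bias(temperature_c):
    """Bias voltage that keeps the over-voltage at TARGET_OV at the given temperature."""
    v_br = V_BR_REF + DVBR_DT * (temperature_c - T_REF)
    return v_br + TARGET_OV

for t in (15.0, 25.0, 35.0):
    print(t, corrected_bias(t))  # the bias rises by ~0.22 V per 10 deg C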

B.3 The Light-Trap pixel principles

Probably the main disadvantages of SiPMs as potential detectors for Cherenkov astronomy are their limited size and their typical wavelength-dependent efficiency, which is not optimal for Cherenkov pulses and too sensitive to the NSB (as can be judged by comparing Figures 4.16 and B.7). It could be argued that we should also add the temperature dependence, but the tests performed at MAGIC show that the sensor temperatures can be kept reasonably stable inside the camera (less than 1◦C variation within one night [120]). The Light-Trap pixel intends to provide a cheap solution when a large detection area is needed, having its best sensitivity in the UV range, where the Cherenkov light peaks, while rejecting most of the NSB.

A Light-Trap pixel consists of a SiPM coupled to a polymethylmethacrylate (PMMA) disc (see Figure B.10). This disc contains wavelength-shifting (WLS) fluors which absorb photons in the ∼300-400 nm wavelength range and re-emit them in the ∼400-500 nm range. WLS photons are re-emitted isotropically, so a fraction of them are trapped in the disc by total internal reflection (TIR) and eventually reach the SiPM. Some of the photons that escape through the side walls or the bottom can be recovered with the addition of reflective surfaces near the disc. The rest of the re-emitted photons escape. As a result,

1. Light around the peak of the Cherenkov spectrum (∼350 nm) is collected.

2. Light at longer wavelengths (for which the NSB dominates) is not absorbed by the WLS material and hardly reaches the SiPM.

3. The absorbed Cherenkov photons are re-emitted at a wavelength where the SiPM PDE is higher.

4. The collection area of the detector can be a factor ∼10-50 larger than the sensitive area of the SiPM, i.e. the cost is reduced by roughly the same factor (if the cost of the disc is comparatively low), which enables us to build pixels far larger than commercially available SiPMs.

Figure B.10: Conceptual design of the Light-Trap. (a) Top-down view; (b) side view.

B.4 Proof-of-Concept Pixel

B.4.1 SiPM

The SiPM used for this study was the 3 mm × 3 mm KETEK PM3375 (with a peak sensitivity at 420 nm, as shown in Figure B.7). It is not the optimal device for IACT applications, mainly due to its high cross-talk probability (∼36% at 15% over-voltage), but it was convenient for testing purposes. It is built with robust pins, which makes it easy to mount and unmount the SiPM from the printed circuit board (PCB) used for the readout. Besides, as will be shown later, we were not interested in the absolute properties of the sensor itself but in how the performance achieved with the Light-Trap compares to that of the same, standalone, SiPM used to build it. The SiPM has a breakdown voltage of 25 ± 3 V according to the manufacturer. From our measurements, performed between 21 and 24◦C, we found Vbr to be slightly above 22 V in most of our devices (see Figure B.6). The tests were performed at ∼9% over-voltage, where the cross-talk probability is ∼20%. No temperature-control system was used to keep the SiPM temperature stable. Instead, we monitored the ambient temperature and the SiPM gain by constantly calibrating the amplitude of the single-photoelectron pulse using dark counts, as described in Section B.2.4.

B.4.2 Disc

Custom-doped wavelength-shifting polymethylmethacrylate (PMMA, refractive index n=1.49) discs were purchased from Eljen Technology (Sweetwater, Texas). These discs have a diameter of 15 mm and a thickness of 3 mm and are doped with EJ-299-15. They are intended to absorb light in the UV band and re-emit it by fluorescence at blue wavelengths (see Figure B.11). The dopant levels were customised by Eljen according to our specifications, i.e. to absorb ∼100% of incident 340 nm photons within 1.5 mm of the material. The wavelength-shifting fluors have a fast re-emission time, on the order of ∼1 ns (exponential decay time), with a quantum yield of ∼84%.

Figure B.11: Left: PMMA disc doped with EJ-299-15 manufactured by Eljen. Right: EJ-299-15 absorption and emission spectra, as reported by the company.

A critical aspect is the optical polishing of the disc surface. It affects the TIR efficiency, as imperfections in the walls scatter the arriving photons. In [70], the smoothness of a surface is characterized by its root mean square roughness σ, defined as the root mean square deviation from the mean surface level. Optical applications typically require σ to be small compared to the wavelength λ of the light that is reflected (or transmitted) at the surface under consideration. The specular reflectivity at normal incidence Rs of a given surface can be expressed as

\[ R_s = R_0 \exp\!\left[-\frac{(4\pi\sigma)^2}{\lambda^2}\right] \qquad \mathrm{(B.5)} \]

where R0 is the specular reflectivity of a perfectly smooth surface of the same material.

The manufacturing process of the discs we purchased begins with the casting of small rods of the doped plastic. The rods are then heat-pressed to the desired thickness (3 mm). After that, the disc surfaces are manually polished using a diamond tool. This process can still leave some "footprints" on the discs. This was particularly noticeable in the first set of discs that were manufactured for us. With an optical microscope it was possible to observe regular scratches on the surface of the discs (upper left panel in Figure B.12). Just by focusing a beam of green light (∼532 nm) perpendicularly onto its flat surfaces it was possible to see the interference effect these scratches produce (bottom panel in Figure B.12). At those wavelengths light is not absorbed by the WLS and, for a perfectly smooth surface, the light beam would go through the disc without any deviation. If the surface has a certain roughness, a fraction of the light is deflected. On a screen located behind one of the discs it was possible to observe the diffraction pattern shown in the upper right panel of Figure B.12. Light losses due to scattering at the disc surface can be studied by placing a detector at several distances behind the disc, as schematized in the bottom panel of Figure B.12. I performed these tests using the setup shown in the top panel of Figure B.13¹. As light sources I used lasers at ∼532 and ∼650 nm. The transmitted light was collected using a photodiode. An integrating sphere is used to diffuse the incoming light, avoiding saturation of the detector. The plot in the bottom panel of Figure B.13 shows

¹This image was taken from Adrianna García's Master thesis, which was carried out at IFAE during the first half of 2017, which I co-supervised, and which was successfully defended at the Universitat de Barcelona in September 2017 [105].

Figure B.12: Top left: Image taken with an optical microscope of one of the discs from the first production, where the periodic scratches resulting from the polishing process can be observed. Top center: Laser spot observed on a screen. Top right: Diffraction pattern observed when the same laser beam goes through one of the discs of the first production. The two preferred directions are correlated with the orientation of the scratches on each flat surface of the disc. Bottom: A laser beam going through a perfectly smooth disc is not deflected if it arrives perpendicularly to the disc surface (left). If the disc has a certain roughness, a fraction of the beam will be deflected; a detector located at D1 will measure a higher flux than one located at D2.

the ratio of the measured intensity of the light collected when the light goes through the disc to the intensity measured when the disc is removed (light beam directly focused into the integrating sphere), for each position of the detector. The integrating sphere entrance had a diameter of 19 mm. The plot shows a clear difference between the first set of discs (1st production) and a second, improved set (2nd production), which was also produced by Eljen after we reported the results of our experiments to them. The same measurements performed on a transparent quartz disc (with no WLS) are also shown as a reference. The effect observed in the 1st production discs is not observed in those of the 2nd production or in the quartz disc (the measured transmission efficiency is compatible with the expected Fresnel losses over the two surfaces the light goes through).

Experiments like the one described, and many other optical techniques, can be used to characterize the roughness of a surface and, for instance, evaluate σ (see [197] for a review). Kumara Cordero from the Institut Català de Nanociència i Nanotecnologia (ICN2) took a few images of a disc of the 1st production using an atomic force microscope (AFM). In those images we could identify the periodic scratches of the polishing process (besides many other randomly distributed marks, presumably resulting from the manipulation of the discs during the experiments) and measure their width and depth (Figure B.14). Those images are height maps that show the difference in height over the imaged sample (the zero level is arbitrary). With such images it is even possible to obtain a mean surface roughness, which in our images was estimated to be of ∼10 nm.


Figure B.13: Top: Setup used to characterize the transmission efficiency of the discs as a function of the distance between the disc and the detector. Image from [105]. Bottom: Plot summarizing the result of such tests.

This technique actually offers many other features to characterize surfaces that go beyond the work I performed. But even though the final tests used the discs from the 2nd production, I thought it was still worth briefly describing the optical experiments I performed, hoping they could inspire future tests for new developments of the pixel.
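As a quick numerical check, the sketch below simply evaluates Eq. B.5 for the ∼10 nm rms roughness estimated from the AFM images (illustrative only; it assumes Eq. B.5 applies directly, and the sampled wavelengths are arbitrary):

import math

def specular_reflectivity_ratio(sigma_nm, wavelength_nm):
    """Rs/R0 from Eq. B.5 for an rms roughness sigma and a wavelength, both in nm."""
    return math.exp(-(4.0 * math.pi * sigma_nm) ** 2 / wavelength_nm ** 2)

# With sigma ~ 10 nm the specular loss per reflection is roughly 5-12% between 350 and 550 nm
for wavelength in (350.0, 450.0, 550.0):
    print(wavelength, specular_reflectivity_ratio(10.0, wavelength))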

B.4.3 Optical coupling

Ideally, we would like all the photons re-emitted by the WLS to be reflected by the disc walls, get trapped and only be able to escape when approaching the SiPM. In an optimal Light-Trap pixel the outer layer of the SiPM would have the same refractive index as the disc, so that photons approaching the detector are not reflected either by the disc wall or by the SiPM (as if the detector were embedded in the disc). How close our real detector comes to this ideal situation depends on how efficiently the disc and the SiPM are optically coupled. Our Light-Trap pixel uses an optically clear silicone rubber sheet (EJ-560, also purchased from Eljen Technology) that was cut to match the SiPM. This silicone material is soft and only lightly adhesive, thus allowing the removal and re-attachment of the SiPM. It features ∼100% internal transmission.

B.4.4 Reflective walls

To help improve the efficiency in the case that wavelength-shifted photons do not undergo TIR, 3M ERS reflective foil was cut so as to surround the back and sides of the disc. The reflectivity of this material was measured by Adrianna García at IFAE, as part of her Master thesis, and estimated to be ∼98%, with no major dependence on the incidence angle of the incoming light [105].

B.4.5 Pixel holder and readout electronics

In order to hold the SiPM, PMMA disc and reflective foil together, a cylindrical polyethylene holder was constructed (see Figure B.15). It should be noted that the reflective foil was firmly positioned inside this holder, although it is still expected to be in contact with the disc in some places (thus impacting the TIR efficiency). Two plastic screws are used to apply pressure between the disc and the SiPM to improve the efficiency of the optical coupling. The holder is screwed to the PCB containing the SiPM readout electronics. The output signal of the SiPM is pre-amplified using PACTA [176], a wideband current-mode preamplifier initially designed to be used with PMTs in CTA.

B.4.6 Laboratory measurements

We tested the system by flashing the Light-Trap with four fast-response LEDs of different colours, peaking at 375, 445, 503 and 600 nm (Figure B.16). Fast pulses of a few ns at a 1 kHz rate are produced by means of a Kaputschinsky LED driver [147]. The output signal was recorded using a digital oscilloscope (Rohde & Schwarz RTO 1024)

Figure B.14: Top left: Image of one of the 1st production discs taken with an AFM. Top right: Zoom into the area delimited by the red square in the top left image. Center and Bottom: Height profile along the red line in the top right image. The same profile appears twice: in the upper sub-panel it was used to calculate the width of the scratch, in the lower sub-panel its depth.

Figure B.15: Image of the Light-Trap holder and electronics board. The SiPM sits on the board, facing upwards in this picture, and the disc surface points to the right.

of 2 GHz bandwidth and 10 GSa/s. Data were stored in 100 ns waveforms with a resolution of 0.05 ns (interpolated time). Figure B.17 shows some examples of UV pulses (∼375 nm) recorded with the Light-Trap pixel and with the same SiPM without the disc (naked SiPM). An algorithm looks for the maximum amplitude in the acquired window and for the maximum integrated signal in a 10 ns sliding window, similarly to what was described in Section 4.1.4.1. The baseline is estimated from the beginning of the waveform, before the pulse rises, and then subtracted from the integrated signal. Once the maximum amplitude is found, the pulse is fitted near the peak position with two Gaussian functions, which can then be used to identify and discard unwanted events such as afterpulses.

The timing properties of the Light-Trap are affected by the re-emission time of the photons absorbed by the WLS and by the total distance that the wavelength-shifted photons travel within the disc before reaching the SiPM.
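A minimal sketch of the charge extraction just described (the sampling step and array handling are assumed; the two-Gaussian peak fit is omitted and this is not the actual analysis code):

import numpy as np

SAMPLE_NS = 0.1   # assumed time per sample [ns]
WINDOW_NS = 10.0  # integration window quoted in the text [ns]

def extract_pulse(waveform_v):
    """Baseline-subtract a waveform and return the peak index, peak amplitude and the
    maximum charge integrated in a 10 ns sliding window."""
    waveform_v = np.asarray(waveform_v, dtype=float)
    n_win = int(WINDOW_NS / SAMPLE_NS)
    # Baseline estimated from the start of the waveform, before the pulse rises
    baseline = waveform_v[:n_win].mean()
    signal = waveform_v - baseline
    peak_idx = int(np.argmax(signal))
    sliding_sums = np.convolve(signal, np.ones(n_win), mode="valid")
    return peak_idx, signal[peak_idx], sliding_sums.max()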

Figure B.16: Setup used to characterize the Light-Trap at the laboratory.


Figure B.17: Examples of UV pulses observed with the naked SiPM (left) and with the Light-Trap (right). The signal is integrated in a 10 ns sliding window (delimited by the blue dashed lines). The red solid line is the constant fit performed to obtain the baseline level. The blue solid line is the two-Gaussian fit performed to the pulse near the peak. Top panels correspond to 1 phe events, while the center and bottom panels correspond to 15 phe events.


Figure B.18: Arrival time distributions of UV pulses for the Light-Trap and for the naked SiPM.

This is observed as a delay in the arrival time and a broadening of the pulses. The first effect can be seen in Figure B.18, which compares the arrival times of 5000 UV pulses detected by the Light-Trap and by the naked SiPM. The arrival time here is defined as the time in the recorded window at which the maximum amplitude was found. The delay observed in the Light-Trap distribution is of the order of the re-emission time of the WLS. The broadening of the pulses collected with the Light-Trap can be observed in Figure B.19: pulses of 15 phe recorded with the Light-Trap have a typical FWHM of ∼5 ns, while those recorded with the KETEK SiPM have ∼4 ns.

To evaluate the photodetection efficiency of the Light-Trap we compared the mean number of photons detected by the Light-Trap with the mean number detected by the naked SiPM, given the same incident photon flux. To do so we first calculate the conversion to photoelectrons of the amplitude integrated in the 10 ns sliding window, baseline subtracted and scaled by the resolution, or number of integrated slices (n.i.s.), in those 10 ns. This conversion is obtained from dark runs: measurements performed with the LED switched off and with the trigger level set at ∼0.5 phe. A dark run then consists mainly of dark count and dark count + cross-talk events. A histogram like the one in Figure B.20 is built from those runs, where the peaks corresponding to 1 to 4 phe events can be clearly identified. The distance between consecutive peaks gives the conversion from integrated amplitude over n.i.s. to phe. A dark run is always taken before and after a data run (measurements taken under LED illumination) to constantly monitor the SiPM gain. Both dark and data runs are composed of 5000 events. Once the conversion factor is known, the histograms storing data-run events are directly built in units of photoelectrons. To obtain the mean number of detected photons, the cross-talk probability pXT must be taken into account. To do so we followed the approach proposed in [104], which describes pXT with a binomial distribution:

\[ p_{n,m}(p_{XT}) = \binom{m-1}{n-1}\,(1-p_{XT})^{n}\, p_{XT}^{\,m-n} \qquad \mathrm{(B.6)} \]


Figure B.19: Comparison of the FWHM of UV pulses of 1 (left) and 15 phe (right) events observed with the Light-Trap and with the naked SiPM.


Figure B.20: Histogram with events acquired during a dark run with the naked SiPM at ∼9% over-voltage. Since the trigger threshold was set at ∼0.5 phe, the first peak corresponds to the 1 phe peak.

Then, the probability of detecting a mean number of photons µ can be written as

\[ f(x) = A \sum_{n=0}^{N} \sum_{m=1}^{n} p_{n,m}(p_{XT})\, P(n \mid \mu)\, \frac{1}{\sqrt{2\pi}\,\sigma_n}\, \exp\!\left[-\frac{\left(x - (n\,G + B)\right)^2}{2\sigma_n^2}\right] + \mathrm{Ped}(x) \qquad \mathrm{(B.7)} \]

where A is the normalization, P(n | µ) is the Poisson probability of having n cells fired given a mean number of interacting photons µ, G is the conversion factor from integrated amplitude/n.i.s. to phe, B is the bias or position of the pedestal peak, and σn = σe + n · σl is composed of the electronic noise σe and the noise related to gain fluctuations σl. Ped(x) describes the pedestal events as

\[ \mathrm{Ped}(x) = A_0\, P(0 \mid \mu)\, \frac{1}{\sqrt{2\pi}\,\sigma_e}\, \exp\!\left[-\frac{(x - B)^2}{2\sigma_e^2}\right] \qquad \mathrm{(B.8)} \]

with A0 the normalization of this term.
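To make this treatment concrete, the sketch below implements a spectrum model of this kind: one Gaussian peak per total number of fired cells, weighted by the Poisson statistics of the primary avalanches and by the cross-talk distribution of Eq. B.6, plus a pedestal term as in Eq. B.8. The parameter values are illustrative, and this is not the fitting code actually used for Figures B.21 and B.22.

import numpy as np
from math import comb, exp, factorial, pi, sqrt

def p_fired(n, m, p_xt):
    """Eq. B.6: probability that n primary avalanches end up as m fired cells (m >= n)."""
    if n < 1 or m < n:
        return 0.0
    return comb(m - 1, n - 1) * (1.0 - p_xt) ** n * p_xt ** (m - n)

def poisson(n, mu):
    return mu ** n * exp(-mu) / factorial(n)

def gaussian(x, mean, sigma):
    return np.exp(-(x - mean) ** 2 / (2.0 * sigma ** 2)) / (sqrt(2.0 * pi) * sigma)

def spectrum(x, mu, p_xt, gain=1.0, bias=0.0, sigma_e=0.1, sigma_l=0.1,
             amp=1.0, amp0=0.5, n_max=40):
    """Expected charge spectrum at x (in phe units): pedestal plus one Gaussian peak
    per number of fired cells."""
    f = amp0 * poisson(0, mu) * gaussian(x, bias, sigma_e)
    for m in range(1, n_max + 1):                        # total number of fired cells
        sigma_m = sigma_e + m * sigma_l
        weight = sum(p_fired(n, m, p_xt) * poisson(n, mu) for n in range(1, m + 1))
        f += amp * weight * gaussian(x, m * gain + bias, sigma_m)
    return f

# Example: evaluate the model on a grid with illustrative parameters
xs = np.linspace(-1.0, 15.0, 500)
ys = spectrum(xs, mu=3.0, p_xt=0.2)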

For a detailed explanation of the formalism used to treat SiPM data including cross-talk effects I refer the reader to [104] and [83]. This model does not account for afterpulsing or delayed cross-talk, which could bias the measured flux towards higher values. Most of these events can be removed, however, by applying quality cuts to the fit used to evaluate the baseline (see Figure B.17). Figure B.21 shows examples of the spectral curves obtained using Eq. B.7 for different values of µ.

The result of fitting f(x) to the spectra obtained with the naked SiPM and with the Light-Trap when flashed with UV light can be seen in Figure B.22. In the case of the naked SiPM, where the detected flux is lower, the different multi-electron peaks can be well fitted and distinguished using this model. The value obtained for pXT is consistent with that expected from Figure B.20, where the cross-talk probability could be roughly estimated as pXT ≃ N(>1.5 phe)/N(>0.5 phe) ≃ 19%. The parameters pXT, B and G obtained from the naked-SiPM histogram are set as fixed parameters when fitting the distributions obtained with the Light-Trap, in which the measured flux is higher and individual peaks are harder to distinguish (leading to a degeneracy in the parameter space). The distribution obtained using the discs from the first production is also shown, to illustrate how poor optical polishing can result in a significantly lower detection efficiency.

A simple ratio of the Light-Trap output signal to the one obtained with the naked SiPM allowed an estimation of the "boost factor" achieved by the additional use of the PMMA disc. The ratio of this boost factor to that expected from the simple geometric consideration (i.e. the increase in area between the 9 mm2 SiPM and the 176.71 mm2 disc, a factor 19.63) gives the "trapping efficiency" of the Light-Trap device.

To help understand the performance and characteristics of the Light-Trap, Juan Cortina and John E. Ward simulated the pixel using the Geant4 simulation package (version 4.10.01-p02 [47]). A perfectly optically polished 15 mm disc with the absorption and emission properties of the EJ-299-15 material was simulated. A coupling material (PMMA, with no dopant) was also added to optically couple a sensitive detector to the disc (i.e. the SiPM, although the properties of the KETEK device were not included in these simulations). Finally, two mirrors with 99% reflectivity were placed at the bottom and sides of the disc, leaving a 20 µm air gap at the bottom and 100 µm at the sides. The results comparing lab measurements and simulations are shown in Figure B.23. The pixel tested collects the same amount of UV light as would be achieved using six SiPMs similar to the one used to build the Light-Trap.

[Figure B.21 panels: µ = 1.5, 3.0, 8.0, 15.0, 25.0 and 40.0 phe.]

Figure B.21: Spectrum obtained using Eq. B.7 for different values of µ. The rest of the parameters were fixed at A = 1, A0 = 0.5, G = 1, B = 0, σe = σl = 0.1 and pXT = 0.2.

[Figure B.22 panels: fitted values µ = (7.6 ± 0.1) phe and pXT = (20.3 ± 1.0)% for the naked SiPM; µ = (33.6 ± 0.2) phe for the first-production Light-Trap; µ = (44.1 ± 0.2) phe for the second-production Light-Trap.]

Figure B.22: Distributions from data runs with UV illumination (∼375 nm) for the naked SiPM (top left), the Light-Trap first production (top right) and the Light-Trap second production (bottom). The red curve is the fit performed using Eq. B.7.


Figure B.23: Trapping efficiency and boost factor of the Light-Trap proof-of-concept pixel as a function of wavelength.

We refer to this value as the "Boost Factor" in the figure. This also means that the efficiency of the Light-Trap in bringing the incident light onto the SiPM is ∼30%. The pixel is almost blind to longer wavelengths. The obtained results agree with the simulations performed with Geant4.

The measurements have shown that the Light-Trap concept works, although the trapping efficiency is modest. Several steps may be undertaken to enhance the performance of such a device. In addition, the Light-Trap concept might open new opportunities not only in IACT astronomy but also in other fields, both in science (e.g. astroparticle physics, astrophysics, high-energy physics) and in industry (e.g. medical applications), wherever a low-cost, large-area detector is required. By choosing the adequate dopant it can be tuned to be sensitive in a desired wavelength range.

Appendix C

Moon shadow observations with MAGIC

The most effective way to detect cosmic-ray electrons is through direct measurements, with balloons or satellites. However, at energies above 100 GeV the particle flux is small, and instruments with a large collection area are needed to achieve good statistics (see Chapter 2). Even if designed to observe gamma-ray sources, IACT telescopes can be used for this task. As explained in Section 2.3.2, electrons and positrons also produce electromagnetic showers and in practice represent an irreducible background. And even if we could distinguish gamma-ray from electron/positron initiated showers, it would still be impossible to differentiate between those produced by electrons and those produced by positrons.

In the Moon Shadow Project, MAGIC aims to take advantage of the natural spectrometer created by the Moon and the Earth's magnetosphere to measure the electron flux. Charged particles approaching the Earth are deflected as a result of their interaction with the Earth's magnetic field. Meanwhile, the Moon blocks a small fraction of the incoming electrons, creating a "hole" in the flux. But as the particles are deflected by the magnetic field, this hole, as seen from Earth, is displaced from the actual Moon position. This hole is what we call the Moon shadow. The deflection of cosmic rays depends on their energy and charge, which means that the shadow for 1 TeV electrons would not be in the same place as the one for 1 TeV positrons or for 500 GeV electrons. In Figure C.1 we can see where the electron and positron/proton shadows would be located if the Moon, seen from La Palma, at the MAGIC telescopes site, were at a zenith angle (Zd) of 45◦ and an azimuth angle (Az) of 90◦, as calculated by [84]. Those results were obtained using a dipole model for the magnetosphere. The possibility of using a more complex model to describe the magnetic field is studied in [111]. According to Figure C.1, in that situation the shadow for 1 TeV electrons would be located barely 1.5◦ below the Moon (eastwards), while the 1 TeV positron shadow would be 1.5◦ above the Moon (westwards). As the deflection of relativistic particles depends on charge and energy, but not on mass, the shadow for positrons and the one for protons of the same energy would be located in the same place.

Once the position of the Moon shadow is known, the observation strategy is simple: the telescopes would be pointed close to the place where the shadow is expected to be found, so that we could compare the region where the missing flux is observed with a region that is "not shadowed". Assuming a homogeneous distribution of incoming


Figure C.1: Moon shadow position for electrons and positrons/protons as it would be seen by MAGIC when the Moon has a Zd of 45◦ and an Az angle of 90◦ (Figure from [84]). In this situation the electron shadow is below the Moon (eastwards) and the positron/proton shadow is above the Moon (westwards).

cosmic rays, the difference between the flux measured in the two regions would be interpreted as the electron flux blocked by the Moon. But if the idea is theoretically simple, executing it is not. As explained in Chapter 4, IACT telescopes are built to work under dark conditions. Pointing close to the Moon is obviously not very compatible with this constraint. This type of observation can be performed in MAGIC using reduced HV, but only when the Moon phase is low (less than 50%). Besides, to have a low energy threshold the Moon zenith angle should be below 50◦. As a result, only about 15 hours per year would be available to observe the Moon shadow (∼30 hr/yr shared between the e− and e+ shadows) under such conditions [84]. The MAGIC UV-pass filters were initially conceived specifically to increase the observation time available for Moon shadow observations. In Chapter 4 it was mentioned that, combining the use of UV-pass filters and reduced HV, it is possible to perform observations under a sky brightness as high as 100 × NSBDark, allowing observations 5◦ away from a full Moon. In the following pages I present a feasibility study of the Moon shadow project with MAGIC using UV-pass filters.

C.1 Feasibility study

The goal is to estimate the observation time needed to detect the electron shadow using UV-pass filters, in the range of a few hundred GeV to ∼1 TeV. To compute it we need the expected event rates detected in the signal and in the background regions. The background region would be filled with hadronic events, but also with showers produced by electrons and positrons. The expected detected background flux

φ^det_bkg can be directly obtained from real data. If we neglect the anisotropy in the arrival direction of charged particles, the expected flux in the signal region would be φ^det_bkg − φ^sh_e−, where φ^sh_e− is the expected rate of the electrons blocked by the Moon that would be detected by MAGIC. This rate depends on the detection efficiency of the camera for these particular observations and on the real cosmic-ray electron flux.

C.1.1 Energy range and camera acceptance

An illustration of the method planned for Moon shadow observations is shown in Figure C.2. The telescopes are pointed at a 0.4◦ offset from the Moon shadow axis. The signal region lies on the Moon shadow axis, while the background region lies on a parallel axis, also at a 0.4◦ offset from the camera centre. Figure C.2 suggests that, at least for the position of the Moon considered in the image, it would be possible to observe the shadow simultaneously in the range from ∼380 GeV up to ∼1 TeV, as all the shadows could be gathered inside the MAGIC FOV. But we should first keep in mind that the detection efficiency is not the same when the source is close to the centre of the camera as when it is at the edges. This can be seen in the left panel of Figure C.3, where the rates of gamma rays above 290 GeV from observations of the Crab Nebula are shown against the offset angle (angular distance to the centre of the camera). We assume in this study that the offset-angle dependence of the electron rates is similar to that of gamma rays. The acceptance is roughly constant at its maximum for offset angles in the range 0-0.4◦. At 0.7◦ the rates decrease to one half, and at 1◦ they are reduced by a factor of 4. The expected energy-dependent electron and background rates should be corrected for this effect. In order to do so, I parametrized the ratio of the acceptance of the camera at an offset angle x to the acceptance at the camera centre as:

\[
\mathrm{Acc}(x) =
\begin{cases}
1 & \text{if } x \leq 0.4^{\circ} \quad \text{(C.1)}\\
2.5 \cdot \exp(-2.3 \cdot x) & \text{if } x > 0.4^{\circ} \quad \text{(C.2)}
\end{cases}
\]

For an observation configuration like the one of Figure C.2, this can be translated into an energy-dependent acceptance that would be maximum for the 540 GeV shadow and would decrease for higher and lower energies, as can be seen in the right panel of Figure C.3.
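The piecewise parametrization of Equations C.1 and C.2 can be written as a short function. This is only an illustrative sketch in Python (a language chosen here for convenience), reproducing the numbers quoted above:

import numpy as np

def camera_acceptance(offset_deg):
    """Relative camera acceptance versus offset angle (Equations C.1-C.2):
    flat within 0.4 deg of the camera centre, exponential decay beyond."""
    offset_deg = np.asarray(offset_deg, dtype=float)
    return np.where(offset_deg <= 0.4, 1.0, 2.5 * np.exp(-2.3 * offset_deg))

# Consistency check with the text: ~1 below 0.4 deg, ~0.5 at 0.7 deg, ~0.25 at 1 deg
print(camera_acceptance([0.0, 0.4, 0.7, 1.0]))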

C.1.2 Background and electron rates

The background is mainly composed of hadrons, but it also includes electrons and positrons. For this feasibility study I obtained the background detection rate per unit solid angle from a sub-sample of the Crab Nebula observations taken with UV-pass filters and 8-15 × NSB_Dark (Figure C.4). The collection area, needed for computing the expected electron rate, is also obtained from this data set (Figure C.5). Collection area and background rates were obtained for a θ² cut of 0.13◦, a hadronness cut of 0.15 and a size cut of 200 phe. The obtained rates per unit solid angle must then be integrated over the solid angle subtended by the Moon (Ω_Moon).
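For illustration, the solid angle subtended by the Moon and the conversion of a rate per steradian into a rate over the Moon disc can be sketched as follows. The mean apparent Moon radius of ≈0.26◦ is a standard value, not a number taken from the analysis:

import numpy as np

# Apparent angular radius of the Moon (~0.26 deg on average)
theta_moon = np.deg2rad(0.26)
# Solid angle of the Moon disc: Omega = 2*pi*(1 - cos(theta)) ~ 6.5e-5 sr
omega_moon = 2.0 * np.pi * (1.0 - np.cos(theta_moon))

def rate_in_moon_region(rate_per_sr):
    """Convert a rate per unit solid angle (e.g. events/hr/sr, as in Figure C.4)
    into a rate integrated over the solid angle subtended by the Moon."""
    return rate_per_sr * omega_moon

print(f"Omega_Moon = {omega_moon:.2e} sr")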

Figure C.2: Scheme of the signal and background regions during a hypothetical Moon shadow observation, with the 540 GeV electron shadow at a 0.4◦ offset.

Figure C.3: Left: gamma-ray rates for the Crab Nebula as a function of its offset from the FOV center [42]. Right: acceptance as a function of the electron shadow energy for the observation configuration of Figure C.2.

Figure C.4: Background and electron rates (in events hr−1 sr−1, as a function of Eest) that would be detected if the events arrived at a 0.4◦ offset, shown for all the background and for electrons only. These rates are not yet corrected for the camera acceptance.

Figure C.5: Effective collection area (in m²) for Moon shadow observations as a function of the estimated energy, obtained from real data.

Figure C.6: The AMS-02 electron spectrum (black dots) and the parametrization adopted in Equation C.3.

The electron flux measured by AMS-02, reported in Figure 1.2, was taken as a reference to produce an approximate electron spectrum:

\[
\phi_{e^-}(E) = \frac{8.2 \cdot 10^{-12}}{\mathrm{GeV\,sr\,s\,cm^{2}}}
\left( \frac{E}{1\,\mathrm{TeV}} \right)^{-3.2}
\left[ 1 + 0.01 \exp\!\left( \exp\!\left( -\frac{\left( \log_{10}\frac{E}{1\,\mathrm{TeV}} + 0.1 \right)^{2}}{1.1} \right) - 1 \right) \right]
\tag{C.3}
\]

This spectrum fits the AMS data well, as can be seen in Figure C.6. With the collection area A_eff and the electron flux we can obtain the expected rate of electrons blocked by the Moon:

\[
\phi^{\mathrm{sh}}_{e^-}(E) = A_{\mathrm{eff}}(E)\,\Omega_{\mathrm{Moon}}\,\phi_{e^-}(E)\,\mathrm{Acc}(E)
\tag{C.4}
\]

where Acc(E) is the energy-dependent acceptance from Figure C.3.
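A minimal sketch of Equations C.3 and C.4 in code is given below. The effective area and acceptance are left as inputs (in practice they come from Figure C.5 and from the parametrization of Equations C.1–C.2), and the default Moon solid angle is the ≈6.5 × 10⁻⁵ sr obtained from the Moon's apparent radius:

import numpy as np

def electron_flux(e_tev):
    """Approximate cosmic-ray electron spectrum of Equation C.3,
    in electrons / (GeV sr s cm^2), with the energy given in TeV."""
    bump = 1.0 + 0.01 * np.exp(np.exp(-(np.log10(e_tev) + 0.1) ** 2 / 1.1) - 1.0)
    return 8.2e-12 * e_tev ** (-3.2) * bump

def blocked_electron_rate(e_tev, a_eff_cm2, acc, omega_moon_sr=6.5e-5):
    """Equation C.4: differential rate of electrons blocked by the Moon,
    in electrons / (GeV s), given the effective area A_eff(E) in cm^2,
    the relative camera acceptance Acc(E) and the Moon solid angle."""
    return a_eff_cm2 * omega_moon_sr * electron_flux(e_tev) * acc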

C.1.3 Observation time needed to detect the electron shadow

Knowing φ^det_bkg and φ^sh_e− we can proceed to calculate the observation time needed to detect the e− shadow. Electron and background rates are integrated between 380 GeV and 1 TeV. Background events are randomly generated in a 2 × 2 deg² map centred at the Moon position. Then the electron events that fall at the actual Moon position are removed and θ² plots are produced, as in Figures C.7 and C.8. Of the order of 1000 observation hours using UV-pass filters under a brightness of 8-15 × NSB_Dark would be needed to detect the electron shadow. If one repeats the calculation using the background rates and collection area for dark observations, one finds that about 250-300 observation hours would be needed. That number increases if reduced HV is used (without UV-pass filters) to observe under a brightness greater than 12 × NSB_Dark. In total ∼15 hr/yr are available to observe the e− shadow with MAGIC without using UV-pass filters. Under such conditions it would take of the order of 20 years to detect the e− shadow. By mounting the UV-pass filters it is possible to collect ∼50 hr/yr at a 4◦ offset from the Moon during the 6 nights around full Moon in which the operation of the telescopes is limited¹. Most of those hours would be taken at a brightness much higher than 15 × NSB_Dark, which means that even more than 1000 hr might be needed to detect the e− shadow with UV-pass filters. With or without UV-pass filters, this seems unfeasible for current IACT telescopes. The counting-statistics scaling behind these estimates is sketched below.

Figure C.7: Estimation of the significance of the electron Moon shadow detection after 250 (top) and 500 (bottom) observation hours, assuming background rates and collection area for UV-pass filter observations with 8-15 × NSB_Dark. Left: θ² plots. Right: skymap near the expected Moon shadow position.

¹Since it is not possible to mount/dismount the filters in the middle of the night, those would be the only time slots in which Moon shadow observations could be performed without interfering with other projects.
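The order of magnitude of these estimates can be reproduced with simple counting statistics: the shadow deficit grows linearly with the observation time, while the background fluctuation grows only with its square root. The sketch below states this scaling explicitly; the rates in the commented example are hypothetical placeholders, not the values obtained in this study:

def hours_to_detect(bkg_rate_per_hr, blocked_rate_per_hr, n_sigma=5.0):
    """Observation time T (in hours) for which the Moon-shadow deficit
    N_blocked = blocked_rate * T exceeds n_sigma times the Poisson
    fluctuation sqrt(N_bkg) of the background in the signal region:
    n_sigma = blocked_rate * T / sqrt(bkg_rate * T)  =>  T = n_sigma^2 * bkg / blocked^2."""
    return n_sigma ** 2 * bkg_rate_per_hr / blocked_rate_per_hr ** 2

# Hypothetical integrated rates between 380 GeV and 1 TeV (illustration only):
# hours_to_detect(bkg_rate_per_hr=30.0, blocked_rate_per_hr=1.0)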

Figure C.8: Estimation of the significance of the electron Moon shadow detection after 1250 (top) and 5000 (bottom) observation hours, assuming background rates and collection area for UV-pass filter observations with 8-15 × NSB_Dark. Left: θ² plots. Right: skymap near the expected Moon shadow position.

Appendix D

The naima package

Naima is a Python package for the computation of non-thermal radiation from relativistic particle populations. It is used to perform Markov Chain Monte Carlo (MCMC) fitting of radiative models to multiwavelength data [199]. The code uses the parametrization of neutral pion decay by [130], the parametrization of synchrotron radiation by [24] and the analytical approximations to Inverse Compton up-scattering of blackbody radiation and non-thermal bremsstrahlung developed by [132] and [61], respectively (see Section 2.2.1). Here I briefly describe how Naima was used in this thesis. For more details on the program, see [199]. Naima can be used with different aims, even for educative or illustrative purposes as in Figures 2.2 to 2.5. But the main goal for which it was conceived was to test whether a hypothetical population of relativistic particles could explain a given measured electromagnetic spectrum. In such a case, the user must input:

• The parent population type (electrons, protons) and a mathematical model describing its energy distribution (a power law, a power law with cut-off, a broken power law...)

• The radiation mechanisms involved (like those described in Section 2.2.1)

• The spectral measurements to be fitted.

In the case of a power law with exponential cut-off, the parent population has three associated parameters: the normalization, the spectral index and the cut-off energy (see Section 2.2.1). The radiative models also have their associated parameters, such as the magnetic field or the target proton density. Besides, to compute the expected electromagnetic flux at the Earth, the user also has to input, for instance, the distance to the source. Among all those parameters, the user has to decide which ones are set free and which ones are given as fixed inputs.
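As an illustration of these inputs, a minimal fit setup with Naima might look like the sketch below. It follows the conventions of the Naima documentation (a model function, a prior and run_sampler); the reference energy, target density, source distance and starting values are placeholders and not the ones used in this thesis:

import astropy.units as u
import naima
from naima.models import ExponentialCutoffPowerLaw, PionDecay

def model(pars, data):
    """Radiative model passed to the sampler: a proton population with a
    power law with exponential cut-off, radiating through pi0 decay."""
    amplitude = 10 ** pars[0] / u.eV           # free: normalization
    alpha = pars[1]                            # free: spectral index
    e_cutoff = (10 ** pars[2]) * u.TeV         # free: cut-off energy
    protons = ExponentialCutoffPowerLaw(amplitude, 10 * u.TeV, alpha, e_cutoff)
    pion = PionDecay(protons, nh=10 * u.cm ** -3)   # fixed target density (placeholder)
    return pion.flux(data, distance=3.4 * u.kpc)    # fixed distance (placeholder)

def lnprior(pars):
    """Flat prior on the spectral index; the other parameters are unconstrained."""
    return naima.uniform_prior(pars[1], 1.5, 3.5)

# sampler, pos = naima.run_sampler(
#     data_table=spectral_points, p0=[33.0, 2.2, 0.5],
#     labels=["log10(norm)", "index", "log10(E_cut/TeV)"],
#     model=model, prior=lnprior, nwalkers=32, nburn=50, nrun=100)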

Bibliography

[1] Aartsen, M. G. et al. (IceCube Collaboration) (2014). Phys. Rev. Lett., 113 , 101101.

[2] Aartsen, M. G. et al. (IceCube Collaboration) (2014). Phys. Rev. Lett., 113 , 101101.

[3] Abbasi, R. et al. (2013). Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 700 , 188 – 220.

[4] Abbasi, R. U. et al. (2008). Physical Review Letters, 100 , 101101.

[5] Abbasi, R. U. et al. (High Resolution Fly’s Eye Collaboration) (2004). Phys. Rev. Lett., 92 , 151101.

[6] Abdo, A. A. et al. (2010). ApJ , 710 , L92–L97.

[7] Abdo, A. A. et al. (2009). Physical Review Letters, 103 , 251101.

[8] Abdo, A. A. et al. (2010). A&A, 523 , A46.

[9] Abeysekara, A. U. et al. (2014). ApJ , 796 , 108.

[10] Abu-Zayyad, T. et al. (2013). ApJ , 768 , L1.

[11] Acero, F. et al. (2010). A&A, 516 , A62.

[12] Acharya, B. S. et al. (2013). Astroparticle Physics, 43 , 3–18.

[13] Ackermann, M. et al. (2013). Science, 339 , 807–811.

[14] Ackermann, M. et al. (2016). A&A, 586 , A71.

[15] Actis, M. et al. (2011). Experimental Astronomy, 32 , 193–316.

[16] Adriani, O. et al. (2015). Journal of Physics: Conference Series, 632 , 012023.

[17] Adrián-Martínez, S. et al. (2016). Journal of Physics G: Nuclear and Particle Physics, 43, 084001.

[18] Ageron, M. et al. (2011). Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 656 , 11 – 38.


[19] Aguilar, M. et al. (AMS Collaboration) (2013). Phys. Rev. Lett., 110 , 141102.

[20] Aharonian, F. et al. (2001). A&A, 370 , 112–120.

[21] Aharonian, F. et al. (2006). A&A, 457 , 899–915.

[22] Aharonian, F., Bergström, L., & Dermer, C. (2013). Astrophysics at Very High Energies: Saas-Fee Advanced Course 40. Swiss Society for Astrophysics and Astronomy, Saas-Fee Advanced Course, Volume 40. ISBN 978-3-642-36133-3. Springer-Verlag Berlin Heidelberg, 2013.

[23] Aharonian, F., Yang, R., & de Oña Wilhelmi, E. (2018). arXiv:1804.02331.

[24] Aharonian, F. A., Kelner, S. R., & Prosekin, A. Y. (2010). PRD, 82 , 043002.

[25] Ahn, H. et al. (2007). Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 579, 1034–1053.

[26] Ahnen, M. L. et al. (2016). A&A, 595 , A98.

[27] Ahnen, M. L. et al. (2016). A&A, 589 , A33.

[28] Ahnen, M. L. et al. (2017). MNRAS, 472 , 2956–2962.

[29] Ahnen, M. L. et al. (2017). Astroparticle Physics, 94 , 29–41.

[30] Ajima, Y. et al. (2000). Nuclear Instruments and Methods in Physics Research A, 443 , 71–100.

[31] Albert, J. et al. (2008). Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 588 , 424 – 432.

[32] Albert, J. et al. (2007). Nuclear Instruments and Methods in Physics Research A, 583 , 494–506.

[33] Albert, J. et al. (2007). A&A, 474 , 937–940.

[34] Albert, J. et al. (2007). arXiv:astro-ph/0702475.

[35] Aleksić, J. (2013). Optimized dark matter searches in deep observations of Segue 1 with MAGIC. Ph.D. thesis, IFAE and Universitat Autonoma de Barcelona. URL: https://magicold.mpp.mpg.de/publications/theses/JAleksic1.pdf.

[36] Aleksić, J. et al. (2012). ApJ, 754, L10.

[37] Aleksić, J. et al. (2012). Astroparticle Physics, 35, 435–448.

[38] Aleksić, J. et al. (2012). A&A, 539, A118.

[39] Aleksić, J. et al. (2012). ApJ, 746, 80.

[40] Aleksić, J. et al. (2012). ApJ, 748, 46.

[41] Aleksić, J. et al. (2016). Astroparticle Physics, 72, 61–75.

[42] Aleksić, J. et al. (2016). Astroparticle Physics, 72, 76–94.

[43] Aleksić, J. et al. (2015). Journal of High Energy Astrophysics, 5, 30–38.

[44] Aleksić, J. et al. (2014). A&A, 563, A90.

[45] Aleksić, J. et al. (2011). ApJ, 730, L8.

[46] Allen, R. J., & Barrett, A. H. (1967). ApJ , 149 , 1.

[47] Allison, J. et al. (2006). IEEE Transactions on Nuclear Science, 53 , 270–278.

[48] Ambrosi, G. et al. (2017). Nuovo Cimento C Geophysics Space Physics C , 40 , 78.

[49] Anderhub, H. et al. (2013). Journal of Instrumentation, 8 , P06008.

[50] Anderson, M. et al. (1991). ApJ , 373 , 146–157.

[51] Anderson, M. C., & Rudnick, L. (1995). ApJ , 441 , 307–333.

[52] Antoni, T. et al. (2003). Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 513 , 490 – 510.

[53] Apel, W. et al. (2010). Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 620, 202–216.

[54] Archambault, S. et al. (2015). ApJ , 808 , 110.

[55] Archambault, S. et al. (2017). Astroparticle Physics, 91 , 34 – 43.

[56] Archambault, S. et al. (2017). ApJ , 836 , 23.

[57] Ashworth, W. B., Jr. (1980). Journal for the History of Astronomy, 11 , 1.

[58] Atwood, W. B. et al. (2009). ApJ , 697 , 1071–1102.

[59] Axford, W. I., Leer, E., & Skadron, G. (1977). International Cosmic Ray Conference, 11, 132–137.

[60] Baade, W., & Zwicky, F. (1934). Physical Review, 46 , 76–77.

[61] Baring, M. G. et al. (1999). ApJ , 513 , 311–338.

[62] Barrau, A. et al. (1998). Nuclear Instruments and Methods in Physics Research A, 416 , 278–292.

[63] Barrio, J. et al. (1998). MPI-PhE/98-5, March.

[64] Bartoli, B. et al. (2001). Nuclear Physics B Proceedings Supplements, 97, 211–214.

[65] Becker Tjus, J. (2014). ASTRA Proceedings, 1 , 7–11.

[66] Bell, A. R. (1978). MNRAS, 182 , 147–156.

[67] Bell, A. R. (1978). MNRAS, 182 , 443–455.

[68] Bell, A. R. et al. (2013). MNRAS, 431 , 415–429.

[69] Benn, C. R., & Ellison, S. L. (1998). New Astron.Rev., 42 , 503–507.

[70] Bennett, H. E., & Porteus, J. O. (1961). Journal of the Optical Society of America (1917-1983), 51 , 123.

[71] Berezhko, E. G., Pühlhofer, G., & Völk, H. J. (2003). A&A, 400, 971–980.

[72] Binns, W. R. et al. (2014). ApJ , 788 , 18.

[73] Binns, W. R. et al. (2008). New Astronomy Reviews, 52 , 427–430.

[74] Blandford, R. D., & Ostriker, J. P. (1978). ApJ , 221 , L29–L32.

[75] Blasi, P. (2013). A&ARv, 21 , 70.

[76] Blum, K., Katz, B., & Waxman, E. (2013). Physical Review Letters, 111 , 211101.

[77] Borla, D. (2011). A Study of Cosmic Electrons between 100 GeV and 2 TeV with the MAGIC Telescopes. Ph.D. thesis, Max-Planck-Institut für Physik and Technische Universität München. URL: https://magicold.mpp.mpg.de/publications/theses/DBorla.pdf.

[78] Braude, S. Y. et al. (1969). MNRAS, 143 , 289.

[79] Britzger, D. (2009). Studies of the Influence of Moonlight on Observations with the MAGIC Telescope. Diploma thesis, Universität München. URL: https://magicold.mpp.mpg.de/publications/theses/DBritzger.pdf.

[80] Buehler, R. et al. (2012). ApJ , 749 , 26.

[81] Casse, M., & Paul, J. A. (1980). ApJ , 237 , 236–243.

[82] Chantell, M. C. et al. (1997). Astroparticle Physics, 6 , 205–214.

[83] Chmill, V. et al. (2017). Nuclear Instruments and Methods in Physics Research A, 854 , 70–81.

[84] Colin, P. (2011). Proc. of the 32nd ICRC, Beijing, 6 , 194.

[85] Colin, P. et al. (2009). Proc. of the 31st ICRC, Lodz, Id. 1239 .

[86] Cortina, J., L´opez-Coto, R., & Moralejo, A. (2016). Astroparticle Physics, 72 , 46–54.

[87] Daum, A. et al. (1997). Astroparticle Physics, 8 , 1–11.

[88] De Naurois, M., & Mazin, D. (2015). Comptes Rendus Physique, 16, 610–627.

[89] DeLaney, T., & Rudnick, L. (2003). ApJ , 589 , 818–826.

[90] Doering, M. et al. (2001). Proceedings of the 27th ICRC, Hamburg, (pp. 2985– 2988).

[91] Drury, L. O. (2012). Astroparticle Physics, 39, 52–60.

[92] EAS-Top Collaboration et al. (1999). Astroparticle Physics, 10 , 1–9.

[93] Hess, V. F. (1912). Physikalische Zeitschrift, 13, 1084–1091.

[94] Farnier, C., Walter, R., & Leyder, J.-C. (2011). A&A, 526 , A57.

[95] Fermi, E. (1949). Physical Review, 75 , 1169–1174.

[96] Fermi, E. (1954). ApJ , 119 , 1.

[97] Fernández-Barral, A. (2017). Extreme particle acceleration in microquasar jets and pulsar wind nebulae with the MAGIC telescopes. Ph.D. thesis, IFAE and Universitat Autonoma de Barcelona. URL: https://www.dropbox.com/s/j1js8o2pooq1vcc/AlbaFdezBarral_PhDThesis.pdf?dl=0.

[98] Flyckt, S., & Marmonier, C. (2002). Photomultiplier tubes: principles and applications. Philips Photonics, Brive, France.

[99] Fomin, V. P. et al. (1994). Astroparticle Physics, 2 , 137–150.

[100] Fowler, J. W. et al. (2001). Astroparticle Physics, 15 , 49–64.

[101] Fruck, C. et al. (2013). Proc. of 33rd ICRC, Rio de Janeiro, Id. 1054 .

[102] Fukada, Y. et al. (1975). Nature, 255 , 465.

[103] Gaisser, T. K., Stanev, T., & Tilav, S. (2013). Frontiers of Physics, 8 , 748–758.

[104] Gallego, L. et al. (2013). Journal of Instrumentation, 8 , P05010.

[105] García, A. (2017). Characterization of PMMA discs and a reflective material used in a Light-Trap photodetection device. Master's thesis, IFAE and Universitat de Barcelona.

[106] García, J. R. et al. (2014). arXiv:1404.4219.

[107] Giuliani, A. et al. (2011). ApJ , 742 , L30.

[108] Gotthelf, E. V. et al. (2001). ApJ , 552 , L39–L43.

[109] Graziani, M. (2017). ArXiv e-prints.

[110] Griffin, S. et al. (2015). Proc. of the 34th ICRC, The Hague, Id. 868 .

[111] Guberman, D. (2014). Observing the shadowing of the Cosmic Ray electron flux with the MAGIC telescopes: a feasibility study. Master's thesis, IFAE and Universitat de Barcelona.

[112] Guberman, D. et al. (2015). Proc. of the 34th ICRC, The Hague, Id. 1237 .

[113] Guberman, D., Cortina, J., & Ward, J. (2017). Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment.

[114] Guberman, D. et al. (2017). arXiv:1709.00239.

[115] Gueymard, C. (1994). Updated transmittance functions for use in fast spectral direct beam irradiance models. Proceedings of the 23rd American Solar Energy Society Annual Conference, San Jose.

[116] Gueymard, C. (1995). SMARTS2, simple model of the atmospheric radiative transfer of sunshine: Algorithms and performance assessment. Rep. FSEC-PF-270-95, Florida Solar Energy Center, Cocoa.

[117] H. E. S. S. Collaboration et al. (2016). arXiv:1609.08671.

[118] H. E. S. S. Collaboration et al. (2016). arXiv:1611.01863.

[119] H. E. S. S. Collaboration et al. (2016). arXiv:1601.04461.

[120] Hahn, A. et al. (2017). Nuclear Instruments and Methods in Physics Research A, 845, 89–92.

[121] Hales, S. E. G. et al. (1995). MNRAS, 274 , 447–451.

[122] Hartman, R. C. et al. (1999). ApJ , 123 , 79–202.

[123] HEGRA-Collaboration et al. (2000). A&A, 359 , 682–694.

[124] Helder, E. A., & Vink, J. (2008). ApJ , 686 , 1094–1102.

[125] Heller, M. et al. (2016). arXiv:1607.03412.

[126] HESS Collaboration et al. (2016). Nature, 531, 476–479.

[127] Holder, J. (2017). In 6th International Symposium on High Energy Gamma-Ray Astronomy (p. 020013). Volume 1792 of American Institute of Physics Conference Series. arXiv:1609.02881.

[128] Holder, J. et al. (2008). In American Institute of Physics Conference Series (pp. 657–660). Volume 1085.

[129] Hörandel, J. R. (2003). Astroparticle Physics, 19, 193–220.

[130] Kafexhiu, E. et al. (2014). PRD, 90 , 123014.

[131] Kampert, K.-H., & Unger, M. (2012). Astroparticle Physics, 35 , 660–678.

[132] Khangulyan, D., Aharonian, F. A., & Kelner, S. R. (2014). ApJ , 783 , 100.

[133] Knoetig, M. L. et al. (2013). Proc. of the 33rd ICRC, Rio de Janeiro, Id. 695 .

[134] Kranich, D. et al. (1999). Astroparticle Physics, 12, 65–74.

[135] Krause, O. et al. (2008). Science, 320 , 1195.

[136] Krymskii, G. F. (1977). Akademiia Nauk SSSR Doklady, 234 , 1306–1308.

[137] Kumar, S., for the VERITAS Collaboration (2015). ArXiv e-prints.

[138] Lacaita, A. L. et al. (1993). IEEE Transactions on Electron Devices, 40 , 577–582.

[139] Lagage, P. O., & Cesarsky, C. J. (1983). A&A, 118 , 223–228.

[140] Lagage, P. O., & Cesarsky, C. J. (1983). A&A, 125 , 249–257.

[141] Laming, J. M., & Hwang, U. (2003). ApJ , 597 , 347–361.

[142] Lastochkin, V. A., Porfriev, V. P., et al. (1963). Radiophysica (U.S.S.R.), 6, 629.

[143] Li, T.-P., & Ma, Y.-Q. (1983). ApJ , 272 , 317–324.

[144] López-Coto, R. (2015). Very-high-energy γ-ray observations of pulsar wind nebulae and cataclysmic variable stars with MAGIC and development of trigger systems for . Ph.D. thesis, IFAE and Universitat Autonoma de Barcelona. URL: https://www.mpi-hd.mpg.de/personalhomes/rlopez/Theses/PhD_thesis.pdf.

[145] López-Coto, R. et al. (2016). Journal of Instrumentation, 11, P04005.

[146] Lorenz, E. (2004). New Astronomy Reviews, 48 , 339 – 344. 2nd VERITAS Symposium on the Astrophysics of Extragalactic Sources.

[147] Lubsandorzhiev, B. K., & Vyatchin, Y. E. (2004). ArXiv Physics e-prints.

[148] Hillas, A. M. (1985). In F. C. Jones (Ed.), Proceedings of the 19th International Cosmic Ray Conference (pp. 445–448). La Jolla, volume 3.

[149] Maeda, Y. et al. (2009). PASJ , 61 , 1217–1228.

[150] Majumdar, P. et al. (2005). (p. 203). volume 5.

[151] Malkov, M. A., & Drury, L. O. (2001). Reports on Progress in Physics, 64 , 429–481.

[152] Mayer, M. et al. (2013). ApJ , 775 , L37.

[153] Medd, W. J., & Ramana, K. V. V. (1965). ApJ , 142 , 383.

[154] Mezger, P. G. et al. (1986). A&A, 167 , 145–150.

[155] Micelotta, E. R., Dwek, E., & Slavin, J. D. (2016). A&A, 590 , A65.

[156] Mirzoyan, R., & Lorenz, E. (1997). In Proceedings of the 25th International Cosmic Ray Conference (pp. 265–268). volume 7.

[157] Morlino, G. (2016). arXiv:1611.10054.

[158] Morse, J. A. et al. (2004). ApJ , 614 , 727–736.

[159] Murphy, R. P. et al. (2016). ApJ , 831 , 148.

[160] Otte, A. N. et al. (2015). Proc. of the 34th ICRC, The Hague, Id. 1052 .

[161] Parker, E. A. (1968). MNRAS, 138 , 407.

[162] Patnaude, D. J., & Fesen, R. A. (2007). ApJ , 133 , 147–153.

[163] Patnaude, D. J., & Fesen, R. A. (2009). ApJ , 697 , 535–543.

[164] Picozza, P. et al. (2007). Astroparticle Physics, 27 , 296–315.

[165] Pierre Auger Collaboration (2004). Nuclear Physics B Proceedings Supplements, 136 , 399–406.

[166] Pierre Auger Collaboration et al. (2008). Astroparticle Physics, 29 , 243–256.

[167] Planck Collaboration et al. (2014). A&A, 571 , A28.

[168] Rando, R. et al. (2015). Proc. of the 34th ICRC, The Hague, Id. 176 .

[169] Reed, J. E. et al. (1995). ApJ , 440 , 706.

[170] Riegel, B. et al. (2005). Proc. of the 29th ICRC , 5 , 219.

[171] Rodriguez, G. (2014). Nuclear Instruments and Methods in Physics Research A, 742 , 261–264.

[172] Rolke, W. A., L´opez, A. M., & Conrad, J. (2005). Nuclear Instruments and Methods in Physics Research A, 551 , 493–503.

[173] Romero, G. E. (2004). In K. S. Cheng, & G. E. Romero (Eds.), Cosmic Gamma-Ray Sources (p. 127). Volume 304 of Astrophysics and Space Science Library.

[174] Rossi, B. (1964). Cosmic Rays. McGraw-Hill paperbacks in physics.

[175] Saha, L. et al. (2014). A&A, 563 , A88.

[176] Sanuy, A. et al. (2012). Journal of Instrumentation, 7 , C01100.

[177] Saveliev, V. (2010). Silicon Photomultiplier - New Era of Photon Detection. INTECH Open Access Publisher. URL: https://books.google.es/books?id=p1bkoAEACAAJ.

[178] SensL. Introduction to SiPM. Technical note. URL: www.sensl.com.

[179] Serpico, P. D. (2009). PRD, 79 , 021302.

[180] Shen, C. S. (1970). ApJ , 162 , L181.

[181] Sitarek, J. et al. (2013). Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 723, 109–120.

[182] Sitarek, J. et al. (2013). Nuclear Instruments and Methods in Physics Research A, 723 , 109–120.

[183] Skilling, J. (1975). MNRAS, 172 , 557–566.

[184] Skilling, J. (1975). MNRAS, 173 , 245–254.

[185] Sottile, G. et al. (2013). Nuclear Physics B Proceedings Supplements, 239 , 258– 261.

[186] Strong, A. W. et al. (2010). ApJ , 722 , L58–L63.

[187] Tameda, Y. (2009). Nuclear Physics B - Proceedings Supplements, 196 , 74 – 79. Proceedings of the XV International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2008).

[188] Tescaro, D. et al. (2009). ArXiv e-prints.

[189] The HAWC Collaboration, & The IceCube Collaboration (2017). arXiv:1708.03005.

[190] Tylka, A. J. (1989). Physical Review Letters, 63 , 840–843.

[191] Uchiyama, Y., & Aharonian, F. A. (2008). ApJ , 677 , L105.

[192] Urban, M. et al. (1996). Nuclear Instruments and Methods in Physics Research A, 368 , 503–511.

[193] Urban, M. et al. (1990). Nuclear Physics B Proceedings Supplements, 14 , 223– 236.

[194] Vink, J., & Laming, J. M. (2003). ApJ , 584 , 758–769.

[195] Wang, W., & Li, Z. (2016). ApJ , 825 , 102.

[196] Weekes, T. C. et al. (1989). ApJ , 342 , 379–395.

[197] Whitehouse, D. J. (1996). Selected papers on optical methods in surface metrology. Bellingham, Wash.: SPIE Optical Engineering Press. Includes bibliographical references and indexes.

[198] Yuan, Y. et al. (2013). ApJ , 779 , 117.

[199] Zabalza, V. (2015). In 34th International Cosmic Ray Conference (ICRC2015) (p. 922). volume 34 of International Cosmic Ray Conference. arXiv:1509.03319.

[200] Zanin, R. et al. (2013). Proc. of 33rd ICRC, Rio de Janeiro, Id. 773 .

[201] Zirakashvili, V. N. et al. (2014). ApJ , 785 , 130.