Università degli Studi di Padova

Facoltà di Scienze Matematiche, Fisiche e Naturali
Corso di Laurea Specialistica in Fisica

Indirect Dark Matter Search in the Draco Dwarf Galaxy with the MAGIC Telescope

Relatore: Prof. Mosè Mariotti

Laureando: Alessandro Venturini

Anno Accademico 2006/2007

This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 license. To view a copy of the license visit http://creativecommons.org/licenses/by-nc/3.0/ or write to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

Introduzione

Lo sviluppo della scienza occidentale moderna, nata quattro secoli fa con Galileo, ha subito negli ultimi decenni una forte accelerazione e prodotto i suoi più grandiosi risultati. I mezzi a sua disposizione sono un patrimonio di conoscenza che cresce esponenzialmente ed un'avanzatissima tecnologia. Ciononostante, quando gli scienziati contemporanei si trovano ad affrontare la questione di che cosa componga l'Universo (cui in tempi più remoti filosofi e religiosi avevano fornito, a modo loro, una risposta) la conclusione a cui essi giungono è, ironicamente: non sappiamo. Questa apparente ammissione di ignoranza, tuttavia, nasconde una delle più importanti e sorprendenti scoperte della fisica recente: ciò che comunemente chiamiamo "materia" costituisce solamente il 4% del contenuto dell'Universo, mentre al restante 96% si dà il nome di energia oscura e materia oscura (rispettivamente, 73% e 23%). L'energia oscura può essere descritta nei suoi effetti cosmologici modificando alcune delle leggi fondamentali della gravitazione (aggiungendo cioè la cosiddetta costante cosmologica Λ nelle equazioni della Relatività Generale di Einstein) e interpretata come una densità di energia costante e presente ovunque nell'Universo. L'interpretazione e la descrizione della materia oscura, con le sue forti conseguenze astrofisiche oltre che cosmologiche, sono sfide ancora aperte poste dalla Natura.

Dall'astrofisica alla cosmologia, arrivando a toccare l'avanguardia delle più fondamentali teorie fisiche, la questione della materia oscura ha stimolato la fantasia dei fisici (ed è costata molto duro lavoro) per quasi un secolo. Nessun'altra idea della fisica contemporanea ha manifestato una presenza così forte in campi tanto diversi quanto la dinamica delle galassie e lo studio delle simmetrie di base a cui obbediscono le particelle elementari. Per questa e per le altre ragioni sopra espresse, la materia oscura è stata giustamente considerata una delle rivoluzioni scientifiche del XX secolo. Il modesto obiettivo di questa tesi è di rappresentare un piccolo mattone nella costruzione di una comprensione più profonda di questo complicato puzzle. La scoperta e i primi studi sulla materia oscura erano basati sul fatto che questa sostanza è visibile solo attraverso le sue interazioni gravitazionali con la materia "brillante", cioè la materia ordinaria. Le più recenti teorie riguardo alla sua caratterizzazione come particella, sebbene non ancora ben definite, stabiliscono però la possibilità che fotoni di alta energia (raggi gamma) siano emessi attraverso processi di auto-annichilazione. L'osservazione di tali fotoni permetterebbe di discriminare tra alcune ipotesi alla base della descrizione particellare della materia oscura. Il lavoro di tesi svolto ha riguardato un tentativo sperimentale di rivelare questo tipo di segnale da una concentrazione di materia oscura attorno alla galassia nana Draco, in cui il rapporto tra materia visibile e materia oscura è stato stimato essere di 1 a 200.

Il presente elaborato è organizzato come segue. Nel primo capitolo sono delineati i principali elementi della ricerca attuale sulla materia oscura e le motivazioni per l'osservazione di Draco, consistenti principalmente nell'annuncio di una scoperta positiva fatta dall'esperimento CACTUS, la cui dubbia affidabilità tuttavia ha richiesto una conferma o una smentita da parte di un secondo strumento.

Nel secondo capitolo è spiegato come sia possibile osservare i raggi gamma con un Telescopio Atmosferico Čerenkov a Immagini (IACT), uno speciale tipo di telescopio progettato per guardare il cielo nella banda delle altissime energie. Tale tipo di strumento utilizza l'atmosfera terrestre, opaca ai fotoni gamma, come calorimetro, raccogliendo la luce Čerenkov prodotta dallo sciame di particelle che eredita energia e direzione del fotone incidente. In particolare sono descritte le caratteristiche dello strumento effettivamente usato per l'osservazione, il Telescopio MAGIC (Major Atmospheric Gamma Imaging Čerenkov). Situato sull'isola spagnola di La Palma e gestito da una collaborazione europea, esso è tra i quattro maggiori telescopi Čerenkov attualmente operativi e quello con la minore soglia in energia, particolarità che lo rende adatto alla ricerca indiretta di materia oscura.

L'analisi dei dati di un apparato complesso come uno IACT è un compito piuttosto difficile, specialmente se l'oggetto osservato ha delle caratteristiche peculiari e i dati non sono registrati in condizioni ottimali, come accade nel caso in oggetto. Nel terzo capitolo è illustrata la catena di analisi standard per MAGIC e la sua versione adattata allo studio di Draco è applicata su un insieme di dati della Nebulosa del Granchio, un resto di supernova che rappresenta la candela standard per l'astronomia gamma. Questa sorgente, le cui caratteristiche sono ben conosciute, è usata per testare la bontà della catena di analisi che verrà in seguito applicata ad un oggetto ignoto.

Il campione di dati di Draco da analizzare, sfortunatamente, risente di un problema hardware presente al momento della presa dati, il quale rende particolarmente inomogenea l'accettanza della camera del telescopio. Per questo motivo il quarto capitolo tratta di uno studio originale condotto su dati di fondo, il cui scopo è migliorare la qualità dell'analisi nel caso che la camera del telescopio risenta di disomogeneità. Come risultato si ottengono la stima dell'errore sistematico dovuto a tali effetti ed alcuni suggerimenti su come recuperare in parte la qualità dei dati. Tali suggerimenti sono applicati nel quinto capitolo, che presenta l'analisi e i risultati dell'osservazione della galassia Draco.

Da tale oggetto non è stato riscontrato alcun segnale gamma di energia superiore alla soglia dell'analisi (80 GeV) ed è stato perciò possibile calcolare solo i limiti superiori al flusso di fotoni osservato. Tali limiti, benché troppo alti per potervi estrarre informazioni sulla natura particellare della materia oscura, non sono tuttavia compatibili con il segnale rivelato da CACTUS. Una discussione su tali risultati e sulle possibilità sperimentali di osservazione indiretta di materia oscura sono presentate nel sesto capitolo.

L'autore, nell'ambito del lavoro di tesi, ha avuto l'opportunità di effettuare un turno di osservazione presso le strutture dell'Instituto de Astrofísica de Canarias come operatore del telescopio, e di presentare parte dei risultati ottenuti al MAGIC Collaboration Meeting tenutosi a Sofia nel maggio 2007.

Introduction

The development of modern western science, started four centuries ago with Galileo, has undergone a rapid acceleration and brought its greatest achievements in the last few decades. Its means are an exponentially growing body of knowledge joined to amazingly advanced technology. Nevertheless, when contemporary scientists are faced with the question of what the Universe is made of (to which, in the past, philosophers and religious thinkers gave their own answers) the conclusion they reach is, ironically: we don't know. This apparent admission of ignorance, however, hides one of the most important and surprising discoveries of recent physics: what we normally call "matter" makes up only 4% of the content of the Universe, the other 96% being referred to as dark energy and dark matter (73% and 23% respectively). Dark energy can be described in its cosmological effects by modifying some of the fundamental laws of gravitation (i.e. adding the so-called cosmological constant Λ to Einstein's equations of General Relativity) and interpreted as a constant energy density present everywhere in the Universe. The interpretation and description of dark matter, with its strong astrophysical as well as cosmological consequences, are still open challenges set by Nature.

From astrophysics to cosmology, touching the avant-garde of the most fundamental physical theories, the dark matter issue has excited the imagination of physicists (and has cost lots of hard work) for almost a century. No other idea in contemporary physics has shown such a strong presence in fields as different as galaxy dynamics and the study of the ultimate symmetries which elementary particles obey. For this and the above reasons, dark matter has justly been considered one of the scientific revolutions of the 20th century. This thesis modestly aims to be a small brick in the construction of a deeper understanding of such an intricate puzzle. The discovery and the first studies of dark matter were based on the fact that this kind of substance is visible only through its gravitational interaction with "bright", i.e. ordinary, matter. The latest theories about its characterization as a particle, although not very definite yet, allow the possibility that high-energy photons (gamma-rays) are emitted through self-annihilation processes. The observation of such photons would allow discrimination among some of the hypotheses underlying the particle description of dark matter. This thesis work deals with an experimental attempt to detect this kind of signal from a dark matter concentration around the Draco dwarf galaxy, where the ratio between visible and dark matter has been estimated to be about 1:200.

This thesis is organized as follows. In the first chapter the key elements of current dark matter research are outlined, together with the motivation for observing Draco, which consists essentially in an earlier claim of positive detection by the CACTUS experiment.

Some doubts on the reliability of those results, though, called for a second instrument to confirm or deny the discovery. The second chapter explains how to detect gamma-rays by means of a so-called Imaging Atmospheric Čerenkov Telescope (IACT), a special kind of telescope designed to look at the very-high-energy sky. This kind of instrument uses the atmosphere, which is opaque to gamma-rays, as a calorimeter, collecting the Čerenkov light produced by the particle air shower, which inherits energy and direction from the incident photon. In particular, the characteristics of the instrument actually used for the observation, the MAGIC Telescope (Major Atmospheric Gamma Imaging Čerenkov), are described. Located on the Spanish Canary island of La Palma and operated by a large European collaboration, it is among the four biggest currently working Čerenkov telescopes and the one with the lowest energy threshold, a feature that makes it well suited to indirect dark matter searches. Analyzing data from a complex apparatus like an IACT is a rather demanding task, especially if the observed target is somewhat peculiar and the data are not taken in optimal conditions, as in the present case. In the third chapter the standard analysis chain for MAGIC is presented, and a version adapted to the Draco study is applied to a sample of data from the Crab Nebula, a supernova remnant which represents the standard candle of gamma-ray astronomy. This source, whose characteristics are well known, is used to test the analysis chain which will later be applied to an unknown target. The Draco dataset to be analyzed, unfortunately, is affected by a hardware issue present during data taking that makes the telescope camera acceptance inhomogeneous. For this reason the fourth chapter deals with an original study made on background data, whose purpose is to improve the quality of the analysis in case severe inhomogeneities affect the telescope camera. As a result, we obtain an estimate of the systematic error due to such effects and some suggestions for partially recovering the data quality. These suggestions are applied in the fifth chapter, which presents the analysis and the results of the observation of the Draco galaxy. From this object no gamma-ray signal is found at energies above the analysis threshold (80 GeV), and therefore only upper limits on the photon flux are calculated. Such upper limits, although not stringent enough to put any constraint on the features of the dark matter particle, do not match the signal claimed by CACTUS. A final discussion of those results and of the experimental possibilities for an indirect dark matter observation is contained in the sixth chapter.

Within this thesis work, the author had the opportunity to take an observation shift at the premises of the Instituto de Astrofísica de Canarias as a telescope operator, and to present part of his findings at the MAGIC Collaboration Meeting held in Sofia in May 2007.

Au lieu de s'étonner de l'existence superflue d'un autre monde, c'est notre seul monde, où les coïncidences nous surprennent, qu'il ne faut pas perdre de vue.

René Magritte

Contents

1 The Dark Matter Challenge
  1.1 Dark Evidence from Outer Space
  1.2 Particle Candidates
  1.3 Dark Matter Search Strategies and Possibilities
    1.3.1 Direct Detection
    1.3.2 Indirect Detection
  1.4 The Draco Dwarf Spheroidal Galaxy

2 Telescopes for Cosmic Rays. IACTs and MAGIC
  2.1 Cosmic Rays
    2.1.1 Atmospheric Showers
  2.2 Čerenkov Light Collection: the Imaging Technique
    2.2.1 The Čerenkov Effect
    2.2.2 Čerenkov Light from EAS
    2.2.3 The Imaging Technique
  2.3 The MAGIC Telescope
    2.3.1 Mirrors
    2.3.2 Camera
    2.3.3 Readout and Trigger

3 Data Analysis Chain Tested on the Crab Nebula
  3.1 The Crab Nebula
  3.2 Backgrounds
  3.3 Observation Modes
  3.4 Run Classification
  3.5 Monte Carlo Simulation
  3.6 Data Analysis
    3.6.1 Calibration
    3.6.2 Image Cleaning and Parametrization
    3.6.3 Cuts
    3.6.4 Statistical Photon/Hadron Separation
    3.6.5 Signal Detection and Other Physical Results
    3.6.6 Statistic and Systematic Uncertainties Estimation

4 Camera Inhomogeneity at Low Energies: Study of Off Data in Wobble Mode
  4.1 The Camera Inhomogeneity
  4.2 Analysis of an Off Sample as Wobble Data
    4.2.1 Dataset Preparation
    4.2.2 Test on Different Ways to Extract On and Off Histograms
    4.2.3 Test on the Arc-Length Spanned by the Nominal Source Position in the Wobble Circle
    4.2.4 Test on Different Image Cleanings
    4.2.5 Equalization
  4.3 Summary of results and recipes

5 Analysis of the Draco Source
  5.1 Data Samples
  5.2 Analysis Chain
  5.3 Coping With the Inhomogeneity
  5.4 Results

6 Conclusions
  6.1 Discussion of the Results from the Off Sample
  6.2 Discussion of the Results from Draco

A Celestial Coordinate Systems

B Image Parameters

C Off Data Extraction Methods in Wobble Mode


Chapter 1

The Dark Matter Challenge

The first section of this thesis aims to provide the introduction to the theoretical framework and the scientific motivation for the work that follows. The nature of dark matter has been one of the most challenging topics in contemporary physics since the first evidence of its existence was found in the 1930s. Cosmologists and astrophysicists on one side, and particle theorists on the other, have put a lot of effort into this field: we will briefly account for their achievements and for the experimental strategies which can be pursued in this scenario. At the end of the chapter the Draco dwarf galaxy, the main subject of this work, is presented.

1.1 Dark Evidence from Outer Space

The first interest in a kind of matter which does not emit radiation ("dark") and thus can be observed only through its gravitational interactions dates back to Öpik's 1915 studies of the dynamical matter density in the Solar vicinity. His research, though, found nothing anomalous. The modern meaning of dark matter shows up for the first time in Zwicky's 1933 work on the dynamics of galaxies in the Coma cluster [37, 33]. Since then, astrophysical evidence of the presence of some "mass excess" with respect to the visible part has accumulated over time, while cosmologists and particle physicists have attempted to fit the observations into a theoretical frame. The cold dark matter scenario was eventually established as a likely cosmological explanation [10], while on the particle side the debate (which is accounted for in the next section) is still wide open. Here we outline the main dark matter evidence from an observational point of view [8].

Evidence from Galaxies

The most direct and clear evidence for dark matter comes from the observation of the rotation curves of galaxies, i.e. the plot of star and gas circular velocities versus their distance from the galactic center. A simple Newtonian approach gives

    v(r) = √(G M(r) / r),   with   M(r) = 4π ∫₀ʳ ρ(r′) r′² dr′.

If the matter density were given only by the visible mass, one would expect the velocity to fall as 1/√r outside the galactic disk. Experimental data show, instead, that the velocity stays roughly constant even far beyond the visible disk, thus proving the presence of a dark matter halo with density ρ(r) ∝ 1/r². The density profile of the innermost part of the spherical halo is still unknown, with N-body simulations favoring from time to time one or the other hypothesis.
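To make the scaling explicit, here is the short derivation implied by the formulas above (standard Newtonian reasoning, not specific to this thesis): a flat rotation curve v(r) = v₀ = const gives

    M(r) = v₀² r / G,   so   ρ(r) = (1 / 4πr²) dM/dr = v₀² / (4πG r²) ∝ 1/r².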

Figure 1.1: Rotation curve of the NGC 6503 galaxy.

More refined experiments have, in recent times, allowed the observation of dark matter structures through their gravitational lensing effect. General relativity in fact states that compact gravitational bodies bend nearby photon paths (namely, they curve spacetime geodesics) and therefore act as lenses for light sources behind them along the line of sight. This technique has been used especially for elliptical galaxies.

Evidence from Galaxy Clusters

The first dark matter hint came from a much larger scale than that of a single galaxy, the Coma galaxy cluster. Zwicky measured the velocity dispersion of the galaxies in the cluster and found the total mass M by applying the virial theorem. He then estimated the visible mass from the total luminosity L of the galaxies, obtaining the striking result M/L ∼ 400. Further studies on other clusters confirmed that these structures are utterly dominated by dark matter, since the typical mass-to-light ratio¹ lies in the range 200–300. The same method applied to the dwarf galaxies in the Galactic halo (such as Draco) gave similar results, providing the nearest evidence of dark matter.

¹ The ratio of gravitational mass to luminosity of an object is conventionally called the mass-to-light ratio (M/L).
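For orientation, the kind of virial estimate mentioned above can be sketched as follows (a textbook order-of-magnitude form; Zwicky's actual analysis was more detailed):

    2T + U ≈ 0   ⇒   M ⟨v²⟩ ≈ G M² / R   ⇒   M ≈ ⟨v²⟩ R / G ∼ 3σ_v² R / G,

so a measured velocity dispersion σ_v and a cluster radius R give the dynamical mass, to be compared with the luminous mass inferred from L.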

Evidence from Global Structures

The amount of dark matter present on cosmological scales has recently been brilliantly measured by two collaborations, WMAP and SDSS. The study of the temperature anisotropies of the Cosmic Microwave Background performed by the WMAP experiment yielded the following results [?], respectively for the baryon and the total matter relative densities²:

    Ω_b h² = 0.0224 ± 0.0009 ;   Ω_m h² = 0.135 (+0.008 / −0.009),

that is to say, roughly speaking, that ordinary matter ("baryons") accounts for only about 1/6 of the total matter density in the Universe, the other 5/6 being ascribable to dark matter. The Sloan Digital Sky Survey obtained results in good agreement through the analysis of the 3D spatial distribution of some 200 000 galaxies.
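The quoted "1/6" follows directly from the two numbers above:

    Ω_b / Ω_m = (Ω_b h²) / (Ω_m h²) = 0.0224 / 0.135 ≈ 0.17 ≈ 1/6.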

1.2 Particle Candidates

The undeniable existence of some kind of substance whose visible effects are, as far as we know, only gravitational in nature immediately raised the question of its particle composition. Any dark matter particle candidate must exhibit the following observed properties [5]:

1. It must have extremely weak or no electromagnetic and strong interactions. As a consequence, dark matter cannot cool by radiating photons and thus, unlike baryons, does not collapse to the center of galaxies. In other words, one could say that dark matter is very nearly dissipationless.

2. It must be collisionless, at least in the limit of low densities and large scales. Ellipsoidal halos observed in galaxy clusters would otherwise have a spherical shape.

3. Assuming it is a thermal component of the early Universe, it must be sufficiently cold (non-relativistic) at the epoch of its decoupling from the other thermal species. Simulations show that a hot dark matter hypothesis leads to the forma- tion of large scale structures before the small scale ones (top-down formation), which is in contrast with the astrophysical data.

4. It must account for the measured density Ω_m.

These conditions alone rule out the idea that dark matter halos are made just of very cold ordinary matter like gas or dust. It is clear, then, that some brand new kind of particle, not yet observed, has to provide these characteristics. The generic name Weakly Interacting Massive Particle (WIMP) is used when referring to such an object without specifying its identity. A wealth of WIMP candidates has been proposed, from Standard Model neutrinos to far more exotic ones. In fact, while the astrophysical and cosmological constraints are more or less definite, the great uncertainties about which direction to take in order to go beyond the limits of the

² By definition, Ω_i = ρ_i/ρ_c, where the ρ_i are the energy densities of the components of the Universe (radiation, matter, dark energy, curvature) and ρ_c is the critical density that makes the Universe flat in a Robertson-Walker metric. h = 0.71 (+0.04 / −0.03) is the scaled Hubble constant. See Coles, Lucchin [13].

Standard Model of particle physics leave plenty of room for imagination. We will therefore focus our attention on a very limited number of dark matter candidates, aware that any such choice is somewhat arbitrary, and that the great favor enjoyed today by the neutralino and other supersymmetric particles could vanish suddenly if tomorrow a new big discovery points elsewhere.

Neutrino. Ever since flavor oscillations in both solar and atmospheric neutrinos indicated that they have non-zero masses, many people have regarded the neutrino as the dark matter particle. The known neutrino is, indeed, an actual WIMP, and should therefore be considered a dark matter component. The initial enthusiasm, though, was soon cooled down, first by β-decay experiments and then by the CMB anisotropy measurements, which pushed the neutrino density down to Ω_ν h² < 0.07 and Ω_ν h² < 0.0067, respectively. Far from being the dominant dark matter fraction, the neutrino, with its extremely light mass, is a hot thermal component, and is thus disfavored by the evidence on structure formation. Other kinds of as yet unobserved neutrinos, like the right-handed partners of the standard ones or a further sterile neutrino, could however be massive enough and were thus proposed as possible candidates.

Axion. First introduced in an attempt to resolve the strong CP violation problem, the pseudoscalar boson named the axion shows WIMP characteristics if its mass is O(meV). Axion production mechanisms imply its non-thermal nature in the early Universe, thus making the relic density calculation troublesome and uncertain. Nevertheless, it is possible to find conditions under which the axion represents a viable dark matter particle.

WIMPzilla. Rejecting the hypothesis that dark matter particles were in thermal equilibrium with the other species before decoupling allows one to think of superheavy (10¹⁰–10¹⁶ GeV) candidates dubbed WIMPzillas. The strongest motivation for these "monsters" actually comes from cosmic ray physics, where the decay or annihilation of WIMPzillas is used to explain the observation of cosmic rays at energies above the GZK cutoff. This is a theoretical limit on the ability of ultra-high-energy protons to travel long distances in the Universe, since at ∼5×10¹⁰ GeV protons interact with CMB photons at resonance with the Δ hadron (m_Δ = 1.232 GeV).
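A rough estimate of the cutoff scale from the numbers just quoted (an illustrative back-of-the-envelope calculation, not taken from this thesis): for a head-on collision of a proton with a CMB photon of energy E_γ ≈ 10⁻³ eV, reaching the Δ resonance requires

    (p_p + p_γ)² ≈ m_Δ²   ⇒   E_p ≳ (m_Δ² − m_p²) / (4 E_γ) ≈ (1.232² − 0.938²) GeV² / (4×10⁻¹² GeV) ≈ 1.6×10¹¹ GeV,

the same order of magnitude as the quoted cutoff; folding in the full CMB spectrum and the Δ width moves the effective cutoff somewhat lower.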

Kaluza-Klein particle. Much of the current research toward a Grand Unification Theory (one describing all known interactions, including gravity) postulates the existence of hidden additional space dimensions, accessible only at very small length or very high energy scales. In this framework, excitations of Standard Model states along the orthogonal dimensions, called Kaluza-Klein excitations, can act as WIMPs. The most favored candidate for a stable lightest Kaluza-Klein particle (LKP) is the first excitation of the U(1)_Y gauge boson B, usually referred to as B⁽¹⁾. The mass range suitable for B⁽¹⁾ to account for the observed quantity of dark matter is 0.4–1.2 TeV.

Supersymmetric Candidates

One of the most trodden roads to solving the Standard Model's inconsistencies is to enlarge the SM gauge group to a new symmetry group, named supersymmetry (SUSY), where bosons and fermions are coupled in common multiplets. Every known particle is then provided with a superpartner sharing the same quantum numbers, except for spin, which differs by ℏ/2. Since no bosons with the same charge and mass as the electron, nor any other superpartners, have ever been observed, it is clear that SUSY is broken in the low-energy world in which we live, and that supersymmetric particles must have masses above the current lower bounds (∼100 GeV). This scenario, though, presents a severe flaw, since it allows a supersymmetric particle to mediate a qq → l̃q̃ process, thus providing an efficient channel for proton decay; current experimental limits on the proton lifetime are on the order of 10³³ years. It has therefore been proposed to add to the SUSY theory a discrete symmetry, R-parity, to distinguish ordinary particles (R = +1) from superpartners (R = −1). If R-parity holds with broken SUSY, supersymmetric particles can decay only into an odd number of superpartners, plus ordinary particles, thus preventing the proton from decaying and making the lightest supersymmetric particle (LSP) stable. Supersymmetry can be realized in a number of more or less complex ways. We will consider here the so-called minimal supersymmetric extension of the Standard Model (MSSM), that is, the one which contains the smallest possible number of fields that correctly gives rise to the Standard Model when the symmetry is broken. Without going into details, the MSSM population includes:

• all ordinary quarks with their spin-0 superpartners (squarks q̃);

• all known leptons and their bosonic counterparts (sleptons l̃);

• all gauge bosons (gluons, W^i, B) and their fermion partners (respectively gluinos, Winos and the Bino), commonly named gauginos;

• the standard Higgs boson, an additional Higgs doublet with opposite hypercharge, and a couple of spin-1/2 Higgsinos.

When considering mass eigenstates, the electroweak gauginos mix into eight different states: the charged parts of the Winos and Higgsinos appear as two pairs of charginos (χ_1^±, χ_2^±), while the Bino, the third Wino and the neutral states of the Higgsinos form four neutralinos (χ_1^0, χ_2^0, χ_3^0, χ_4^0).

Sneutrino. The supersymmetric partner of the standard neutrino would represent an interesting dark matter candidate if its mass were between 0.5 and 2.3 TeV. Such a particle, however, has a rather large cross section for scattering on nucleons, and hence should already have been observed by direct detection experiments (see section 1.3.1 below).

Lightest neutralino. The lightest of the four neutralinos, χ_1^0, usually referred to simply as the neutralino (χ), is at present almost universally regarded as the most viable WIMP candidate. The reason for this favor can be summarized by the fact that the features of this particle, developed entirely within a particle physics framework, fit the astrophysical constraints on dark matter very well, without the need for any ad hoc hypotheses. Its mass can span from the electroweak scale (∼100 GeV) up to several TeV. Being the lightest supersymmetric particle (LSP), it is absolutely stable, since R-parity conservation forbids its decay; it can only self-annihilate. The neutralino thus has a quite low annihilation rate and is heavy enough to represent a good cold dark matter candidate. At low velocities, the leading channels are annihilations into fermion-antifermion pairs (mainly bb̄, tt̄, cc̄ and τ⁺τ⁻) and gauge boson pairs (W⁺W⁻ and Z⁰Z⁰). It should be stressed that final states containing one or two photons appear only at the one-loop level. Gamma-rays from neutralino self-annihilation, then, are most likely produced indirectly: χχ → ff̄ → π⁰ + … and then π⁰ → γγ.

The last two candidates we deal with are superpartners of particles introduced in extensions of the Standard Model, and hence are not present in the MSSM.

Axino and gravitino. The spin-1/2 counterpart of the axion, the axino, and the spin-3/2 gravitino, superpartner of the (unseen) graviton, the gauge boson that mediates the gravitational interaction in some quantum gravity theories, show a similar phenomenology as WIMP candidates. Depending on the SUSY model adopted and on the early Universe conditions, the axino or the gravitino can be the LSP, though their lightness makes them rather "warm" dark matter candidates.

1.3 Dark Matter Search Strategies and Possibilities

Aside from the astrophysical evidence for the existence of dark matter, experiments have been designed and carried out to detect dark matter through some non-gravitational process. The final purpose is to obtain information on the particle's features so as to narrow down the candidate zoo. Up to now, no reliable positive results have been published; nevertheless, large regions of the parameter space (usually summarized in a cross section vs. mass plot) have been probed. Here we deal with the main detection strategies exploited so far.

1.3.1 Direct Detection

An experimental technique which looks for WIMP scattering on nuclei is traditionally called direct detection, although it is not more direct than any other technique³. Direct detection experiments are based on the assumption that dark matter is omnipresent and has a non-zero cross section for scattering on heavy nuclei. Elastic scattering events should therefore be detectable through the observation of the nuclear recoil, while inelastic scattering would leave the nucleus in an excited state and thus provide delayed photon emission as a signature. The recoil energies, on the order of 10 keV, are measured as photons or phonons by solid state detectors or by scintillation in noble liquids. Such small energy scales oblige direct detection experiments to be performed underground, in order to be shielded

from cosmic rays, which would be an overwhelming background. Moreover, a great effort is made against backgrounds, trying to reduce natural radioactivity sources in the surrounding environment and in the materials employed for the construction of the experiment. The DAMA Collaboration claimed the discovery of an annual modulation in the number of scattering events, explained by the change of the Earth's velocity with respect to the Galaxy. Other experiments such as EDELWEISS and CDMS, though, did not confirm this result.

³ While speaking at a conference, Feynman was questioned by a person in the audience about how he could believe the results of a very complex experiment to definitely prove the existence of quarks. Feynman then replied: "Do you believe the Pope to exist?" "Of course, I've seen him on TV." (R. P. Feynman, The Pleasure of Finding Things Out, 1999)
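The ∼10 keV scale quoted above follows from simple two-body kinematics (an illustrative estimate, not a result from this thesis): for a WIMP of mass m_χ scattering elastically off a nucleus of mass m_N with relative velocity v,

    E_R(max) = 2 μ² v² / m_N,   with   μ = m_χ m_N / (m_χ + m_N);

taking m_χ = 100 GeV, a germanium nucleus (m_N ≈ 68 GeV) and v ≈ 220 km/s ≈ 7×10⁻⁴ c gives E_R(max) ≈ 25 keV, so typical recoils indeed lie at the 10 keV level.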

1.3.2 Indirect Detection

When dark matter in astrophysical bodies is observed through its self-annihilation products, one speaks of indirect detection. Detectable products are positrons, anti-protons, neutrinos and gamma-rays. Since this thesis work was carried out with a gamma-ray telescope, we will focus only on the latter. Monochromatic gamma-rays from χχ → γγ or χχ → Z⁰γ would be a perfect "smoking gun" if observed, but as explained above these processes are loop-suppressed and so is the photon yield. A continuous spectrum from secondary pion decays is thus easier to observe, as shown in figure 1.2.

Figure 1.2: Simulated integrated gamma-ray flux from neutralino self-annihilation for different values of the SUSY parameters (SPS benchmarks 1a–5, from [3]).

The expected flux takes the form

    Φ(ψ, E) = ⟨σv⟩ / (8π m_χ²) · dN_γ/dE · (1/ΔΩ) ∫_ΔΩ B(Ω) dΩ · ∫_los ρ²(ψ, s) ds,

where one can distinguish three different factors:

• the first contains particle physics quantities, such as the thermally averaged self-annihilation cross section ⟨σv⟩, the WIMP mass m_χ and the photon yield per unit energy dN_γ/dE;

• the second is the telescope-related factor, and integrates the detector angular resolution (see page 30) B(Ω) over the angular acceptance ∆Ω;

• the last factor gives astrophysical information, namely the dark matter density ρ integrated over the line of sight (s coordinate); ψ is the angle between the line of sight and the center of the dark matter distribution. A toy numerical illustration of how the factors combine is given right after this list.
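The following minimal sketch (Python) shows how the three factors combine numerically. All inputs are placeholders chosen only for illustration: the particle-physics values, the toy NFW-like density profile and the distance are hypothetical, and the telescope beam integral is simply taken as 1; none of them are the benchmarks actually used in this thesis.

    import numpy as np

    # particle-physics factor (illustrative placeholder values)
    sigma_v = 3e-26                      # cm^3 s^-1, thermally averaged cross section
    m_chi_g = 100.0 * 1.783e-24          # 100 GeV WIMP mass converted to grams
    dN_dE = 1.0                          # photons per annihilation per energy bin (toy)

    # astrophysical factor: line-of-sight integral of rho^2 for a toy NFW-like halo
    def rho(r_kpc, rho0=0.4e-24, rs=1.0):        # density in g cm^-3, scale radius in kpc
        x = r_kpc / rs
        return rho0 / (x * (1 + x) ** 2)

    d_kpc, kpc_cm = 80.0, 3.086e21               # toy distance to the halo
    s = np.linspace(d_kpc - 5, d_kpc + 5, 2001)  # integrate 10 kpc around the centre
    r = np.abs(s - d_kpc) + 1e-3                 # radius from halo centre along the l.o.s.
    J_los = np.trapz(rho(r) ** 2, s * kpc_cm)    # g^2 cm^-5

    beam_factor = 1.0                            # assume the PSF integral over ΔΩ is ~1

    flux = sigma_v / (8 * np.pi * m_chi_g ** 2) * dN_dE * beam_factor * J_los
    print(f"toy integrated flux ~ {flux:.2e} photons cm^-2 s^-1")

The point of the sketch is the factorization itself: the particle-physics term and the astrophysical line-of-sight term can be computed independently and multiplied at the end.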

The dependence of the flux on the density squared is a strong motivation for observing objects where dark matter presumably concentrates. Although there is no unanimously accepted model for ρ(r) yet, many authors use a power-law density profile with some exponent, and possibly some higher-order corrections. In the gamma-ray field of dark matter searches, the most studied objects with likely high dark matter densities are the Galactic Center, intermediate mass black holes (IMBHs) and the dwarf galaxies. Dark matter is certainly abundant in the Galactic Center [23], and if a supermassive black hole is also present it could enhance the dark matter density, although the gravitational interaction with baryons could have disrupted any small-scale dark matter concentration. Detection from the Galactic Center is nevertheless problematic, since a great number of unknown local or foreground gamma-ray sources could mask or fake a dark matter annihilation signal. IMBHs [36] are relatively small objects (10²–10⁶ M_⊙) which can nevertheless provide dark matter density spikes and thus be observable. Aside from the uncertainties about the ρ distribution, this kind of black hole is difficult to identify. Finally, dwarf galaxies are found in the halos of major galaxies and show particularly high mass-to-light ratios, making them good observation targets. One of these objects, called Draco, is the subject of this thesis and is dealt with in detail in the next section.

The instruments used to observe the annihilation signal from such objects are gamma-ray telescopes. These apparatuses, though, are designed to study other kinds of astrophysical gamma-ray sources, such as supernova remnants, active galactic nuclei (AGN), microquasars, pulsars and gamma-ray bursts (GRB). Dark matter-oriented observations, therefore, often have to compete with other proposals in order to gain their place in the scientific programs of those experiments. Cosmic gamma-rays can be directly observed only from space, i.e. by satellite-borne detectors. The advantage of direct photon detection is offset by the strict limitations in size imposed by current space-flight technology. The small size places an effective upper bound on the energy of detectable photons, for the highest-energy emission is power-law-suppressed and TeV photon events are therefore recorded too rarely to provide a significant detection. Nevertheless the Energetic Gamma-Ray Experiment Telescope (EGRET), launched in 1991, managed to build up a large catalog of sources in the 20 MeV – 30 GeV range, more than half of which, however, still remain unidentified. Currently, two largely improved instruments are about to start operations, namely GRID, carried by the European Space Agency AGILE satellite (launched in April 2007), and LAT, on board the NASA GLAST mission, which is to be launched within 2008 and is expected to achieve great scientific goals. The LAT energy range, sensitivity and precision in source location determination are tenfold better than EGRET's [31]. Space-based instruments are well suited to comprehensive sky surveys, since their high velocities allow them to see the whole sky in a few hours. Ground-based telescopes cannot observe gamma photons directly, since the Earth atmosphere is opaque to such high-energy particles. These instruments therefore collect secondary products of the gamma-ray interactions with the atmosphere, such as the Čerenkov light emitted by cosmic ray showers. For example MILAGRO, in the USA, is a water Čerenkov detector equipped with 723 photomultipliers, sensitive to gamma-rays with energies above 100 GeV. ARGO, at 4300 m a.s.l. on the Tibet plateau, is a 6700 m² array of resistive plate counters designed to study cosmic rays through the analysis of small air showers [4] (see the next chapter). The most widespread type of gamma-ray instrument is the Imaging Atmospheric Čerenkov Telescope (IACT), whose working principle is explained in detail in Section 2.2. Among the so-called first generation of IACTs one can recall the Whipple 10 m telescope in Arizona, USA, and the HEGRA 5×5 m array on the island of La Palma, Spain. Ground-based instruments do not suffer from size limitations, so the second generation was built with larger mirrors and an increased number of telescopes. Among the currently working ones we cite the "Big Four": HESS (4×12 m) in Namibia; CANGAROO (4×10 m) in Australia; the VERITAS array, which took Whipple's place; and HEGRA's successor MAGIC, which is presently the largest single-dish working IACT. Two third-generation arrays (∼100 single telescopes), one in the Northern hemisphere and the other in the Southern one (Čerenkov Telescope Array, CTA), are in the design phase. A different kind of gamma-ray telescope, distinct from the IACTs, is the converted solar power plant, such as the CACTUS experiment in California, USA.
The presence of the atmosphere raises the lower energy threshold of ground-based instruments to ∼100 GeV, but their huge effective collection areas allow them to detect signals of several TeV. Unlike satellite telescopes, ground-based ones are designed to point at and track single sources, making them somewhat complementary to space-based instruments. The field of view, though, is necessarily limited by the environment (mountains, buildings) and the season; moreover, ground-based telescopes operate only at night and only in good weather.

1.4 The Draco Dwarf Spheroidal Galaxy

The term dwarf spheroidal (dSph) is applied to a small number of low-luminosity galaxies, companions of the Milky Way, also observed around the Andromeda and Triangulum galaxies. Dwarf spheroidal galaxies, like galaxy clusters, seem to contain large amounts of dark matter, since typical M/L ratios are about 200. In particular, MAGIC observed a dSph in the Draco constellation, hence named the Draco dwarf spheroidal galaxy, whose main astrophysical features are listed in table 1.1.

RA: 17h 20m 12.4s
Dec: +57° 54′ 55″
mass: (44 ± 24) × 10⁶ M_⊙
luminosity: (1.8 ± 0.8) × 10⁵ L_⊙
M/L: 245 ± 155 M_⊙/L_⊙
distance: 83 ± 3 kpc

Table 1.1: Characteristics of the Draco dSph [6, 24].

For Draco, two dark matter density profiles are taken into account, according to observational data, both of the form of an exponentially cutoff power law:

    ρ_dm(r) = C r^(−α) exp(−r / r_b).

The so-called cusp profile has C = 3.1 × 10⁷ M_⊙ kpc⁻², r_b = 1.189 kpc and α = 1; the core profile features C = 3.6 × 10⁸ M_⊙ kpc⁻³, r_b = 0.238 kpc and α = 0 [29]. Given the MAGIC PSF and angular resolution, the two models are indistinguishable for observations with ψ up to 0.4°. Simulations of neutralino self-annihilation with these density profiles and different values of the SUSY parameters (see figure 1.3) show that the leading annihilation channel is τ⁺τ⁻, but the averaged cross section remains well below the MAGIC sensitivity for a 5σ, 50-hour observation.
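A small sketch of the two parametrizations, useful for checking numbers (Python; the function is simply the power law with exponential cutoff quoted above, with the parameters from [29]):

    import math

    def rho_draco(r_kpc, profile="cusp"):
        """Draco dark matter density in Msun per kpc^3,
        rho(r) = C * r**(-alpha) * exp(-r/r_b)."""
        if profile == "cusp":
            C, r_b, alpha = 3.1e7, 1.189, 1.0   # C in Msun kpc^-2 (alpha = 1)
        else:  # "core"
            C, r_b, alpha = 3.6e8, 0.238, 0.0   # C in Msun kpc^-3 (alpha = 0)
        return C * r_kpc ** (-alpha) * math.exp(-r_kpc / r_b)

    for r in (0.1, 0.5, 1.0):                   # sample radii in kpc
        print(r, rho_draco(r, "cusp"), rho_draco(r, "core"))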

Figure 1.3: Predicted cross sections for different SUSY parameters: m0 < 6 TeV, m1/2 < 4 TeV, −4 < A0 < 4 TeV, tan β < 50. The two overlapping lines plotted above represent MAGIC sensitivity for cusp and core profiles.

The involvement of the MAGIC Telescope in this search was triggered by an earlier claim by the CACTUS Solar Facility, which in 2005 announced the detection of some 30 000 excess events from that source. The signal was found in a 7-hour observation in the energy range 50–150 GeV. As the working IACT with the lowest energy threshold, MAGIC was the best instrument to make a second observation and possibly confirm the result with a few hours of data taking. The CACTUS results, though, were never published, since a re-analysis of the same data raised severe doubts on the claimed figures. The results obtained with this thesis work appear to support this later assessment.

Chapter 2

Telescopes for Cosmic Rays. IACTs and MAGIC

This chapter introduces the physics of cosmic rays in the atmosphere and the related production of Čerenkov light from Extended Air Showers. The working principle of Imaging Atmospheric Čerenkov Telescopes (IACTs) is outlined. In particular, we will dwell upon our instrument, the MAGIC Telescope, and give some detail about the reflecting dish, the camera and the data acquisition chain.

2.1 Cosmic Rays

A comprehensive description of cosmic rays should include every kind of particle arriving from outer space and hitting the Earth. Such a broad definition, though, finds a meaning only in its historical origins, dating back to the first decades of the 20th century. In those times, the first experiments with both ground-based and balloon-borne ion chambers provided evidence of a constant flux of charged particles coming from outer space. Further studies followed, giving birth to what is now called High Energy Physics. Nowadays, many characteristics of cosmic rays are well known (composition, spectrum (figure 2.1), spatial distribution) and a modern definition would also add neutral particles such as gamma-rays and neutrinos to the protons, electrons and atomic nuclei which were discovered first. The origin and production mechanisms of such a variegated family involve a correspondingly wide variety of astrophysical processes, most of which are still under debate or even almost unknown. Gamma-rays, i.e. photons with energies from 100 keV up, are present in the cosmic ray flux as rarely as one in 10 000. They represent, though, one of the most important components because: 1) their trajectories are not bent by static electromagnetic fields, and therefore they point back to the source; 2) contrary to neutrinos, they are easily detected. Galactic and extra-galactic charged cosmic rays, instead, undergo multiple interactions with magnetic fields, so that they arrive at the Earth with isotropic incoming directions. Gamma-rays are produced by many processes, as different as synchrotron radiation, bremsstrahlung, inverse Compton scattering, and nuclear or particle (π⁰) decays. They can be absorbed through interactions with matter or with other photons. The former is a negligible process for gamma-rays traveling in space, since intergalactic matter densities hardly make up one radiation length. Pair production by photon-matter interaction becomes instead of fundamental importance when gamma-rays reach the top of the Earth atmosphere, as explained in the next section.

The so-called Extragalactic Background Light (EBL) provides an effective absorption medium for high-energy photons from distant sources, mainly through the γ + γ → e⁺e⁻ process, which produces an exponential cut-off in their spectra. A role similar to that of the EBL is played, at ultra-high energies, by the omnipresent Cosmic Microwave Background, as introduced in Section 1.2.

Figure 2.1: Cosmic ray energy spectrum (left) and composition (right).

2.1.1 Atmospheric Showers

High-energy cosmic rays reaching the top of the Earth atmosphere immediately interact with the atmospheric molecules and give rise to a cascade of secondary particles, called an Extended Air Shower (EAS). For primary gamma-rays, the most probable process is e⁺e⁻ pair production; secondary electrons and positrons then lose energy through bremsstrahlung until they reach a critical energy value E_c ≃ 80 MeV, below which ionization becomes the dominant process. Secondary photons, instead, can initiate a new branch of the cascade if their energy is greater than 1.022 MeV [21]. Primary electrons and positrons can also produce electromagnetic showers, but their contribution as a background source for gamma-ray telescopes is quite negligible. Protons or other primary hadrons generate a so-called hadronic shower, in which the first processes involve strong interactions and thus the production of unstable mesons (mostly π and K). Because high-energy hadrons have many open interaction channels, the development of a hadronic shower is far more complex than that of an electromagnetic one. The variety of particles produced is also the reason for the high transverse momentum that hadronic showers have with respect to electromagnetic ones (see figure 2.2). Secondary hadrons which do not have enough energy to create other particles then decay, mainly into muons and gammas; the latter can start a final electromagnetic branch of the hadronic shower.

Figure 2.2: Schematic illustration (left) and Monte Carlo simulation (right) of an electromagnetic and a hadronic shower.

2.2 Čerenkov Light Collection: the Imaging Technique

2.2.1 The Čerenkov Effect

The Čerenkov effect is the emission of characteristic electromagnetic radiation by a charged particle moving faster than light in a dielectric medium. While a charged particle in uniform rectilinear motion does not emit radiation, it can nevertheless produce polarization "waves" when travelling through a dielectric medium: more precisely, the presence of the charge induces the formation of electric dipoles, which then readjust to a neutral configuration once the charge has gone. If the particle moves slowly, the induced electric dipoles around it are symmetrically distributed and no net effect is visible. If the particle velocity, instead, is greater than the phase velocity of light in the medium, c/n (where n is the medium refractive index), the information about its passage propagates more slowly than its motion, so that the emission from the readjusting dipoles forms a coherent shock-wave known as Čerenkov radiation (see figure 2.3 a,b).

The Huygens wavefront construction (fig. 2.3 c) requires, in order to be coherent, that

    cos θ = (c/n) Δt / (v Δt) = (c/n) Δt / (βc Δt) = 1 / (βn),

where β = v/c is the particle velocity in units of c. This formula highlights the most relevant feature of Čerenkov radiation, i.e. that it is produced in cones with an aperture angle dependent on the particle energy; the maximum emission angle is observed for ultrarelativistic particles (β ≃ 1) and is given by cos θ_max = 1/n. The threshold energy for an incident particle of rest mass m to emit Čerenkov light is given by the condition β > 1/n:

    E_thr = mc² / √(1 − β²_min) = mc² / √(1 − n⁻²).

Since the refractive index of a medium depends on the wavelength of the crossing radiation, the Čerenkov emission spectrum is limited by the constraints n(λ) > 1/β

Figure 2.3: Sketch of the origin of the Čerenkov effect. (a) Polarization when the incident particle velocity is small. (b) Polarization in the case of v > c/n. (c) Huygens construction of the wavefront.

and λ > λ₀, the lower limit for anomalous dispersion¹ (see figure 2.4 and [25]). A discussion of the emitted and observed spectra for Čerenkov light in air is developed in the next section.
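As a simple illustration of the threshold formula above (a standard textbook value, not a number from this thesis): for electrons in water, where n ≈ 1.33,

    E_thr = m_e c² / √(1 − n⁻²) ≈ 0.511 MeV / 0.66 ≈ 0.78 MeV,

i.e. a kinetic energy of roughly 0.26 MeV.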

Figure 2.4: Čerenkov light spectrum limits given by the dependency ε = ε(ν). ν₀ is the frequency limit for anomalous dispersion; the constraint n(λ) > 1/β becomes ε > ε₀ β⁻², remembering the identity n = c√(εμ).

2.2.2 Čerenkov Light from EAS

To describe the Čerenkov emission in the atmosphere one should take into account the variation of the air refractive index with the altitude. Assuming the simple exponential law for the atmospheric density,

    ρ(h) = ρ₀ exp(−h/h₀),

¹ The behaviour of a dielectric medium when Im[ε(ν)] has a resonance and Re[ε(ν)] decreases with the frequency is called anomalous dispersion [18].

with ρ₀ = 1.3 mg/cm³ and h₀ = 7.1 km, the air refractive index can be expressed as

    n(h) = 1 + η(h) = 1 + η₀ exp(−h/h₀),

where η₀ = 2.9×10⁻⁴. From these equations one can find the dependency of the threshold energy and of the maximum Čerenkov angle on the atmospheric height h:

    E_thr(h) ≃ mc² / √(2η(h));   cos θ_max(h) ≃ 1 − η(h).

Taking for a cosmic ray shower a typical value of h = 10 km, one obtains the following threshold energies: 42.9 MeV for electrons, 8.9 GeV for muons and 78.8 GeV for protons. As for the maximum Čerenkov angle, particles in the highest part of the shower have the smallest one because of two concurring conditions, namely the highest energy and the lowest refractive index. For ultrarelativistic electrons at 10 km height, θ ≃ 0.68°. Other small dependencies of the refractive index, such as on the air temperature or on the radiation wavelength, can be neglected. The number of Čerenkov photons emitted per unit of path length dx and per unit of wavelength dλ is given by:

    d²N / (dx dλ) = (2πα Z² / λ²) (1 − 1/(β²n²)),

where α is the fine structure constant and Ze the charge of the emitting particle. The 1/λ² dependency shows that most of the photons are produced at the shortest wavelengths within the range where n(λ) > 1/β (see figure 2.4). However, the interactions with the atmospheric molecules make the Čerenkov spectrum observed by ground-based instruments significantly different from the emitted one (figure 2.5). The main processes involved are:

• absorption in the ozone layer (O3+γ −→ O2+O) and from atmospheric oxygen and nitrogen, for photons with λ < 290 nm; absorption from water and carbon dioxide molecules, for wavelengths greater than 800 nm;

• Rayleigh scattering on gas molecules, whose cross section is proportional to λ⁻⁴; Mie diffusion from aerosol particles, whose contribution is estimated at a few percent of the Rayleigh scattering.

Finally, one has to consider the influence of the geomagnetic field on charged particles. The effect on the Čerenkov light cone is a preferential broadening along the East-West direction.
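As a quick cross-check of the threshold and emission-angle formulas of this section, a short numerical sketch (Python, using only the constants quoted above) reproduces the figures given for h = 10 km:

    import math

    eta0, h0 = 2.9e-4, 7.1          # refractivity at sea level, scale height in km
    masses_MeV = {"electron": 0.511, "muon": 105.66, "proton": 938.27}

    def eta(h_km):
        return eta0 * math.exp(-h_km / h0)

    h = 10.0
    for name, m in masses_MeV.items():
        E_thr = m / math.sqrt(2 * eta(h))              # E_thr ~ mc^2 / sqrt(2*eta)
        print(f"{name:8s} threshold ~ {E_thr:9.1f} MeV")
    # prints ~42.9 MeV (e), ~8.9e3 MeV (mu), ~7.9e4 MeV (p)

    theta_max = math.degrees(math.acos(1 - eta(h)))    # maximum Cherenkov angle
    print(f"theta_max ~ {theta_max:.2f} deg")          # ~0.68 deg at 10 km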

2.2.3 The Imaging Technique

The name Imaging Technique refers to the peculiar way in which ground-based Čerenkov telescopes study cosmic ray atmospheric showers, and especially primary photons through the electromagnetic showers they induce. As explained above, EAS are characterized by a longitudinal development in which one can distinguish a head, made of the first –and therefore most energetic– products of the primary particle interaction with the atmosphere (marked with A in figure 2.6);

Figure 2.5: Differential Čerenkov emitted and observed spectra. The effect of interaction processes through the atmosphere is clearly visible.

a core, where the shower has its maximum (B); and a tail (C), where the shower ends. Since the characteristic Čerenkov emission angle θ_C varies with the particle energy, Čerenkov photons from the shower head impinge on the mirror with a smaller inclination than those from the tail. Using the well-known property of a parabolic surface of focusing parallel rays into one point (see figure 2.7), it can be shown that a photon hits the camera plane at a distance r from the center F given by r = f tan θ, i.e., for small values of θ, essentially r ∝ θ. Considering an EAS whose axis is aligned with the telescope, as normally happens for gamma-rays when the telescope points at the source in the camera center, figure 2.6 shows that Čerenkov photons from different parts of the shower form on the camera an ellipse-shaped image, with its major axis pointing toward the camera center. The spatial (and time²) charge distribution of the collected image gives information on the longitudinal evolution of the shower. The only difference for showers not aligned with the telescope is the direction of the major axis: in this case it does not point towards the camera center. The photon/hadron separation required in the data analysis is based essentially on this geometrical difference, since hadronic showers come as an isotropic background, whereas gamma showers are expected to produce an excess of radially oriented images in the camera. Details about signal/noise separation are given in Chapter 3. It should be stressed that the Imaging Technique makes it possible to collect data from primary particles with a high impact parameter, i.e. the distance on the ground between the telescope and the shower axis. This advantage increases the effective collection area far beyond the mirror surface, up to ∼10⁵ m² (see Section 3.6.5).

²While assigning a head-tail direction on a spatial basis, i.e. using the n-th moments of the charge distribution, depends on the image cleaning (see 3.6.2) and can be rather problematic and arbitrary for round-shaped images, a parametrization based on arrival times would give more definite information. This kind of analysis, however, demands a very high time resolution, since showers develop in a few nanoseconds. It has become a realistic option for MAGIC since the installation of the new ultrafast 2 GHz ADCs (early 2007): see for example Diego Tescaro's thesis [30] and following works (MAGIC Collaboration Meeting, Sofia 2007).

Figure 2.6: The Imaging Technique principle.

In order to extract useful information from the data, shower images are parametrized with an ellipse and a set of correlated quantities (often called Hillas parameters [17]) –see figure 2.9:

Length : half length of the ellipse major axis.

Width : half length of the ellipse minor axis. This parameter is correlated with the transversal development of the shower.

Leakage : fraction of Size contained in the outer pixels (see 2.3.2).

Alpha : angle between the ellipse major axis and the line joining the camera center to the image Center of Gravity. Photon-induced images tend to have small values of Alpha, while hadronic images show a random distribution with respect to this parameter.

Figure 2.7: Focusing of the Čerenkov photons in an IACT. If the photon incoming direction is not parallel to the telescope axis, its image is displaced from the camera center F by a quantity r correlated with the incidence angle θ. f is the mirror focal distance (for MAGIC f = 17 m).

Dist : distance between the image CoG and the camera center. It is correlated with the impact parameter of the shower.

Other parameters or combinations of the previous can be defined for further studies; in our analysis the following will play a role:

Size: total amount of photoelectrons collected in the shower image. Given a Zenith angle of observation and the impact parameter, this quantity is nearly propor- tional to the energy of the primary particle.

Conc: fraction of photoelectrons contained in the two brightest pixels of the image.

NumIslands: number of “islands”, i.e. number of compact groups of lighted pixels which passed the image cleaning. Hadron-induced events are more likely to produce a high number of islands.

Disp: distance between the image Center of Gravity and the estimated source posi- tion along the major axis of the ellipse.

Theta: distance between the Disp-estimated source position and the nominal source position (see figure 2.8).

Since Disp gives only an estimate of the distance of the source position from the image CoG, a means to decide on which side the source lies is required. In this case one can exploit one of the so-called head-tail discriminators such as Asym or M3Long, mathematically defined in Appendix B together with the other parameters described above.
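To make the use of Disp and Theta more concrete, the following is a minimal, purely illustrative sketch of the geometry (the function names are hypothetical; the actual reconstruction, including the head-tail choice through Asym or M3Long, is done inside MARS):

```python
import math

def disp_candidates(cog_x, cog_y, axis_angle, disp):
    """Two candidate source positions on the ellipse major axis (angle axis_angle),
    each at a distance disp from the image Center of Gravity. A head-tail
    discriminator such as Asym or M3Long is needed to pick the right one."""
    dx, dy = disp * math.cos(axis_angle), disp * math.sin(axis_angle)
    return (cog_x + dx, cog_y + dy), (cog_x - dx, cog_y - dy)

def theta_deg(estimated, nominal):
    """Theta: angular distance between estimated and nominal source positions."""
    return math.hypot(estimated[0] - nominal[0], estimated[1] - nominal[1])

# toy event in camera degrees: CoG at (0.3, 0.1), axis at 20 deg, Disp = 0.25 deg
head, tail = disp_candidates(0.3, 0.1, math.radians(20.0), 0.25)
print(theta_deg(head, (0.4, 0.0)), theta_deg(tail, (0.4, 0.0)))
```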

Figure 2.8: Schematic illustration of the Disp and Theta parameters.

Figure 2.9: Geometrical definition of some simple Hillas parameters. In an arbitrary reference (x, y), the camera center coordinates are (x0, y0). It is worth remembering that the image CoG, which defines the Dist parameter, might not coincide with the ellipse geometrical center.

2.3 The MAGIC Telescope

The Major Atmospheric Gamma Imaging Čerenkov Telescope (MAGIC) is presently the largest single IACT. Operated by a large pan-European collaboration, the telescope is located on the island of La Palma (Spain) at the Roque de Los Muchachos site (28.76 N, 17.89 W, 2200 m a.s.l.), managed by the Instituto de Astrofísica de Canarias (IAC) as part of the European Northern Observatory.

Figure 2.10: A satellite picture of the MAGIC site in La Palma. The green arrow points exactly at the Telescope. Nearby on the left one can see the Control House, and the MAGIC II construction site just below. In the top left corner, the Residencia.

Designed in the late 1990s and built in 2001-2003, MAGIC has been carrying out regular observations since the end of 2004. Its distinctive features are:

• a very light carbon-fiber space frame (∼ 9 tons) which allows fast repositioning, necessary to observe prompt emission from transient sources like Gamma-Ray Bursts;

• a very large, parabolic reflecting dish of 17 m diameter;

• a low energy threshold between 50 and 100 GeV, which makes it an ideal instrument, complementary to satellite-borne gamma telescopes.

The energy threshold of the final analysis will be even lower once stereoscopic observations become possible. In this respect, a clone telescope with some important technical improvements, called MAGIC II, is currently under construction and expected to start physics operation at the end of 2008.

2.3.1 Mirrors

The 239 m² reflecting surface consists of 956 square mirrors of 50 cm side and 34÷36 m radius of spherical curvature, depending on the position of the mirror in the parabolic dish.

Figure 2.11: The MAGIC Telescope in park position at the end of the night (left) and during datataking (right; long-exposed photo with moonlight).

Each mirror is made of an aluminium honeycomb structure, a heating/drying system against ice or dew formation, a reflecting 5 mm thick plate of diamond-milled aluminium and a quartz coating layer. Mirrors are grouped into panels of four; each panel is provided with two motors and a laser pointing to the camera lids, allowing a fine focusing during datataking through the Active Mirror Control. This is necessary in order to correct the residual deformation of the reflector when the telescope is repositioned. A further manual alignment procedure is possible, focusing the mirrors on an artificial light source set 1 km away (Roque Lamp). The global reflectivity of the surface is about 85% in the wavelength range 300-650 nm.

Figure 2.12: Detail of the mirror panels. In place of the two center ones, a set of calibration instruments has been installed.

2.3.2 Camera

The camera is probably the most critical instrument, since it is the "eye" of the telescope. A hexagonal board of 1.5 m diameter, placed in the mirror focal plane, hosts 577 high Quantum Efficiency photomultipliers: 397 1″ inner pixels and 180 1.5″ outer pixels. Light from the reflector is transmitted to each PMT through a Winston cone with a hexagonal end, so that there are no blind regions in the camera; the total Field Of View is 3.5°×3.8°. A special wavelength-shifter coating enhances the QE up to an average 20% between 250 and 700 nm wavelength. Since the typical duration of Čerenkov flashes is of the order of a few nanoseconds, PMTs are designed to give a fast response with 1 ns FWHM. The High Voltage supply is independent for each PMT and remotely controlled by the Camera Control software (La Guagua): this allows damaged or "hot" pixels to be handled without affecting the whole camera. Finally, the camera is equipped with heating and cooling systems to prevent the dew point from being reached and to dissipate the heat from the phototubes.

Figure 2.13: Photo of the camera with open lids (left) and pixel scheme (right). Inner pixels (blue) have a 0.10◦×0.12◦ Field Of View; outer pixels (red) have 0.20◦×0.22◦; the whole camera FOV is 3.50◦×3.84◦.

When dealing with images and distances on the camera, one can use both angular and rectangular coordinates. A system of orthogonal Cartesian axes is used to make direct measurements, in millimeters, on the camera plane. When the telescope is observing the sky, the distance r of a Čerenkov photon's spot from the camera center F is directly proportional to the photon angle of incidence θ (figure 2.14). In this reference frame, angular sky coordinates and rectangular camera coordinates can be used interchangeably, remembering the correspondence 30 mm ⇔ 0.1°. It is also useful to define an angle in the camera reference frame, ϕ, such that tan ϕ = −y/x.
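The 30 mm ⇔ 0.1° correspondence follows directly from r = f tan θ with f = 17 m; a small sketch of the conversion:

```python
import math

FOCAL_MM = 17000.0  # MAGIC mirror focal length f = 17 m, in mm

def deg_to_mm(theta_deg):
    """Distance from the camera center F: r = f * tan(theta)."""
    return FOCAL_MM * math.tan(math.radians(theta_deg))

def mm_to_deg(r_mm):
    """Inverse mapping: theta = atan(r / f)."""
    return math.degrees(math.atan(r_mm / FOCAL_MM))

print(deg_to_mm(0.1))    # ~29.7 mm, i.e. the 30 mm <-> 0.1 deg rule of thumb
print(mm_to_deg(120.0))  # ~0.40 deg, the wobble offset used later in the analysis
```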

2.3.3 Readout and Trigger

Signals from PMTs are preamplified and then transmitted to the electronics room in the Control House (about 100 m away from the telescope) in order to be processed. This design has the advantage of requiring a smaller camera, but poses the issue of preserving shape and temporal structure of the signals along the transmission line.

Figure 2.14: Correspondence between the photon angle of incidence (θ, degrees) and the distance of the photon’s image from the camera center F (r, mm).

The solution is achieved by means of optical fibers, which have significantly lower dispersion and attenuation than coaxial cables; moreover, optical fibers are lighter and are not sensitive to electromagnetic noise. The signal is brought to a receiver board where it is split in two: one branch passes through a discriminator and then goes to the trigger box. The other branch is amplified, stretched and split into a high gain (10×) channel and a delayed low gain channel. If the signal in the high gain channel exceeds a preset threshold, both lines are combined and digitized in the same FADC channel. Until 2006 the FADC was 8 bit, 300 MHz; since February 2007 a new MUX FADC at 10 bit, 2 GHz has been installed. The trigger system performs a first discrimination between signal and background. At this very early stage, however, it is not possible to reject hadron-like images with respect to the gamma-like ones, which is rather an offline analysis task: the trigger thus tries to discriminate Čerenkov flashes from other kinds of light signals (the so-called Light Of the Night Sky). It is organized in three levels:

Level 0 : it acts as a flag for lighted PMTs. A phototube is considered lighted if its current exceeds a fixed threshold; if this happens, a digital signal is generated by L0 and processed by the next trigger stages.

Level 1 : this level involves only 325 inner pixels, grouped into 19 overlapping macrocells of 37 pixels each. A temporal coincidence (a few ns) among a certain number of neighboring pixels within a macrocell is required: this constraint attempts to select compact configurations like the elliptic-shaped ones from Čerenkov flashes.

Level 2 : the last trigger stage should perform a fast evaluation of size, shape and orientation of the image, in order to make an effective background rejection and to reduce the trigger rate. This part of the trigger system, however, is not effective yet.

The digitized data which pass all the trigger levels are stored on disk and backed up

Figure 2.15: Sketches of the first two trigger levels, L0T and L1T.

onto tape. Every morning, data from the last observation night are copied to the Barcelona and Würzburg Datacenters, where they are kept and made available to the analyzers.

Chapter 3

Data Analysis Chain Tested on the Crab Nebula

Here the standard analysis chain is explained step by step, from the raw data acquisition to the most physically meaningful plots which show the features of the observed gamma signal. In this chapter we analyze a sample of Crab Nebula data: this source, universally considered the standard test source for gamma astronomy, allows us to test the performance of the analysis chain itself before it is applied to an unknown source.

3.1 The Crab Nebula

The Crab Nebula¹ is a celestial body located at a distance of about 6300 ly in the Taurus constellation, at RA 5h34m, Dec +22°01′. The Nebula is made of materials –mostly ionized helium and hydrogen, along with dust of heavier elements– expelled by the supernova (SN 1054) whose explosion was observed as a very bright new star by astronomers from the Far East, and perhaps by Native Americans, in A.D. 1054. The collapsed core of the supernova is now observable as a pulsar of 33 ms period, whose spectrum ranges from radio to X-rays. The surrounding 3-ly-wide gas cloud, called Supernova Remnant (SNR), expands at a rate of approximately 1500 km/s and is one of the most intense, steady and wide-range electromagnetic sources in the sky. For these reasons it has become the standard test and calibration source for X- and gamma-astronomy [12]. Its spectrum, from radio to gamma-rays, is shown in figure 3.1 (left), together with the MAGIC observation (right). In the 0.1÷10 TeV range it is well fit by a power law [34]:

$$\frac{dF}{dE} = f_0 \left(\frac{E}{\mathrm{GeV}}\right)^{\alpha}$$

with f₀ = (1.50 ± 0.18) × 10⁻³ ph cm⁻² s⁻¹ TeV⁻¹ and α = −2.58 ± 0.16. The features stressed above allow the Crab Nebula to be regarded as a reliable source to test the hardware and analysis performance of Čerenkov telescopes. In the next section the analysis chain, which will later be applied to the Draco source, is presented together with its results on a Crab Nebula data sample. The good agreement between our results and the literature, in the case of Crab, makes the Draco analysis procedure trustworthy.
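As an order-of-magnitude check, the power law quoted above can be evaluated numerically with the best-fit parameters; this is just a back-of-the-envelope sketch, not part of the analysis chain:

```python
F0 = 1.50e-3    # ph cm^-2 s^-1 TeV^-1, best-fit normalization quoted above
ALPHA = -2.58   # best-fit spectral index

def crab_differential_flux(energy_gev):
    """Differential Crab flux dF/dE = f0 * (E/GeV)^alpha, in ph cm^-2 s^-1 TeV^-1."""
    return F0 * energy_gev ** ALPHA

for e_gev in (100.0, 300.0, 1000.0):
    print(f"E = {e_gev:6.0f} GeV -> dF/dE = {crab_differential_flux(e_gev):.2e}")
# at 1 TeV this gives ~2.7e-11 ph cm^-2 s^-1 TeV^-1, the usual Crab benchmark flux
```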

¹As it was the first object described in 1758 by French astronomer Charles Messier in his famous catalog, the Crab Nebula is also known as M1 and NGC1952 (New General Catalogue, started in the 1880s by John Dreyer).

Figure 3.1: Crab Nebula wide-range spectrum (left) and MAGIC observation in the Very High Energy region (right).

Figure 3.2: Four images of the Crab Nebula in different wavelength bands. From left to right: radio, infrared, optical, X-ray.

3.2 Backgrounds

As a very first step in the complex process of extracting physical information from the data recorded by an instrument, one has to point out which kinds of backgrounds are expected and the techniques to disentangle them from the signal. Aside from the astrophysical processes which can mask photons from the observed source, briefly described in Section 2.1, there are other effects acting as background sources for an observed gamma-ray flux entering the atmosphere. The first one is due to light that does not come from Čerenkov flashes; the name Light Of the Night Sky (LONS) is generally used with this meaning. Within this large class one can include starlight, diffuse galactic light, other luminous phenomena in the atmosphere (such as aurorae), environmental light pollution and car flashes². Another source of non-Čerenkov light is the moon. Depending on its phase and its

²Recognizing the unique astronomical features of La Palma, the Spanish Government promoted a special law to protect the dark night sky from light pollution on the island ("Ley del Cielo", 31/1988). Car flashes are due to the lights of rare cars accidentally passing near the telescope during datataking.

position with respect to the observed source, the telescope can be operated with special settings in the electronics in order not to be completely blinded by moonlight. Generally, while for weak sources dark-night observations are required, high-flux or high-energy (TeV) sources can be observed when their position in the sky is such that moonlight does not impinge on the mirrors or directly on the camera (the allowed angular distance range is 25°–130°); in any case, no data are collected in the three nights around full moon. This kind of background is almost completely rejected by the multilevel trigger system and the image cleaning procedure, described later (Section 3.6.2). Another process which acts as a background is the generation of electric sparks whose light is collected by a small number of neighboring photomultipliers. Such spark events, however, are quite easily recognized and rejected by means of suitable cuts in the image parameters, as explained in Section 3.6.3. Finally, the most important background source is the Čerenkov light produced by hadron-induced electromagnetic showers: as discussed in Section 2.1.1, primary gamma-rays amount to about 0.1% (often less) of the total observed cosmic-ray rate. The discrimination of gamma-like from hadron-like events is a complex task, since it involves a delicate analysis of the image spatial configuration recorded by the camera, and it would be nearly impossible to achieve without the aid of Montecarlo simulations of a pure gamma-ray flux. Details about this discrimination procedure are given in the following Sections.

3.3 Observation Modes

The telescope is operated in two different observation modes. In the first, called On/Off mode, the source is pointed and tracked so as to be imaged at the center of the camera (On observation). Meanwhile, the sky in the Field Of View rotates around the camera center because of the altazimuth mount of the telescope. This kind of observation requires some additional datataking time in order to estimate the background; this task is accomplished by tracking a point (≃ 1° away from the source) in a region of the sky empty of known gamma-ray sources (Off observation). Off data are then analyzed with the assumption that they reproduce the light collected in the On region as if it were possible to "switch off" the source. This observation mode has the advantage of exploiting the whole camera active area, but it requires additional time to collect enough Off data in order to achieve a good background estimation (at least 30% of the On time). See figure 3.3. The second observation mode is called wobble mode and consists in pointing at a position 0.4° away from the source. With this arrangement, the source position in the camera is shifted from the camera center and therefore rotates during the datataking along the so-called wobble circle. The point opposite to the source with respect to the camera center is called anti-source and it is the point with respect to which the Off data are collected (see figure 3.4). To achieve a more symmetric configuration, the telescope moves ("wobbles") every twenty minutes from one position (W1) to another (W2), 0.8° away from W1 and aligned with W1 and the source, so that in the camera the source and anti-source swap their positions. Wobble mode observations use the same data to extract both On and Off events, thus saving observation time. Moreover this technique guarantees datataking compatibil-

Figure 3.3: Sketch of the On/Off observation mode. The source and, separately, a point off- source are tracked so that they are both observed in the camera center. While their position is fixed, the rest of the sky in the FOV rotates in the camera.

ity between the two samples, while in On/Off mode this agreement has to be carefully checked. However, the shift of the source position away from the camera center makes this observation mode unfit for extended sources, because the On and Off regions may in that case overlap; moreover, it reduces the effective trigger area, yielding a loss in sensitivity of nearly 20% [11]. These issues, together with the effects of camera inhomogeneities in wobble mode, are studied in depth in the next Chapter.

Figure 3.4: In the wobble mode the tracked point is always 0.4◦ away from the source, so that the source position in the camera travels along the so-called wobble circle. Background is estimated from the anti-source point. Every 20 minutes the telescope wobbles to another point in the sky in order to swap source and anti-source positions in the camera.

3.4 Run Classification

Data recorded during observation nights are collected into files called runs. There are three main types of runs. Data runs contain the information about triggered events recorded by all pixels in a time interval of about 3 minutes for the old 300 MHz FADCs, 40 s with the new ones. The typical size of these files is slightly less than 1 GB. Pedestal runs are taken with a random fast trigger, in order to make the probability of recording a real Čerenkov event negligible. These runs are thus used to measure the night sky background level and the electronic chain noise. They are taken every time the pointed source is changed and about once per hour. Finally, calibration runs are taken in order to test the performance of individual pixels and to estimate the conversion factor from the recorded signal to photoelectrons and then to photons. This is achieved by illuminating the camera with fast trains of LED pulses at different wavelengths. Every pedestal run is followed by a calibration run. In addition, so-called interleaved calibration events are produced at a rate of 50 Hz during data runs in order to correct small fluctuations of gain and pedestal in the readout chain. Every pair of pedestal and calibration runs marks the beginning of a new sequence, an a posteriori partition of large amounts of data with the purpose of making the analysis easier. Raw data are transferred every morning to the Würzburg and Barcelona datacenters, where they are stored and quickly analyzed by automatic and very conservative software. Analyzers can therefore choose to use these results directly, or take them as a first look and then download the raw files to perform their own analysis. A key parameter in the data analysis is the zenith angle³ under which the source has been observed. The energy estimation for an event is in fact strongly correlated with this angle, as explained in Section 2.1.1. This, as well as other observation parameters (date, time, weather, electronic settings, moonlight and many more), is recorded in a report file downloaded together with the raw data. The information contained in the report files is then extracted and processed in the analysis chain.

3.5 Montecarlo Simulation

Since the full response of IACTs to incident photons cannot be tested in a laboratory, as is usually done for other particle detectors, Montecarlo simulations of both photon production and detection play a role of crucial importance in the analysis chain. Since almost all of the recorded events are of hadronic origin, the signal from primary hadrons is taken from real data rather than simulated; moreover, the complex development of a hadronic shower would have a very high computational cost and is therefore not worth simulating. Simulations are thus performed only for EAS induced by photons from a point-like gamma source which takes a fixed position in the camera (at the center for On data, at (120, 0) mm for wobble data) but is observed at different zenith angles. The full simulation is developed through several steps by different software packages (in typewriter font):

• simulation of the processes that give birth to the Extended Air Shower (Corsika). Each particle in the shower is treated individually; this stage gives as an output the Cerenkovˇ photons distribution at the ground level;

• simulation of the night sky background and starlight (StarfieldAdder and StarResponse);

³A brief account of the celestial coordinates is given in Appendix A.

• simulation of the light reflection by the telescope mirror (Reflector);

• simulation of the light detection by the camera and signal processing (Camera). The output files of this stage are similar to real data runs and can thus be processed with the standard analysis chain.

The accuracy of the Montecarlo simulation is one of the keystones of the whole analysis chain; for this reason, the physical and instrumental input parameters must be well known in order to produce useful simulations. One of the most important parameters is the mirrors' Point Spread Function⁴ (PSF), which is currently estimated at around 13 mm (about half the diameter of an inner pixel).

3.6 Data Analysis

In this Section the analysis chain is presented. To make the explanation more effec- tive, and for the test purposes explained in Section 3.1, every step is applied to a Crab Nebula data sample.

sequence number   date           start–stop time       Zenith angle   PSF (mm)
103609            21 Oct, 2006   4:14:33 – 5:54:43     7°÷16°         13.2
104168            27 Oct, 2006   5:09:50 – 5:29:39     12°÷16°        13.4
104402            31 Oct, 2006   3:12:33 – 4:32:26     6°÷14°         13.1

Table 3.1: Crab Nebula data used as a test sample for the Draco analysis. All data were taken with dark night (no moon) and in wobble mode. The total effective on-time is 3.20 hours. Unfortunately these Crab Nebula sequences, chosen because they were taken with similar hardware conditions with respect to the Draco data, have smaller Zenith angles than Draco.

The main analysis software is a dedicated ROOT-based set of programs called MAGIC Analysis and Reconstruction Software (MARS).

3.6.1 Calibration

The basic information to be extracted from the raw data is the arrival time and intensity of the signal. A program named Callisto (Calibrate Light Signal and Time Offsets) is responsible for retrieving these quantities from the FADC samples. This can be done with several methods, the main two being the following:

• the digital filter extractor makes a weighted sum of a certain number of FADC slices within a given temporal window. This method assumes that the pulse has a known shape;

• the spline extractor interpolates with splines the digital samples; arrival time and charge integral are computed from the fit function.

The digital filter method is computationally faster, but it fails when pulses are not well centered in the FADC time window. Moreover it is not yet implemented

⁴A point-like source reflected by the mirrors produces a 2D-Gaussian shaped image whose FWHM is called Point Spread Function. It is conventionally taken as the telescope angular resolution.

for the new MUX FADCs, since the digitized signal shape has to be well known to adjust the weights. For the Crab Nebula sample, taken in late 2006, the digital filter extractor has been applied. Once the electric charge from a pixel is measured, it has to be converted into a number of photons. The conversion requires two steps: the first is an equalization of the responses of all the pixels, by means of the calibration run; the second is the calculation of the photon number through the F-factor method⁵. In the calibration procedure generally about 3% of the pixels are tagged as "bad" because they are either affected by hardware problems, or show a too high, too low or too strongly fluctuating rate. These bad pixels are excluded from the further steps of the analysis and, if they are not clustered, their mean charge is interpolated from neighbouring pixels. After the calibration, the Merpp program (Merging and Preprocessing Program) adds to the Callisto output the information from the report files.

3.6.2 Image Cleaning and Parametrization

After the calibration, a data reduction procedure is needed in order to reject the night sky background. As Čerenkov events are recorded only in a small region of the camera, the image cleaning program Star (Standard Analysis and Reconstruction) identifies the pixels lit by Čerenkov light and sets the signal from the rest of the camera to zero (see figure 3.5). Different criteria can be applied to decide whether a pixel belongs to the shower image or not:

Figure 3.5: Example of image cleaning. Rejection of the night sky background recorded by the camera allows a strong data reduction (file size shrinks from ∼200 MB to ∼10 MB).

• the absolute cleaning defines two charge thresholds, c and b (in photoelectrons). If a pixel is lit with n ≥ c photoelectrons it is tagged as a core pixel; isolated core pixels are discarded. If a pixel has a charge n with b ≤ n < c and has a core neighbor, it is tagged as a boundary pixel. Isolated pixels or pixels with

⁵ Given the average charge ⟨Q⟩ collected by a PMT, subtracted of the pedestal value, and its variance σ² = σ²_Q − σ²_ped, the F-factor is defined as F² = 1 + σ²/⟨Q⟩². The number of photoelectrons is computed as N_phel = ⟨Q⟩²F²/σ². To obtain the number of photons reflected by the mirror, one should divide N_phel by the Quantum Efficiency of the mirror-plexiglass-Winston cone-PMT chain. Detailed calculations can be found in R. Mirzoyan [22].

lower charge are neglected, and the shower image is defined only by core and boundary pixels. Standard values for the parameters are c = 10, b = 5.

• the time-relative cleaning tags core and boundary pixels according to the pedestal mean and RMS of each pixel. The c and b thresholds refer in this case to the number of pedestal RMS by which the pixel charge should exceed the pedestal mean value to be considered core or boundary. Moreover, some constraints on the signal arrival times of neighboring PMTs are imposed. This kind of cleaning exploits the fact that gamma events produce nearly isochronous images on the camera, while the more fragmented hadronic images show a larger spread in arrival times.

The image cleaning process is a delicate step in the analysis chain and requires a fine tuning of the parameters according to the kind of analysis one intends to perform and to the quality of the available data. Roughly speaking, a soft cleaning (i.e. with low thresholds) keeps images from less energetic events, together with much hadronic noise, while a more aggressive cleaning (high thresholds) helps in rejecting hadron-like events, but necessarily raises the effective energy threshold of the whole analysis. For comparison, both cleanings are applied to the Crab Nebula sample.
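The logic of the absolute cleaning can be illustrated with a toy implementation; pixel topology, thresholds and data structures here are placeholders and do not reproduce the actual Star code:

```python
def absolute_cleaning(charge, neighbours, c=10.0, b=5.0):
    """Toy absolute image cleaning.
    charge: dict pixel_id -> photoelectrons; neighbours: dict pixel_id -> list of ids.
    Returns the set of pixels (core + boundary) surviving the cleaning."""
    core = {p for p, q in charge.items() if q >= c}
    core = {p for p in core if any(n in core for n in neighbours[p])}  # drop isolated cores
    boundary = {p for p, q in charge.items()
                if b <= q < c and any(n in core for n in neighbours[p])}
    return core | boundary

# tiny example with a linear chain of four pixels
charge = {0: 12.0, 1: 11.0, 2: 6.0, 3: 2.0}
neigh = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(absolute_cleaning(charge, neigh))   # {0, 1, 2}: two core pixels and one boundary
```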

3.6.3 Cuts

Once the cleaning procedure has been performed, images are parametrized with the Hillas parameters described in Section 2.2.3. At this stage of the analysis, some cuts can be applied in order to make a first, gross data reduction. The name quality cuts refers to the rejection of events which can certainly be regarded as background, even before the photon/hadron separation: these cuts are tested on the Montecarlo data to ensure that no (or very few) gamma events are rejected. Namely, the quality cuts applied to our Crab sample are the following:

• a spark cut to reject spark events, of the form P (Log Size) < Log Conc where P is a 3rd degree polynomial (see figure 3.6)

• a car flashes cut: Log (Width × Length/Dist) > −0.3

• a cut on the number of islands, considering that hadronic showers produce more fragmented images: NumIslands>3

• a lower size cut, necessary because too small images are badly parametrized and thus represent a background source; in the present case, we keep images with Size>70 phel

• a series of further cuts that are part of the standard analysis for the MAGIC Collaboration and are known to reject hadronic or noisy events. These involve the Length/Size, Conc and Leakage parameters.

Another kind of run selection is made at this stage, checking mainly the mean rate and the nominal source position. Runs with too high or too low a rate, or with the nominal source position too far from the wobble circle, are excluded from the further analysis.
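A hedged sketch of how such quality cuts could be coded for a single parametrized event is shown below; the dictionary keys and the spark-cut polynomial coefficients are purely illustrative, since the real cut values are defined inside the MARS analysis:

```python
import math

def passes_quality_cuts(ev, spark_poly=(0.0, 0.1, 0.0, 0.0)):
    """Toy quality cuts on one event (dict of Hillas parameters).
    spark_poly holds illustrative coefficients of the 3rd-degree spark-cut polynomial."""
    log_size = math.log10(ev["size"])
    p = sum(coef * log_size ** i for i, coef in enumerate(spark_poly))
    if math.log10(ev["conc"]) > p:                                     # spark-like
        return False
    if math.log10(ev["width"] * ev["length"] / ev["dist"]) > -0.3:     # car-flash-like
        return False
    if ev["num_islands"] > 3:                                          # too fragmented
        return False
    if ev["size"] <= 70.0:                                             # badly parametrized
        return False
    return True

event = {"size": 250.0, "conc": 0.3, "width": 0.12, "length": 0.35,
         "dist": 0.7, "num_islands": 1}
print(passes_quality_cuts(event))   # True for this toy gamma-like event
```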

Figure 3.6: Spark cut for Crab Nebula (left) and Montecarlo data for comparison (right). Crab Nebula events above the black line are rejected.

Figure 3.7: Rate (left) and nominal source position (right) plots for the Crab Nebula sample. Runs with anomalous rate values or with source position outside the tolerance ring (in red, width is 30 mm) are rejected.

3.6.4 Statistical Photon/Hadron Separation

As already explained, events of hadronic origin are by far the most important background source for IACTs. An equally powerful technique is therefore needed to recognize gamma events among hadron events that are 10⁴ times more numerous. Image parameters can, in this context, play this role: hadron and photon images show different features through which they can be discriminated. First trials made use of simple static (i.e. constant) cuts in the image parameters, while the so-called supercuts are functions of the parameters themselves. These kinds of techniques, though, implied a somewhat a priori assumption on the hadron and photon image features in order to achieve an effective discrimination. A more refined method obtains better results by means of a statistical learning strategy. Random Forests⁶ (RF) is a multi-dimensional classification software implemented in the MARS environment whose task is to tag each event with a number, called

⁶Random Forests is a trademark of Leo Breiman and Adele Cutler. http://www.stat.berkeley.edu/∼breiman/RandomForests/

Hadronness, correlated with the probability that the event is of hadronic origin. The first step in the classification procedure is to train the algorithm, i.e. to let it learn how to discriminate gammas from hadrons using a sample of events of known origin. For this purpose, two training samples have to be prepared, one from Montecarlo gamma events, the other from real data (hadrons). Given a set of about five image parameters⁷, RF builds a hyperspace where each dimension represents a parameter. Events from the training samples then distribute in the hyperspace according to their parameter values. The algorithm randomly chooses three parameters out of five and for each one finds the value c that minimizes the Gini index Q:

$$Q(c) = 2\left[\frac{N_p^{left}\,N_h^{left}}{N_p^{left}+N_h^{left}} + \frac{N_p^{right}\,N_h^{right}}{N_p^{right}+N_h^{right}}\right]$$
where the subscript stands for photons or hadrons and the superscript indicates whether the event values for that parameter are smaller (left) or greater (right) than the cut value c. As can be understood from the formula, a cut that minimizes the Gini index is the one that best separates the two populations. Once the optimal cut is found, the hyperspace is branched into two subsets, one rich in photon events, the other rich in hadron events. Optimal cuts are calculated for the three randomly chosen parameters until the remaining subsets contain events from only one population or are smaller than a fixed size (these subsets are called leaves). The whole procedure is then repeated, choosing three different parameters out of five, and another tree is grown up to the leaves. RF produces a forest of M (usually 100) different trees. Since every leaf of every tree can be either a h-leaf or a γ-leaf, and since every event is contained in a large number of leaves, Hadronness can be computed by counting how many times an event has been put in a h-leaf:

$$\mathrm{Hadronness} = \frac{\sum_{i=1}^{M}\delta_{h,i}}{M}$$
where δ_{h,i} = 1 if the event is in a h-leaf of the i-th tree, and δ_{h,i} = 0 if the event is in a γ-leaf. This whole procedure would not be of much interest if applied only to a training sample, where Montecarlo photons are added on purpose. The trained classifier, however, can be used as a predictor. The aim of the training phase is in fact to store the values of the optimal cuts found; these are then applied to the whole (real) data sample and Hadronness is calculated for each event. For real data, Hadronness is thus correlated with the probability that an event belongs to a hadron-induced shower. This concept is based on the assumption that real events which, in the parameter hyperspace, are neighbors of Montecarlo events have a high probability of being of gamma origin. This discussion stresses once more the importance of the accuracy of the Montecarlo simulations. The main advantage in using RF is that the image features of each event are summarized in only one parameter: the background rejection⁸ is thus performed just with a cut in Hadronness (see figure 3.8). One should note, however,

that the geometrical Hillas parameters are assigned to small (low Size) images with large uncertainties, since the elliptical shape is unclear. Random Forests therefore loses some of its discriminating power at low energies, resulting in a Hadronness distribution for Montecarlo data such as the one represented in figure 3.9.

⁷For our Crab Nebula analysis, these parameters are namely: LogSize, Width, Length, Conc, Log[Size/(Width×Length)].
⁸The Random Forests algorithm appears to be a very powerful means of background rejection. It has been demonstrated, for example, that it assigns a high Hadronness value to car and spark events, thus making the previous quality cuts unnecessary (although not harmful).
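The train-then-predict logic described above can be mimicked with any off-the-shelf random forest; the sketch below uses scikit-learn on toy Gaussian "image parameters", so it only illustrates how a Hadronness-like quantity is obtained and cut on, not the actual MARS Random Forests implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# toy training samples with 5 "image parameters" per event
# (LogSize, Width, Length, Conc, Log[Size/(Width*Length)] in the real analysis)
mc_gammas = rng.normal(loc=0.0, scale=1.0, size=(2000, 5))
real_hadrons = rng.normal(loc=1.0, scale=1.5, size=(2000, 5))

X = np.vstack([mc_gammas, real_hadrons])
y = np.hstack([np.zeros(len(mc_gammas)), np.ones(len(real_hadrons))])  # 1 = hadron

forest = RandomForestClassifier(n_estimators=100, max_features=3, random_state=0)
forest.fit(X, y)

# Hadronness-like quantity: predicted probability of the hadron class,
# closely related to the fraction of trees classifying the event as a hadron
new_events = rng.normal(loc=0.5, scale=1.2, size=(5, 5))
hadronness = forest.predict_proba(new_events)[:, 1]
print(hadronness)

gamma_like = new_events[hadronness < 0.15]   # background rejection via a Hadronness cut
```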

Figure 3.8: Hadronness distribution for the analyzed samples of Montecarlo data (in red, peaked at 0) and Crab Nebula data (black filled, peaked at 1). Crab Nebula data are nor- malized to the MC data integral. Note that the gamma signal from Crab Nebula makes the Hadronness distribution of real data less steep near 0 than the MC distribution is near 1.

Figure 3.9: Hadronness dependence on the estimated energy for the analyzed Montecarlo data. It is apparent that below 200 GeV the mean Hadronness value increases because of the uncertainties in the Hillas parameters. One has to take this behavior into account when performing an energy-binned Theta analysis (see for example table 3.2).

3.6.5 Signal Detection and Other Physical Results

The last step in the analysis chain is the actual signal detection from the processed data. The general strategy is to look for an excess in the event distribution with respect to a parameter which should discriminate photons from hadrons. As already stressed in Section 2.2.3, gamma events are expected to produce images with the major axis pointing at the gamma-ray source position in the camera, while hadronic images should be randomly oriented. This is the key difference exploited in the signal search. For On/Off data, the orientation of the shower image is usually measured by the parameter Alpha (see figure 3.10). An Alpha-plot of events with low Hadronness should show an excess peaked at Alpha = 0 in the On sample with respect to the flat Off histogram. For wobble data, however, the Alpha parameter loses its physical meaning since the source is not located at the center of the camera. The discriminating parameter used in this case is Theta, the angular distance of the Disp-reconstructed source position from the nominal source position. For each event, two values of Theta are calculated, one from the nominal source position (θ_on), the other from the anti-source position (θ_off). In a ring of radius θ and thickness dθ the number of reconstructed source positions for background events is dN = 2πθ dθ, or equivalently dN/dθ² = π = constant. In a plot of dN/dθ² versus θ², one would therefore expect the θ²_off histogram to be rather flat (except for the finite camera acceptance effect) while the θ²_on histogram shows an excess peaking at θ² = 0. See figure 3.11.

Figure 3.10: Left panel: The Alpha angle, defined by the ellipse major axis and the line through the CoG and the source position (at the center of the camera). Right panel: in wobble mode the most probable source position for one event is reconstructed by means of the Disp parameter. The angular distances between the reconstructed source position and the nominal source position (θon), and from the reconstructed source to the anti-source (θoff ) fill up two histograms to be compared in order to reveal a gamma signal.

Signal Detection

If an excess is found in an Alpha- or Theta-plot, its statistical significance S is related to the probability that it is a random fluctuation of the background. To estimate this value, the usual Li & Ma formula [19] is computed:
$$S = \sqrt{2n\,\ln\!\left[\frac{1+\Gamma}{\Gamma}\cdot\frac{n}{n+m}\right] + 2m\,\ln\!\left[(1+\Gamma)\cdot\frac{m}{n+m}\right]}$$

Figure 3.11: Example of Alpha-plot and Theta-plot for two Crab Nebula samples. Upper panels: Alpha-plot for Crab Nebula, Hadronness<0.15, energy range 300÷700 GeV, On data in red crosses and Off in blue line. A clear excess of the On data is visible in the first 4 bins. The total significance value (17.13σ) is calculated from data within α < 10° (signal region). In the middle, the excess plot (the difference On – Off) and an information summary on the right (from [36]). Lower panels: Theta-plot for Crab Nebula, Hadronness<0.20, energy range 100÷1000 GeV, On in red and Off in blue. Excesses are found for low values of θ²; the signal region is defined by θ² < 0.05 (θ < 0.223°) and yields a significance of 12.3σ. In the middle, the integral significance.

where n, m are the numbers of events in the On and Off histograms and Γ is the ratio between the On and Off observation times. For the observation of an unknown source, like the Draco one, a significance S ≥ 5σ is conventionally required to claim its gamma-ray source status. This corresponds to accepting that the excess is a background fluctuation with a probability of about 10⁻⁷. To compare different observations a time-independent value is required, so one generally computes S/√T where T is the observation on-time. In figure 3.12 the Theta-plots for our time-relative cleaned Crab Nebula sample are shown. Results are summarized in table 3.2. It is clear that the time-relative cleaning is the "softer" one, since it yields higher significances (almost double in the first energy bin).
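The Li & Ma significance is simple enough to be reproduced directly; a minimal sketch, with n, m and Γ defined as above:

```python
import math

def li_ma_significance(n_on, n_off, gamma):
    """Li & Ma (1983, eq. 17) significance of an On/Off excess.
    n_on, n_off: event counts; gamma: ratio of On to Off observation times."""
    total = n_on + n_off
    term_on = 0.0 if n_on == 0 else n_on * math.log((1.0 + gamma) / gamma * n_on / total)
    term_off = 0.0 if n_off == 0 else n_off * math.log((1.0 + gamma) * n_off / total)
    return math.sqrt(2.0 * max(term_on + term_off, 0.0))

# toy numbers: 520 On and 430 Off events with equal exposure (gamma = 1)
print(li_ma_significance(520, 430, 1.0))   # ~2.9 sigma
```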

Skymap

The Disp parameter allows the calculation of a skymap (figure 3.13). This is essentially a two-dimensional histogram of the reconstructed source positions, given event by event in (RA, Dec) coordinates and background subtracted. Every bin in the histogram is then smoothed with a 2D-Gaussian function with FWHM given by the mirrors' PSF (∼0.1°).

Figure 3.12: Theta-plots for the test Crab Nebula sample treated with the time-relative image cleaning. From top to bottom, the five energy bins: 65–150, 150–300, 300–600, 600–1200 GeV and 1.2–100 TeV. See table 3.2.

Energy bin (GeV)   Hadronness cut   θ²cut   Excess   S (σ)   S/√T (σ/√h)
Time-relative cleaning
65÷150             0.30             0.05    383      3.03    1.69
150÷300            0.25             0.05    494      8.33    4.65
300÷600            0.20             0.04    300      12.01   6.71
600÷1200           0.15             0.03    118      10.82   6.05
1200÷10⁵           0.10             0.03    29       5.54    3.10
Absolute cleaning
65÷150             0.30             0.05    219      1.77    0.99
150÷300            0.25             0.05    437      7.53    4.20
300÷600            0.20             0.04    245      8.20    4.57
600÷1200           0.15             0.03    84       6.67    3.72
1200÷10⁵           0.10             0.03    48       5.47    3.05

Table 3.2: Crab Nebula results for both image cleanings. The Hadronness and θ²cut cuts are optimized with the aid of Montecarlo data (figure 3.9).

Figure 3.13: Crab Nebula skymap after the analysis chain. The z-axis is in arbitrary units related to the detection significance.

Energy Estimation

The energy of the shower, and therefore the energy of the primary particle, is to first order related to the Size of the image in the camera. Nevertheless, a more precise estimation should take into account some other variables. The main ones are the shower impact parameter and the Zenith angle of observation. Depending on these two factors, showers with the same energy produce images of quite different shape and density on the camera. This complex behavior suggests exploiting the Random

Forests algorithm also for the energy reconstruction. RF is trained again with a subsample of Montecarlo data; MC showers are simulated with a "true" energy, whose knowledge allows the estimation of an energy value for real data. A comparison between true and estimated energy for Montecarlo events gives some key information, such as the actual energy threshold and the energy resolution of the analysis chain (figure 3.14).

Figure 3.14: The actual energy threshold of the analysis chain is defined as the peak of the true energy distribution for MC data, about 65 GeV in the present case (left panel). The energy threshold increases with the Zenith angle of observation. The relative energy resolution (right panel) is given by the relative difference between estimated and true energies (∆E/E) and settles around 30% for energies above the threshold.

Flux Calculation

The energy estimation is an important task prior to the calculation of the differential flux of gamma-rays from the observed source. The determination of the energy spectrum, however, is perhaps the most fundamental stage in the analysis chain, since the spectrum features are the ones most directly correlated with the physical processes which generate gamma-rays. The differential flux is calculated as the number of observed photons of given energy, per unit of collection area and effective observation time:

$$\Phi(E) = \frac{N(E)}{A(E)\cdot t_{eff}}$$

The effective collection area includes the whole surface at ground illuminated by the cosmic-ray shower, and for IACTs it is therefore much larger than the actual telescope size (typically ∼ 10⁵ m²). It strongly depends on the Zenith angle of observation and is estimated with the aid of Montecarlo data. The differential flux is calculated by interpolating a small number of discrete points which represent the signal excess in a given energy bin. Typically, a point is taken into account if its significance is at least 2σ. If the significance is lower, an upper limit is usually computed in that energy bin. The upper limit at a confidence level ξ is defined as the flux level corresponding to a number of detected photons such that the probability to observe more photons in the same energy interval is ξ (usually ξ = 5%). Its evaluation involves knowledge of the telescope performance and some hypotheses on the spectrum shape (usually a power law or an exponentially cut-off power law).
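Schematically, each spectral point combines the excess counts, the effective collection area and the effective time in that energy bin; a rough sketch of the bookkeeping, with all numbers being placeholders (the real calculation uses an energy- and zenith-dependent effective area derived from Montecarlo data):

```python
def differential_flux_point(n_excess, a_eff_cm2, t_eff_s, e_lo_tev, e_hi_tev):
    """Differential flux for one energy bin, Phi = N_excess / (A_eff * t_eff * dE),
    in ph cm^-2 s^-1 TeV^-1."""
    return n_excess / (a_eff_cm2 * t_eff_s * (e_hi_tev - e_lo_tev))

# placeholders: 300 excess events, A_eff ~ 1e5 m^2 = 1e9 cm^2,
# 3.2 h of effective on-time, 0.3-0.6 TeV energy bin
phi = differential_flux_point(300, 1.0e9, 3.2 * 3600.0, 0.3, 0.6)
print(f"{phi:.2e} ph cm^-2 s^-1 TeV^-1")
```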

Figure 3.15: Differential spectrum from our analysis of the Crab Nebula data. The results are compatible with a (E/TeV)α power law with α = –2.61 in the 0.1÷2 TeV range.

3.6.6 Statistical and Systematic Uncertainty Estimation

Any quantitative result yielded by the analysis chain is necessarily affected by statistical and systematic uncertainties. On the statistical side, one assumes that the excess event detection follows a Poissonian distribution. As a consequence, the statistical error for a histogram bin containing N events is √N; this error is propagated in the following calculations, such as the differential flux. On the systematic side, the huge complexity of the instrument and the variety of physical processes involved allow only an approximate estimation of the several contributions to the systematic uncertainty. One can take into account:

• discrepancies between Montecarlo simulations and real conditions. These include atmospheric attenuation (15%), mirror reflectivity (5%), photon losses in the camera window (3%) and in the Winston cones (5%);

• data processing and analysis uncertainties, such as calibration (10%), loss of gamma events through the analysis chain (6%), effective area calculation (10%), and energy reconstruction (5%).

All these contributions affect the flux calculation, for which a total systematic error of about 30% is estimated. The uncertainty increases up to 50% for the flux levels in the lowest energy bins.

Chapter 4

Camera Inhomogeneity at Low Energies: Study of Off Data in Wobble Mode

This chapter presents a study preliminary to the analysis of the Draco source. It deals with the inhomogeneity of the camera appearing in our dataset in the low energy range (≲200 GeV). This particular feature, related to the period of the Draco datataking, leads to systematic errors which can mask a real signal. To study it, the analysis chain has been tested on an Off data sample, treating those data as a wobble mode observation. The purpose of this study is to obtain an estimate of the systematic errors due to the camera inhomogeneity and some procedures to mitigate them.

4.1 The Camera Inhomogeneity

The analysis chain explained in the previous Chapter, and especially the signal detection technique through the Theta-plot, is based on the assumption that events are distributed with central symmetry in the camera¹, such that for every nominal source position the θ² distribution is approximately the same. In the very first steps of the analysis of the Draco data we found that this assumption did not hold, since our data were distributed in a very inhomogeneous way over the camera: see figure 4.1. The inhomogeneity problem is, however, an issue which is going to be solved within the next observation cycle. It originated from a Level 1 Trigger hardware failure, such that some macrocells did not trigger every time they should have. Since it is a local problem, the lowest energy bins are the most affected, because big images are more likely to hit a large number of macrocells and hence be triggered anyway (figure 4.2). Events characterized by small images, on the contrary, may impinge entirely on a "blind" region and thus not be registered at all. It should be stressed that we are not dealing with a photomultiplier inefficiency, but with a trigger misbehavior: what is triggered is therefore fully recorded.

¹ More precisely, one expects dN/dϕ = const., i.e. that the number of events in every ϕ angular bin is approximately constant. This circular symmetry is partially broken by the hexagonal shape of the camera. ϕ is defined in Section 2.3.2 and illustrated in figure 4.4.

Figure 4.1: Center of Gravity ϕ-distributions for the Crab Nebula (left) and Draco period II (right). This kind of plot represents the number of CoGs lying within an angular interval ∆ϕ where ϕ is the camera angle shown in figure 4.4. A flat shape with six peaks, due to the hexagonal geometry of the camera, is expected. In the case of Draco a dramatic “hole” (inhomogeneity) in the 200◦–360◦ range is visible.

Figure 4.2: Center of Gravity ϕ-distributions for the Crab Nebula at different Size values. The inhomogeneities are visible only up to 200 photoelectrons (left), while for bigger sizes the plot is virtually smooth (right).

For On/Off observations a great effort has been made to work around this obstacle (see for example Zandanel's thesis [36]), mainly by means of camera cuts or simply by raising the lower energy cut of the analysis, but without major improvements. For wobble mode observations the inhomogeneity problem has to be dealt with more carefully, since On and Off data share the same hardware camera conditions but are taken at different positions. Hence we took the opportunity to go deeper into the question before analyzing the Draco data. This decision was also supported by the fact that a gamma signal from dark matter annihilation, if any, would most likely be observed at low energies because of its power law spectrum. Our aim is to understand the ways in which an inhomogeneous camera can affect a wobble analysis focused on the low energies, and possibly to find some "recipes" to avoid or reduce these effects. The study described in the next sections was presented, in co-operation with S. Lombardi, in the Software Session of the MAGIC Collaboration Meeting held in Sofia, May 2007.

4.2 Analysis of an Off Sample as Wobble Data

4.2.1 Dataset Preparation

In order to study the effects of the camera inhomogeneities on the analysis of Draco data, we need a data sample whose deviations from uniformity could not be confused with a gamma excess. For this reason our tests focused on a 3-hour-long Off data sample, i.e. the observation of an empty region of the sky, which was analyzed as in wobble mode datataking. Technical details are given in Table 4.1.

sequence number   date           start–stop time         Zenith angle   PSF (mm)
95618             17 Jul, 2006   23:15:47 – 23:49:12     6°÷10°         12.1
95631             18 Jul, 2006   00:03:29 – 00:39:01     14°÷20°        12.3
96262             23 Jul, 2006   22:53:26 – 23:53:54     6°÷16°         12.1
96277             23 Jul, 2006   23:53:55 – 00:56:07     17°÷29°        11.6

Table 4.1: Off1721-1 (RA 17.63 h, Dec 34.3◦) data sequences analyzed as wobble data. All data are taken during dark night.

Figure 4.3: Center of Gravity (x, y) and ϕ distributions for the Off data sample (all sizes). Minor inhomogeneities are visible.

As a first step in the Off study, the data sample is divided into two equally populated subsamples –to keep things simple, even-numbered runs and odd-numbered runs. To simulate a wobble datataking, the nominal source position in the camera is set by hand at two fixed points: W1 (120, 0 mm) for even runs and W7 (–120, 0 mm) for odd runs (see figure 4.4). Note that this configuration is different from a real wobble observation because in the latter the nominal source position rotates along the wobble circle, while in our test setting the two source positions are fixed. Furthermore, it is worth underlining that the numbers of events referring to W1 and W7 are approximately the same. This group of runs, with the nominal source position fixed as explained above, forms a sequence and is characterized by the direction defined by W1–W7, which we will identify with the angle ϕ measured clockwise from W1.

The arbitrary placement of the runs in the sequence (the even ones in W1, the odd ones in W7), though randomly driven, could introduce some kind of unwanted bias. In order to avoid this possibility, we will also include in the analysis a complementary sequence built exactly like the previous one, but with even- and odd-numbered runs swapped. Summarizing, the data sample built up to now consists of a pair of complementary sequences whose source positions lie at ϕ = 0°. It is quite straightforward, at this stage, to use the same data sample to extend this construction to other positions along the wobble circle. One can in fact set up a pair of complementary sequences, similarly to the pair (W1,W7) and (W7,W1), for other pairs of symmetric points along the wobble circle: W2-W8, W3-W9, W4-W10, and so on. As shown in figure 4.4, and listed in table 4.2, a total of 12 sequences (called "1+1-points sequences") spanning the wobble circle is built in this way and subsequently analyzed. Note that the distance between two consecutive W-points is 62 mm, about the size of two camera pixels. Therefore the discretization of the continuous wobble circle into twelve equally spaced points can be considered a rather good approximation.

Figure 4.4: Positions of the reference W-points along the wobble circle for Off analysis purposes. Every pair of points opposite with respect to the camera center defines a value for the ϕ angle, beginning with W1–W7 (ϕ = 0°) and increasing clockwise up to 150° (W6–W12). The distance between two consecutive W-points is approximately the size of two camera pixels.

ϕ      points (coordinates in mm)          sequences
0°     W1 (120, 0)      W7 (–120, 0)       seq1 (W1,W7); seq2 (W7,W1)
30°    W2 (104, –60)    W8 (–104, 60)      seq1 (W2,W8); seq2 (W8,W2)
60°    W3 (60, –104)    W9 (–60, 104)      seq1 (W3,W9); seq2 (W9,W3)
90°    W4 (0, –120)     W10 (0, 120)       seq1 (W4,W10); seq2 (W10,W4)
120°   W5 (–60, –104)   W11 (60, 104)      seq1 (W5,W11); seq2 (W11,W5)
150°   W6 (–104, –60)   W12 (104, 60)      seq1 (W6,W12); seq2 (W12,W6)

Table 4.2: Coordinates of the fixed nominal source positions and detail of the analyzed 1+1-points sequences.

The analysis chain applied to the Off data is virtually identical to the one applied to the test Crab Nebula sample, explained in Chapter 3. In particular, the Off analysis has the following features:

• calibration is performed with the digital filter extractor;

• both an absolute and a time-relative image cleaning are performed (see Section 3.6.2);

• quality cuts, photon/hadron separation and energy reconstruction are performed with the same parameters as for the Crab Nebula.

In the following Sections, several tests on the Off sample are presented, each dealing with a different “degree of freedom” of the analysis chain. A common procedure, though, has been followed to get comparable results: the Theta analysis is made in two energy bins, each with different Hadronness cuts as in table 4.3. In order to obtain clear plots, however, the significances have been averaged over the Hadronness values (an Off dataset shows very little variability with respect to this parameter). The considered energies are smaller than 600 GeV because we want to concentrate on the lowest bins; moreover, Off events at higher energies are too few in a 3-hour observation to make a sensible Theta-plot. Hadronness cut and signal region values are optimized by means of a comparison with Montecarlo and Crab Nebula data.

energy bin (GeV)   Hadronness cut   signal region (θ²cut)
80÷200             0.1              0.05
80÷200             0.2              0.05
80÷200             0.3              0.05
80÷200             0.4              0.05
200÷600            0.05             0.03
200÷600            0.15             0.03
200÷600            0.25             0.03

Table 4.3: Analysis bins for the Off data sample. The results presented in the next Sections are the Hadronness averages for a given energy bin.
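As a minimal sketch of the averaging procedure just described (Python; the per-cut significances below are placeholder numbers, not results of this work), for one sequence and one energy bin the significance is evaluated for every Hadronness cut of table 4.3, and the mean over the cuts is quoted, with their standard deviation becoming the error bar of the plots shown in the next pages.

    import numpy as np

    # Hypothetical per-cut significances (sigma/sqrt(h)) for one sequence, 80-200 GeV bin;
    # keys are the Hadronness cuts of table 4.3.
    signif_per_cut = {0.1: 0.35, 0.2: 0.44, 0.3: 0.47, 0.4: 0.42}   # placeholder values

    values = np.array(list(signif_per_cut.values()))
    mean_signif = values.mean()     # the value quoted in tables 4.4 and 4.5
    rms_signif = values.std()       # the error bar of figures 4.8-4.11

    print(f"<sigma/sqrt(h)> = {mean_signif:.2f} +/- {rms_signif:.2f}")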

4.2.2 Test on Different Ways to Extract On and Off Histograms

As introduced in Section 3.6.5, the Theta analysis technique requires two histograms, θ²on and θ²off, to be filled and compared. In the simplest case, for each event θ²on is calculated from the distance between the nominal source and the Disp-reconstructed source, while θ²off is the distance, squared, between the anti-source and the reconstructed source (figure 3.10). This procedure, however, is somewhat naïve because it does not account for the possible gamma excess in the Off histogram. In fact, if a gamma signal is present it will show up as an excess of low-θ²on events with respect to the background; the same events, though, will appear in the θ²off histogram as a “bump” around θ² = 0.64 (θ = 0.8°, the distance between source and anti-source): see figure 4.5. To avoid this signal contamination in the background plot, a more refined procedure has been developed in order to fill the two histograms with different events, at least in the signal region (0 ≤ θ² ≤ θ²cut). Since this algorithm is a bit complex, its details are postponed to Appendix C. This concern, however, generally has little relevance except for strong signals which exceed the limit of θ²cut; moreover, it has no relevance at all in the analysis of an Off data sample like the present one, where no gamma contamination is expected.

Figure 4.5: In a good Theta analysis, the θ²on (red) and θ²off (blue) histograms should show two nicely overlapping tails (left). If, instead, gamma events are assigned a θ²off value, they will show up as a negative excess (right).

A further opportunity offered by the Theta analysis is to compute the background from more than one point. The method used up to here exploits as Off reference only one point, the anti-source position, and is thus called the 1-to-1 method. One can also calculate the θ²off distance from other points on the wobble circle, for example from the two points orthogonal to the source–antisource line plus the anti-source (3-to-1 method), or from five Offpoints equally spaced along the wobble circle (5-to-1 method): see figure 4.6. One could expect that averaging the Off data among several Offpoints would reduce the statistical fluctuations and therefore increase the significance (figure 4.7). As we will see, though, this procedure is not suitable for dealing with strong inhomogeneities because it introduces systematic effects that dominate over the improvement on the statistical side. A Theta analysis has been performed on the Off sample with the sequence configuration shown above. The results are given in the form of a signed significance per square-root hour, ±σ/√h, in order to be comparable with other observations.
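For concreteness, a signed significance per square-root hour can be obtained from the On and Off counts in the signal region as in the following minimal sketch (Python), which uses the Li & Ma formula [19] as an example; the counts and the observation time are placeholders, and α is the On/Off normalisation (here 1, since a single Offpoint is used).

    import numpy as np

    def lima_significance(n_on, n_off, alpha=1.0):
        """Signed Li & Ma (1983, eq. 17) significance of the On excess over the Off background."""
        term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
        term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
        sig = np.sqrt(2.0 * (term_on + term_off))
        return np.sign(n_on - alpha * n_off) * sig

    # Placeholder counts inside the signal region for a 3-hour Off sequence:
    n_on, n_off, t_obs_hours = 1510, 1465, 3.0
    print(lima_significance(n_on, n_off) / np.sqrt(t_obs_hours), "sigma per sqrt(hour)")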

Figure 4.6: Sketch of the three methods to calculate the Off histogram: θ²off can be computed from more than one point. The hollow red ◦ is the nominal source position, the full black • the anti-source position and the other Offpoints, and the third marker is the Disp-reconstructed source position.

Figure 4.7: Comparison of the same Crab Nebula data, analyzed with the three Off extraction methods (from left to right, 1-to-1, 3-to-1 and 5-to-1). Clearly, using more Offpoints smooths out the background fluctuations, resulting in an increased significance. This procedure, however, is not suited to cases of severe inhomogeneities, particularly at low energies.

The sign is important since one would expect complementary sequences to yield opposite significance values: swapping source-runs and antisource-runs, in fact, means swapping On and Off data for the 1-to-1 method. Data are analyzed using the three Off-extraction methods in combination with the two image cleanings; the results are listed in tables 4.4 and 4.5. Some example plots, meant to exemplify the general behaviors described in the following lines, are shown from page 50. The results shown in tables and plots can be interpreted from many points of view. At this stage we would like to emphasize three main features that appear from the data. In the first place, if one looks at the absolute significance values, summarized with the ϕ averages and RMS in tables 4.4 and 4.5, one observes that generally –though not systematically– the 1-to-1 method yields significances closer to the expected null value than the 3-to-1 and 5-to-1 methods in the first energy bin, while at higher energies the three methods show similar performances. For comparison with the Crab Nebula analysis see table 3.2.

ϕ            absolute cleaning                time-relative cleaning
             1-to-1   3-to-1   5-to-1         1-to-1   3-to-1   5-to-1
SEQUENCES
0°           0.42     –0.34    –0.67          –0.45    –0.89    –0.92
30°          –0.27    –1.06    –0.87          –0.72    –1.23    –1.02
60°          –0.40    –0.12    –0.03          0.14     0.33     0.18
90°          0.52     1.11     0.95           0.74     1.28     1.27
120°         –0.25    0.64     0.52           0.97     1.48     1.42
150°         –0.24    –0.38    –0.06          0.47     0.20     0.18
ϕ average    0.37     0.65     0.61           0.60     1.04     0.96
ϕ RMS        0.40     0.79     0.69           0.66     1.10     1.04
COMPLEMENTARY SEQUENCES
0°           –0.42    –1.02    –1.31          –0.24    –0.53    –0.39
30°          0.28     –0.61    –0.39          0.75     –0.04    0.10
60°          0.40     0.52     0.53           –0.09    0.15     0.00
90°          –0.52    0.26     0.13           –0.68    0.10     0.15
120°         0.26     1.06     0.98           –0.94    –0.11    –0.08
150°         0.25     0.01     0.30           –0.44    –0.56    –0.53
ϕ average    0.38     0.69     0.67           0.54     0.19     0.15
ϕ RMS        0.40     0.76     0.80           0.59     0.31     0.28

Table 4.4: Significances per time unit (±σ/√h) from a Theta analysis on the Off sample. Values are averaged over the Hadronness cut (see table 4.3) and refer to the 80–200 GeV energy bin.

ϕ            absolute cleaning                time-relative cleaning
             1-to-1   3-to-1   5-to-1         1-to-1   3-to-1   5-to-1
SEQUENCES
0°           0.19     0.01     0.06           –0.76    –0.67    –0.57
30°          0.43     0.52     0.59           –0.56    –0.76    –0.88
60°          0.70     0.47     0.36           –0.20    –1.03    –0.74
90°          0.60     0.63     0.38           –0.97    –0.71    –0.93
120°         –0.72    –0.76    –0.34          –1.18    –0.66    –0.43
150°         0.87     0.80     0.40           0.40     1.16     0.89
ϕ average    0.53     0.48     0.35           0.73     0.77     0.71
ϕ RMS        0.57     0.57     0.33           0.57     0.80     0.68
COMPLEMENTARY SEQUENCES
0°           –0.17    –0.27    –0.20          0.84     0.49     0.67
30°          –0.44    –0.22    –0.11          0.57     0.14     0.00
60°          –0.70    –0.67    –0.71          0.22     –0.67    –0.41
90°          –0.55    –0.33    –0.52          1.05     0.92     0.63
120°         0.72     0.42     0.79           1.36     1.38     1.57
150°         –0.84    –0.60    –0.90          –0.28    0.61     0.39
ϕ average    0.52     0.38     0.47           0.81     0.72     0.66
ϕ RMS        0.57     0.39     0.60           0.59     0.70     0.67

Table 4.5: Significances per time unit (±σ/√h) from a Theta analysis on the Off sample. Values are averaged over the Hadronness cut (see table 4.3) and refer to the 200–600 GeV energy bin.

Figure 4.8: Significance per time unit for 1+1-points sequences. Absolute cleaning, low energy range, 1 Offpoint. Error bars refer to the standard deviation with respect to the Hadronness cut. It is clear that the 1-to-1 method preserves the symmetry between complementary sequences.

Figure 4.9: Significance per time unit for 1+1-points sequences. Time-relative cleaning, low energy range, 1 Offpoint. Error bars refer to the standard deviation with respect to the Hadronness cut. The 1-to-1 method preserves the symmetry between complementary sequences, but the time-relative cleaning gives larger significance values.

Figure 4.10: Significance per time unit for 1+1-points sequences. Absolute cleaning, high energy range, 3 Offpoints. Error bars refer to the standard deviation with respect to the Hadronness cut. Methods with more than one Offpoint break the symmetry between complementary sequences.

Figure 4.11: Significance per time unit for 1+1-points sequences. Time-relative cleaning, high energy range, 5 Offpoints. Error bars refer to the standard deviation with respect to the Hadronness cut. Methods with more than one Offpoint break the symmetry between complementary sequences.

In the second place, comparing the results between complementary couples of sequences (same cleaning, same method, same ϕ), it is clear that the 1-to-1 method preserves the symmetry between the two sequences, while adding more Offpoints breaks it (completely for the 5-to-1). This latter behaviour has a straightforward explanation, directly related to the definition of the 3-to-1 and 5-to-1 methods to extract the Off histogram. In fact, if one considers only source and anti-source to calculate the θ²on and θ²off values, in the Theta analysis complementary sequences will just swap the θ²on and θ²off values of the same events. In the more elaborate 3-to-1 and 5-to-1 methods, instead, the additional Offpoints have the effect of keeping a part of the Off histogram common to the two complementary sequences (see figure 4.12).

Figure 4.12: In the 1-to-1 method, the same event (with reconstructed source position in RS) in complementary sequences fills the On and Off θ² histograms with the same, but swapped, θ²on and θ²off values. In the 3-to-1 method, instead, 2/3 of the θ²off value remains the same when passing to the complementary sequence. This effect is even more accentuated in the 5-to-1 method (not shown).

This argument also helps in understanding the first observation. If one considers, for the lowest energy bin, the presence of camera inhomogeneities, one finds that with the 1-to-1 method they are “seen” with opposite sign by the two data samples and therefore tend to cancel out, while with the 3-to-1 and 5-to-1 methods they are summed up, so that the significances wrongly increase. Finally, we should stress that there is no apparent systematic behavior with respect to the position of the source in the camera (i.e. a systematic dependence on ϕ), since the RMS values are comparable to the single significances, and that the significance absolute values distribute well around zero only for the 1-to-1 method.

4.2.3 Test on the Arc-Length Spanned by the Nominal Source Position in the Wobble Circle

The analysis of the previous Section focused on the three Off extraction methods; it was then useful to arrange data in simple 1+1-points sequences, splitting the available runs into two subsamples and assigning them fixed and opposite nominal source positions, as exemplified in table 4.2. This configuration, though it clarifies some issues very well, is a strong simplification and does not actually correspond to real wobble data, in which the source position continuously moves along the wobble circle. Here we will therefore study the effect of a shorter or longer arc of the wobble circle covered by the nominal source position. For this purpose three new couples of complementary sequences are built, distributing the available runs over six (“3+3-points”) of the previously defined source positions (see figure 4.4), and also over all the 12 points, as listed in table 4.6.

sequence name   source positions
horizontal1     W6, W7, W8; W12, W1, W2
horizontal2     W12, W1, W2; W6, W7, W8
vertical1       W9, W10, W11; W3, W4, W5
vertical2       W3, W4, W5; W9, W10, W11
circle1         W1, W2, . . . , W12
circle2         W7, W8, . . . , W12, W1, W2, . . . , W6

Table 4.6: List of the fixed nominal source positions for the 3+3-points and 12-points sequences.

The results of a Theta analysis on these sequences, with Hadronness and energy bins identical to the previous Section, are shown in table 4.7 and in the following plots (4.13, 4.14 and 4.15).

sequence name   absolute cleaning              time-relative cleaning
                1-to-1   3-to-1   5-to-1       1-to-1   3-to-1   5-to-1
80–200 GeV
horizontal1     0.59     0.98     1.01         –0.44    0.26     0.22
horizontal2     –0.59    0.02     0.09         0.47     1.02     0.92
vertical1       –0.01    –0.51    –0.40        0.33     –0.13    0.14
vertical2       0.02     –0.47    –0.41        –0.29    –0.72    –0.48
circle1         0.13     0.03     0.03         –0.14    –0.58    –0.81
circle2         –0.13    –0.17    –0.09        0.09     –0.29    –0.48
200–600 GeV
horizontal1     0.31     0.03     0.11         0.66     0.27     0.18
horizontal2     –0.31    –0.47    –0.38        –0.53    –0.68    –0.72
vertical1       –0.40    –0.11    –0.20        0.04     0.14     0.00
vertical2       0.41     0.57     0.44         –0.13    0.21     0.01
circle1         0.42     0.28     0.19         –0.51    –0.02    –0.04
circle2         –0.37    –0.36    –0.10        0.40     0.68     0.54

Table 4.7: Significances per time unit (±σ/√h) from a Theta analysis on the Off sample. Values are averaged over the Hadronness cut (see table 4.3).

In the light of these results one can make a few statements:

• the 1-to-1 method still guarantees symmetry between complementary sequences and yields better (i.e. closer to zero) significances than the other methods at low energies, while it has comparable performances at higher energies, exactly as observed in the analysis of the 1+1-points sequences;

• fluctuations are relatively big and no systematic effects dependent on the source position are found;

• significance values are, on average, smaller than the ones found in the 1+1-points sequences analysis.

Figure 4.13: Significance per time unit for 3+3-points sequences. Absolute cleaning. Error bars refer to the standard deviation with respect to the Hadronness cut. Significances are generally lower than those found with 1+1-points sequences.

Figure 4.14: Significance per time unit for 3+3-points sequences. Time-relative cleaning. Error bars refer to the standard deviation with respect to the Hadronness cut. Significances are generally lower than those found with 1+1-points sequences.

Figure 4.15: Significance per time unit for the 12-points sequences. Error bars refer to the standard deviation with respect to the Hadronness cut. This data configuration yields the lowest significance values.

The last point should be particularly stressed, since it clearly shows that having the nominal source position span a wide fraction of the wobble circle is an effective way to smooth out the camera inhomogeneities.

4.2.4 Test on Different Image Cleanings

Some considerations on the results shown above can be made from the image cleaning point of view. In general, and especially for 1+1-points sequences, the time-relative cleaning seems to give slightly higher significance values. This is consistent with a test made on Montecarlo data, presented in figure 4.16. The plot shows, for the two cleanings, the distribution of the ratio Size/MCphel, i.e. the fraction of photoelectrons correctly reconstructed by the Size parameter with respect to the “true” (simulated) number for each MC event. The result is that the absolute cleaning reconstructs on average only about 60% of the total image charge, while the time-relative one has a higher mean efficiency (≈75%), but also introduces more noise. This is represented by the events whose calculated Size turns out to be bigger than the original charge (the >1 tail in the plot). The fraction of such events is 1.79% for the absolute cleaning and 4.86% for the time-relative one. It should be stressed that these notes apply only to the two considered image cleanings, which are the ones most used among the MAGIC analyzers; a low-energy analysis would probably be improved by studying a dedicated cleaning with different parameters (for instance an absolute 7:5).

Figure 4.16: Image charge reconstruction of Montecarlo data for the two image cleanings. Events whose Size/original charge ratio exceeds unity have pixels that survived the image cleaning even if lit only by the night sky background.
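A minimal sketch of this Montecarlo check (Python; the arrays below are placeholders for the reconstructed Size and the true number of photoelectrons of each simulated event): the mean of the ratio measures the average charge-reconstruction efficiency of a cleaning, while the fraction of events with ratio above 1 measures the noise it lets through.

    import numpy as np

    def cleaning_performance(size, mc_phel):
        """Mean reconstructed charge fraction and fraction of noise-dominated events."""
        ratio = size / mc_phel                  # Size / MCphel, event by event
        mean_efficiency = ratio.mean()          # ~0.60 absolute, ~0.75 time-relative in figure 4.16
        noise_fraction = np.mean(ratio > 1.0)   # events whose Size exceeds the true charge
        return mean_efficiency, noise_fraction

    # Placeholder Montecarlo arrays (one entry per event surviving the cleaning):
    rng = np.random.default_rng(0)
    mc_phel = rng.uniform(70, 500, size=10000)
    size = mc_phel * rng.normal(0.75, 0.15, size=10000)   # toy time-relative-like response
    print(cleaning_performance(size, mc_phel))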

4.2.5 Equalization

We performed a last test on the Off data sample, in order to investigate the effect of an asymmetric distribution of the events with respect to the camera center. The 1+1-points sequences are rebuilt so as to have different amounts of events for the different nominal source positions –see figure 4.17. In detail, in this so-called “disequalized” configuration 43% of the runs are assigned to one wobble position (for instance W1) and 57% to the opposite one (W7), so that the disequalization ratio is 3:4 (arbitrarily chosen).

Figure 4.17: The 1+1-points sequences described in paragraph 4.2.1 are equalized, i.e. runs are distributed between the two nominal source positions in a 50%–50% proportion (left). In the disequalized sequences (right) analyzed in this section, instead, the distribution of runs is unbalanced, such that the ratio between the number of runs referring to the two opposite positions is 3:4 (43% vs 57%).

The analysis of this kind of sequences has not been performed exhaustively for all the cases studied in the previous Sections. From the first results, however, one can expect that the same behaviors highlighted above with respect to the Off extraction method, the arc-length on the wobble circle and the image cleaning would show up in the disequalized case, too. The weight of the inhomogeneities, when not equally balanced between source and anti-source, has the sole effect of increasing the significance absolute values by 20%–30% with respect to the equalized case. As an example, the case of the time-relative cleaning for 1+1-points sequences is shown in figure 4.18. Far from being a marginal question, the equalization issue is fully dealt with in the Draco data analysis (Section 5.3).

Figure 4.18: Significance per time unit for 1+1-points sequences in the disequalized configuration. Time-relative cleaning, low energy range. The effect of the disequalization can be estimated by a comparison with figure 4.9.

4.3 Summary of results and recipes

We would like to summarize here the main facts about the systematic effects due to camera inhomogeneities at low energies, found with a Theta analysis on an Off data sample. We then point out some suggestions about how to cope with them, in order to improve the analysis of real wobble data when one has to deal with an inhomogeneous camera. It has been demonstrated that symmetry in the data configuration and in the analysis is the key to damping the effects of the inhomogeneities. On the analysis side, the 1-to-1 method to compute the θ²off values is thus preferred for low-energy analysis. Methods with more Offpoints are useful from intermediate energies upward, to smooth the Off histogram fluctuations in case of a small amount of data. On the data configuration side, two features proved to be helpful for the inhomogeneity issue, namely the arc-length covered by the nominal source position on the wobble circle and the data equalization. As for the arc-length spanned by the source position, we found that the wider, the better; as for the equalization, it is important to have the events well balanced with respect to the camera center. These facts suggest that for a steady source a long and continuous wobble observation in one night is more desirable than an equal observation time split into several nights. Furthermore, the observation schedule should be carefully planned in order to guarantee the same observation time to the two wobble positions. Concerning the choice of the image cleaning, we do not feel able to recommend one as the best for treating camera inhomogeneities, for two main reasons. First of all, in this context the difference between the behaviors of the two cleanings is small, since both yielded very similar results in the Off analysis. Secondly, we found that the time-relative cleaning has a slightly better performance, in terms of significance (Crab Nebula, Section 3.6.5) and charge reconstruction (Montecarlo, figure 4.16), than the absolute one. This property, however, should be interpreted with care: while a higher significance is welcome when a gamma-ray source is discovered, a significance enhancement could be confused with the effect of inhomogeneity fluctuations when only a weak hint of signal –or no signal at all– is found, as the Off analysis has shown. In conclusion, here is our “inhomogeneity recipe” for a Theta analysis of wobble data:

1. try to have the nominal source position cover as long an arc as possible on the wobble circle. This can be done either by carefully planning the observation time in the night or by a suitable choice of the runs to be analyzed;

2. choose the runs –discarding some if necessary– so as to have a well-balanced number of events between source positions that are opposite with respect to the camera center (equalization);

3. use the 1-to-1 method to compute the background in the low-energy bins.

Chapter 5

Analysis of the Draco Source

This Chapter explains how the data observed from Draco have been analyzed, taking into account the results found in the previous Chapter. Since no source of gamma rays is found in the energy range accessible to MAGIC, only upper limits on the flux are given. We take here the opportunity to express our gratitude to L.S. Stark and M. Rissi (ETH Zürich), who performed a parallel analysis and gave precious help with the most delicate analysis steps. Michael Rissi presented some preliminary results at the 30th ICRC (Mexico, July 2007).

5.1 Data Samples

MAGIC observed the Draco dwarf galaxy in wobble mode both in 2006 and in 2007. Differences in the hardware equipment, though, suggest splitting the available data into three groups: period I (May 2006), period II (June 2006) and period III (May 2007). Each period will be analyzed separately. Details of the data sample are given in table 5.1.

             first sequence   start time               min. Zd   min. PSF   effective
             last sequence    stop time                max. Zd   max. PSF   on-time
PERIOD I     91089            20 May, 2006 00:53:44    29°       14.3 mm
             92157            27 May, 2006 02:12:15    35°       16.5 mm    5.36 h
PERIOD II    92285            30 May, 2006 01:09:32    29°       12.3 mm
             94385            26 June, 2006 02:58:05   41°       16.3 mm    9.29 h
PERIOD III   239552           9 May, 2007 01:30:43     29°       12.1 mm
             245521           20 May, 2007 02:00:14    36°       15.6 mm    6.08 h

Table 5.1: Details of the collected Draco data sample. All data are taken in wobble mode; almost all during dark night, except for two sequences in period I and the first five sequences in period III that were taken with moonlight and are subsequently excluded from the analyzed data sample.

5.2 Analysis Chain

This Section presents the specifications of the adopted analysis chain. We would like to stress once more that this analysis chain is focused on low energies, a range where the effects of camera inhomogeneities become more relevant because they can involve the trigger threshold. The calibration is performed with the digital filter extractor for periods I and II, while the spline method is used for period III (the digital filter is not yet implemented for data taken with the new FADCs). The image cleaning choice is based on the results found with the Crab Nebula test analysis and on the Off study. Dealing with Draco means dealing with an unlikely signal observation, as explained in Section 1.4. For this reason one should make every effort to understand whether a hint of signal is present, especially at low energies: what was found in Section 3.6.5 and in Chapter 4 shows that a time-relative cleaning 2.5:0.5 is the best suited for this purpose and it is therefore the one applied to the Draco data.

Run Selection

Not all the available data runs belonging to the sequences listed in table 5.1 are analyzed, since some appear unsuitable for several reasons. In the present case, the main data rejections are the following:

• two moon sequences from period I and five from period III are excluded because their Hillas parameter distributions are not compatible with the rest of the data;

• rate problems, mainly due to bad weather, make the data from 30, 31 May and 24 June, 2006 unsuitable for further analysis;

• three more sequences are rejected because of corrupted files;

• the first sequence (11 minutes) of period III is observed under a Zenith angle higher than the rest of the same period, and is therefore not taken into consideration;

• some single runs are rejected because of rate problems, wrong source position or other unsuitable parameters.

After these first checks, 832 runs (about 84% of the observed data, corresponding to 17.43 hours of effective on-time) are judged fit for further analysis.

Cuts

The following quality cuts are applied to the data (refer to Section 3.6.3):

spark cut     2.75 – 2.8×Log(Size) + 0.85×Log²(Size) – 0.089×Log³(Size) – 0.3
other cuts    Length < 0.5×Log³(Size) + 0.2×Log²(Size) + 10
              Log(Conc) < –1.3
              Leakage > 0.1
              NumCorePixels < 3
              NumIslands > 3

Moreover, a lower Size cut of 70 photoelectrons is applied. This is rather low, but justified if one wants to push the energy threshold reasonably far down.
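A minimal sketch (Python) of how this quality selection could be applied to one event is given below; the cut expressions are copied from the list above, but the event record, the parameter names and the interpretation of each expression as a rejection condition are our assumptions (the spark cut is left out because its comparison is not fully legible in the list).

    import numpy as np

    def passes_quality_cuts(ev):
        """True if the event survives the quality selection (rejection logic assumed)."""
        if ev["size"] < 70.0:                                   # lower Size cut (photoelectrons)
            return False
        ls = np.log10(ev["size"])
        if ev["length"] >= 0.5 * ls**3 + 0.2 * ls**2 + 10.0:    # Length cut from the list above
            return False
        if np.log10(ev["conc"]) >= -1.3:                        # Log(Conc) < -1.3
            return False
        if ev["leakage"] > 0.1:                                 # too much light in the camera border
            return False
        if ev["num_core_pixels"] < 3:                           # too few core pixels
            return False
        if ev["num_islands"] > 3:                               # image too fragmented
            return False
        return True

    # Hypothetical event record (parameter names are ours):
    event = {"size": 120.0, "length": 12.0, "conc": 0.03, "leakage": 0.02,
             "num_core_pixels": 5, "num_islands": 1}
    print(passes_quality_cuts(event))   # True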

Hadronness Evaluation and Energy Reconstruction

The Random Forests algorithm is applied to calculate the Hadronness parameter and also, with a further implementation, for the energy reconstruction. Different training data subsamples are used for the two calculations in order to avoid any bias. The following parameters are used in RF:

Hadronness                       Energy
Log(Size)                        Log(Size)
Width                            Width
Length                           Length
Log[Size/(Width×Length)]         Log[Size/(Width×Length)]
Conc                             Conc
Leakage1                         Zenith angle
                                 MC true energy
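The Random Forest step could be sketched as follows (Python, using scikit-learn purely as an illustration: the actual analysis relies on the RF implementation of the MARS software, and all array names are ours). A classification forest trained on the parameters of the left column returns the Hadronness as the fraction of trees voting “hadron”, while a regression forest trained on an independent Montecarlo gamma subsample with the parameters of the right column estimates the energy.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    # Placeholder training samples: X_had/y_had for gamma-hadron separation (0 = gamma, 1 = hadron),
    # X_en/y_en for the energy estimation (true energy in GeV); real training uses image parameters.
    rng = np.random.default_rng(1)
    X_had, y_had = rng.normal(size=(1000, 6)), rng.integers(0, 2, 1000)
    X_en, y_en = rng.normal(size=(1000, 7)), rng.uniform(80, 1e4, 1000)

    hadronness_rf = RandomForestClassifier(n_estimators=100).fit(X_had, y_had)
    energy_rf = RandomForestRegressor(n_estimators=100).fit(X_en, y_en)

    hadronness = hadronness_rf.predict_proba(rng.normal(size=(5, 6)))[:, 1]   # fraction of "hadron" votes
    energy_est = energy_rf.predict(rng.normal(size=(5, 7)))                   # estimated energy (GeV)
    print(hadronness, energy_est)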

5.3 Coping With the Inhomogeneity

As already introduced in the previous Chapter, a part of the Draco data suffers from a severe camera inhomogeneity due to trigger inefficiencies. Figure 5.1 shows how this issue affects the low-Size events for each period. It is clear that period II is the most problematic, while for period I and period III a quite standard analysis is possible. Our aim is then to follow the suggestions derived from the Off analysis and try to recover some useful data from period II.

Figure 5.1: Center of Gravity distributions for events under 300 photoelectrons for the three periods. Data from period II (middle panel) are heavily affected by the trigger inefficiency.

The arc-length covered by the nominal source position on the wobble circle is quite wide for all three periods, as shown in figure 5.2. This length, however, cannot be extended at will. An important symmetry which can, instead, be imposed on the data is the balance of the number of runs whose nominal source positions are opposite with respect to the camera center (equalization), as introduced in Section 4.2.5. If one looks at the nominal source position distribution in the camera for period II (figure 5.3, left panel), one

Figure 5.2: Nominal source position distributions for the three periods of Draco data (from left to right: I, II equalized and III).

observes that diametrically opposite bins on the wobble circle can differ greatly in the number of events. This, as found when analyzing the Off data, enhances the inhomogeneities. A careful selection of runs can thus lead to a much more symmetric situation, as illustrated in the right panel of figure 5.3. To quantify the “amount” of disequalization we calculated a weighted average¹, ∆̄, of the relative differences ∆_i between the numbers of events contained in diametrically opposite bins (refer to figure 5.3). The equalization procedure reduced ∆̄ from 34.2% for the original data to 0.61% for the equalized sample. The improvement from an analysis point of view is immediately visible in figure 5.4. A Theta-plot of the original data (upper panel) shows a systematic mismatch of the On and Off histograms –red crosses (On) are generally above the blue ones (Off) even for high values of θ²– and therefore it is not possible to get any sensible result. The same analysis of the equalized sample (lower panel) yields a very different situation, where the excesses outside the signal region fluctuate around zero, as they should in the absence of a clear signal, allowing the results to be trusted. Equalization thus proves to be a substantial improvement in the presence of severe inhomogeneities, even if it costs some loss of data and reduces the effective acceptance area. The equalization procedure is necessary only for period II.

¹Given the index i that runs over the couples of diametrically opposite bins (i = 1, ..., 10), we define i_A and i_B as the numbers of events contained in the two bins of the i-th couple, with i_A > i_B. The relative difference to be averaged is ∆_i = (i_A − i_B)/i_A. As we want bins containing many events to weigh more in the average than smaller bins –in fact the latter account only for small inhomogeneities– we take as a weight for each bin the inverse of its Poissonian count error: σ(i_A) = 1/√i_A, σ(i_B) = 1/√i_B. Then we combine the two errors to find the error on ∆_i:

\[
\sigma_i^2 = \left(\frac{\partial\Delta_i}{\partial i_A}\right)^2 \sigma^2(i_A) + \left(\frac{\partial\Delta_i}{\partial i_B}\right)^2 \sigma^2(i_B) = \frac{i_A^3 + i_B^3}{i_A^5\, i_B}
\]

Finally we compute the weighted average

\[
\bar{\Delta} = \left(\sum_i \frac{1}{\sigma_i^2}\right)^{-1} \sum_i \frac{\Delta_i}{\sigma_i^2}
\]
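A minimal sketch of this weighted average (Python; the bin counts are placeholders): for each couple of diametrically opposite bins it computes ∆_i and σ_i² as defined above and returns ∆̄.

    import numpy as np

    def disequalization(i_a, i_b):
        """Weighted average of the relative differences between opposite wobble-circle bins.
        i_a, i_b: event counts of the two bins of each couple, ordered so that i_a >= i_b."""
        i_a, i_b = np.asarray(i_a, float), np.asarray(i_b, float)
        delta = (i_a - i_b) / i_a
        sigma2 = (i_a**3 + i_b**3) / (i_a**5 * i_b)    # error propagation derived above
        weights = 1.0 / sigma2
        return np.sum(weights * delta) / np.sum(weights)

    # Placeholder counts for the 10 couples of opposite bins (period II-like imbalance):
    i_a = np.array([900, 850, 820, 790, 760, 740, 700, 680, 650, 620])
    i_b = np.array([600, 640, 610, 580, 700, 500, 660, 470, 640, 300])
    print(f"Delta_bar = {100 * disequalization(i_a, i_b):.1f}%")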

Figure 5.3: Nominal source position distribution for period II before (left) and after equal- ization (right). The improvement in symmetry is exemplified for a couple of opposite bins (30×30 mm, about the size of one pixel). The equalization procedure required the rejection of less than 29% of the original data.

Figure 5.4: Comparison between the Theta-plots (90÷200 GeV) for period II before and after equalization. Upper panel: original data sample. Excesses are almost all positive for θ² up to 0.5. The increasing shape of the integral significance and the positive average of the excesses (black line in the excess plot) are the clearest indications that this analysis is unreliable. Lower panel: equalized data sample. Significance and excesses fluctuate around zero and prove the analysis trustworthy.

5.4 Results

Energy Threshold and Resolution

To estimate the actual energy threshold and resolution of the analysis chain one has to study the Montecarlo data. The plots in figure 5.5 show the distribution of the MC true energy and the relative difference between true and estimated energies for the sets of Montecarlo runs used for the three Draco periods. The peak of the true energy distribution, fitted with a Gaussian curve, gives the energy threshold of each period: 83.9 GeV, 93.1 GeV and 118.9 GeV for periods I, II and III respectively. The high threshold found for period III may be due to the fact that signal extraction and image cleaning are not yet fully tuned to the performance of the new FADCs. For every period the energy resolution is about 25% above the threshold. Since the energy thresholds are different for the three datasets, we define for each period a lower bound (LB) to be used in the Theta analysis:

period I: LB = 80 GeV
period II: LB = 90 GeV
period III: LB = 115 GeV
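A minimal sketch of the threshold estimate (Python; the Montecarlo true-energy array is a placeholder): the peak of the true-energy distribution of the simulated gammas surviving the cuts is fitted with a Gaussian, whose mean is taken as the analysis energy threshold.

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, a, mu, sigma):
        return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def energy_threshold(mc_true_energy):
        """Gaussian mean of the peak of the MC true-energy distribution (GeV)."""
        counts, edges = np.histogram(mc_true_energy, bins=np.linspace(40, 300, 40))
        centers = 0.5 * (edges[:-1] + edges[1:])
        i = int(np.argmax(counts))
        sel = slice(max(i - 8, 0), i + 9)                  # fit a window around the peak only
        popt, _ = curve_fit(gauss, centers[sel], counts[sel], p0=[counts[i], centers[i], 30.0])
        return popt[1]

    # Placeholder: a toy sample of surviving MC gammas peaking near ~90 GeV plus a flat tail.
    rng = np.random.default_rng(2)
    mc_e = np.concatenate([rng.normal(90, 25, 4000), rng.uniform(100, 1000, 2000)])
    print(f"threshold ~ {energy_threshold(mc_e):.1f} GeV")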

Figure 5.5: Estimate of energy threshold and resolution for the three Draco periods from Montecarlo data.

Theta Analysis

In the light of the previous considerations, in the Theta analysis the only method used to calculate the θ²off values is the 1-to-1. The results, for different energy bins and Hadronness cuts, are illustrated in table 5.2. As an example, the Theta-plots for the lowest energy bin and Hadronness < 0.3 for each period are also shown (figures 5.6 – 5.8).

Energy bin (GeV)   Had. cut   θ²cut   period I         period II (equal.d)   period III
                                      signif. (exc.)   signif. (exc.)        signif. (exc.)
LB÷200             0.4        0.05    1.20 (224)       0.53 (93)             –0.03 (–4)
LB÷200             0.3        0.05    1.74 (251)       1.15 (152)            0.16 (15)
LB÷200             0.2        0.05    0.58 (57)        0.32 (28)             –0.47 (–33)
200÷400            0.3        0.05    0.49 (30)        –0.95 (–60)           1.17 (60)
200÷400            0.2        0.05    1.03 (49)        –0.77 (–37)           0.31 (13)
200÷400            0.1        0.05    0.66 (22)        –0.35 (–12)           0.03 (1)
400÷10⁴            0.25       0.04    1.61 (42)        0.98 (29)             –0.85 (–23)
400÷10⁴            0.15       0.04    0.98 (22)        1.80 (45)             –1.81 (–41)
400÷10⁴            0.05       0.04    –0.06 (–1)       1.40 (26)             –1.22 (–20)
LB÷10⁴             0.3        0.04    2.10 (334)       0.69 (104)            0.57 (64)
LB÷10⁴             0.2        0.04    1.24 (139)       0.26 (27)             –0.68 (–58)
LB÷10⁴             0.1        0.04    –0.05 (–3)       0.15 (9)              –0.03 (–2)

Table 5.2: Results of the Theta analysis on the three periods of Draco data. For every energy bin and Hadronness cut, the significance value in σ units and the total number of excess events within θ²cut are given.

The main point to highlight is that the significance values never exceed 2σ, except in one case. One should remember that this result is obtained with a time-relative cleaning, which proved to yield higher significances than the more standard absolute one. Moreover, both positive and negative excesses are found. This evidence leads to the conclusion that no hint of signal is observed in the energy range spanned by the analysis. To confirm the absence of a gamma signal one can compare the Off and Draco results in terms of significance per time unit: the figures in tables 5.3 and 4.7 are in fact fully compatible.

Flux Upper Limits

When no source is observed, a flux calculation loses its meaning. In its place, upper limits on the gamma-ray flux are calculated using the Rolke method [27]. This method is based on the construction of a probability density function for the number of observed photons

\[
N_{\rm obs} = \int_0^{\infty}\!\int_{t_0}^{t_1} \frac{dN_\gamma}{dE\,dA\,dt}\, A(E)\,\epsilon(t)\, dE\, dt
\]

Energy bin (GeV)   Had. cut   period I σ/√h   period II (equal.d) σ/√h   period III σ/√h
LB÷200             0.4        0.52            0.17                       –0.01
LB÷200             0.3        0.75            0.38                       0.06
LB÷200             0.2        0.25            0.10                       –0.19
200÷400            0.3        0.21            –0.31                      0.47
200÷400            0.2        0.44            –0.25                      0.13
200÷400            0.1        0.29            –0.11                      0.01
400÷10⁴            0.25       0.70            0.32                       –0.34
400÷10⁴            0.15       0.42            0.59                       –0.73
400÷10⁴            0.05       –0.03           0.46                       –0.49
LB÷10⁴             0.3        0.91            0.23                       0.23
LB÷10⁴             0.2        0.54            0.09                       –0.28
LB÷10⁴             0.1        –0.02           0.05                       –0.01

Table 5.3: Significance per time unit values for the Draco data.

where A(E) is the effective area and ε(t) a function describing the effective observation on-time between t0 and t1. The probability density function of Nobs is manipulated in order to get an equivalent p.d.f. for the number of excess events Nexc and thus to calculate an upper limit on Nexc. This upper limit can be transformed into a flux upper limit by inversion of the above integral. In our case we computed the upper limit on the excesses from the Theta analysis at a confidence level of 95%, also including a systematic error estimate of 30%. The results, for different energy cuts, are the following:

Energy cut     Flux U.L. (10⁻¹¹ photons cm⁻² s⁻¹)
               period I   period II (equal.d)   period III
E > LB         23.9       12.9                  4.69
E > 150 GeV    1.49       4.54                  3.65
E > 250 GeV    0.80       0.80                  0.38
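Purely as an illustration of the final inversion step (not of the Rolke construction itself, which is taken from [27]), the following sketch (Python) converts an upper limit on the number of excess events into an integral flux upper limit, assuming for simplicity a constant effective area above the energy cut and folding in the systematic uncertainty by simply scaling the event upper limit; all numbers are placeholders.

    def flux_upper_limit(n_excess_ul, eff_area_cm2, t_eff_s, syst_fraction=0.30):
        """Integral flux U.L. (photons cm^-2 s^-1) above the energy cut, constant effective area."""
        n_ul = n_excess_ul * (1.0 + syst_fraction)     # simplified treatment of the 30% systematics
        return n_ul / (eff_area_cm2 * t_eff_s)

    # Placeholder numbers of a plausible order of magnitude for a few-hour IACT observation:
    n_ul = 120.0                  # upper limit on the excess events from the Rolke method
    area = 5.0e8                  # assumed ~5x10^4 m^2 effective area above the cut, in cm^2
    t_eff = 5.36 * 3600.0         # effective on-time of period I (table 5.1), in seconds
    print(f"{flux_upper_limit(n_ul, area, t_eff):.2e} photons cm^-2 s^-1")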

Skymaps

Finally, the skymaps are calculated with the Disp method and drawn (figure 5.9). It is interesting to observe that in all three periods the skymaps show a “bump” region and a “hole” region, rather than a randomly fluctuating sky. This behavior can be understood by looking at the z-axis scale, which is related to the significance: since the values are very small, the camera inhomogeneity effects are visible in the skymap as well. Furthermore, the effect is more accentuated in the skymap for period II, which suffers the most from camera inhomogeneities. A brief discussion of the physical interpretation of the results shown above is presented in the next Chapter.

Figure 5.6: Theta-plot for period I data. Energy 80÷200 GeV, 1-to-1 method.

Figure 5.7: Theta-plot for equalized data from period II. 90÷200 GeV, 1-to-1 method.

Figure 5.8: Theta-plot for period III data. Energy 115÷200 GeV, 1-to-1 method.

Figure 5.9: Skymaps of the three periods for Draco. Size > 70 phel, Hadronness < 0.15; pixels are smoothed with a Gaussian function with FWHM equal to the telescope PSF.

Chapter 6

Conclusions

The aim of this thesis is to analyze the observation of the Draco dwarf spheroidal galaxy performed by the MAGIC Telescope. To this end, a complementary study on a Crab Nebula sample is required. Since the Crab Nebula is well known as a bright and steady gamma-ray source, it is used to test the performance of the telescope and of the analysis chain. The latter, however, has to be tuned according to the kind of source observed: in particular, the Draco data sample had to be studied at low energies. In that range, the effects of a hardware problem which affected the trigger at the time of datataking are more important, since they can mask a genuine signal. For this reason part of this work has been devoted to the analysis of an Off data sample treated as a wobble observation, in order to get a quantitative estimate of such systematic effects before analyzing the Draco data. The results obtained from that analysis and from the Draco data are discussed in the following Sections.

6.1 Discussion of the Results from the Off Sample

The study of the Off data sample proved to be a good test field against which to compare the Draco analysis. Moreover, it allowed an important recipe for reducing the inhomogeneities to be developed, even if it is not a definitive solution and essentially requires data rejection. We want to stress again that by applying such procedures we managed to recover most of the data from period II, otherwise unusable, and to calculate a flux upper limit. Further studies would be welcome, e.g. with other Off samples or with a longer datataking time, in order to obtain a reliable estimate of the systematic effects due to the camera, which are not thoroughly accounted for yet. Aside from ad hoc analysis procedures which can be applied to deal with this problem –although they are based on the rejection of selected data– major improvements are expected to be achieved soon by means of hardware/technical work on the camera and the trigger.

6.2 Discussion of the Results from Draco

Simulations within the SUSY framework, as introduced in the first Chapter, predict gamma-ray fluxes from neutralino self-annihilation far below the current MAGIC sensitivity for the density profiles considered for the Draco dwarf galaxy (an exponentially cutoff power law). However, many of the key simulation parameters are still very uncertain. For instance, the density profiles do not take into account possible boosting factors like clumpiness or a central black hole, which could increase the photon yield by some orders of magnitude –remember that Φ ∝ ρ². So after the CACTUS claim of a signal observation in the 50–150 GeV range, a confirmation from another experiment was necessary. MAGIC took this opportunity since it was the best suited instrument (low energy threshold) and a dark matter observation, though unlikely according to the simulations, was not an impossible event.

MAGIC took a few hours of data, part of which was affected by serious hardware problems. These badly worsen the quality of the data, since the expected signal is at very low energies and one wants to push the analysis energy threshold as low as possible. Nevertheless, after some studies on an Off sample, a procedure was found to recover at least a part of the original data and attempt a sensible analysis. The results from the Crab Nebula test sample, presented in Chapter 3, show that the more standard steps of the adopted analysis chain (cuts, gamma/hadron separation, energy estimation) are reliable. All results show that no gamma signal is observed within the data sample from 80 GeV upward. The maximum significance is 2.1σ, and negative excesses are also found. Moreover, the significance values are compatible with those obtained from Off data processed with the same analysis chain. This means that any real signal, if present, is dominated by systematic errors which can fake a significance of ∼0.5 σ/√h in the lowest energy bin. Upper limits on the flux are then calculated as about 10⁻¹⁰ photons cm⁻² s⁻¹ for E ≳ 100 GeV. According to theoretical predictions, such limits are too high to touch the SUSY parameter space. They are, however, not compatible with the data presented by CACTUS, which led to a flux estimate of 2.4×10⁻⁹ photons cm⁻² s⁻¹ above 50 GeV [7].

Figure 6.1: Current MAGIC sensitivity compared with different SUSY simulations. If no flux boosting factors are present in dark matter observation targets, new instruments are required to reach the parameter space.

Summarizing, we have seen how theoretical works point out that the sensitivities of MAGIC and, more generally, of current IACT instruments are insufficient to detect any gamma-ray signal from dwarf spheroidal galaxies. For MAGIC in particular, the observation of Draco had to deal with the inhomogeneity issue, which degraded the telescope performance. A null signal was therefore expected, but an error estimate was required: such an estimate has been achieved by the calculation of the upper limits and, for the systematic part, through the study of the Off sample. A new observation with improved hardware equipment would surely lead to better upper limits, though not low enough to put any constraint on the SUSY parameters. To reach the needed sensitivity (figure 6.1) a chance is offered by MAGIC II, the stereoscopic version of MAGIC which will start operations in 2008, but from a more realistic point of view third-generation telescopes, like CTA, are required. Dark matter searches with gamma-ray instruments can also find more opportunities by looking at different targets. More than half of the sources observed by the past space telescope EGRET are still unidentified: some of those could be intermediate-mass black holes, very interesting objects since their features are prescribed essentially by gravity and by the dark matter self-annihilation cross section. The observation of such targets, perhaps triggered by surveys made by the next satellite instrument, GLAST, represents the near-future opportunity for the indirect dark matter search.

Appendix A

Celestial Coordinate Systems

When referring to a celestial object it is necessary to mention its position in the sky, which is the only way to identify it without ambiguity. This custom is so important that new catalogue names are given after the object coordinates, such as the AGN 1ES1218+304. Three main coordinate systems are used in astronomy –choosing one or the other is often related to the nature of the object itself, or to the branch of astronomy or astrophysics one is dealing with.

Figure A.1: Sketch of the Horizontal coordinate system.

Horizontal Coordinate System

Also called the local coordinate system, because its origin is set at the observer's location. This feature makes it easy to use for naked-eye or amateur telescope observations, but unsuitable for scientific purposes, because exchanging data would require a coordinate transformation for every object. The reference plane is the observer's horizon, where the cardinal points are fixed. The angle from the horizontal plane to the observed target is called altitude (h) or elevation; it spans from zero (horizon) to 90 degrees (Zenith). The angle from North to the projection of the target on the horizontal plane, measured clockwise, takes the name of azimuth (a) and it ranges

from 0° to 360°. The North Celestial Pole, where the Polaris star lies, has in this reference an altitude equal to the observer's geographical latitude (figure A.1). The zenith angle (Zd), which is an important characteristic for MAGIC observations, is the complement of the altitude: Zd = 90° − h.


Figure A.2: Sketch of the Equatorial coordinate system. The Ecliptic is the plane defined by the Earth in its revolution around the Sun: thus, in a geocentric reference such as the Equatorial one, the Sun is always lying on the Ecliptic.

Equatorial Coordinate System

This is the coordinate system used in this thesis, and the most common in astrophysics. It is simply a projection of the Earth's geographic grid onto the celestial sphere. The reference plane is the one on which the terrestrial equator lies, while the fundamental meridian is defined by the First Point of Aries (γ point). This is the point where the equatorial plane intersects the Ecliptic; it is also called the vernal equinox because the Sun crossing this point –around March 21st– marks the beginning of astronomical spring. Celestial longitude is called right ascension (α or RA), and is measured from the γ point eastward, i.e. in the direction along which the Zodiacal constellations come in their yearly order (for example, Aries, Gemini and

Cancer have increasing RA). Right ascension is measured in 24 hours, minutes and seconds; an interval of 1 hour corresponds to 15 degrees. Celestial latitude is called declination (δ or Dec) and is measured from –90° (Celestial South Pole) through 0° (Celestial Equator) to +90° (Celestial North Pole). In catalogue names¹, if an object is called something like ABCxxyy+rrss, it means that it is located at RA xxh yym, Dec +rr° ss′. The Equatorial reference system is independent of the location (on the Earth!) but has the disadvantage of being time-dependent because of the Earth's secular motions (precession, nutation). For this reason an epoch should be specified together with the spatial coordinates; for our purposes, the J2000 (January 1st, 2000, noon) epoch is always implied. This is also why the First Point of Aries, named by Roman astronomers, is actually located in the Pisces constellation. See figure A.2.

Galactic Coordinate System

A third coordinate system, useful for the study of objects belonging to the Milky Way, is the Galactic coordinate system. It is a standard longitude–latitude (l, b) grid with the equator lying on the Galactic plane and the fundamental meridian crossing the Galactic Center. Essentially, it can be thought of simply as a rotated Equatorial coordinate system, with the Galactic Center (l = 0, b = 0) at RA 17h45m37.224s, Dec –28°56′10.23″ (J2000).

1 See recommendations at http://cdsweb.u-strasbg.fr/iau-spec.html

Appendix B

Image Parameters

We give here the analytical description of the image parameters introduced in Chapter 2. Some have a simple geometrical interpretation, given in figure B.1.

Figure B.1: Geometrical definition of some simple Hillas parameters. In an arbitrary reference (x, y), the camera center coordinates are (x0, y0).

Given the basic quantities

(x_0, y_0) — coordinates of the reference point (camera center)
(x_i, y_i) — coordinates of the i-th pixel in the image
N_i — number of photons collected by the i-th pixel
w_i = N_i / Σ_k N_k — weight of the i-th pixel

one can define:

x̄ = Σ_i w_i x_i,   ȳ = Σ_i w_i y_i        (x̄, ȳ): Center of Gravity

σ_xx = \overline{(x − x̄)²},   σ_yy = \overline{(y − ȳ)²},   σ_xy = \overline{(x − x̄)(y − ȳ)}        (correlations)

d = σ_yy − σ_xx

a = [d + √(d² + 4σ_xy²)] / (2σ_xy)

The main image parameters are therefore:

Size = Σ_i N_i

Length = √[ (σ_xx + 2aσ_xy + a²σ_yy) / (1 + a²) ]

Width = √[ (σ_yy + a²σ_xx − 2aσ_xy) / (1 + a²) ]

Alpha = arcsin[ |ȳ − a x̄| / (Dist · √(1 + a²)) ]

Dist = √[ (x̄ − x_0)² + (ȳ − y_0)² ]

M3Long = \overline{x³}

Disp = A(Size) + B(Size) · [Width / Length] + η(Size) · Leakage2

where Leakage2 is the ratio between the light content of the two outermost pixel rings in the camera and the total light content of the recorded image [23].
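A minimal sketch (Python) of how the weighted moments above translate into the main parameters; the pixel coordinates and charges are placeholders, and the formulas follow the definitions of this Appendix.

    import numpy as np

    def hillas(x, y, n, x0=0.0, y0=0.0):
        """Size, Center of Gravity, Length, Width and Dist from pixel positions and charges."""
        x, y, n = map(np.asarray, (x, y, n))
        size = n.sum()
        w = n / size                                     # pixel weights w_i
        xbar, ybar = np.sum(w * x), np.sum(w * y)        # Center of Gravity
        sxx = np.sum(w * (x - xbar) ** 2)
        syy = np.sum(w * (y - ybar) ** 2)
        sxy = np.sum(w * (x - xbar) * (y - ybar))
        d = syy - sxx
        a = (d + np.sqrt(d**2 + 4 * sxy**2)) / (2 * sxy)          # major-axis slope (sxy != 0 assumed)
        length = np.sqrt((sxx + 2 * a * sxy + a**2 * syy) / (1 + a**2))
        width = np.sqrt((syy + a**2 * sxx - 2 * a * sxy) / (1 + a**2))
        dist = np.hypot(xbar - x0, ybar - y0)
        return size, (xbar, ybar), length, width, dist

    # Placeholder "image": a handful of pixels roughly aligned along a tilted axis.
    px = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
    py = np.array([2.0, 8.0, 14.0, 22.0, 27.0])
    ne = np.array([30.0, 80.0, 120.0, 70.0, 40.0])
    print(hillas(px, py, ne))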

Appendix C

Off Data Extraction Methods in Wobble Mode

This Appendix aims to explain in detail the three methods –1-to-1, 3-to-1 and 5-to-1– exploited to compute the Off histogram in a Theta analysis such as the one presented in Chapter 4. As an introduction, we summarize here the configuration geometry:

• every event is characterized by a “most probable” source position reconstructed by means of the Disp method (RS);

• at the moment the event is triggered, the observed source is located at a precise place in the camera (nominal source position, NS), 0.4° away from the camera center (wobble circle);

• once the nominal source position is located on the wobble circle, the anti-source position (AS) is defined as the point diametrically opposite to NS, together with the other Offpoints, which are the vertices of a square (3-to-1 method) or a hexagon (5-to-1 method).

Figure C.1: Sketch of the RS, NS, AS and Offpoints positions for the three methods.

1-to-1 Method

The easiest way to fill the On and Off θ² histograms would be simply to calculate, for each event, the angular distances RS–NS (θ_on) and RS–AS (θ_off). This procedure, however, does not account for the possible excess of gamma events that could be found near

the nominal source position, and which should not be included in the background histogram. In fact, if a signal from the source is observed, gamma events will likely point toward NS and have small RS–NS distances. A simple solution is then to exclude from the θ²off calculation the events whose RS lies nearer than ζ to NS. To balance this lack of events in the Off histogram, a symmetric exclusion circle is drawn around AS: events lying within it will not be assigned a θ_on value. The size of ζ is chosen so as to include the signal region of the Theta-plot; in the present case, as we use θ_cut = 0.223° or smaller, we take ζ = 0.25°. This construction has the merit of completely decoupling the On and Off histograms within the signal region, and thus makes it possible to calculate a sensible significance. The tails of the two histograms, instead, are made up of the same events and should therefore overlap rather well. It is worth stressing that this method is completely source–antisource symmetric. See figure C.2.

Figure C.2: Sketch of the 1-to-1 method.
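A minimal sketch of the 1-to-1 filling logic (Python; positions are expressed in camera coordinates in degrees, and all names are ours): each event contributes a θ²on and/or a θ²off value according to the exclusion circles of radius ζ described above.

    def dist2(p, q):
        """Squared distance between two points of the camera plane (deg^2)."""
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    def fill_1to1(rs, ns, anti, zeta=0.25):
        """Return (theta2_on, theta2_off) for one event; None means 'do not fill that histogram'.
        rs = Disp-reconstructed source, ns = nominal source, anti = anti-source position."""
        d2_ns, d2_as = dist2(rs, ns), dist2(rs, anti)
        theta2_on = d2_ns if d2_as > zeta**2 else None    # skip On if RS falls in the AS exclusion circle
        theta2_off = d2_as if d2_ns > zeta**2 else None   # skip Off if RS falls in the NS exclusion circle
        return theta2_on, theta2_off

    # Example: an event reconstructed near the nominal source (wobble radius 0.4 deg, source at W1).
    print(fill_1to1(rs=(0.35, 0.05), ns=(0.4, 0.0), anti=(-0.4, 0.0)))   # (0.005, None)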

3-to-1 Method

Sometimes, if the observation time is short or one has a small amount of data to analyze, it can be convenient to try to smooth the Off fluctuations by means of some kind of average. This can be achieved by adding some more Offpoints with respect to which a θ²off value is calculated. In the case of the 3-to-1 method, two more Offpoints (B, C) are added to AS (figure C.3). In this case, though, θ²on is calculated for every event, no matter where RS lies. The Off histogram is built as follows:

• for events with RS in the source exclusion circle, θoff is calculated only with respect to AS, and weighted 1/3;

• for events whose RS is outside both exclusion circles, θoff is averaged over the three points: AS, B, C;

• for events with RS in the anti-source exclusion circle, θ_off with respect to B and C is double-weighted to compensate for the first kind of events.

This method does not completely avoid the gamma contamination in the Off histogram, because of the factor 1/3 for the θ_off from AS in the source exclusion circle. Moreover, this procedure is not symmetric under a source–antisource swap, except for events with RS in some special places (on the B–C axis).

Figure C.3: Sketch of the 3-to-1 method.

5-to-1 Method

The last method is just an extension of the previous one to five Offpoints (figure C.4). The Off averaging procedure is almost identical; only the weights are adjusted for five points instead of three. With respect to the 3-to-1, this method has the advantage of a smaller gamma contamination in the Off histogram (1/5 instead of 1/3), but on the other hand it destroys any source–antisource symmetry.

Figure C.4: Sketch of the 5-to-1 method.

Figure References

Chapter 1

Figure 1.1, page 2: from Bertone et al. [8].
Figure 1.2, page 7: Michael Rissi, private communication.
Figure 1.3, page 10: from Stark et al. [29].

Chapter 2

Figure 2.1, page 12: left: from Cronin [14]; right: from L. I. Dorman, Cosmic rays in the Earth's atmosphere and underground, Dordrecht (2004).
Figure 2.4, page 14: from Nobili [25], p. 44.
Figure 2.10, page 20: photo from Google Maps (http://maps.google.com). It can be retrieved searching for 28.762 N, 17.89 W.
Figure 2.11, page 21 and Figure 2.13, page 22 (left): MAGIC Picture Gallery, http://wwwmagic.mppmu.mpg.de/gallery.

Chapter 3

Figure 3.1, page 26: left: from Aharonian et al. [1]; right: from Wagner et al. [34].
Figure 3.2, page 26: pictures from the CHANDRA Photo Album [12] and references therein.

Bibliography

[1] F. Aharonian et al. (HEGRA Collaboration), The Crab Nebula and Pulsar Between 500 GeV and 80 TeV: Observation With the HEGRA Stereoscopic Air Cerenkov Telescopes, The Astrophysical Journal 614:897–913 (2004)

[2] J. Albert et al. (MAGIC Collaboration), VHE γ-ray Observation of the Crab Nebula and Its Pulsar With the MAGIC Telescope, astro-ph/0705.3244 (2007)

[3] B.C. Allanach et al., The Snowmass Points and Slopes: Benchmarks for SUSY Searches, The European Physical Journal C 25:113–123 (2002); also in hep-ph/0202233

[4] ARGO website, http://argo.na.infn.it/

[5] Edward A. Baltz, Dark Matter Candidates, astro-ph/0412170 (2004)

[6] Edward A. Baltz et al., Detection of neutralino annihilation photons from external galaxies, astro-ph/9909112 (1999)

[7] H. Bartko et al., Proposal: Dark Matter Observations in Draco and Ursa Major, internal MAGIC document (2006)

[8] Gianfranco Bertone, Dan Hooper and Joseph Silk, Particle dark matter: evidence, candidates and constraints, Physics Reports 405:279–390 (2005)

[9] Erica Bisesi, Indirect Search of Dark Matter in the Halos of Galaxies, Ph.D. Thesis, Università di Udine (2007)

[10] G.R. Blumenthal et al., Nature 311, 517 (1984)

[11] Thomas Bretz et al., Comparison of On-Off and Wobble mode observations for MAGIC, 29th ICRC Proceedings 4:311–314 (2005)

[12] CHANDRA X-Ray Observatory website, http://chandra.harvard.edu

[13] Peter Coles and Francesco Lucchin, Cosmology. The Origin and Evolution of Cosmic Structure, 2nd Edition, Wiley, Chichester (2002)

[14] James W. Cronin, Cosmic rays: the most energetic particles in the universe, Reviews of Modern Physics 71, 2 (1999)

[15] Jaan Einasto, Dark Matter: Early Considerations, astro-ph/0401341 (2004)

[16] Markus Gaug, Calibration of the MAGIC Telescope and Observation of Gamma Ray Bursts, Ph.D. Thesis, Universitat Autònoma de Barcelona (2006)

[17] A.M. Hillas, 19th ICRC Proceedings 3:445 (1985)

[18] John David Jackson, Classical electrodynamics, 3rd edition, Wiley, New York (1998)

[19] Ti-pei Li and Yu-qian Ma, Analysis Methods for Results in Gamma-Ray Astronomy, The Astrophysical Journal 272:317–324 (1983)

[20] Saverio Lombardi, Studio sistematico del fondo e del segnale nei dati dell'esperimento MAGIC con applicazione all'analisi della sorgente CRAB, Laurea Thesis, Università di Padova (2006)

[21] Malcom S. Longair, High Energy Astrophysics, Volume I: Particles, photons and their detection, 2nd ed., Cambridge University Press, Cambridge (1994)

[22] Razmick Mirzoyan, Conversion Factor Calibration for MAGIC Based on the Use of Measured F-Factors of PMTs, MAGIC TDAS Note 00-15 (2000)

[23] Josep Flix Molina, Observation of γ-rays from the Galactic Center with the MAGIC Telescope, Ph.D. Thesis, Universitat Autònoma de Barcelona (2006)

[24] NASA/IPAC Extragalactic Database, http://nedwww.ipac.caltech.edu/. Results for the Draco dwarf object.

[25] Luciano Nobili, Processi radiativi ed equazione del trasporto nell’Astrofisica delle alte energie, Cleup Editrice, Padova (2002)

[26] Elisa Prandini, Observation of VHE Gamma Emission from the AGN 1ES1218+304 with the MAGIC Telescope, Laurea Thesis, Università di Padova (2005)

[27] W. Rolke, A. Lopez and J. Conrad, Limits and Confidence Intervals in the Presence of Nuisance Parameters, Nuclear Instruments and Methods A 551, 493 (2005)

[28] D.N. Spergel et al., astro-ph/0302209 (2003)

[29] L.S. Stark et al., Draco Observation with the MAGIC Telescope, to be published in the 30th ICRC Proceedings (2007)

[30] Diego Tescaro, Informazioni temporali nelle immagini Cherenkov osservate dal telescopio MAGIC, Laurea Thesis, Università di Padova (2005)

[31] Luigi Tibaldo, Looking for cosmic ray sources: a study of gamma-ray emission from molecular clouds with the GLAST LAT telescope, Laurea Thesis, Università di Padova (2007)

[32] Craig Tyler, Particle dark matter constraints from the Draco dwarf galaxy, Physical Review D 66:023509 (2002)

[33] Sidney van den Bergh, The Early History of Dark Matter, Publications of the Astronomical Society of the Pacific 111:657–660 (1999); also in astro-ph/9904251

[34] R.M. Wagner et al., Observations of the Crab nebula with the MAGIC telescope, 29th ICRC Proceedings 4:163–166 (2005)

[35] W.-M. Yao et al., Cosmic Rays, section 24 of Journal of Physics G 33, 1 (2006), available at http://pdg.lbl.gov/

[36] Fabio Zandanel, Dark Matter Search with the MAGIC Telescope: Analysis of the Unidentified EGRET Source 3EG J1835+5918, Laurea Thesis, Università di Padova (2007)

[37] F. Zwicky, Helv. Phys. Acta 6, 110 (1933)

Note. An electronic version of the unpublished theses should be available in the Publications section of the MAGIC website: http://wwwmagic.mppmu.mpg.de/publications/theses/


Acknowledgements

This work, and the great deal that lies behind it, is not all my own doing. There are many people I want to thank for having contributed, each in their own way, to this thesis.

Moon over junk-yard where the snow lies bright
Can set my heart to burn

First of all, thank you to Saverio, who shared with me all the work of these months with inexhaustible (almost proverbial, I would say) kindness and patience, and who made the names of Callisto, Osteria and Melibea more than familiar to me. Repeated endlessly like a mantra, each time with a slightly different nuance, they finally gave birth to the results we were looking for. You are cited only once, on page 43, but you know that half of this thesis is yours. I thank Mosè for welcoming me so enthusiastically into his group and for the opportunities that being part of MAGIC has offered me. His clarity in boiling down technical intricacies or physical complications is something I still have to learn.

Stood before the shaman, I saw star-strewn space
Behind the eyeholes in his face

To the whole of room 383 goes a hug full of gratitude, as much for the thousand pieces of advice received as for the long hours and the brief moments shared. Thank you Elisa for the smiles and the winks (including the ones over skype ;) and for the laughs, with Saverio, on the Bulgarian nights. Thanks to Roberta, Michele and Fabio for making my shift an indelible memory. The bolts, the beans, the bread and chorizo, the milk and cereal, the cold wind whistling through the mirrors are sensations somehow woven into these pages as well; thank you for having been part of them. Thanks to Villi for all the time he devoted to me, between long explanations and re-readings of the thesis, in his last days as a bachelor, and above all for showing me how far the scout spirit can reach. Thanks to Marcos, generous supplier of ROOT tricks in exchange for rough-and-ready rules of Italian grammar. And finally one cannot leave out Dazzi, who from behind the partition never stops producing pearls of his very personal philosophy.

A warm thank-you is owed to the rest of the Padova MAGIC Group: Donatella, Toni, Gigi and Denis, and to the members of the MAGIC "extended family" whom I had the chance to meet during the splendid dinners in Sofia. Among them Markus G. and Markus G., together with Montse and little Adrian, deserve a special thanks that needs no explanation.

Infinity always gives me vertigo
And fills me up with grace

Other physicists remain to be thanked. Their contribution to this thesis may not be explicit, but I would not be who I am now had I lacked their support and friendship over five long years of study. Together we built our idea of Physics, and if something good has come out of it, the credit belongs to each of them. With Gigi, the GLAST astrophysicist, I shared the not-always-easy path that leads from muons to gamma rays. While his fame as the foremost expert on ice cream, chocolate and the galactic diffuse background is public knowledge, perhaps not everyone knows how generously he offers his advice on every other subject. Talking with Tito, my kindred spirit, always manages to (re)kindle some passion of mine: for science, politics, the internet and, not least, the proper use of LaTeX. In kindergarten Gotti dreamed of becoming a nuclear physicist; now that the dream has come true, he will realise what effects an imprinting lasting twenty years has had on me. Nicola, the meticulous one, the rigorous one: regarding him as a distant limit to strive towards taught me to do things properly. More often, though, I have preferred having him nearby.

This feast of beauty can intoxicate
Just like the finest wine

Mamma, Papà, the last line, the most important one, belongs to you.

And don't tell me there is no mystery
It overflows my cup.