
The Pennsylvania State University The Graduate School

A SEARCH FOR VERY HIGH ENERGY PHOTONS FROM GAMMA-RAY

BURSTS WITH THE HIGH ALTITUDE WATER CHERENKOV

OBSERVATORY

A Dissertation in Physics by Matthew M. Rosenberg

© 2019 Matthew M. Rosenberg

Submitted in Partial Fulfillment of the Requirements for the Degree of

Doctor of Philosophy

December 2019

The dissertation of Matthew M. Rosenberg was reviewed and approved∗ by the following:

Miguel Mostafá Professor of Physics and of Astronomy and Astrophysics Dissertation Advisor, Chair of Committee

Kohta Murase Assistant Professor of Physics and of Astronomy and Astrophysics

Stephane Coutu Professor of Physics and of Astronomy and Astrophysics

Peter Mészáros Professor of Physics and Eberly Chair Professor of Astronomy and Astrophysics

Richard Robinett Professor of Physics Associate Head for Undergraduate and Graduate Students

∗Signatures are on file in the Graduate School.

Abstract

Gamma-Ray Bursts (GRBs) are brief, intense flashes of gamma rays lasting from a fraction of a second to minutes. The prompt emission from these explosive events outshines all the stars in their host galaxies combined. Thought to be produced by the core collapse of massive stars and the merger of compact stellar remnants in distant galaxies, GRBs can liberate on the order of 10^54 ergs of gravitational potential energy in just milliseconds. In addition to constituting an interesting phenomenon in their own right, these cosmic engines accelerate particles to energy scales unattainable in laboratories on Earth and thus provide a potentially interesting probe of fundamental physics as well as source candidates for ultra-high energy cosmic rays. We present recent efforts to extend the observation of GRBs beyond ∼100 GeV with the High Altitude Water Cherenkov (HAWC) observatory. Located in Puebla, Mexico at a latitude of 19◦ north and an altitude of 4100 meters above sea level, HAWC employs a 20,000 m^2 array of 300 water Cherenkov detectors to observe the relativistic charged particles produced in the extensive air showers that develop when high-energy gamma rays collide with Earth’s atmosphere. This technique provides sensitivity to ∼100 GeV – 100 TeV gamma rays, allows for nearly continuous operations, and achieves a wide instantaneous field of view of ∼2 sr that allows for daily monitoring of the northern sky. HAWC is thus ideally suited to capture any ≳100 GeV emission from transient events like GRBs. As GRB photons above a few TeV in energy are likely to be absorbed by the extragalactic background light before reaching Earth, HAWC’s ∼100 GeV – 1 TeV data is of prime importance in the search for high-energy GRB emission. However, the small air-shower data necessary to achieve this lower threshold of ∼100 GeV has previously been poorly modeled in HAWC simulations and has therefore not been used in past HAWC GRB searches.

We will show that these modeling discrepancies were caused by an inaccurate treatment of detector noise, outline a solution that allows HAWC to achieve its lowest possible energy threshold, and present a method to reduce the impact of detector noise on HAWC’s angular resolution in this newly recovered small air-shower data. Along with new GRB search algorithms, these improvements provide up to an order of magnitude gain in HAWC’s sensitivity to gamma-ray bursts. We use these new techniques to scan archival HAWC data for gamma-ray emission coincident with GRBs detected by the Fermi and Swift satellites between December 2014 and April 2018. While no significant detections were found, a comparison of our upper limits on the ≳100 GeV flux from GRBs 170206A and 171120A with Fermi measurements suggests a cut-off or spectral steepening below that energy under the assumption of z ≲ 0.3. However, these limits are not sufficiently strict to compellingly constrain GRB models that predict TeV-scale gamma-ray emission.

Table of Contents

List of Figures

List of Tables

List of Equations

Acknowledgments

Chapter 1  Introduction
  1.1 Gamma-Ray Bursts
    1.1.1 Observational History
      1.1.1.1 The Vela Discovery
      1.1.1.2 The Compton Gamma-Ray Observatory
      1.1.1.3 Beppo-SAX and HETE-2
      1.1.1.4 Swift
      1.1.1.5 Fermi
      1.1.1.6 HESS and MAGIC
      1.1.1.7 LIGO, Virgo, and the Multi-Messenger Era
    1.1.2 GRB Theory
      1.1.2.1 The GRB Paradigm
      1.1.2.2 EBL Attenuation
  1.2 The Role of HAWC in GRB Science

Chapter 2  Detecting Gamma Rays
  2.1 Extensive Air Showers
    2.1.1 Gamma-Ray Air Showers
    2.1.2 Cosmic-Ray Air Showers
  2.2 Cherenkov Radiation
  2.3 Gamma-Ray Detectors
    2.3.1 Direct Space-Based Detectors
    2.3.2 Imaging Atmospheric Cherenkov Telescopes
    2.3.3 Extensive Air-Shower Arrays

Chapter 3  The High Altitude Water Cherenkov Observatory
  3.1 Instrumentation, Electronics, and Online Processing
  3.2 Calibration
    3.2.1 Charge Calibration
    3.2.2 Timing Calibration
  3.3 Air-Shower Reconstruction
    3.3.1 Hit Selection and Charge Scaling
    3.3.2 Core Fit
    3.3.3 Angle Fit
    3.3.4 The Reconstruction Chain
  3.4 Event Size Bins
  3.5 Gamma/Hadron Shower Discrimination
    3.5.1 Compactness
    3.5.2 PINCness
  3.6 Sensitivity to Gamma Rays

Chapter 4  HAWC’s Small Air-Shower Simulation
  4.1 Monte Carlo Overview
  4.2 Discrepancies with Data
  4.3 Improved Modeling of Detector Noise in the Monte Carlo
    4.3.1 More Accurate Noise Models
    4.3.2 A Data Noise Overlay Algorithm

Chapter 5  Improving HAWC’s Low-Energy Sensitivity
  5.1 Low-Energy Reconstruction Challenges
  5.2 The Multi-Plane Fitter
    5.2.1 How the Algorithm Works
    5.2.2 Impact on Event Size
    5.2.3 Impact on Reconstruction Time
  5.3 A New Low-Energy Multi-Plane Fitter Analysis
    5.3.1 Setting the fHit Thresholds for Analysis Bins
    5.3.2 On-Array vs. Off-Array Events
    5.3.3 Combining High-Energy Bins
    5.3.4 Gamma/Hadron Separation Cuts for New Bins
      5.3.4.1 The Compactness and PINCness Variables Revisited
      5.3.4.2 Cut Optimization
  5.4 Performance of the Low-Energy Multi-Plane Fitter Analysis
    5.4.1 Results with One Year of Crab Data
    5.4.2 Agreement with MC Predictions of Crab Observations
  5.5 Improvements to Low-Energy Sensitivity
    5.5.1 Predicted Sensitivity Gain to Point Sources with a Low-Energy Cut-off
    5.5.2 Sensitivity Gain in Low-fHit Crab Data
  5.6 MC Crab Predictions
    5.6.1 Significance and Excess Calculations
    5.6.2 Spectral Assumptions
  5.7 Other Gamma/Hadron Separation Variables Considered in the Multi-Plane Fitter Analysis
    5.7.1 Modeling of nHitSP-X Variables
    5.7.2 A Quality Cut on the Fraction of In-Time Hits
    5.7.3 A Muon Identification Variable
  5.8 Improving Low-Energy Sensitivity with the Existing Reconstruction
  5.9 An Alternate Noise Discrimination Technique

Chapter 6  HAWC’s GRB Search Algorithms
  6.1 The Published Single-Bin Excess Analysis
  6.2 A Multi-Bin Excess Analysis
  6.3 ZEBRA
  6.4 HAWC’s Sensitivity to GRBs
    6.4.1 Search Algorithm Comparisons
    6.4.2 The Most Sensitive Search Method

Chapter 7  The 41 Month GRB Search
  7.1 Burst Sample
  7.2 Time Windows
  7.3 Results
    7.3.1 Background Consistency
    7.3.2 Flux Limits
    7.3.3 Comparisons with Fermi Measurements
  7.4 Conclusions

Appendix A  List of Bursts in the 41 Month GRB Search

Appendix B  Flux Limits for Bursts in the 41 Month GRB Search

Bibliography

List of Figures

1.1 Example BATSE light curves, in counts per second (cps) vs. time in seconds (s), for a sample of four GRBs [4].

1.2 Distribution of T90, the time taken to observe 5% to 95% of all detected gamma rays, as measured by BATSE for GRBs in the BATSE 4B catalog [7].
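
The T90 definition above lends itself to a short illustration. The following sketch (a hypothetical helper, not code from this dissertation; it omits the background subtraction a real analysis would apply) estimates T90 from a binned light curve by locating the 5% and 95% points of the cumulative counts:

```python
import numpy as np

def t90(times, counts):
    """T90: the interval over which the central 90% (5% to 95%)
    of the total detected counts is accumulated."""
    cum = np.cumsum(counts, dtype=float)
    cum /= cum[-1]  # normalized cumulative light curve
    t05 = times[np.searchsorted(cum, 0.05)]
    t95 = times[np.searchsorted(cum, 0.95)]
    return t95 - t05

# A flat light curve over 100 one-second bins: T90 is 90 s.
t = np.arange(100.0)
c = np.ones_like(t)
duration = t90(t, c)
```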

1.3 Hardness ratio (HR) vs. T90 for GRBs seen by BATSE [8]. The hardness ratio is defined as a burst’s ∼100 - 300 keV fluence divided by its ∼50 - 100 keV fluence. Short GRBs, lasting less than 2 s and plotted as squares with crosses, have harder spectra (are more skewed towards higher energies) than long GRBs, which last longer than 2 s and are plotted as open circles. Also shown for these two samples of GRBs are dotted regression lines and their average HR and T90 values (as filled circles). The solid line simply connects these two points, while the dashed line is a regression line for the combined (long and short) GRB samples. HR and T90 are not correlated within either sample, and the overall correlation of the combined populations is simply an artifact of the different HR distributions of the two GRB classes.

1.4 A schematic diagram of the typical GRB X-ray afterglow light curve seen by Swift [17]. The prompt gamma-ray emission (phase 0) is followed by a steep power-law decay with an index of ≲ -3 (phase I). Then, starting at tb1 ≈ 10^2 – 10^3 seconds since the burst trigger, the power-law slope typically becomes shallower with an index of ∼ -0.5 (phase II), though X-ray flares (phase V) are sometimes seen during this period. In phase III, beginning at tb2 ≈ 10^3 – 10^4 s, the light curve steepens as the power-law index decreases to ∼ -1.2. In some bursts, at tb3 ≈ 10^4 – 10^5 s, this period is followed by an additional, yet steeper component with a power-law index of ∼ -2 (phase IV). The portions of the light curve seen in the majority of bursts are shown as solid lines, while those seen in only a subset of bursts are shown as dashed lines.

1.5 The fluence distribution in the 10 – 1000 keV energy range for the 178 LAT detected GRBs in the 2nd Fermi-LAT GRB catalog and the 2357 GBM detected bursts observed in the same time period [22]. Separate distributions are shown for short GRBs and long GRBs. Bursts with LAT detections are among the brightest seen by Fermi-GBM.

1.6 Light curves observed by Fermi-GBM and Fermi-LAT for GRB 130427A in five energy bands [23]. NaI (sodium iodide) and BGO (bismuth germanate) are two types of scintillator detectors that make up Fermi-GBM. The LLE (LAT Low Energy) light curve is obtained from LAT data with a low-energy event selection. The circles in the bottom panel show the energy and arrival time of individual LAT detected gamma rays, with filled circles plotted for photons with a >90% probability of actually coming from GRB 130427A. All light curves and points are plotted as a function of time since the GBM trigger. The higher-energy emission is delayed with respect to the lower-energy emission.

1.7 GRB 130427A spectral fits to combined Fermi-GBM and Fermi-LAT data for three different time periods (measured in time since the GBM trigger) [23]. An additional Power-Law (PL) component is needed to fit the delayed (11.5 – 33 s) emission. For reference, the top panel shows, for the same three time periods, the Fermi-GBM light curves (obtained by combining and arbitrarily scaling the data from the top three panels of figure 1.6) with the energies of LAT detected photons (solid circles from the bottom panel of figure 1.6) overlaid on top.

1.8 The GBM measured fluence in the 10 keV – 1 MeV energy range vs. the fluence measured by the LAT in the 100 MeV – 100 GeV energy range (during the same time period used for the GBM measurement) for all bursts in the 2nd Fermi-LAT GRB catalog [22]. The solid green line is a one-to-one line denoting equal LAT and GBM fluences. The dashed and dotted-dashed green lines denote points where the LAT fluence is 10% and 1% of the GBM fluence, respectively. The short GRBs (red points) have LAT fluences roughly equal to the GBM fluences, whereas the long GRBs (blue points) have LAT fluences on the order of 10% of the GBM fluences.

1.9 90% confidence localizations for the joint detection of GW170817 and GRB 170817A in equatorial coordinates [32] (axes are right ascension and declination). The localization contour from the LIGO-Virgo measurements is shown in green, the Fermi-GBM localization in purple, and the localization from combined Fermi and INTEGRAL data in gray. The yellow star in the zoomed-in region of interest shows the location of the coincident optical transient seen by the Swope telescope.

1.10 The light curves and gravitational wave time-frequency map for the joint detection of GW170817 and GRB 170817A [32]. The top three panels show light curves from Fermi-GBM (in two different energy bands) and the SPI-ACS instrument onboard INTEGRAL with background estimates overlaid in red. The bottom panel shows the LIGO time-frequency map of the GW170817 detection.

1.11 A diagram (from [38]) illustrating how the electromagnetic emission associated with gamma-ray bursts is produced in the canonical GRB model. Internal shocks within the jets (produced by matter rapidly accreting onto the central black hole) convert the bulk kinetic energy of the outflow into the random particle energy that powers the prompt gamma-ray emission. Shocks produced when the jet collides with the surrounding medium give rise to the afterglow emission.

1.12 EBL intensity as a function of wavelength, obtained from the compilation of measurements presented in [48].

1.13 EBL attenuation factor (the fraction of gamma rays not absorbed by pair production with EBL photons) vs. energy in GeV for three source redshifts (0.2, 0.6, and 1.0). Almost all TeV scale photons are absorbed by the EBL for sources at redshifts much larger than z = 0.2. The Dominguez 2011 EBL model [49] is used to calculate attenuation factors.

2.1 A diagram illustrating the development of electromagnetic cascades from gamma-ray induced extensive air showers in the Heitler model [50]. Electrons and positrons from gamma-ray pair-production interactions emit additional gamma rays via bremsstrahlung radiation, which materialize into yet more e± pairs, etc.

2.2 The number of electromagnetic particles in extensive air showers produced by 100 GeV, 1 TeV, 10 TeV, and 100 TeV gamma rays, respectively, as a function of atmospheric depth (expressed in radiation lengths), adapted from [53]. The HAWC observatory is at an atmospheric depth of ∼16.8 radiation lengths for vertical showers, close to shower maximum for very high-energy gamma rays.
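
The Heitler picture behind figures 2.1 and 2.2 reduces to two numbers: the particle count doubles every splitting length X0·ln 2, and multiplication stops once the energy per particle falls to the critical energy. A minimal sketch under that toy model (the air constants below are approximate textbook values, not fitted quantities from this dissertation):

```python
import math

X0 = 37.0      # radiation length of air, roughly 37 g/cm^2
E_CRIT = 85e6  # critical energy in air, roughly 85 MeV (in eV)

def heitler(E0_eV):
    """Heitler-model size and depth of shower maximum for a gamma-ray
    primary of energy E0_eV: the particle number N doubles every
    X0*ln(2) of depth until E0/N reaches the critical energy."""
    n_max = E0_eV / E_CRIT                  # particles at shower maximum
    x_max = X0 * math.log(E0_eV / E_CRIT)   # depth of maximum, g/cm^2
    return n_max, x_max

n, x = heitler(1e12)  # a 1 TeV gamma ray
```

The toy model overestimates the particle count in real showers; its value is the scaling it exposes, N_max ∝ E0 and X_max ∝ ln E0, which is the logarithmic shift of shower maximum visible across the four curves of figure 2.2.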

2.3 A diagram illustrating the disk formed by extensive air showers as they develop and propagate through the atmosphere, adapted from an image produced by Z. Hampel-Arias (https://zhampel.github.io/research/). The disk has a width of ∼1 - 3 meters near the shower axis, propagates through the atmosphere at nearly the speed of light, and is roughly planar (but does display a mild curvature, which is measured empirically for HAWC air showers).

2.4 A diagram illustrating the development of hadronic extensive air showers [54]. Shower development is dominated by pions produced in interactions with atmospheric nuclei. Neutral pions decay into gamma-ray pairs, producing electromagnetic cascades. Charged pions produce additional pions in interactions with atmospheric nuclei, developing the hadronic component of the air shower.

2.5 The longitudinal shower profile of a vertical 10^19 eV proton [51]. The simulated numbers of hadrons, muons, electrons (and positrons), and gamma rays are shown as a function of atmospheric depth and altitude.

2.6 A diagram depicting Cherenkov radiation: the coherent, conical, electromagnetic shock wave produced by charged particles traveling in a polarizable material at a velocity v = βc > c/n, where n is the index of refraction of the material and c the speed of light in a vacuum. Blue arrows indicate the direction of the emitted Cherenkov radiation. Image provided by Arpad Horvath (https://en.wikipedia.org/wiki/Cherenkov_radiation) under the CC BY-SA 2.5 license.
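
The geometry of the cone above follows from cos θ = 1/(βn), with emission only above the threshold β > 1/n. A quick numerical check (a standalone sketch, not dissertation code):

```python
import math

def cherenkov_angle_deg(beta, n):
    """Cherenkov cone half-angle from cos(theta) = 1/(beta*n).
    Returns None below the emission threshold beta <= 1/n."""
    if beta * n <= 1.0:
        return None  # particle too slow: no Cherenkov light
    return math.degrees(math.acos(1.0 / (beta * n)))

# An ultra-relativistic particle (beta ~ 1) in water (n ~ 1.33)
# radiates at the characteristic ~41 degree water-Cherenkov angle,
# the relevant case for HAWC's water Cherenkov detectors.
angle = cherenkov_angle_deg(1.0, 1.33)
```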

2.7 A diagram illustrating the IACT detection technique. The Cherenkov light cone produced by relativistic charged shower particles has an opening angle of ∼1 degree, creating a “light pool” ∼120 m in diameter for gamma rays interacting ∼10 km above the telescopes employed to detect the Cherenkov radiation. Image from https://www.isdc.unige.ch/cta/outreach/data

3.1 A drone photograph of the HAWC detector (source: HAWC collaboration). The 300 large, densely spaced WCDs that make up the HAWC main array are at the center of the picture. Also visible are the 350 smaller outrigger tanks constructed later (in 2018).

3.2 The layout of the HAWC main array (left) and a diagram of a single WCD (right) [64]. Open circles in the HAWC layout schematic show the locations of individual WCDs; the blue dots within show the coordinates of individual PMTs. The WCD diagram illustrates the trajectories of a shower particle entering a tank and the Cherenkov photons it emits, as well as the locations of the four PMTs present in each tank. A human figure is included to demonstrate the scale.

3.3 Photographs of the 10” Hamamatsu R7081 (left) and 8” Hamamatsu R5912 (right) PMTs used in the HAWC main array (source: HAWC collaboration).

3.4 A diagram illustrating the data acquisition and online processing systems used in HAWC [65]. All electronics from the front-end boards onward are housed in the “counting house,” the structure visible in the photograph of HAWC (figure 3.1) in the center of the main array (and as the empty, un-instrumented gap in the WCD layout shown in figure 3.2, left). A description of each component in this schematic is provided in the text.

3.5 A diagram illustrating the Time over Threshold (ToT) measurement (adapted from [70]). Low ToT refers to the amount of time a PMT pulse spends above a low voltage threshold and high ToT to the amount of time spent over a high threshold.

3.6 Example PMT charge calibration curves [69]. Plotted is the average number of photoelectrons (calculated via the occupancy calculation described in the text) as a function of ToT (as obtained by the laser calibration system). An empirically derived six parameter broken power-law function is fit to each curve. As discussed in the text, the high ToT measurement is always used when available. PMT signals that produce a low ToT above the range of the low ToT fit function always rise above the high ToT threshold; the low ToT data past the fit function (starting just before the “shoulder” at ∼3250 ToT units) is therefore not used in assigning hit charges. The ToT units are set by the precision of the TDC modules, with 1 ToT unit equal to 100/1024 ns.

3.7 Example PMT slewing curves (slewing time vs. ToT) for four PMTs (labeled H7A – H7D) [69]. The two slewing curves obtained for each PMT from the low ToT and high ToT measurements are shown.

3.8 Charge measurements for hits in a high-probability gamma-ray air-shower event [72]. Open circles represent WCDs, while colored circles show the positions of PMTs in the main HAWC array that recorded a hit for this event. The color scale shows the effective charge of each hit. The air-shower core is clearly identifiable in the dense cluster of high-charge hits.

3.9 The lateral distribution function for the same high-probability gamma-ray air-shower event shown in figure 3.8 [72]. Plotted are the effective charges of hits in the event as a function of their distance (measured along the ground) to the reconstructed air-shower core. The SFCF fit (equation 3.3) to this data is shown, along with the “PINC moving average” (a parameter in the PINCness background rejection variable discussed in section 3.5.2).

3.10 Time measurements for hits in the same high-probability gamma-ray air-shower event shown in figures 3.8 and 3.9 [72]. Open circles represent WCDs, while colored circles show the positions of PMTs in the main HAWC array that recorded a hit for this event. The color scale shows the time of each hit. The inclination of the air-shower plane is visible in the transition from early to late time hits from the upper right to lower left sections of the array.

3.11 A demonstration of the shower curvature and sampling effects for the same high-probability gamma-ray air-shower event shown in figures 3.8 – 3.10 [72]. Plotted is the recorded time of hits in the event – relative to the time expected from assuming that all air-shower particles arrive along a perfect plane – as a function of their distance (measured along the ground) to the reconstructed air-shower core. The color scale shows the effective charge of each hit. The curvature and sampling effects are evident in the increasing time delay of hits far from the shower core.

3.12 Simulated energy distributions in each event-size bin for gamma rays from a Crab Nebula like source with an E^−2.63 power-law spectrum transiting at a declination of 22◦ north. All distributions are scaled to peak at 1.

3.13 Lateral distribution functions for high-probability cosmic-ray (left) and gamma-ray (right) air showers [72]. The SFCF fit (see section 3.3.2) and “PINC moving average” (a parameter in the PINCness variable discussed in section 3.5.2) are included. The lack of large hits far from the core and the smooth decline of charge with distance from the core seen in the lateral distribution function of the gamma ray are typical of electromagnetic showers and distinguish them from cosmic-ray events.

3.14 HAWC’s quasi-differential gamma-ray sensitivity with 507 days of data compared to the sensitivity of the Fermi space telescope, three IACTs (HESS, VERITAS, and MAGIC), and an upper limit on the Crab Nebula from CASA-MIA [72]. HAWC’s sensitivity is calculated in each of the 9 standard event-size bins from section 3.4 (bin 0 is excluded) as the minimum flux necessary to achieve a 50% probability of detecting a Crab Nebula like source (a source at a declination of 22◦N with an E^−2.63 power-law spectrum) at 5σ. The dark red line is a fit to the minimum flux calculated for the 9 size bins shown in light red. Also plotted are 1%, 10%, and 100% of the gamma-ray flux from the Crab Nebula.

4.1 Data and Monte Carlo (MC) distributions of the fraction of hits within ±10 ns of the reconstructed air-shower plane for an all-sky, cosmic-ray dominated sample of bin 0 air-shower events, scaled to 1 s of observation. The variables nHit and nHitSP10 refer to the number of hits used in the reconstruction and the number of those that are within ±10 ns of the reconstructed air-shower plane, respectively.

4.2 Data and Monte Carlo (MC) distributions of the fraction of hits within ±10 ns of the reconstructed air-shower plane for an all-sky, cosmic-ray dominated sample of bin 0 air-shower events, scaled to 1 s of observation. The MC distributions were produced with single PE, uncorrelated noise occurring at a rate (in each PMT) of 10 kHz, 20 kHz, and 30 kHz. The variables nHit and nHitSP10 refer to the number of hits used in the reconstruction and the number of those that are within ±10 ns of the reconstructed air-shower plane, respectively.

4.3 Left: the distribution of the number of hits in random 1500 ns chunks of raw, un-triggered data with a Gaussian fit. Right: the charge distribution of raw data hits with a power-law fit. The fit functions are given in equation 4.1. Both distributions are area normalized to one.
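
The Gaussian and power-law fits of figure 4.3 define a simple generative noise model: draw the number of noise hits in a 1500 ns window from the Gaussian, then draw each hit’s charge from the power law. A sketch of that sampling step (the parameter values below are illustrative placeholders, not the fitted HAWC values from equation 4.1):

```python
import random

def sample_noise_window(mu, sigma, gamma, q_min, q_max, rng):
    """One simulated 1500 ns noise window: the hit count follows
    Gaussian(mu, sigma); each hit charge follows a power law
    dN/dq ~ q^-gamma on [q_min, q_max], drawn by inverse-transform
    sampling of the truncated power-law CDF."""
    n_hits = max(0, int(round(rng.gauss(mu, sigma))))
    a = 1.0 - gamma  # exponent appearing in the power-law CDF
    charges = [
        (q_min**a + rng.random() * (q_max**a - q_min**a)) ** (1.0 / a)
        for _ in range(n_hits)
    ]
    return charges

rng = random.Random(1)
charges = sample_noise_window(mu=60, sigma=8, gamma=2.5,
                              q_min=1.0, q_max=100.0, rng=rng)
```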

4.4 Data and Monte Carlo (MC) distributions of the fraction of hits within ±10 ns of the reconstructed air-shower plane for an all-sky, cosmic-ray dominated sample of bin 0 air-shower events, scaled to 1 s of observation. The MC distribution was produced with the data noise model constructed from the Gaussian and power-law fits to the number of noise hits and charge distributions plotted in figure 4.3 and explained in the accompanying text. The variables nHit and nHitSP10 refer to the number of hits used in the reconstruction and the number of those that are within ±10 ns of the reconstructed air-shower plane, respectively.

4.5 Data and Monte Carlo (MC) distributions of the fraction of hits within ±10 ns of the reconstructed air-shower plane for an all-sky, cosmic-ray dominated sample of bin 0 air-shower events, scaled to 1 s of observation. The MC distribution was produced with the data noise overlay algorithm discussed in this section. The variables nHit and nHitSP10 refer to the number of hits used in the reconstruction and the number of those that are within ±10 ns of the reconstructed air-shower plane, respectively.

5.1 A small air shower from HAWC data. Each point represents a hit. The spatial coordinates are calculated for a particular moment in time by treating each hit as an air-shower particle assumed to have been traveling at the speed of light in the direction of the reconstructed air-shower plane. The color scale represents charge, with darker points for hits with larger PMT signals.

5.2 A small simulated HAWC air shower. Each point represents a hit. The spatial coordinates are calculated for a particular moment in time by treating each hit as an air-shower particle assumed to have been traveling at the speed of light in the direction of the reconstructed air-shower plane. The color scale represents charge, with darker points for hits with larger PMT signals.

5.3 Angular resolution vs. fHit (the fraction of available PMTs in an air shower that record a hit) for simulated gamma rays that land on the main HAWC array with and without detector noise. Angular resolution is calculated as the 68% containment angle in the Monte Carlo ∆θ distribution, where ∆θ is the angle between the reconstructed and true (simulated) shower axis.
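
The 68% containment figure of merit used here is simply a quantile of the ∆θ distribution. As a standalone sketch (hypothetical input array, not dissertation code):

```python
import numpy as np

def containment_angle(delta_theta, fraction=0.68):
    """Angular resolution as a containment angle: the value of
    delta_theta below which `fraction` of the events fall."""
    return float(np.quantile(np.asarray(delta_theta), fraction))

# For delta_theta uniform on [0, 1] degrees, the 68% containment
# angle is 0.68 degrees.
res = containment_angle(np.linspace(0.0, 1.0, 1001))
```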

5.4 Left: Event size measured in fHit (= fraction of available PMTs with a hit) with the multi-plane fitter vs. fHit without the multi-plane fitter for a selection of HAWC air-shower events. A one-to-one line is superimposed along with vertical lines at the boundaries between the standard 10 fHit bins. Right: Events are sorted into the 10 standard fHit bins using, in one case, fHit calculated with the multi-plane fitter applied (MPF bins), and in the other, without the multi-plane fitter applied (default bins). The red curve shows the fraction of events in each default bin that falls into the same MPF bin. The blue curve shows the fraction of events in each MPF bin that falls into the same default bin.

5.5 Histograms of the times required, in a large number of trials, to reconstruct the same 10,000 HAWC data events without the multi-plane fitter (red) and with the multi-plane fitter (blue).

5.6 Left: fHit (the fraction of available PMTs that record a hit) distributions for data and Monte Carlo. Right: the number of data events in each fHit bin divided by the number of MC events in each bin. The difference in event rates between data and MC stabilizes at ∼50% above an fHit of 0.03.

5.7 The ratio of event rates in data to Monte Carlo in small fHit bins with SMT_nHit set to 28. SMT_nHit is a parameter in HAWC’s air-shower trigger, the Simple Multiplicity Trigger: HAWC triggers on events that yield at least SMT_nHit hits within 150 ns.

5.8 The ratio of event rates in data to Monte Carlo in small fHit bins with SMT_nHit set to 29. SMT_nHit is a parameter in HAWC’s air-shower trigger, the Simple Multiplicity Trigger: HAWC triggers on events that yield at least SMT_nHit hits within 150 ns.

5.9 The ratio of event rates in data to Monte Carlo in small fHit bins with SMT_nHit set to 30. SMT_nHit is a parameter in HAWC’s air-shower trigger, the Simple Multiplicity Trigger: HAWC triggers on events that yield at least SMT_nHit hits within 150 ns.

5.10 The energy distributions of simulated gamma-ray air showers that fall into bin -1 (0 < fHit ≤ 0.03) and bin 0 (0.03 < fHit ≤ 0.05). Bin -1 does not significantly cover any energies not seen by bin 0.

5.11 A diagram illustrating the rec.coreFiduScale variable, a measure of the distance of the reconstructed core of an air shower to the edge of the HAWC array. Specifically, it is defined as the linear scale factor for the size of the 2-D shape outlining the edge of the HAWC array on which the reconstructed core is located. A value of 100 is chosen for the rec.coreFiduScale value denoting the edge of the array. The red outlines show possible coordinates for positions with three given rec.coreFiduScale values (50, 100, and 150). Small black dots are PMTs in the main HAWC array.

5.12 An examination of core positions for bin 0 air showers. Left: predicted 1 year Crab significance as a function of a cut on core position calculated with simulated gamma rays and data background. Right: reconstructed core position vs. true (simulated) core position for simulated gamma-ray showers. See figure 5.11 for a definition of the coreFiduScale variable.

5.13 An examination of core positions for bin 1 air showers. Left: predicted 1 year Crab significance as a function of a cut on core position calculated with simulated gamma rays and data background. Right: reconstructed core position vs. true (simulated) core position for simulated gamma-ray showers. See figure 5.11 for a definition of the coreFiduScale variable.

5.14 An examination of core positions for bin 2 air showers. Left: predicted 1 year Crab significance as a function of a cut on core position calculated with simulated gamma rays and data background. Right: reconstructed core position vs. true (simulated) core position for simulated gamma-ray showers. See figure 5.11 for a definition of the coreFiduScale variable.

5.15 An examination of core positions for bin 3 air showers. Left: predicted 1 year Crab significance as a function of a cut on core position calculated with simulated gamma rays and data background. Right: reconstructed core position vs. true (simulated) core position for simulated gamma-ray showers. See figure 5.11 for a definition of the coreFiduScale variable.

5.16 Left: the Bin 0a MC energy distribution from figures 5.17 and 5.18 with two characteristic bin energies illustrated. The “90% energy” is defined as the energy above which 90% of the distribution lies. The “peak energy” is, of course, the energy at which the distribution peaks. Right: These two characteristic energies as a function of bin number.

5.17 Monte Carlo energy distributions of simulated gamma rays in the 12 bins defined in table 5.2. Left: gamma rays are simulated with isotropic arrival directions and a flux of 3.6 · 10^−11 TeV^−1 cm^−2 s^−1 sr^−1 (E/TeV)^−2. Right: distributions from the left-hand plot scaled so that each peaks at one.

5.18 Monte Carlo energy distributions of simulated gamma rays in the 7 bins defined in table 5.3. Left: gamma rays are simulated with isotropic arrival directions and a flux of 3.6 · 10^−11 TeV^−1 cm^−2 s^−1 sr^−1 (E/TeV)^−2. Right: distributions from the left-hand plot scaled so that each peaks at one.

5.19 Predicted one year Crab significance (calculated with simulated gamma rays and data background) with a compactness cut of nHitSP-X/CxPE-Y > Z for bin 0a (left) and bin 4 (right). nHitSP-X gives the number of hits within ±X ns of the reconstructed shower plane. CxPE-Y gives the charge of the largest hit more than Y meters from the reconstructed core. Cut values (Z) are chosen by, for each value of X and Y, looping through Z values and taking the one that gives the largest predicted significance.
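
The cut-optimization loop described above can be sketched generically: for each candidate threshold Z, evaluate the predicted significance and keep the maximizing Z. The efficiency curves and the S/√B significance proxy below are illustrative stand-ins for the simulated gamma-ray and data-background rates actually used in the analysis:

```python
import math

def best_cut(cuts, n_sig, n_bkg):
    """Return the cut value Z maximizing the simple significance proxy
    S/sqrt(B), where n_sig(z) and n_bkg(z) give the counts of signal
    and background events passing compactness > z."""
    return max(cuts, key=lambda z: n_sig(z) / math.sqrt(n_bkg(z)))

# Toy curves: signal efficiency falls like a Gaussian in z while the
# background falls exponentially, so the significance peaks at an
# intermediate cut value rather than at either end of the scan.
sig = lambda z: 1000.0 * math.exp(-((z / 6.0) ** 2))
bkg = lambda z: 1.0e6 * math.exp(-z / 2.0)
z_best = best_cut([i * 0.5 for i in range(21)], sig, bkg)
```

With these toy curves the proxy is proportional to exp(z/4 − z²/36), which peaks at z = 4.5 on the scanned grid.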

5.20 Comparisons of bin 0a data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions...... 94

5.21 Comparisons of bin 0b data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions...... 95

5.22 Comparisons of bin 1a data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions...... 95

5.23 Comparisons of bin 1b data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions...... 96

5.24 Comparisons of bin 2 data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions...... 96

5.25 Comparisons of bin 3 data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions...... 97

5.26 Comparisons of bin 4 data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions...... 97

5.27 Predicted 1 year Crab significance (calculated with simulated gamma rays and data back- ground) in bin 0a as a function of compactness and PINCness cut values. Compactness provides a small boost to predicted sensitivity, but the PINCness cut is ineffective. . . . . 98

5.28 Predicted 1 year Crab significance (calculated with simulated gamma rays and data back- ground) in bin 0b as a function of compactness and PINCness cut values. Compactness provides a significant boost to predicted sensitivity, but the PINCness cut is ineffective. . 99

5.29 Predicted 1 year Crab significance (calculated with simulated gamma rays and data back- ground) in bin 1a as a function of compactness and PINCness cut values. Compactness provides a significant boost to predicted sensitivity, but the PINCness cut is ineffective. . 99

5.30 Predicted 1 year Crab significance (calculated with simulated gamma rays and data back- ground) in bin 1b as a function of compactness and PINCness cut values. Compactness provides a significant boost to predicted sensitivity, but the PINCness cut is ineffective. . 100

5.31 Predicted 1 year Crab significance (calculated with simulated gamma rays and data back- ground) in bin 2 as a function of compactness and PINCness cut values. Compactness provides a significant boost to predicted sensitivity, but the PINCness cut provides only a very slight gain...... 100

5.32 Predicted 1 year Crab significance (calculated with simulated gamma rays and data back- ground) in bin 3 as a function of compactness and PINCness cut values. Compactness provides a significant boost to predicted sensitivity, and the PINCness cut provides a small gain...... 101

5.33 Predicted 1 year Crab significance (calculated with simulated gamma rays and data back- ground) in bin 4 as a function of compactness and PINCness cut values. Both the compact- ness and PINCness cuts are effective in improving predicted sensitivity...... 101

5.34 Measurements of the Crab Nebula using all 2016 event data from bin 0a. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 1.4◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges...... 104

5.35 Measurements of the Crab Nebula using all 2016 event data from bin 0b. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 1.6◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges...... 105

5.36 Measurements of the Crab Nebula using all 2016 event data from bin 1a. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 0.8◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges...... 106

5.37 Measurements of the Crab Nebula using all 2016 event data from bin 1b. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 1.6◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges...... 107

5.38 Measurements of the Crab Nebula using all 2016 event data from bin 2. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 0.6◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges...... 108

5.39 Measurements of the Crab Nebula using all 2016 event data from bin 3. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 0.4◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges...... 109

5.40 Measurements of the Crab Nebula using all 2016 event data from bin 4. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 0.4◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges...... 110

5.41 Comparisons of Crab measurements (using all HAWC data from 2016) to MC predictions for each bin of the low-energy multi-plane fitter analysis. Top: angular resolution without (left) and with (right) gamma/hadron separation cuts. Middle: observed excess (number of air showers minus cosmic-ray background estimate, a measure of the number of detected gamma rays) without (left) and with (right) cuts. Bottom: ∼1 year Crab significance without (left) and with (right) cuts. See text for details on how the reported results were calculated...... 113

5.42 Gamma-ray efficiencies for the gamma/hadron separation cuts used in the low-energy multi-plane fitter analysis. Data results are obtained by taking the ratio of observed Crab excess measurements (plotted in figure 5.41) with and without cuts. As no excess was detected in bin 0a data without cuts, efficiency observations are not reported for this bin. Efficiencies are shown for the application of both the compactness and PINCness cuts (top) and for the two cuts individually (bottom)...... 114

5.43 Predicted 1 year bin 0 Crab significance in the Extended Crab Paper analysis. PINCness and compactness (nHitSP10/CxPE40) cuts are applied and significance is plotted as a function of the cut values...... 116

5.44 Left: predicted significance per transit (calculated with MC signal and data background) for a source at declination 22◦ with flux dN/dE = φo (E/TeV)^α e^(−E/Ec), where φo = 3.45 · 10^-11 TeV^-1 cm^-2 s^-1. Curves are plotted as a function of cut-off energy (Ec) for two values of α and for both the ECP and LEMPF analyses. Right: the ratio of the significance results obtained with the LEMPF analysis to those obtained with the ECP analysis. . . . 117

5.45 Crab Nebula significance maps for all events from 2016 that fall into bin 0 in the ECP data selection scheme. Events are analyzed with the ECP (left) and LEMPF (right) methods. The LEMPF analysis provides a ∼40% boost to ECP bin 0 Crab sensitivity...... 119

5.46 Crab Nebula significance maps for all events from 2016 that fall into bin 1 in the ECP data selection scheme. Events are analyzed with the ECP (left) and LEMPF (right) methods. The LEMPF analysis provides a ∼25% boost to ECP bin 1 Crab sensitivity...... 119

5.47 Crab Nebula significance maps for all events from 2016 that fall into bin 2 in the ECP data selection scheme. Events are analyzed with the ECP (left) and LEMPF (right) methods. The LEMPF analysis provides a ∼25% boost to ECP bin 2 Crab sensitivity...... 120

5.48 Crab Nebula significance maps for all events from 2016 that fall into bin 3 in the ECP data selection scheme. Events are analyzed with the ECP (left) and LEMPF (right) methods. The LEMPF analysis provides a ∼3% boost to ECP bin 3 Crab sensitivity...... 120

5.49 Predicted Monte Carlo energy distributions of gamma rays from the Crab Nebula that fall into ECP bins 0 – 3. To allow for easy comparison of bin energy ranges, the distributions are scaled so that each peaks at one...... 121

5.50 A comparison of the reference power-law Crab spectrum (see text) with the log-parabola Crab spectra measured by HAWC [72] and MAGIC [82]. Systematics bands are shown for both fits, along with MAGIC’s measured flux points. As the HAWC analysis employed event-size bins rather than energy bins (see [72] for details), HAWC flux points were not calculated...... 123

5.51 The observed and predicted Crab excess (number of air showers minus cosmic-ray background estimate, a measure of the number of detected gamma rays) in each bin of the low-energy multi-plane fitter analysis developed in section 5.3. Results are shown without (left) and with (right) gamma/hadron separation cuts. Monte Carlo predictions are calculated for the reference and MAGIC Crab spectra from figure 5.50...... 124

5.52 Data and Monte Carlo distributions of the fraction of hits close in time with the reconstructed shower plane: nHitSP-X/nHit, where nHitSP-X is the number of hits within X nanoseconds of the plane and nHit is the total number of hits in the air-shower event. Distributions are shown for bin 4 of the low-energy multi-plane fitter analysis developed in section 5.3 with X = 4ns (top), 12ns (bottom left), and 20ns (bottom right)...... 126

5.53 Left: the position of peaks in the nHitSP-X/nHit distributions plotted in figure 5.52 as a function of X. Right: the percent error in the MC peak positions, (MC_peak_position - data_peak_position)/data_peak_position, as a function of X...... 127

5.54 Source significance at the location of the Crab Nebula, calculated with the Li & Ma test statistic [81] using all events reconstructed within an angle θs of the Crab Nebula; θs is referred to as a “smoothing angle.” Significance is plotted as a function of this smoothing angle for bins 0a (left) and 4 (right) of the low-energy multi-plane fitter analysis developed in section 5.3. Compactness, PINCness, and fShowerPlane cuts are optimized (using the method of section 5.3.4) and applied. Results are shown with and without the fShowerPlane cut and for the use of nHitSP10 and nHitSP20 in the definitions of compactness and fShowerPlane...... 128
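The Li & Ma test statistic used throughout these significance calculations can be written down directly (Eq. 17 of Li & Ma [81]; this helper is illustrative, not HAWC code, and assumes nonzero on and off counts):

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983), Eq. 17: detection significance for n_on counts in
    the on (smoothing-angle) region and n_off counts in the background
    region, with alpha the on/off exposure ratio."""
    ratio = n_on / (n_on + n_off)
    term_on = n_on * math.log((1 + alpha) / alpha * ratio)
    term_off = n_off * math.log((1 + alpha) * (1 - ratio))
    sign = 1.0 if n_on >= alpha * n_off else -1.0
    return sign * math.sqrt(2.0 * (term_on + term_off))
```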

5.55 Comparisons of the data and Monte Carlo distributions of the fShowerPlane variable (calculated as nHitSP10/nHit) for bins 0a (left) and 4 (right) of the low-energy multi-plane fitter analysis developed in section 5.3. The data and MC cosmic-ray distributions are scaled to one second of observation. The MC gamma-ray distribution is scaled to have the same area as the MC cosmic-ray distribution...... 129

5.56 Distributions of the fMuonLike (left) and compactness (right) variables for simulated gamma and cosmic rays reconstructed in original bin 0 (see text). All distributions are area normalized to one to allow for easy qualitative comparison of their shapes. Primary particles are simulated with isotropic arrival directions and a flux of 3.6 · 10^-11 TeV^-1 cm^-2 s^-1 sr^-1 (E/TeV)^-2. 132

5.57 Simulated gamma-ray efficiency vs. hadron (cosmic-ray) rejection for compactness and fMuonLike cuts in original bin 0 events (see text). The curves are built up by varying cut values in small increments and, for each cut value, calculating the resulting gamma-ray efficiency and hadron rejection values. Primary particles are simulated with isotropic arrival directions and a flux of 3.6 · 10^-11 TeV^-1 cm^-2 s^-1 sr^-1 (E/TeV)^-2...... 132

5.58 Crab Nebula detection significance maps – of all 2016 events reconstructed in bin 0a of the multi-plane fitter analysis developed in section 5.3 – with a data optimized compactness cut (left) and data optimized fMuonLike cut (right). Significance values are calculated in each map pixel with the Li & Ma test statistic [81] using all events within 1.3◦ of the pixel for signal and background counts...... 133

5.59 Distribution in data of fShowerPlane for events reconstructed in original bin 0. The fShowerPlane variable is defined as nHitSP10/nHit, where nHit and nHitSP10 are the total number of hits used in the reconstruction of an air-shower event and the number of those that arrive within ±10ns of the reconstructed shower plane, respectively...... 135

5.60 Detection significance maps of the region around the Crab Nebula (located at the cross) for all 2016 data reconstructed in original bin 0. Maps are shown without cuts (left) and with cuts (right). Both maps use the same reconstruction. Significance values are obtained in each pixel by summing signal and background counts for all air showers reconstructed within 1.2◦...... 136

5.61 Distributions of the nearbyFTank variable for all hits in a sample of simulated original bin 0 gamma rays. Hits produced by simulated air-shower particles originating from the primary gamma are shown in blue. Those added as detector noise (see chapter 4 for details) are shown in red. Both the shower hit and noise hit distributions are area normalized to one to allow for easy comparison of their shapes...... 138

5.62 A small air shower from HAWC data (the same shown in figure 5.1) reconstructed with the standard HAWC plane fitter (left) and with the nearbyFTank noise discrimination algorithm (right). The reconstructed shower core and plane are depicted by the black star and gray plane, respectively. Each point represents a hit. The spatial coordinates are calculated for a particular moment in time by treating each hit as an air-shower particle assumed to have been traveling at the speed of light in the direction of the reconstructed air-shower plane. (And switching reconstruction algorithms changes the plane fit; this is why the event appears to have a different orientation in the two plots.) Left: the color scale represents charge, with darker points for hits with larger PMT signals. Right: the color scale represents shower particle likelihood, with darker points for hits with a larger nearbyFTank value...... 140

5.63 Gamma-ray detection significance (the event counts’ fluctuation above background in number of standard deviations) at the location of the Crab Nebula, calculated with the Li & Ma test statistic [81] using all events reconstructed within an angle θs (the “smoothing angle”) of the Crab. Significance is plotted as a function of this smoothing angle for the original bin 0 event sample reconstructed with the multi-plane fitter (left) and nearbyFTank noise discrimination algorithm (right). Results are shown with and without Monte Carlo optimized gamma/hadron separation cuts (see table 5.6) and plotted separately for HAWC’s 2015 and 2016 data sets...... 141

6.1 A diagram illustrating the background region for the multi-bin excess analysis. For any temporal search window centered at time ts, the local zenith, θ(ts), and azimuth, φ(ts), coordinates of the burst are calculated. The zenith band for the background region is defined by θ(ts) ± α, where α is the bin-specific search radius from table 6.1. All events reconstructed in this zenith band with event times within four hours of the burst time tburst (excluding those within 30 minutes of ts) are included in the background calculation. . . . 146
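The background selection described in this caption can be sketched as a simple event filter (a hypothetical helper following the caption's stated conventions, not the actual multi-bin excess implementation; times in seconds, angles in degrees):

```python
def background_mask(event_times, event_zeniths, t_s, t_burst, theta_ts, radius):
    """Flag events usable as background for a search window centered at
    t_s: the event must lie in the zenith band theta(t_s) +/- radius,
    arrive within four hours of the burst time, and fall outside
    +/- 30 minutes of t_s."""
    mask = []
    for t, zen in zip(event_times, event_zeniths):
        in_band = abs(zen - theta_ts) <= radius
        near_burst = abs(t - t_burst) <= 4 * 3600
        outside_search = abs(t - t_s) > 30 * 60
        mask.append(in_band and near_burst and outside_search)
    return mask
```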

6.2 Example background variation fits for the randomly chosen burst GRB 161113A. Plotted is the event rate (normalized by the average) of all bin 0a events in HAWC’s field of view within four hours of the burst. Left: Each blue data point is the measured all-sky rate obtained with one minute of data. The red line shows the sinusoidal fit. Right: Each blue data point is the measured rate in an azimuth band of 1 degree width for all events in the 8 hour observation period. The red line shows the fit to the sum of two sines function. . . . 147
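The sinusoidal rate fit illustrated here can be sketched by projecting the rate onto sine and cosine components (a minimal stand-in assuming uniform sampling over an integer number of periods; the actual analysis fits a sinusoid in time and, for azimuth, a sum of two sines):

```python
import math

def fit_sinusoid(times, rates, period):
    """Fit rates(t) ~ a + b*sin(2*pi*t/period) + c*cos(2*pi*t/period)
    by Fourier projection, valid for uniform sampling spanning an
    integer number of periods."""
    n = len(times)
    w = 2 * math.pi / period
    a = sum(rates) / n
    b = 2.0 / n * sum(r * math.sin(w * t) for t, r in zip(times, rates))
    c = 2.0 / n * sum(r * math.cos(w * t) for t, r in zip(times, rates))
    return a, b, c
```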

6.3 Frequency of measuring p-values calculated with equation 6.7 for a random patch of sky with no known gamma-ray sources. The observation frequency for a given p-value p is calculated as the fraction of all measurements with an assigned p-value less than or equal to p. The good agreement with the black one-to-one line validates the background calculations and statistical assumptions...... 150

6.4 Simulated false alarm rate as a function of TS for a targeted search in ZEBRA’s standard analysis mode. Data maps are simulated by Poisson fluctuating the measured background (from an empty patch of sky at zenith) over many 60s trials. The false alarm rate at a given TS is calculated as the number of trials for which the ZEBRA search yielded an equal or greater output TS divided by the total simulation duration. The blue line shows the simulation results, the light blue band the 1σ statistical error, and the dashed orange line an exponential fit to the tail...... 155

6.5 Simulated false alarm rate as a function of TS for a targeted search in ZEBRA’s spectrum- independent analysis mode. Data maps are simulated by Poisson fluctuating the measured background (from an empty patch of sky at zenith) over many 60s trials. The false alarm rate at a given TS is calculated as the number of trials for which the ZEBRA search yielded an equal or greater output TS divided by the total simulation duration. The blue line shows the simulation results, the light blue band the 1σ statistical error, and the dashed orange line an exponential fit to the tail...... 155

6.6 Simulated false alarm rate as a function of TS for a grid search in ZEBRA’s spectrum- independent analysis mode. Search points are spread out by ∼0.23 degrees over a 100 square degree grid. Data maps are simulated by Poisson fluctuating the measured background (from an empty patch of sky at zenith) over many 60s trials. The false alarm rate at a given TS is calculated as the number of trials for which the ZEBRA search yielded an equal or greater output TS divided by the total simulation duration. The blue line shows the simulation results, the light blue band the 1σ statistical error, and the dashed orange line an exponential fit to the tail...... 156
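The false-alarm-rate calculation shared by these three figures reduces to a counting exercise over simulated trials (an illustrative helper, not the ZEBRA code):

```python
def false_alarm_rate(trial_ts_values, ts_threshold, trial_duration_s):
    """False alarm rate at a given TS: the number of simulated trials
    whose output TS meets or exceeds the threshold, divided by the
    total simulated observation time (n_trials * trial_duration)."""
    n_exceed = sum(1 for ts in trial_ts_values if ts >= ts_threshold)
    return n_exceed / (len(trial_ts_values) * trial_duration_s)
```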

6.7 A comparison of HAWC’s sensitivity with various ZEBRA search modes, as a function of redshift, to short (0.3s) GRBs with a simple E−2 power-law spectrum. Sensitivity is calculated as the minimum flux normalization at 1 TeV that yields a 50% detection probability. Left: sensitivity with ZEBRA’s standard search mode with EBL attenuation (calculated with the actual burst redshift) built into the E−2 spectral hypothesis (in red), ZEBRA’s standard search mode with no EBL attenuation built into the E−2 spectral hypothesis (in green), and ZEBRA’s spectrum-independent search mode (in blue). Right: the ratio of the blue and green sensitivity curves to the best case scenario (red curve) of the standard search mode with a known burst redshift...... 160

6.8 A comparison of HAWC’s sensitivity with the search algorithms of sections 6.1 - 6.3, as a function of redshift, to short (0.3s) GRBs with a simple E−2 power-law spectrum. Sensitivity is calculated as the minimum flux normalization at 1 TeV that yields a 50% detection probability. Left: sensitivity with ZEBRA’s standard search mode with EBL attenuation (calculated with the actual burst redshift) built into the E−2 spectral hypothesis (in red), ZEBRA’s spectrum-independent search mode (in blue), the multi-bin excess analysis of section 6.2 (in cyan), and the single-bin excess analysis of section 6.2 (in black). Right: the ratio of the blue, cyan, and black sensitivity curves to the best case scenario (red curve) of the standard search mode with a known burst redshift...... 161

6.9 GRB 130427A (z = 0.34) is, to date, the GRB with the highest-energy photon observed by Fermi-LAT. The blue dashed line shows Fermi’s spectral fit to the prompt emission observed by Fermi-GBM. A fit to the combined GBM and LAT data taken between 11.5 and 33 seconds after the burst is shown in dashed red with a 1σ error contour. The dotted red line is the extrapolation of this fit adjusted for EBL attenuation [49]. HAWC’s sensitivity to this EBL adjusted spectrum is shown in black for zenith angles of 0 and 30 degrees...... 163

7.1 A diagram illustrating the time windows used to search for emission in the short timescale analysis. Time 0 is the starting time for the T90 emission period, and T90 denotes the length of time for 90% of the emission to be observed in the instrument that detected the burst...... 165

7.2 Example search output for GRB 171120A (T90 = 64s). This burst is precisely localized but does not have a redshift measurement, so we ran the targeted spectrum-independent search (which has a minimum 5σ TS of 35 – see section 6.3). The search test statistic (TS) is plotted as a function of time for all search windows in the long timescale search (left) and short timescale search (right). The y-axis denotes the duration of the search window (∆t). Within each horizontal band associated with a given search duration, adjacent vertical bands represent the output TS for the individual search windows of that duration. These vertical bands are centered on the time at the middle of the individual search windows that they represent. Their widths are equal to the search duration in the long timescale search; in the short timescale search, we have overlapping windows and the plotted widths are shortened to allow an entry for every search window analyzed. The color axis is scaled from 0 to the minimum 5σ TS obtained in section 6.3...... 167

7.3 Energy distributions in Bin 0a (left) and Bin 4 (right) for bursts with an E−2 power-law spectrum. The lower (upper) 5% of the bin 0a (bin 4) distributions are shaded; the energy of this 5% threshold is taken as the lower (upper) bound for the energy range over which our flux limits are valid. Distributions and bounds for a GRB at a redshift of 1.0 and zenith angle of 0 degrees are shown in blue; those for a GRB at a redshift of 0.3 and zenith angle of 40 degrees are shown in red...... 169

7.4 HAWC fluence upper limits between 80 and 800 GeV vs. measured Fermi-GBM fluences between 10 and 1000 keV for all GRBs in the HAWC 41 month search sample that were detected by Fermi-GBM. Bursts whose position uncertainty regions extend beyond HAWC’s field of view (zenith < 45 degrees) were excluded. Fluence values are shown for the prompt emission in the initial Fermi-GBM T90 period. HAWC fluence upper limits are calculated assuming an E−2 power law for z = 0.3 (left) and z = 1.0 (right) using the flux limits presented in table B.1 (see appendix B). Short GRBs are plotted as red circles, long GRBs as blue crosses, and bursts with a LAT detection (160821A, 170206A, 170214A, and 171120A) as green squares. (Burst 170206A is a short GRB; the other three LAT bursts are long GRBs.) A black one-to-one line is included in the figures...... 171
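For an E^-2 power law, an energy fluence between two energies follows from the flux normalization in closed form (a sketch under that single assumption; the actual HAWC limits additionally fold in EBL attenuation and the detector response, which are ignored here):

```python
import math

def e2_energy_fluence(norm_at_1tev, e_lo_tev, e_hi_tev, duration_s):
    """Energy fluence (TeV cm^-2) of dN/dE = A*(E/TeV)^-2
    [TeV^-1 cm^-2 s^-1] between e_lo and e_hi (in TeV) over duration_s
    seconds: the integral of E*dN/dE gives
    A * (1 TeV)^2 * ln(e_hi/e_lo) * duration."""
    return norm_at_1tev * math.log(e_hi_tev / e_lo_tev) * duration_s
```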

7.5 A comparison of Fermi-LAT’s power-law fit to the spectrum of GRB 160821A, using data from 92.08 - 1459.15s after the burst, to HAWC upper limits. The Fermi fit with a light blue error band (taking into account uncertainty in fit parameters) is plotted out to the highest-energy photon seen by Fermi-LAT (as reported in [93]). Extrapolations of the Fermi fit assuming z = 0.3 and z = 1.0 are shown in dashed blue. HAWC upper limits are calculated for Fermi’s measured power-law spectrum, also for z = 0.3 and z = 1.0 and for the same 92.08 - 1459.15s time window. The Dominguez 2011 EBL model [49] is used in the LAT extrapolations and HAWC upper limits...... 172

7.6 A comparison of Fermi-LAT’s power-law fit to the spectrum of GRB 170206A, using data from 0.208 - 20.208s after the burst, to HAWC upper limits. The Fermi fit with a light blue error band (taking into account uncertainty in fit parameters) is plotted out to the highest-energy photon seen by Fermi-LAT (as reported in [94]). Extrapolations of the Fermi fit assuming z = 0.3 and z = 1.0 are shown in dashed blue. HAWC upper limits are calculated for Fermi’s measured power-law spectrum, also for z = 0.3 and z = 1.0 and for the same 0.208 - 20.208s time window. The Dominguez 2011 EBL model [49] is used in the LAT extrapolations and HAWC upper limits...... 172

7.7 A comparison of Fermi-LAT’s power-law fit to the spectrum of GRB 170214A, using data from 39.49 - 751.99s after the burst, to HAWC upper limits. The Fermi fit with a light blue error band (taking into account uncertainty in fit parameters) is plotted out to the highest-energy photon seen by Fermi-LAT (as reported in [95]). Extrapolations of the Fermi fit assuming z = 0.3 and z = 1.0 are shown in dashed blue. HAWC upper limits are calculated for Fermi’s measured power-law spectrum, also for z = 0.3 and z = 1.0 and for the same 39.49 - 751.99s time window. The Dominguez 2011 EBL model [49] is used in the LAT extrapolations and HAWC upper limits...... 173

7.8 A comparison of Fermi-LAT’s power-law fit to the spectrum of GRB 171120A, using data from 0.31 - 5275.99s after the burst, to HAWC upper limits. The Fermi fit with a light blue error band (taking into account uncertainty in fit parameters) is plotted out to the highest-energy photon seen by Fermi-LAT (as reported in [96]). Extrapolations of the Fermi fit assuming z = 0.3 and z = 1.0 are shown in dashed blue. HAWC upper limits are calculated for Fermi’s measured power-law spectrum, also for z = 0.3 and z = 1.0 and for the same 0.31 - 5275.99s time window. The Dominguez 2011 EBL model [49] is used in the LAT extrapolations and HAWC upper limits...... 173

7.9 Redshift distributions for Swift GRBs detected up to March 2019 [88]. This sample contains 23 short GRBs with known redshifts (plotted in blue) and 331 long GRBs with known redshifts (plotted in red). Both histograms are area normalized to one to allow for easy comparison of their shapes...... 174

List of Tables

3.1 The definition of HAWC’s standard 10 event-size bins. The fHit thresholds are set to make the event rate in each bin 50% that of the previous bin...... 57

5.1 The fHit thresholds in a 10 bin multi-plane fitter analysis (beginning at fHit = 0.03) required to make the event rate in each bin 50% that of the previous bin...... 85

5.2 Bin definitions for a 12 bin multi-plane fitter analysis. The fHit thresholds are set to make the event rate in each bin 50% that of the previous bin. See text for an explanation of the coreFiduScale thresholds...... 89

5.3 Bin definitions for a low-energy, 7 bin multi-plane fitter analysis arrived at by taking the bins from table 5.2 and collapsing the high-energy (≳ 1 TeV) data into one overflow bin (bin 4)...... 92

5.4 Bin definitions and cuts for a low-energy, 7 bin multi-plane fitter analysis arrived at by taking the bins from table 5.3 and optimizing a set of gamma/hadron separation cuts as described in the text. Note: compactness is not calculated with the standard definition (rec.nHitSP20/rec.CxPE40) but instead uses rec.nHitSP10 as discussed in section 5.3.4.1 . 102

5.5 Bin definitions and cuts used in the Extended Crab Paper (ECP) analysis. Note that compactness is defined as nHitSP20/CxPE40 in bins 1 – 9 and nHitSP10/CxPE40 in bin 0 (see section 5.3.4.1 for details)...... 116

5.6 Cut values obtained for the original bin 0 event sample by optimizing on predicted Crab Nebula detection significance. Values of X and Y in the parameters nHitSP-X (the number of hits within ± X ns of the reconstructed shower plane) and CxPE-Y (the largest charge more than Y meters from the reconstructed core) used in the definition of compactness and fShowerPlane were allowed to vary in the optimization. Note: the compactness cut listed for the multi-plane fitter reconstruction with this X,Y optimization is used only for comparison to the nearbyFTank algorithm and is not employed in the main multi-plane fitter analysis of section 5.3 due to the since-discovered modeling issues discussed in section 5.7.1...... 141

xxv 6.1 The search radii used in the multi-bin excess analysis. Recall that the “b” bins contain events that land off the array. These have worse angular resolution, which is why the search radius jumps to 1.6◦ in bins 0b and 1b...... 145

6.2 The minimum TS corresponding to a 5σ fluctuation for various ZEBRA analyses. A grid search with search points separated by ∼0.23 degrees is employed for non-zero position uncertainties...... 157

A.1 The name, HAWC zenith, position uncertainty, redshift, T90, and instruments that observed the bursts in the HAWC 41 month GRB search (e.g. a “yes” in the LAT column means the burst was detected by Fermi-LAT). In the case of detection by multiple experiments, the coordinates and position uncertainty measured by the most precise instrument are used. Note that the first three numbers of each burst name indicate the date of observation (e.g. 170714A was observed on July 14, 2017 and bn180113011 on January 13, 2018). The numbers following ‘bn’ in burst names denote the Fermi-GBM trigger number. Question marks indicate unknown quantities...... 183

B.1 Flux upper limits at 1 TeV for bursts in the HAWC 41 month GRB search. A power-law spectrum of A (E/T eV )−2 is assumed and upper limits are calculated for the normalization factor A, in units of T eV −1cm−2s−1, as the upper bound of 90% frequentist confidence intervals constructed with the Feldman & Cousins ordering principle [97]. Flux limits are provided for the burst redshift if known and for redshifts of 0.3 and 1.0 otherwise. NA (not applicable) is placed in the z = 0.3, 1.0 entries for bursts of known redshifts and in the known “burst z” column for bursts with unknown redshifts. If known, the burst redshift is denoted underneath the burst name in the left most column. Bursts beginning in ‘bn’ were only detected by Fermi-GBM; the numbers following ‘bn’ denote the Fermi-GBM trigger number. See appendix A for additional information on GRBs in this table...... 191

List of Equations

1.1  f(E) = A × { (E/100 keV)^α exp[−(2+α)E/E_p],  (2+α)E < (α−β)E_p
               { [(α−β)E_p/((2+α)·100 keV)]^(α−β) exp(β−α) (E/100 keV)^β,  (2+α)E ≥ (α−β)E_p

1.2  E1 E2 ≥ 2(m_e c²)² / (1 − cos θ)

2.1  E_γ = E_c N_max = E_c · 2^(n_max)

2.2  X_max = n_max X_r ln(2) = X_r ln(E_γ/E_c)

2.3  cos(θ) = (c/n)t / (βct) = 1/(βn)

3.1  ⟨nPE⟩ = −ln(1 − η)

3.2  Δt_start = e^((−ToT − p0)/p1) − e^((ToT − p2)/p3) + p4 − p5·ToT

3.3  S_i = A [ (1/(2πσ²)) e^(−|x_i − x|²/(2σ²)) + N/(0.5 + |x_i − x|/R_m)³ ]

3.4  compactness = nHitSP20 / CxPE40

3.5  PINCness = (1/N) Σ_(i=0)^N (ζ_i − ⟨ζ_i⟩)² / σ²_(ζ_i)

4.1a  f_g = C exp[−(x − 52.8)² / (2·12.9²)]

4.1b  f_pl = C (x + 1.90)^(−2.13)

5.1  dN/(dE dA dt) = (3.45·10⁻¹¹ TeV⁻¹ cm⁻² s⁻¹) (E/TeV)^(−2.63)

5.2  dN/(dE dA dt) · (TeV cm² s) = { 3.23·10⁻¹¹ (E/TeV)^(−2.47 − 0.24 log(E/TeV)),  E ≤ 2.15 TeV
                                   { 3.43·10⁻¹¹ (E/TeV)^(−2.63),  E > 2.15 TeV

6.1  background weight = f_t(t_s) f_a(φ_s) / [f_t(t_b) f_a(φ_b)]

6.2  α = (t_on Ω_on) / (t_off Ω_off)

6.3  p_i = p(n_i, μ_i) = Σ_(k=n_i)^∞ (μ_i^k/k!) e^(−μ_i) = 1 − Σ_(k=0)^(n_i−1) (μ_i^k/k!) e^(−μ_i)

6.4  TS = −2 Σ_i ln(p_i)

6.5  P(TS | H₀(μ⃗)) = Π_i P(n_i | H₀(μ_i)) = Π_i (μ_i^(n_i)/n_i!) e^(−μ_i)

6.6  P(TS_possible | H₀(μ⃗)) < P(TS_measured | H₀(μ⃗))

6.7  p_tot = 1 − Σ_j P(TS_j | H₀(μ⃗))

6.8  S = √2 · erfc⁻¹(2 p_tot)

6.9  L(f) = 2 Σ_i ln[ P(b_i + e_i f, d_i) / P(b_i, d_i) ]

6.10  P(x, d_i) = (x^(d_i)/d_i!) e^(−x)

6.11  L(f⃗) = 2 Σ_j Σ_i ln[ P(b_i + e_i f_j, d_i) / P(b_i, d_i) ]

6.12  p(TS) = Δt · FAR(TS)

6.13  p(TS) = Δt · Ω_search · FAR(TS)

Acknowledgments

This material is based upon work supported by the National Science Foundation under Award Nos. PHY-1506145 and PHY-1806854. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of the National Science Foundation.

Chapter 1 | Introduction

Gamma-ray bursts are the most energetic explosive events in the universe today; they liberate on the order of 10⁵⁴ ergs on time scales as short as milliseconds and occur several times per day across the observable universe. Their electromagnetic luminosity briefly exceeds that of their entire host galaxy, with the peak emission at ∼1 MeV lasting from a fraction of a second to minutes. Since the public announcement of their discovery in the early 1970s, a number of space- and ground-based detectors have been deployed to understand the source of this phenomenon. Gamma-ray bursts are now believed to be the electromagnetic signatures accompanying the violent births of black holes in distant galaxies. The following section explores the theory of gamma-ray bursts in more detail and reviews the most important observations behind our current understanding of the phenomenon.

The purpose of this dissertation is to shed additional light on the nature of GRBs by placing limits on the >100 GeV emission of known bursts using data from the High Altitude Water Cherenkov observatory. The unique ability of HAWC to provide these measurements is discussed in section 1.2. Chapter 2 provides an overview of gamma-ray detection techniques, and chapter 3 discusses the HAWC detector in detail. While HAWC is sensitive to gamma rays from ∼100 GeV – 100 TeV, the majority of any >1 TeV emission is expected to be absorbed en route to Earth via pair production with extra-galactic background light (see section 1.1.2.2). Our efforts were therefore largely focused on improving HAWC’s poor sensitivity in the 100 GeV – 1 TeV energy range; the techniques developed to that end are documented in chapters 4 and 5. The analysis employed to search for ≳100 GeV photons from known gamma-ray bursts is discussed in chapter 6, and the results and their model implications in chapter 7. No significant emission in HAWC data was detected for any gamma-ray burst considered. Most limits on the >100 GeV flux are not constraining, but the results for GRBs 170206A and 171120A imply a spectral cut-off below that energy for a redshift assumption of z ≲ 0.3.
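The extra-galactic background light (EBL) absorption invoked here follows from the two-photon pair-production threshold of equation 1.2, E1 E2 ≥ 2(m_e c²)²/(1 − cos θ). As a rough back-of-the-envelope illustration (not part of the HAWC analysis chain), the minimum partner-photon energy for a collision at a given angle can be evaluated as:

```python
import math

ME_C2_EV = 0.511e6  # electron rest energy, m_e c^2, in eV

def min_partner_energy_ev(e1_ev, theta_rad):
    """Minimum energy of a partner photon able to pair produce
    (gamma gamma -> e+ e-) with a photon of energy e1_ev when the
    two photons collide at angle theta_rad (equation 1.2)."""
    return 2.0 * ME_C2_EV ** 2 / ((1.0 - math.cos(theta_rad)) * e1_ev)

# A 1 TeV gamma ray colliding head-on (theta = pi) reaches threshold
# against infrared photons of only ~0.26 eV, which are abundant in the EBL:
print(min_partner_energy_ev(1e12, math.pi))  # ~0.26 eV
```

For head-on collisions 1 − cos θ = 2, so the threshold reduces to E2 ≥ (m_e c²)²/E1: the higher the gamma-ray energy, the softer (and more numerous) the background photons it can annihilate on, which is why the >1 TeV flux from cosmological distances is expected to be strongly attenuated.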

1.1 Gamma-Ray Bursts

In the canonical model, GRBs occur when black holes are formed from the core collapse of a massive star or the merger of two compact stellar remnants (neutron stars or black holes). On the order of one solar rest mass of gravitational energy is released, most of which is liberated immediately as neutrinos and gravitational waves. The electromagnetic emission is thought to be produced in relativistic jets powered by the in-fall of matter onto the newly formed central engine. The thermal energy released in the explosion creates a fireball which rapidly expands along the jets, as radiation pressure (from photons trapped by the environment’s high optical depth) converts the thermal energy of the explosion into the bulk kinetic energy of the expanding fireball. The prompt gamma-ray emission is produced by internal shocks in this expanding fireball, and a longer-lasting afterglow by the external shocks created when the jets collide with the environment’s external medium. While the qualitative description of GRB phenomenology presented above is valid for both classes of stellar progenitors (stellar core collapse and compact binary mergers), these scenarios yield two observationally distinct populations of bursts. The observed peak emission produced in GRBs from compact binary mergers usually lasts less than 2s and is skewed

towards slightly higher energies (>0.5 MeV). This class of bursts is referred to as short GRBs.

Long GRBs, produced in the core collapse progenitor scenario, tend to last longer than 2s and have spectral peaks below 0.5 MeV. These and other aspects of GRB theory are discussed in greater detail in section 1.1.2. First, we will review the main observational evidence behind the general paradigm outlined above.

1.1.1 Observational History

1.1.1.1 The Vela Discovery

Gamma-ray bursts were first observed by the Vela satellites in 1967 and publicly reported in 1973 [1]. Carrying omnidirectional gamma-ray detectors, these spacecraft were launched by the United States Department of Defense to detect the gamma-ray signatures of nuclear blasts in an effort to monitor for compliance with the 1963 Nuclear Test Ban Treaty. The Vela team was able to rule out both the Earth and Sun as sources for these brief, intense flashes of gamma rays, sparking widespread interest in this new cosmic mystery. However, as gamma rays are difficult to detect and focus, little progress was made in ruling out the plethora of theories put forth to explain their origin in the ensuing two decades.

1.1.1.2 The Compton Gamma-Ray Observatory

A detailed theoretical understanding of gamma-ray bursts did not begin to emerge until the launch of the Compton Gamma-Ray Observatory (CGRO) in 1991 [2]. Among other instruments, the CGRO carried the Burst and Transient Source Experiment (BATSE), designed in large part to record the spatial distribution of gamma-ray bursts. Sensitive

to gamma rays from a few tens of keV to ∼2 MeV, BATSE was capable of monitoring

the entire sky over every orbit (see [2] for additional details) and detected ∼1 burst per day. An important early result from the BATSE data was the discovery that GRBs are distributed isotropically throughout the sky [3]. As a galactic source population would

produce a clustering towards the galactic center, this provided strong early evidence of the cosmological origin of GRBs.

Figure 1.1: Example BATSE light curves, in counts per second (cps) vs. time in seconds (s), for a sample of four GRBs [4].

Other important clues on the nature of gamma-ray bursts came from BATSE’s observed GRB light curves, the distribution in arrival time of GRB photons. These light curves showed large variability from burst to burst [4], with some (see, for example, the top panels of figure 1.1) displaying no fine structure on short time scales, and others (see, for example, the bottom

panels of figure 1.1) displaying rapid variability. This rapid time variability in a cosmological source indicates that the immense energy output of GRBs is released in a small volume, suggesting a compact object as a progenitor [5]. BATSE also provided the first evidence for the two distinct populations of bursts, “long” and “short,” through their analysis of the burst durations and spectra [6]. The duration of GRBs is conventionally measured by the quantity T90, the time taken to detect 5% to 95% of all gamma rays observed in a burst event. As shown in figure 1.2, the T90 distribution of GRBs observed by BATSE is bimodal, with the boundary between the two populations at ∼2s. The <2s bursts, eventually interpreted as arising from the merger of two compact objects, became known as short GRBs, and the >2s bursts, eventually interpreted as arising from a stellar core collapse, became known as long GRBs.
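The T90 measure just described is straightforward to compute from a light curve: accumulate the background-subtracted counts and find the times at which 5% and 95% of the total have arrived. The sketch below is a minimal illustration (uniform bins, no background model), not BATSE’s actual pipeline:

```python
import numpy as np

def t90(bin_times, counts):
    """T90: the interval over which the central 90% of
    background-subtracted counts accumulate (5% to 95% levels).
    bin_times holds the start time of each light-curve bin."""
    frac = np.cumsum(counts).astype(float)
    frac /= frac[-1]  # cumulative fraction of total counts
    t05 = bin_times[np.searchsorted(frac, 0.05)]
    t95 = bin_times[np.searchsorted(frac, 0.95)]
    return t95 - t05

# A perfectly flat 10 s burst: the central 90% of counts span ~9 s.
times = np.arange(0.0, 10.0, 0.1)   # 100 bins of 0.1 s
counts = np.full(100, 50)           # uniform emission
print(t90(times, counts))           # ~9.0 s
```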

Figure 1.2: Distribution of T90, the time taken to observe 5% to 95% of all detected gamma rays, as measured by BATSE for GRBs in the BATSE 4B catalog [7].

Further evidence for a physical distinction between the long and short GRB populations came from BATSE’s analysis of their spectral properties [8]. To quantify the relative

contribution of lower and higher-energy photons to the observed flux, a “hardness ratio” was defined as a burst’s ∼100 – 300 keV fluence divided by its ∼50 – 100 keV fluence. As can be seen in figure 1.3, the population of short GRBs has a larger hardness ratio and is more skewed towards higher-energy photons than the long GRBs.

Figure 1.3: Hardness ratio (HR) vs. T90 for GRBs seen by BATSE [8]. The hardness ratio is defined as a burst’s ∼100 - 300 keV fluence divided by its ∼50 - 100 keV fluence. Short GRBs, lasting less than 2s and plotted as squares with crosses, have harder spectra (are more skewed towards higher energies) than long GRBs, which last longer than 2s and are plotted as open circles. Also shown for these two samples of GRBs are dotted regression lines and their average HR and T90 values (as filled circles). The solid line simply connects these two points, while the dashed line is a regression line for the combined (long and short) GRB samples. HR and T90 are not correlated within either sample, and the overall correlation of the combined populations is simply an artifact of the different HR distributions of the two GRB classes.

Further analysis of GRB spectra with BATSE data provided additional information on the nature of their source environments. The GRB spectra were found to be non-thermal and well described by a fit composed of two broken power laws [9]. Most burst spectra follow the empirical Band function [10]:

f(E) = A × { (E/100 keV)^α exp[−(2+α)E/E_p],                              (2+α)E < (α−β)E_p
           { [(α−β)E_p/((2+α)·100 keV)]^(α−β) exp(β−α) (E/100 keV)^β,      (2+α)E ≥ (α−β)E_p        (1.1)

where f(E) is the photon number flux, A is a normalization factor, α the low-energy power-law

index, β the high-energy power-law index, and Ep the νFν peak energy (where ν and Fν are photon frequency and energy flux per unit frequency, respectively). This peak energy is typically on the order of a few hundred keV, while the power-law indices have typical values of α ≈ -1 and β ∼ -3 – -2 [9]. Such a spectrum could not be produced if the “fireball” in which GRB emission is produced expanded smoothly along GRB jets with no internal dissipation [5]. This discovery therefore played an important role in the development of the fireball-shock model discussed in section 1.1.2, in which the observed gamma-ray spectrum is produced by particles accelerated at shock fronts arising within the expanding fireball and as it collides with an external medium.
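For concreteness, the Band function of equation 1.1 can be coded directly. The parameter values in the example below are merely typical (α ≈ −1, β ≈ −2.3, E_p ≈ 300 keV), not a fit to any particular burst; the two branches join continuously at the break energy (α − β)E_p/(2 + α):

```python
import numpy as np

def band(e_kev, amp, alpha, beta, ep_kev):
    """Band function photon number flux f(E) of equation 1.1.
    e_kev may be a scalar or array; all energies in keV."""
    e = np.asarray(e_kev, dtype=float)
    e_break = (alpha - beta) * ep_kev / (2.0 + alpha)
    # Low-energy branch: power law with exponential cutoff.
    low = amp * (e / 100.0) ** alpha * np.exp(-(2.0 + alpha) * e / ep_kev)
    # High-energy branch: steeper power law, normalized for continuity.
    high = (amp
            * ((alpha - beta) * ep_kev / ((2.0 + alpha) * 100.0)) ** (alpha - beta)
            * np.exp(beta - alpha)
            * (e / 100.0) ** beta)
    return np.where(e < e_break, low, high)

# Photon flux at 10, 100, and 1000 keV for typical parameters:
flux = band(np.array([10.0, 100.0, 1000.0]),
            amp=1.0, alpha=-1.0, beta=-2.3, ep_kev=300.0)
```

Evaluating both branches everywhere and selecting with np.where keeps the function vectorized, and one can verify numerically that the two pieces agree at the break energy.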

1.1.1.3 Beppo-SAX and HETE-2

The Beppo-SAX satellite [11], capable of producing X-ray images of GRB afterglows, provided the next round of gamma-ray burst discoveries after its launch in 1997. As

X-rays are much easier to focus than gamma rays, these images allowed for precise (∼1 arcmin) localization of bursts. This, in turn, allowed for optical follow-up observations, the identification of host galaxies, and redshift measurements. For the first time, with the detection of GRB 970228, a host galaxy at a redshift of 0.695 was identified [12]. This and subsequent GRB redshift measurements unambiguously confirmed the cosmological origin of GRBs. Furthermore, these first quantitative measurements of GRB afterglows were in good agreement with predictions made prior to their publication by the GRB fireball-shock model [13] discussed further in section 1.1.2.

The High Energy Transient Explorer (HETE-2) satellite [14], launched in 2000, continued to provide detailed X-ray observations of GRB afterglows. This capability allowed for the first unambiguous association of a long GRB with a supernova when the HETE-2 localization of GRB 030329 was found to be coincident with supernova SN2003dh [15], providing strong evidence for the stellar core-collapse progenitor model for long GRBs. While these discoveries were of great importance, both Beppo-SAX and HETE-2 were limited by the several hours required to process their data and obtain a precise localization. Further progress on testing GRB models with X-ray measurements would require an instrument capable of the rapid localization needed to study the transition from prompt to late emission and obtain accurate measurements of short GRB afterglows.

1.1.1.4 Swift

The Swift satellite, launched in 2004 with the Burst Alert, X-Ray, and UV-Optical Telescopes (Swift-BAT, Swift-XRT, and Swift-UVOT) [16], was designed for just that purpose. Swift-BAT, with a 2 sr field of view and sensitivity in the energy range of 20 – 150 keV,

can detect ∼100 bursts per year and trigger the XRT and UVOT to rapidly (within ∼100s) slew to the position measured by the BAT and carry out precise X-ray and UV/optical measurements. These new capabilities allowed for many significant new discoveries, one of the most important of which being the first observations of the transition from the prompt to afterglow emission and a detailed characterization of the X-ray afterglow [17]. Most of the X-ray afterglows observed by Swift-XRT followed the same basic pattern, which is schematically outlined in figure 1.4. While the general X-ray afterglow behavior shown in figure 1.4 was developed mainly through the observation of long GRBs, Swift’s detection of the never before seen short GRB afterglows was also significant. For the first time, host galaxies were identified with these short GRB observations, many of which were older galaxies with no signs of significant, active

Figure 1.4: A schematic diagram of the typical GRB X-ray afterglow light curve seen by Swift [17]. The prompt gamma-ray emission (phase 0) is followed by a power-law decay with an index of ≲ −3 (phase I). Then, starting at t_b1 ≈ 10² – 10³ seconds since the burst trigger, the power-law slope typically becomes shallower with an index of ∼ −0.5 (phase II), though X-ray flares (phase V) are sometimes seen during this period. In phase III, beginning at t_b2 ≈ 10³ – 10⁴ s, the light curve steepens as the power-law index decreases to ∼ −1.2. In some bursts, at t_b3 ≈ 10⁴ – 10⁵ s, this period is followed by an additional, yet steeper component with a power-law index of ∼ −2 (phase IV). The portions of the light curve seen in the majority of bursts are shown as solid lines, while those seen in only a subset of bursts are shown as dashed lines.

star formation [18]. As these are the types of environments in which one would expect to see a large population of old, compact stellar remnants, these observations provided intriguing but not definitive evidence in favor of the compact binary-merger theory for short GRB progenitors. An additional Swift discovery of note was the detection of GRB 050904, the first of several bursts observed with z > 6. (The most precise redshift measurement, z = 6.29, was achieved with an optical follow-up by the Subaru telescope [19].) As z ≈ 6 marks the barrier beyond which one finds the first light sources in the universe, which re-ionized the intergalactic medium, this exciting discovery demonstrated the unique role of GRBs as a cosmological probe.
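The canonical afterglow of figure 1.4 is, quantitatively, just a sequence of power-law segments joined at the break times. A minimal illustration (the break times and decay indices below are merely typical values from the schematic, and the normalization is arbitrary) is:

```python
import numpy as np

# Illustrative phase boundaries (s) and decay indices, roughly the typical
# values of figure 1.4: phases I (steep), II (shallow), III (normal),
# and IV (late steep).
T_BREAKS = [100.0, 1.0e3, 1.0e4, 1.0e5]
INDICES = [-3.0, -0.5, -1.2, -2.0]

def afterglow_flux(t, f0=1.0):
    """Continuous piecewise power-law X-ray flux at time t (s);
    f0 is the (arbitrary) flux at the start time T_BREAKS[0]."""
    flux = f0
    for i, index in enumerate(INDICES):
        t_lo = T_BREAKS[i]
        t_hi = T_BREAKS[i + 1] if i + 1 < len(T_BREAKS) else np.inf
        if t <= t_hi:
            return flux * (t / t_lo) ** index
        flux *= (t_hi / t_lo) ** index  # carry normalization across the break
    return flux
```

Matching the normalization at each break keeps the light curve continuous; X-ray flares (phase V) would appear as additional components on top of this skeleton.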

1.1.1.5 Fermi

The next major round of observational breakthroughs came with the launch of the Fermi satellite in 2008. Fermi extended GRB observations to the GeV energy scale with its two detectors: the Gamma-ray Burst Monitor (GBM) [20] and the Large Area Telescope (LAT) [21]. The GBM monitors the full unocculted sky for GRBs, triggering on gamma rays in the energy range of 8 keV – 40 MeV. The LAT can extend observations to higher energies, with sensitivity in the range of 20 MeV – 300 GeV. For bursts with a significant flux at those energies, the LAT can improve the positional uncertainty of GRBs (from as much as

∼10° with only GBM data) down to ≲1°, but with its smaller field of view of ∼2.4 sr must sometimes slew to the position measured by the GBM. Roughly 250 bursts are detected every year by the GBM; as shown in figure 1.5, only the brightest of these are seen by the LAT, which detects GRBs at a rate of ∼8 per year.

Figure 1.5: The fluence distribution in the 10 – 1000 keV energy range for the 178 LAT detected GRBs in the 2nd Fermi-LAT GRB catalog and the 2357 GBM detected bursts observed in the same time period [22]. Separate distributions are shown for short GRBs and long GRBs. Bursts with LAT detections are among the brightest seen by Fermi-GBM.

Fermi discovered several important differences between the high-energy, GeV scale emission seen by the LAT and the lower-energy gamma rays detected by the GBM, suggesting a different emission mechanism for the GeV scale GRB photons. Perhaps most importantly, the higher-energy emission was found to be delayed with respect to the lower-energy emission, typically by a few seconds. This can be seen, for example, in figure 1.6, which shows light curves for GRB 130427A (one of the brightest bursts observed to date) in five different energy bands. The high-energy emission also tends to be longer lasting than the initial pulse detected by the GBM, which, in addition to the delay, suggests that the LAT emission is produced in the afterglow phase. The late, high-energy emission also tends to have different spectral characteristics, typically displaying an additional power-law component (see figure 1.7). The new high-energy observations gathered by Fermi also uncovered another important distinction between the long and short classes of GRBs. For long GRBs, the observed fluence in the LAT energy range was typically on the order of 10% of that detected by the GBM at lower energies. Short GRBs, however, typically have high-energy fluences roughly equal to those observed by the GBM. This can be seen in figure 1.8, which compares the fluence measurements for LAT and GBM detected emission for all bursts in the 2nd Fermi-LAT GRB catalog [22]. While Fermi played an important role in understanding GRB phenomenology, open questions remain in explaining the high-energy GeV scale emission. The early Fermi-LAT data was commonly interpreted as synchrotron radiation emitted in the afterglow phase by electrons accelerated in the external shock produced when the GRB jets collide with an external medium [24].
However, this model was challenged by Fermi’s detection of GRB 130427A; this burst had the highest-energy photon ever seen by that point (95 GeV), and the particularly high-energy observations recorded for this burst were incompatible with the synchrotron model outlined above [23]. As detailed in section 1.2, additional GeV – TeV observations (such as those searched for in this thesis and the MAGIC detection discussed in

the following section) could prove invaluable in identifying the high-energy GRB emission mechanisms.

Figure 1.6: Light curves observed by Fermi-GBM and Fermi-LAT for GRB 130427A in five energy bands [23]. NaI (sodium iodide) and BGO (bismuth germanate) are two types of scintillator detectors that make up Fermi-GBM. The LLE (LAT Low Energy) light curve is obtained from LAT data with a low-energy event selection. The circles in the bottom panel show the energy and arrival time of individual LAT detected gamma rays, with filled circles plotted for photons with a >90% probability of actually coming from GRB 130427A. All light curves and points are plotted as a function of time since the GBM trigger. The higher-energy emission is delayed with respect to the lower-energy emission.

Figure 1.7: GRB 130427A spectral fits to combined Fermi-GBM and Fermi-LAT data for three different time periods (measured in time since the GBM trigger) [23]. An additional Power-Law (PL) component is needed to fit the delayed (11.5 – 33 s) emission. For reference, the top panel shows, for the same three time periods, the Fermi-GBM light curves (obtained by combining and arbitrarily scaling the data from the top three panels of figure 1.6) with the energies of LAT detected photons (solid circles from the bottom panel of figure 1.6) overlaid on top.

Figure 1.8: The GBM measured fluence in the 10 keV – 1 MeV energy range vs. the fluence measured by the LAT in the 100 MeV – 100 GeV energy range (during the same time period used for the GBM measurement) for all bursts in the 2nd Fermi-LAT GRB catalog [22]. The solid green line is a one-to-one line denoting equal LAT and GBM fluences. The dashed and dotted-dashed green lines denote points where the LAT fluence is 10% and 1% of the GBM fluence, respectively. The short GRBs (red points) have LAT fluences roughly equal to the GBM fluences, whereas the long GRBs (blue points) have LAT fluences on the order of 10% of the GBM fluences.

1.1.1.6 HESS and MAGIC

The ability of space-based instruments like Fermi to obtain measurements above ∼100 GeV is hampered by the resource requirements associated with launching heavy instruments with large effective areas into orbit. Ground-based detectors, therefore, are currently the best method to obtain such very high-energy measurements. However, as discussed in much greater detail in chapter 2 (along with the trade-offs between these various detection mechanisms), gamma rays pair produce upon interacting in Earth’s atmosphere, creating a shower of relativistic charged particles that ground-based detectors must study in order to infer the arrival direction and energy of the incident gamma ray. Imaging Atmospheric Cherenkov

Telescopes (IACTs) achieve this through observing the Cherenkov radiation emitted by the relativistic charged shower particles as they travel through Earth’s atmosphere. Earlier this year (2019), two such IACTs, the High Energy Stereoscopic System (HESS) [25] and Major Atmospheric Gamma Imaging Cherenkov (MAGIC) [26] telescopes, announced the

first >100 GeV GRB detections. HESS detected 100 – 440 GeV photons from GRB 180720B

after beginning observations ∼10 hours after the burst trigger [27]. MAGIC obtained much earlier observations for GRB 190114C; after receiving an alert from Swift-BAT, MAGIC slewed to the burst coordinates and began taking data only 50s after the Swift trigger. In the following

20 minutes, MAGIC observed a >20σ detection at energies above 300 GeV [28]. Early analyses of these historic detections suggest that the high-energy observations are consistent with synchrotron self-Compton emission from the GRB afterglows [29]. More detailed conclusions on the implications of these exciting discoveries will likely follow from HESS and MAGIC’s eagerly anticipated publications of multi-hundred GeV spectral measurements for GRBs 180720B and 190114C.

1.1.1.7 LIGO, Virgo, and the Multi-Messenger Era

In August of 2017, the short GRB compact merger progenitor model was compellingly confirmed with the coincident detection of gravitational waves from a binary neutron-star merger and a short gamma-ray burst. The gravitational wave detection from the Laser Interferometer Gravitational wave Observatory (LIGO) [30] and Virgo gravitational wave

observatory [31] was followed within ∼2s by a spatially coincident GRB detection from the Fermi and INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) satellites [32], with the short delay between the merger and the production of the prompt GRB emission (in the jets powered by the resultant black hole) consistent with the canonical short GRB model discussed in section 1.1.2. A coincident optical counterpart was also detected by the Swope telescope [33]. Figure 1.9 demonstrates the spatial coincidence of the gravitational wave and electromagnetic GRB detections and figure 1.10 the temporal coincidence.

Figure 1.9: 90% confidence localizations for the joint detection of GW170817 and GRB 170817A in equatorial coordinates [32] (axes are right ascension and declination). The localization contour from the LIGO-Virgo measurements is shown in green, the Fermi-GBM localization in purple, and the localization from combined Fermi and INTEGRAL data in gray. The yellow star in the zoomed-in region of interest shows the location of the coincident optical transient seen by the Swope telescope.

These multi-messenger detections can provide a wealth of new information unattainable through the study of electromagnetic signatures alone. In addition to gravitational waves, GRBs are expected to produce neutrinos and cosmic rays, as any hadrons present in the GRB jets can be accelerated via the same mechanisms that accelerate the electrons thought to be responsible for the bulk of the gamma-ray emission. Cosmic rays, however, bend in the magnetic fields between their sources and our observatories, and so do not point back to their points of origin. And while limits on the neutrino flux from GRBs have been set by, most notably, the IceCube observatory [34] [35], no unambiguous coincidence between an astrophysical neutrino detection and electromagnetic GRB detection has been observed to date. As any protons accelerated in GRBs (with similar efficiency to the electrons that produce the prompt gamma-ray emission) would interact with ambient photons to produce

neutrinos via Δ⁺ resonance (p + γ → n + π⁺ → n + e⁺ + ν_e + ν̄_μ + ν_μ), a neutrino detection coincident with a GRB could provide evidence for GRBs as a source of Ultra High-Energy

Figure 1.10: The light curves and gravitational wave time-frequency map for the joint detection of GW170817 and GRB 170817A [32]. The top three panels show light curves from Fermi-GBM (in two different energy bands) and the SPI-ACS instrument onboard INTEGRAL with background estimates overlaid in red. The bottom panel shows the LIGO time-frequency map of the GW170817 detection.

Cosmic Rays (UHECRs). (See, e.g., [35] for a discussion on constraining the role of GRBs in UHECR production from limits on neutrino fluxes from known GRBs). Studying GRBs through multiple channels (gravitational waves, neutrinos, cosmic rays, and, of course, electromagnetic radiation) is thus an invaluable tool for better understanding both gamma-ray bursts and the sources and acceleration mechanisms for UHECRs. This multi-messenger approach, facilitated through databases dedicated to coordinating the results and operations of relevant observatories (such as the Astrophysical Multimessenger Observatory Network, or AMON [36]), will play an important role in the study of GRBs and many other astrophysical phenomena in the future.

1.1.2 GRB Theory

1.1.2.1 The GRB Paradigm

As discussed in the previous section, long GRBs appear to be produced from the core collapse of massive stars, and short GRBs from the merger of two compact stellar remnants (either two neutron stars or a neutron star and a black hole). While these two types of stellar progenitors produce gamma-ray signatures of different length and spectra (see figure 1.3), both ultimately produce a black hole that powers the GRB phenomenon through the same basic process illustrated in figure 1.11 and described below. This theoretical paradigm is referred to as the fireball, or fireball-shock model [5], [37]. After the collapse or merger, leftover matter – from the central and outer layers of the collapsed massive star (in the case of long GRBs) or the debris produced in the collision of two compact stellar remnants (in the case of short GRBs) – rapidly accretes into the newly formed black hole and provides the material for the hot plasma that emits the detected

gamma-ray emission. While most of the ∼10⁵⁴ ergs of gravitational energy liberated by the core collapse or merger is released in gravitational waves and neutrinos (to which the

surrounding environment is transparent), a small fraction (∼10⁻² – 10⁻³) is transferred to

Figure 1.11: A diagram (from [38]) illustrating how the electromagnetic emission associated with gamma-ray bursts is produced in the canonical GRB model. Internal shocks within the jets (produced by matter rapidly accreting into the central black hole) convert the bulk kinetic energy of the outflow into the random particle energy that powers the prompt gamma-ray emission. Shocks produced when the jet collides with the surrounding medium give rise to the afterglow emission.

the surrounding plasma, or fireball, of electrons, positrons, photons, and baryons. The energy and time scales involved with the formation of this fireball yield photon luminosities far greater than the Eddington luminosity, and so radiation pressure causes the fireball to rapidly expand. If this fireball were to expand isotropically, the observed gamma-ray fluences would imply a total energy output several orders of magnitude in excess of that attainable from likely stellar progenitors. However, the energy requirements become reasonable if the outflow were to expand along a collimated jet, an early prediction which has since been placed on a solid evidential footing by afterglow observations (e.g. [39], [40]). (Note, however, that within the comoving frame of the outflow, the “fireball” expands in all directions.) The detection of GeV scale photons also requires that the material in these jets expand relativistically. Such high-energy gamma rays are well in excess of the minimum energy

(2me c^2, where me is the electron mass) required for pair production (γγ → e+e−). To state the pair-production condition more precisely: to conserve energy and momentum, the energies of the two photons, E1 and E2, must satisfy

E1 E2 ≥ 2 (me c^2)^2 / (1 − cos θ)    (1.2)

where θ is the angle between the two colliding photons. A small value of θ therefore increases the minimum photon energy required for pair production. And with a high bulk Lorentz factor Γ for the expanding GRB fireball, any emitted electromagnetic radiation would be

beamed into a cone of half-opening angle ∼1/Γ (in the direction of motion of the radiating fireball particles), achieving the small value of θ (between fireball photons) required for the plasma to be sufficiently transparent to the observed GeV scale photons [41]. If the fireball were to expand smoothly, most of its energy would be contained in the kinetic energy of its baryons, rather than in radiative luminosity, and the optically thick medium would produce a quasi-thermal spectrum (in contradiction with the observations). In the standard GRB paradigm, these problems are resolved by introducing shocks that can convert the kinetic energy of the baryons into non-thermal radiation [5]. These shocks are created by collisions between different parts of the outflow within the jets (internal shocks) and by collisions between the jets and the external medium (external shocks). The latter type of collision sends a reverse shock traveling back into the jet and a forward shock traveling out into the external medium. As particles diffuse across a shock front, they acquire the observationally required power-law energy distribution via Fermi acceleration [42]. The presence of multiple internal shocks can nicely explain the short timescale and rapid variability observed in light curves of the prompt GRB emission (see, e.g., figure 1.1). The delayed, longer-lasting afterglow emission appears to be produced by particles accelerated in the external shocks. Synchrotron radiation from shock-accelerated electrons is thought to be responsible for the majority of the observed gamma-ray emission. However, as discussed in section 1.1.1.5, a simple synchrotron radiation model cannot explain the highest-energy GRB photons observed to date (which are of particular interest in this thesis). Synchrotron Self-Compton (SSC)

models, in which synchrotron photons are inverse-Compton scattered to higher energies by the same population of electrons that produced them, are capable of producing such GeV-TeV scale photons [43]. However, a simple prompt SSC interpretation, with a single emission zone consisting of electrons accelerated in internal shocks, cannot easily explain the delay between the keV-MeV and GeV emission seen by Fermi. Multi-zone emission models, in which the prompt low-energy photons and delayed high-energy photons are produced in different regions, have had more success in explaining the data. The GeV scale photons seen by Fermi-LAT could, for example, be produced by the lower-energy prompt emission inverse-Compton scattering off of electrons accelerated in external shocks [44]. Another important distinction between single and multi-zone emission models is the presence of a high-energy cut-off. As discussed above, a high bulk Lorentz factor can increase the pair-production threshold beyond the energy range in which currently operational space-based gamma-ray detectors are sensitive. Such a pair-production cut-off could, however, be detected by ground-based detectors like HAWC, which have much larger effective areas at these high energies. In a single-zone emission model, the presence of the prompt keV-MeV photons would yield a higher density of pair-production targets for any GeV-TeV scale emission than provided in multi-zone models. This leads to a lower and more abrupt cut-off in single-zone models for a given bulk Lorentz factor [45]. The detection of a GeV-TeV scale cut-off could, therefore, be invaluable in determining the emission region of the highest-energy photons. GRB emission is most commonly explained through the leptonic processes discussed above. Uncertainty remains in the exact emission mechanisms at play, however, and a variety of other models have been proposed.
These include proton synchrotron radiation [46] and cascade radiation from photon pair production or photohadronic interactions [47].
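As a rough numerical illustration of the beaming argument above (a sketch, not taken from the text; the 1 MeV target-photon energy and Γ = 300 are assumed illustrative values), equation 1.2 can be evaluated for a head-on collision and for a collision angle of ∼1/Γ:

```python
import math

ME_C2 = 0.511e6  # electron rest energy in eV

def pair_production_threshold(e_target_ev, theta):
    """Minimum photon energy (eV) needed to pair produce with a target
    photon of energy e_target_ev colliding at angle theta (equation 1.2)."""
    return 2.0 * ME_C2**2 / (e_target_ev * (1.0 - math.cos(theta)))

# Head-on collision (theta = pi) with a 1 MeV prompt photon
head_on = pair_production_threshold(1e6, math.pi)

# In a jet with bulk Lorentz factor Gamma (illustrative value), emission is
# beamed into a cone of half-angle ~1/Gamma, so photon collision angles are
# of order theta ~ 1/Gamma
gamma_bulk = 300.0
beamed = pair_production_threshold(1e6, 1.0 / gamma_bulk)
```

For small θ, 1 − cos θ ≈ θ²/2, so the threshold is raised by a factor of roughly 4Γ² relative to the head-on case, which is how relativistic expansion lets GeV photons escape the fireball.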

1.1.2.2 EBL Attenuation

Very high-energy gamma rays from extra-galactic phenomena like GRBs are strongly affected not only by the dynamics and surrounding medium of their sources, but also by their interactions with the Extragalactic Background Light (EBL) photons they encounter on the way to our observatories. The EBL is a background level of electromagnetic radiation in the universe produced primarily by galactic emission from stars and active galactic nuclei. The term “EBL” often also encompasses the background radiation produced by primordial processes, most notably the cosmic microwave background. The intensity of the total electromagnetic background radiation present in the universe, from gamma-ray to radio wavelengths, is shown in figure 1.12.

Figure 1.12: EBL intensity as a function of wavelength, obtained from the compilation of measurements presented in [48].

As high-energy photons travel through the extra-galactic medium, they can pair produce with EBL photons. From equation 1.2 (with θ = π), we can see that the minimum energy of an EBL photon necessary for pair production with a 1 TeV gamma ray from, for example, a

GRB, is only 0.26 eV. This corresponds to a wavelength of 4.8 µm, meaning that all EBL photons at shorter wavelengths – including the optical and near-infrared band, where EBL intensity is relatively high (see figure 1.12) – exceed this threshold. Very high-energy photons from extra-galactic sources are therefore often absorbed by the EBL before reaching our observatories, with the probability of pair production with EBL photons increasing for more distant sources. This produces an EBL-induced cut-off in measured GRB spectra, with the cut-off energy lower for sources at higher redshifts. Figure 1.13 shows the attenuation factor, the fraction of gamma rays not absorbed by the

EBL, as a function of gamma-ray energy for three redshifts. By z ≈ 1, most of any ≳100 GeV emission and virtually all TeV scale emission is absorbed by the EBL, making sources at such distances difficult to detect for very high-energy gamma-ray observatories like HAWC.
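The 0.26 eV / 4.8 µm threshold quoted above follows directly from equation 1.2 with θ = π; a minimal sketch of the arithmetic (constants are standard values):

```python
import math

ME_C2 = 0.511e6   # electron rest energy, eV
HC = 1.2398e-6    # h*c in eV*m, so wavelength = HC / energy

def ebl_threshold_ev(e_gamma_ev):
    """Minimum EBL-photon energy for pair production with a gamma ray of
    energy e_gamma_ev in a head-on (theta = pi) collision, from eq. 1.2."""
    return 2.0 * ME_C2**2 / (e_gamma_ev * (1.0 - math.cos(math.pi)))

e_min = ebl_threshold_ev(1e12)   # threshold for a 1 TeV gamma ray, in eV
lam_m = HC / e_min               # corresponding EBL wavelength, in meters
```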

Figure 1.13: EBL attenuation factor (the fraction of gamma rays not absorbed by pair production with EBL photons) vs. energy in GeV for three redshifts (0.2, 0.6, and 1.0). Almost all TeV scale photons are absorbed by the EBL for sources at redshifts much larger than z = 0.2. The Dominguez 2011 EBL model [49] is used to calculate attenuation factors.

1.2 The Role of HAWC in GRB Science

With sensitivity above ∼100 GeV, the HAWC gamma-ray observatory can help characterize the highest-energy emission from GRBs. While IACTs have better sensitivity in this energy range, the small field of view of these pointed instruments requires that they slew to coordinates provided by an external alert when observing transient sources, making it very difficult to detect any prompt high-energy emission from GRBs. The ability of IACTs to detect transients is further limited by their low duty cycles; they can only operate at night under favorable atmospheric and low-moonlight conditions. The technique employed by HAWC of directly observing air-shower particles with water-Cherenkov detectors affords a much larger duty cycle and field of view, allowing for a significantly higher chance of catching emission from GRBs and other transients. HAWC is thus uniquely positioned to constrain and detect the highest-energy emission from GRBs. As discussed in section 1.1.2.1, these measurements, in particular the detection of a high-energy cut-off, could prove invaluable in disentangling the various models proposed for very high-energy GRB emission. In the following chapter, we will further examine the trade-offs between the detection mechanisms employed by HAWC and other water-Cherenkov experiments, IACTs, and space-based gamma-ray detectors. Chapter 3 will document the capabilities and instrumentation of the HAWC detector in much greater detail, and chapter 6 the sensitivity of the HAWC GRB analysis developed in this thesis. In particular, we will see that, while our limits on ≳100 GeV emission from GRBs are not strong enough to rule out any models, HAWC is indeed capable of detecting the brightest known bursts, and we anticipate meaningful results from the analysis developed here as we await a particularly energetic, low-redshift burst in HAWC’s field of view.

Chapter 2 | Detecting Gamma Rays

While the direct detection of gamma rays is possible for space-based instruments, these high-energy particles materialize upon interacting in Earth’s atmosphere, producing cascades of relativistic particles known as extensive air showers. Ground-based gamma-ray observatories must therefore analyze these air showers in order to draw conclusions about incident photons and the sources that produce them. To allow for an informed discussion of the various gamma-ray detection mechanisms currently in use and set the stage for the detailed overview of the HAWC detector presented in chapter 3, we must first explore the physics of extensive air showers.

2.1 Extensive Air Showers

Extensive Air Showers (EASs) can be produced by the interactions of cosmic rays, as well as gamma rays, in Earth’s atmosphere. While important cosmic-ray science can be and is carried out by ground-based extensive air-shower detectors like HAWC, cosmic-ray air showers, which outnumber their electromagnetic counterparts by several orders of magnitude, form the main background for the HAWC detector in its primary function as a gamma-ray observatory. The differences in the development and characteristics of gamma-ray and cosmic-ray air showers are explored separately in the following sections. As discussed further in chapter 3,

the physical distinctions outlined below allow for reliable discrimination between these two types of air showers on an event-by-event basis.

2.1.1 Gamma-Ray Air Showers

The development of gamma-ray induced extensive air showers is dominated by two physical processes. One is γ → e− + e+ pair production, which occurs when high-energy photons interact with the Coulomb fields of atoms in the atmosphere. The other is bremsstrahlung radiation from the electrons and positrons produced in these pair-production interactions. As the air shower develops, these relativistic electrons and positrons have sufficient energy that the photons produced by their bremsstrahlung radiation are themselves gamma rays capable of creating additional e± pairs, which in turn are energetic enough to produce yet more gamma rays, and so on. This process continues

until the energy of air-shower particles falls below the critical energy Ec (≈ 85 MeV in air) where collisional energy losses exceed the radiative losses. At this point, the maximum

number of particles in the shower, Nmax, has been reached, and the air shower begins to dissipate. While a detailed treatment of the development of gamma-ray air showers requires computer simulations, most of the relevant physics and important shower properties are accurately captured by the Heitler model [50], [51] for electromagnetic cascades. In this model, a splitting

length is defined as d = Xr ln(2), where Xr is the radiation length, interpreted as the mean free path for pair production in the case of the γ → e− + e+ interactions and as the distance over which an e+ or e− loses all but 1/e of its energy in the case of bremsstrahlung radiation. While this latter quantity is in fact smaller than the mean free path for pair production by a factor of 7/9, the Heitler model assumes that the pair-production splitting length, the distance a photon travels before undergoing pair production, is the same as the bremsstrahlung splitting length, the distance an e± travels before undergoing a “splitting” via e± → e± + γ. Each electron and positron is assumed to undergo a single splitting

where, as in the case of pair-production interactions, each of the two outgoing particles receives half the energy held by the electron, positron, or gamma ray before the splitting. This process is illustrated in figure 2.1.

Figure 2.1: A diagram illustrating the development of electromagnetic cascades from gamma-ray induced extensive air showers in the Heitler model [50]. Electrons and positrons from gamma-ray pair-production interactions emit additional gamma rays via bremsstrahlung radiation, which materialize into yet more e± pairs, and so on.

According to the Heitler model, then, gamma-ray air showers have N = 2^n particles after

undergoing n splittings and traveling a distance X = nd = n Xr ln(2). As discussed above, when the particle energy falls below Ec, the splittings cease and the maximum number of

particles Nmax has been reached. Defining the energy of the original, incident gamma ray as

Eγ, the depth at which the shower reaches its maximum size as Xmax, and the number of

splittings undergone before reaching N = Nmax as nmax, we then have (since all particles at each stage have equal energy):

Eγ = Ec Nmax = Ec 2^nmax    (2.1)

or nmax ln(2) = ln(Eγ/Ec), which gives:

Xmax = nmax Xr ln(2) = Xr ln(Eγ/Ec) .    (2.2)
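As a quick numerical check of equations 2.1 and 2.2 (a sketch; Ec ≈ 85 MeV is from the text, while the ∼37 g/cm² radiation length in air is an assumed standard value):

```python
import math

def heitler(e_gamma_ev, e_c_ev=85e6, x_r=37.0):
    """Heitler-model shower maximum for a primary photon of energy
    e_gamma_ev. e_c_ev: critical energy in air (~85 MeV); x_r: radiation
    length of air in g/cm^2 (~37 g/cm^2, an assumed standard value)."""
    n_max = e_gamma_ev / e_c_ev                  # eq. 2.1: Nmax = E/Ec
    x_max = x_r * math.log(e_gamma_ev / e_c_ev)  # eq. 2.2
    return n_max, x_max

n_max_1tev, x_max_1tev = heitler(1e12)      # 1 TeV primary
n_max_100tev, x_max_100tev = heitler(1e14)  # 100 TeV primary
```

The two assertions of the text are visible directly: raising Eγ by a factor of 100 multiplies Nmax by 100, while Xmax grows only by Xr ln(100), i.e., logarithmically.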

The Heitler model fails to account for several important features of electromagnetic showers, most importantly: electrons and positrons often produce multiple photons via bremsstrahlung radiation. This causes the model’s predictions for the ratio of gamma rays to e± particles and the value of Nmax (for a given Eγ) to be significantly off. However, two

important features predicted by equations 2.1 and 2.2, that Nmax is directly proportional to

Eγ and Xmax increases logarithmically with Eγ, are both correct. A more careful treatment of shower development by Rossi and Greisen (employing “approximation B”) [52] accurately predicts the number of particles in an electromagnetic shower as a function of atmospheric depth, plotted under this formulation in figure 2.2 for incident gamma rays of 100 GeV, 1 TeV, 10 TeV, and 100 TeV, respectively. The depth of the HAWC observatory as seen by vertical air showers is shown as well.
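Greisen’s closed-form parametrization of the electromagnetic longitudinal profile offers a compact way to reproduce the qualitative behavior of curves like those in figure 2.2 (a sketch using the standard Greisen formula, not necessarily the exact Rossi–Greisen formulation plotted there):

```python
import math

def greisen_size(t, e_gamma_ev, e_c_ev=85e6):
    """Greisen parametrization of electromagnetic shower size at depth t
    (in radiation lengths) for a primary photon of energy e_gamma_ev.
    The shower age s equals 1 at shower maximum."""
    beta0 = math.log(e_gamma_ev / e_c_ev)
    s = 3.0 * t / (t + 2.0 * beta0)
    return 0.31 / math.sqrt(beta0) * math.exp(t * (1.0 - 1.5 * math.log(s)))

e = 1e12                     # 1 TeV primary
t_max = math.log(e / 85e6)   # depth of maximum in radiation lengths (eq. 2.2)
n_at_max = greisen_size(t_max, e)
```

Note that s = 1 exactly when t = ln(Eγ/Ec), so the parametrization reproduces the Xmax of equation 2.2 by construction.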

Figure 2.2: The number of electromagnetic particles in extensive air showers produced by 100 GeV, 1 TeV, 10 TeV, and 100 TeV gamma rays, respectively, as a function of atmospheric depth (expressed in radiation lengths), adapted from [53]. The HAWC observatory is at an atmospheric depth of ∼16.8 radiation lengths for vertical showers, close to shower maximum for very high-energy gamma rays.

Very few particles from 100 GeV (and below) vertical air showers reach HAWC’s altitude, which is, however, very close to the shower maximum for more energetic primary gamma rays. Note though that not all gamma rays interact at the very top of Earth’s atmosphere, and

∼100 GeV gammas that undergo their first pair-production interaction at a larger atmospheric depth will produce showers that contain more particles (compared to gamma rays of the same energy that interact higher up in the atmosphere) upon reaching HAWC’s altitude. The size of an air shower at a given altitude also depends on its zenith angle; more inclined showers must, of course, travel through more atmosphere before reaching the detector. Air-shower zenith angles can, however, be measured, whereas the depth of the first interaction cannot be determined by the direct detection of shower particles in extensive air-shower arrays like HAWC. The spatial distribution of these shower particles as they propagate through the atmosphere is shown in figure 2.3. In gamma-ray showers, the transverse momentum of shower particles is introduced primarily through Coulomb scatterings. To conserve momentum, however, this must occur symmetrically about the direction of the incident primary particle (in this case a gamma ray), referred to in air-shower physics as the shower axis. As new particles are formed by pair-production and bremsstrahlung radiation, they form a disk which, to first

order, can be described as a ∼1 – 3 meter thick plane. However, as all shower particles travel at roughly the speed of light, off-axis particles lag behind those closer to the shower “core” (the point where the shower axis and particle disk intersect and the density of shower particles is at its maximum), introducing an approximately spherical curvature to the particle disk. In gamma-ray showers, virtually all charged particles detectable by air-shower arrays like HAWC are electrons and positrons, which, of course, have the same mass. This results in a smooth distribution of particles about the shower axis, with the energy in the shower decreasing smoothly with distance from the shower core.

Figure 2.3: A diagram illustrating the disk formed by extensive air showers as they develop and propagate through the atmosphere, adapted from an image produced by Z. Hampbel-Arias (https://zhampel.github.io/research/). The disk has a width of ∼1 - 3 meters near the shower axis, propagates through the atmosphere at nearly the speed of light, and is roughly planar (but does display a mild curvature, which is measured empirically for HAWC air showers).

2.1.2 Cosmic-Ray Air Showers

When high-energy cosmic rays enter Earth’s atmosphere they interact with and fragment atmospheric nuclei, producing charged and neutral pions. (Charged pions make up 2/3 of the total produced.) While other secondary particles, most notably kaons, are produced in the first interaction between an incident high-energy cosmic ray and an atmospheric nucleus (and subsequent interactions between pions and other atmospheric nuclei), the development of cosmic-ray air showers is dominated by pions. Neutral pions immediately decay into gamma rays, producing an electromagnetic cascade as described in the previous section. Charged pions develop the hadronic component of the air shower as they fragment additional atmospheric nuclei, creating yet more charged and neutral pions which likewise further develop the hadronic and electromagnetic cascades. When the energy of newly created charged pions falls below the threshold for further pion creation (which depends on the energy of the original,

incident cosmic ray [50]), remaining charged pions decay (predominantly) into muons (via

π+ → µ+ + νµ and π− → µ− + ν̄µ) and further development of the hadronic component of the shower ceases. Due to time dilation, almost all of these relativistic muons can reach Earth’s surface before decaying. The process outlined above for the development of cosmic-ray air showers is illustrated in figure 2.4.
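The time-dilation argument can be made quantitative (a sketch; the 10 GeV muon energy is an illustrative value, and the constants are standard):

```python
import math

MU_C2 = 105.66e6   # muon rest energy, eV
TAU_MU = 2.197e-6  # muon proper lifetime, s
C = 2.998e8        # speed of light, m/s

def muon_decay_length(e_mu_ev):
    """Mean decay length of a muon of total energy e_mu_ev,
    including the relativistic gamma factor (time dilation)."""
    gamma = e_mu_ev / MU_C2
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma * beta * C * TAU_MU

# Without time dilation a muon travels only c*tau ~ 660 m on average;
# a 10 GeV muon instead survives tens of kilometers, far more than the
# ~10 km from its production altitude to the ground
l_10gev = muon_decay_length(10e9)
```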

Figure 2.4: A diagram illustrating the development of hadronic extensive air showers [54]. Shower development is dominated by pions produced in interactions with atmospheric nuclei. Neutral pions decay into gamma-ray pairs, producing electromagnetic cascades. Charged pions produce additional pions in interactions with atmospheric nuclei, developing the hadronic component of the air shower.

With nuclear interactions that produce multiple secondary particles and pion decay processes, the physics involved in hadronic air showers is significantly more complex than in the purely electromagnetic cascades induced by gamma-ray primaries. While models are available (see, for instance, Matthews’ extension of the Heitler model [50]), computer simulations are typically employed in the analysis of hadronic showers. Figure 2.5, produced from one such simulation [51], shows the longitudinal shower profile for a high-energy proton primary, illustrating the relative contribution of hadrons, muons, and electromagnetic particles to cosmic-ray air showers.

Figure 2.5: The longitudinal shower profile of a vertical 10^19 eV proton [51]. The simulated numbers of hadrons, muons, electrons (and positrons), and gamma rays are shown as a function of atmospheric depth and altitude.

While the spatial distribution of shower particles schematically illustrated in figure 2.3 remains qualitatively accurate in the case of cosmic-ray air showers, the hadronic interactions responsible for shower development here result in a significantly different lateral energy distribution than seen in gamma-ray air showers. The production of π0 particles with a significant amount of transverse momentum and the electromagnetic cascades they produce in their decay processes transfer clusters of electromagnetic energy away from the shower axis. The muons produced in the final stages of the development of the hadronic component of these showers can also deposit large amounts of energy at a distance from the shower core. This distinguishes the lateral profile of cosmic-ray air showers from the smooth, uniform decline in energy (with distance from the shower core) observed in gamma-ray air showers.

As we will see in chapter 3, these differences can be exploited to differentiate gamma-ray and cosmic-ray air showers in HAWC data and design cuts to remove the cosmic-ray background.

2.2 Cherenkov Radiation

Ground-based gamma-ray observatories typically detect extensive air showers through the Cherenkov radiation emitted by charged shower particles. Cherenkov radiation is an electromagnetic shock wave analogous to the sonic booms produced when objects (e.g. supersonic aircraft) travel faster than the speed of sound. Cherenkov radiation is produced by electrically charged particles when traveling through a polarizable material at a velocity that exceeds the phase velocity of light in that material. When these high-velocity charged particles excite the atoms and molecules in their path, that electromagnetic disturbance is radiated away at a speed of c/n, where n is the refractive index of the material and c, as usual, is the speed of light in a vacuum. If the charged particle is traveling at a speed v = βc > c/n, it will outrun the emitted electromagnetic waves, which will therefore form a coherent shock wave propagating outward in a cone at an angle θ from the particle trajectory. This is illustrated in figure 2.6. If an electromagnetic wave produced in the path of the charged particle is emitted at time t = 0, then at time t, the charged particle will have traveled a distance βct and the electromagnetic wave front a distance (c/n)t. The angle θ that the Cherenkov shock wave makes with respect to the particle trajectory is therefore clearly given by (see figure 2.6):

cos(θ) = (c/n)t / (βct) = 1 / (βn) .    (2.3)
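Applying equation 2.3 gives the familiar numbers for water, the medium used in HAWC’s detectors (a sketch; n ≈ 1.33 for water and the electron rest energy are standard values):

```python
import math

def cherenkov_angle_deg(beta, n):
    """Cherenkov emission angle from equation 2.3; requires beta > 1/n."""
    return math.degrees(math.acos(1.0 / (beta * n)))

def cherenkov_threshold_ke_ev(n, m_c2_ev):
    """Kinetic-energy threshold for Cherenkov emission, i.e. the kinetic
    energy at which a particle of rest energy m_c2_ev reaches beta = 1/n."""
    gamma = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return (gamma - 1.0) * m_c2_ev

N_WATER = 1.33
angle = cherenkov_angle_deg(0.9999, N_WATER)        # ultra-relativistic particle
ke_e = cherenkov_threshold_ke_ev(N_WATER, 0.511e6)  # electron threshold in water
```

For β ≈ 1 in water, the emission angle is about 41°, and an electron needs roughly a quarter of an MeV of kinetic energy before it radiates at all.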

Figure 2.6: A diagram depicting Cherenkov radiation: the coherent, conical, electromagnetic shock wave produced by charged particles traveling in a polarizable material at a velocity v = βc > c/n, where n is the index of refraction of the material and c the speed of light in a vacuum. Blue arrows indicate the direction of the emitted Cherenkov radiation. Image provided by Arpad Horvath (https://en.wikipedia.org/wiki/Cherenkov_radiation) under the CC BY-SA 2.5 license.

2.3 Gamma-Ray Detectors

With a review of the physics relevant to high-energy gamma-ray astronomy complete, we will now turn our attention to the gamma-ray detection mechanisms currently in use. We consider three classes of gamma-ray detectors: direct space-based detectors, imaging atmospheric Cherenkov telescopes, and extensive air-shower arrays. The following sections will examine the strengths and weaknesses of each of these approaches in turn.

2.3.1 Direct Space-Based Detectors

To detect gamma rays directly before they materialize into electromagnetic cascades in Earth’s atmosphere, space-based instruments are required. Even these, however, do not operate like conventional telescopes. Gamma rays are too energetic to be reflected and focused as is done at longer wavelengths. Instead, we must study their interactions in matter. The most important of these interactions are the photoelectric effect, Compton scattering, and pair production. The photoelectric effect, in which the energy of a gamma ray is used

to overcome the binding energy of a then-liberated electron, is relevant below ∼1 MeV. At higher energies, gamma rays interact primarily via Compton scattering and pair production. As discussed in previous sections, Compton scattering refers to scattering between photons and electrons, and in pair-production interactions (the fate of the highest-energy gamma rays) the photon’s energy is used in the creation of electron-positron pairs. All of these interactions result in the movement of electrons, which can be studied in, for example, scintillator detectors to determine the energy and direction of the incident gamma ray. Electrons liberated via the photoelectric effect are used to study astrophysical gamma rays in Fermi-GBM [20]. COMPTEL [55] is a notable example of a Compton telescope designed to detect the recoil of electrons involved in Compton scattering events. Pair creation telescopes include Fermi-LAT [21] and AGILE [56]. An alternative detection technique known as coded mask imaging [57], employed by instruments such as Swift-BAT [16] and INTEGRAL [58], studies X-rays and low-energy gamma rays by analyzing the shadow created when they pass through a complex aperture with a known pattern of transparent and opaque regions. These space-based gamma-ray telescopes are capable of high duty cycles (near-continuous operation) and large fields of view. This makes them ideal for the study of gamma-ray

transients below ∼100 GeV. Detecting the lower photon fluxes observable from known astrophysical sources at higher energies requires instruments with larger effective areas, which are necessarily heavier and therefore more difficult to launch into space. With currently available technology, ground-based instruments are required for gamma-ray

astronomy above ∼100 GeV.

2.3.2 Imaging Atmospheric Cherenkov Telescopes

One way to detect the extensive air showers produced by gamma rays interacting in Earth’s atmosphere is through the Cherenkov radiation emitted by the charged shower particles in air. This method, employed by IACTs, is illustrated in figure 2.7. The Cherenkov

light cone emitted by the air shower is collected and focused by one or more telescopes onto a camera, producing an image of the shower. Photomultiplier tubes are usually used to collect light focused by the IACT mirrors as the CCD cameras typically used in astronomy are not fast enough to record these brief flashes of Cherenkov radiation. The High Energy Stereoscopic System [25], Major Atmospheric Gamma Imaging Cherenkov telescope [26], Very Energetic Radiation Imaging Telescope Array System (VERITAS), and First G-APD Cherenkov Telescope (FACT) [60] are the most sensitive IACTs currently in operation. The Cherenkov Telescope Array (CTA) [61], when complete, promises a significant boost in sensitivity over these instruments.

Figure 2.7: A diagram illustrating the IACT detection technique. The Cherenkov light cone produced by relativistic charged shower particles has an opening angle of ∼1 degree, creating a “light pool” ∼120 m in diameter for gamma rays interacting ∼10 km above the telescopes employed to detect the Cherenkov radiation. Image from https://www.isdc.unige.ch/cta/outreach/data

The ability of IACTs to image the longitudinal development of air showers allows for reliable discrimination between gamma-ray and cosmic-ray air showers (the main source of background) and accurate energy measurements. These capabilities make IACTs the most sensitive instruments currently available for the detection of tens of GeV to TeV scale gamma rays. This method does, however, have drawbacks. Firstly, as pointed instruments, IACTs

can only view a small portion of the sky at any given time and thus have a limited field of view, typically a few degrees in diameter. Secondly, as optical instruments, their operations are restricted to moonless nights with clear weather, limiting their duty cycles to roughly 10%. This makes it difficult for IACTs to detect new sources, study extended sources, and catch short-timescale gamma-ray transient events like GRBs.

2.3.3 Extensive Air-Shower Arrays

Extensive air-shower arrays work by directly detecting air-shower particles with an array of particle detectors. By analyzing the time at which particles at different points along the shower front pass through the array, the orientation of this shower plane can be reconstructed and the direction of the incident gamma ray determined. This process is discussed in detail in the context of the HAWC detector in the following chapter, along with techniques for reducing the cosmic-ray background. Early EAS arrays used scintillators to detect charged shower particles. The most sensitive instrument of this type is the Tibet ASγ detector [62]. However, resource requirements make it challenging to employ scintillator detectors to densely sample the tens of thousands of square meters one must instrument to achieve the large effective area necessary to detect GeV to TeV scale gamma rays. Densely sampling the air shower is necessary to lower the detector’s energy threshold and reliably differentiate between gamma-ray and cosmic-ray air showers. The water-Cherenkov technique was therefore developed for more recent EAS arrays such as Milagro [63] and its successor experiment, HAWC (discussed in detail in the following chapter). In this method, air-shower particle detectors consist of tanks or pools of water instrumented with photomultiplier tubes that detect the Cherenkov radiation emitted by charged air-shower particles traveling through the water. Unlike IACTs, which can image the entire air shower, EAS arrays can only measure the imprint they leave on the ground. EAS arrays therefore have worse angular resolution, energy resolution, and cosmic-ray background rejection than IACTs. EAS arrays also have higher

energy thresholds, since shower particles are attenuated in Earth’s atmosphere (see figure 2.2) to a significantly greater extent than the Cherenkov photons detected by IACTs. The advantage of EAS arrays lies in their high duty cycles and large fields of view. Moonlight, sunlight, and, to a large extent, bad weather are not obstacles to the operations of EAS arrays, allowing them to carry out near-continuous observations. HAWC, for example, has a duty

cycle of ∼95%. And as non-pointed instruments, they can observe the entire overhead sky,

allowing for fields of view of ∼2 sr. EAS arrays can therefore survey the entire overhead sky for new or extended sources and catch the prompt emission of transient events that are difficult to detect with IACTs. EAS arrays and IACTs are thus complementary approaches in the field of gamma-ray astronomy.
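The plane-front reconstruction idea described in this chapter – fitting the relative arrival times of shower particles across the array to recover the shower direction – can be illustrated with a toy least-squares fit (a sketch under simplifying assumptions: noiseless hits, no shower-front curvature or sampling corrections; this is not HAWC’s actual reconstruction algorithm, and all names here are hypothetical):

```python
import math

C = 0.2998  # speed of light in m/ns

def fit_shower_plane(hits):
    """Fit a plane front c*t = a*x + b*y + t0 to (x, y, t) hits by linear
    least squares and return the (zenith, azimuth) of the shower axis in
    degrees. Toy version: a real array also corrects for front curvature."""
    sxx = sxy = syy = sx = sy = n = 0.0
    sxt = syt = st = 0.0
    for x, y, t in hits:
        ct = C * t
        sxx += x * x
        sxy += x * y
        syy += y * y
        sx += x
        sy += y
        n += 1.0
        sxt += x * ct
        syt += y * ct
        st += ct

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Solve the 3x3 normal equations for (a, b, t0) with Cramer's rule
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [sxt, syt, st]
    d = det3(A)
    params = []
    for i in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][i] = b[r]
        params.append(det3(M) / d)
    a, bb, _t0 = params
    zenith = math.asin(min(1.0, math.hypot(a, bb)))
    azimuth = math.atan2(bb, a)
    return math.degrees(zenith), math.degrees(azimuth)

# Synthetic, noiseless shower plane arriving from zenith 20 deg, azimuth 0:
# the time delay across the ground is c*t = sin(zenith) * x
true_zen = math.radians(20.0)
hits = [(x, y, (math.sin(true_zen) * x) / C)
        for x in (-50.0, 0.0, 50.0) for y in (-50.0, 0.0, 50.0)]
zen_deg, az_deg = fit_shower_plane(hits)
```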

Chapter 3 | The High Altitude Water Cherenkov Observatory

The HAWC observatory is an extensive air-shower array located in Puebla, Mexico at an altitude of 4100m above sea level and a latitude of 19◦ north. This high altitude places HAWC close to shower maximum for TeV scale gamma rays (see figure 2.2) and provides the

20,000 m² main array with sensitivity in the energy range of ∼100 GeV to ∼100 TeV. With an

instantaneous field of view of ∼2 sr, HAWC, at 19◦ N, can monitor all gamma-ray sources in a wide declination band of roughly -26◦ to 64◦ every day. With this large field of view and a

duty cycle of ∼95%, HAWC is an ideal gamma-ray survey instrument and well suited for the task of capturing very high-energy emission from GRBs and other transient sources. The HAWC main array consists of 300 Water-Cherenkov Detectors (WCDs) and was inaugurated in March 2015. The WCDs used for HAWC are large water tanks, 7.3 m in

diameter and 5 m tall, filled to a height of ∼4.5 m with 200,000 L of purified water and instrumented with four PhotoMultiplier Tubes (PMTs). These PMTs detect charged air-shower particles by capturing the Cherenkov radiation they emit when traveling through the water (hence the name “water-Cherenkov detector”). Figure 3.1 shows a photograph of HAWC, while figure 3.2 provides a diagram of the WCDs used in the main array and illustrates their layout.
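As a quick consistency check of the quoted tank geometry (a sketch; the stated 200,000 L is presumably a rounded nominal value):

```python
import math

# Nominal WCD geometry from the text: 7.3 m diameter, filled to ~4.5 m
radius_m = 7.3 / 2.0
fill_height_m = 4.5
volume_l = math.pi * radius_m**2 * fill_height_m * 1000.0  # cylinder volume in liters
```

The cylinder volume comes out to roughly 1.9 × 10^5 L, consistent with the quoted 200,000 L to within the rounding of the stated dimensions.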

Figure 3.1: A drone photograph of the HAWC detector (source: HAWC collaboration). The 300 large, densely spaced WCDs that make up the HAWC main array are at the center of the picture. Also visible are the 350 smaller outrigger tanks constructed later (in 2018).

An outrigger array of 350 smaller, less densely spaced WCDs is also visible in figure 3.1. However, construction of this outrigger array was not completed until August of 2018, after the occurrence of all GRBs analyzed in this thesis. Furthermore, as the dense sampling of air showers achieved in the main array is necessary to reconstruct small, sub-TeV air showers (the energy band of primary interest for the detection of GRBs), the outrigger array does not significantly improve HAWC’s effective area below ∼1 TeV and is not expected to substantially improve the sensitivity of future GRB analyses. We therefore do not discuss the outriggers further in this thesis.

In the following sections, we will explore the design and operations of HAWC in much greater detail. The PMTs in HAWC’s WCDs and other detector electronics are discussed in section 3.1. The procedure used to calibrate the measurements provided by these PMTs is examined in section 3.2, while section 3.3 outlines the algorithms employed to reconstruct an air-shower plane from the calibrated measurements and determine the direction of the original, incident gamma ray. The event-size binning used in HAWC analyses is described in section 3.4, and the procedures used to remove the cosmic-ray background are presented in section 3.5. Lastly, the sensitivity of the HAWC detector is compared to that of other currently operational gamma-ray observatories in section 3.6.

Figure 3.2: The layout of the HAWC main array (left) and a diagram of a single WCD (right) [64]. Open circles in the HAWC layout schematic show the locations of individual WCDs; the blue dots within show the coordinates of individual PMTs. The WCD diagram illustrates the trajectories of a shower particle entering a tank and the Cherenkov photons it emits, as well as the locations of the four PMTs present in each tank. A human figure is included to demonstrate the scale.

3.1 Instrumentation, Electronics, and Online Processing

Photomultiplier tubes detect light by converting photons into an electric current. Every WCD in the main HAWC array is instrumented with three 8-inch Hamamatsu R5912 PMTs (re-used from the Milagro experiment [63]) at the bottom of the tanks. These are spaced 3.2 m apart in an equilateral triangle centered around a fourth PMT, a high-quantum-efficiency 10-inch Hamamatsu R7081, in the middle of the tank (see figure 3.2, right). The inner surface of the evacuated glass chamber of these PMTs – see figure 3.3 – is lined with an alkali or bialkali metal that forms the PMT photocathode. When Cherenkov photons strike the photocathode, their energy is absorbed in the liberation of one or more electrons (the photoelectric effect). These electron(s) then accelerate towards a high-voltage dynode, liberating additional electrons upon collision. These are accelerated through a chain of additional dynodes, with the number of electrons increasing at each step. After striking the final dynode, the electrons are collected at the PMT anode, producing a measurable current pulse. In HAWC jargon, these current pulses are called “hits.” The calibration procedure used to convert a measured PMT pulse into the “charge” of a hit, the number of photoelectrons emitted from a PMT’s photocathode (and the quantity used in HAWC’s air-shower reconstruction as a proxy for the amount of shower energy deposited in a tank), is discussed in the following section.
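The electron multiplication described above can be illustrated with a toy calculation: if each dynode stage multiplies the electron count by a (hypothetical) secondary-emission factor δ, an n-stage chain yields a total gain of δⁿ.

```python
# Toy illustration of dynode-chain multiplication; delta and n_dynodes are
# hypothetical values, not measured HAWC PMT parameters.
def pmt_gain(delta=4.0, n_dynodes=10):
    """Overall multiplication factor of an n-stage dynode chain."""
    return delta ** n_dynodes
```

With δ = 4 and 10 stages, for example, the gain is 4¹⁰ ≈ 10⁶, the order of magnitude at which single photoelectrons become measurable current pulses.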

Figure 3.3: A photograph of one of the 10” Hamamatsu R7081 (left) and 8” Hamamatsu R5912 (right) PMTs used in the HAWC main array (source: HAWC collaboration).

HAWC’s Data AcQuisition (DAQ) and online processing systems, the electronics and software that convert the PMT current pulses into reconstructed air-shower events on-site in real time, are schematically illustrated in figure 3.4 and documented extensively in [65]. A summary of these components is provided below.

Figure 3.4: A diagram illustrating the data acquisition and online processing systems used in HAWC [65]. All electronics from the front-end boards on are housed in the “counting house,” the structure visible in the photograph of HAWC (figure 3.1) in the center of the main array (and as the empty, un-instrumented gap in the WCD layout shown in figure 3.2, left). A description of each component in this schematic is provided in the text.

PMT current pulses travel through ∼186 m long RG-59 cables to a set of Front-End Boards (FEBs), each of which has inputs for 16 PMTs. These FEBs, also re-used from Milagro, are housed in Versa Module Europa (VME) cages and have separate boards for the analog and digital components (which are connected through the VME backplanes). The analog boards, in addition to providing the voltage for the PMTs, process and amplify the PMT pulses. These signals are then digitized by CAEN V1190A-2eSST Time to Digital Converters (TDCs) by analyzing the times at which the voltage of PMT pulses passes two discriminator thresholds (corresponding to roughly 1/4 and 4 photoelectrons, respectively, i.e., 1/4 and 4 electrons emitted from a PMT’s photocathode). The calibration procedures used to convert these timing signatures into the charge and time of a PMT hit are discussed in section 3.2. Each TDC module has 128 input channels (for 128 PMTs). The ten TDCs used for the main array are synchronized with a 40 MHz reference clock signal provided by HAWC’s custom GPS Timing and Control (GTC) hardware [66].

Each TDC is paired with a General Electric XVB602-13240010 Single-Board Computer (SBC) running CentOS with dedicated readout software. Also installed in the VME backplanes, the SBCs read out data from the TDCs at a rate of ∼50 MB/s to a custom “Data Queue” software application. An on-site computer farm receives data from this application via a gigabit ethernet switch at a combined rate (with all ten TDC–SBC pairs) of ∼500 MB/s.

The “online” processing system running on the above-mentioned computer farm applies the air-shower trigger criteria, performs real-time reconstruction of air-shower events, writes this data to disk, and runs additional automated, real-time analyses. (All triggered air-shower data is also transferred off-site to perform a consistent reconstruction across large data sets – the reconstruction software is updated periodically – and to perform more in-depth analyses.) We will now turn our attention to the software components involved in the online processing.

The ability of the TDCs to operate without dead time allows HAWC to employ a software, rather than a hardware, air-shower trigger. This trigger condition is evaluated by “Reconstruction Client” applications, which also, as the name suggests, perform the air-shower reconstruction (the details of which are discussed in section 3.3) on events that pass the trigger. The use of a software trigger allows for tweaking of the precise trigger condition; for the period of time during which the data analyzed in this thesis was collected, the trigger condition required at least 28 hits to arrive within 150 ns (which yields a 25 kHz event rate).
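As a concrete illustration, the trigger condition quoted above (at least 28 hits within 150 ns) can be checked with a simple sliding-window scan over hit times. This is only a sketch of the logic, not the actual HAWC trigger code, and assumes nothing beyond a list of hit times in nanoseconds.

```python
# Sliding-window sketch of a multiplicity trigger: fire if any 150 ns window
# contains at least 28 hits. Thresholds are the values quoted in the text.
def passes_trigger(hit_times_ns, n_min=28, window_ns=150.0):
    times = sorted(hit_times_ns)
    lo = 0
    for hi, t in enumerate(times):
        # Shrink the window from the left until it spans at most window_ns.
        while t - times[lo] > window_ns:
            lo += 1
        if hi - lo + 1 >= n_min:
            return True
    return False
```

Because the window slides over sorted times, the scan is O(n log n) in the number of hits and needs no fixed time binning.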

After applying the trigger, the ∼500 MB/s data flow received from the SBCs is filtered down to a rate of ∼20 MB/s. However, a “lookback cache system” can write the full 500 MB/s data stream to a temporary disk cache for ∼24 hr of storage in case a request is submitted to retain sub-threshold data for an interesting event.

An “Event Sorter” application receives data from the Reconstruction Clients, writes it to disk, and passes it on to additional “Analysis Client” applications. These perform real-time analyses that, for example, search the data for transient events such as GRBs and Active Galactic Nuclei (AGN) flares. This allows HAWC to send out real-time alerts and engage the broader astrophysics community with prompt communication via, for example, the GRB Coordinates Network (GCN) [67] and the Astrophysical Multimessenger Observatory Network [36]. An “Experiment Control” application oversees the communication with these networks, monitors all components of the experiment, and starts and stops data collection when necessary. The “Monitoring” component in figure 3.4 refers to custom Python software that receives monitoring data (e.g., PMT count rates) from Experiment Control or directly from other detector components, transmits this data to remote servers, and updates monitoring plots on private HAWC webpages in real time. ZeroMQ [68] is used to carry out all such data transfers between software applications in HAWC’s online processing system.

3.2 Calibration

The PMTs in HAWC’s main array are calibrated with 532 nm laser pulses sent at a frequency of 200 Hz, with pulse energies and widths of 45 µJ and 300 ps, respectively. A system of optical splitters and fiber optic cables and switches [69] distributes the laser light to the 300 WCDs, each of which is equipped with a diffuser to provide light to all four of its PMTs. Neutral density filters are employed to attenuate the laser and alter light levels across seven orders of magnitude.

3.2.1 Charge Calibration

As discussed in the previous section, the quantities available from a PMT measurement after digitization of the signal are the times at which the PMT voltage pulse crosses two discriminator thresholds (corresponding to the signals produced by ∼1/4 and 4 photoelectrons). A signal must be large enough to cross the low threshold to be recorded, but not all hits (PMT pulses) pass the high threshold. The size of a hit is measured as the time it spends over these two thresholds (Time over Threshold, or ToT), with those times denoted “low ToT” and “high ToT” (see figure 3.5).

Figure 3.5: A diagram illustrating the Time over Threshold (ToT) measurement (adapted from [70]). Low ToT refers to the amount of time a PMT pulse spends above a low voltage threshold and high ToT to the amount of time spent over a high threshold.

The goal of the charge calibration procedure is to convert these ToT measurements into “charge,” the number of electrons emitted by the PMT photocathode (PhotoElectrons, or PEs in HAWC jargon). This is accomplished with an “occupancy” calculation [71] that relates the fraction of laser emission events observed by a PMT (the occupancy, η) to the mean number of PEs observed (⟨nPE⟩):

⟨nPE⟩ = −ln(1 − η) .    (3.1)

While the errors associated with calculating charge from occupancy are too large for this method to work above ∼2 PEs, ⟨nPE⟩ can be accurately calculated for larger hits by extrapolating the linear relationship between ⟨nPE⟩ and the light intensity at the photocathode (calculated via the occupancy method below 2 PEs) [69].

Thus, by varying the light intensity arriving at the PMTs with the neutral density filters installed in the laser system, measuring the ToT for hits observed at each intensity, and using the above methods to calculate ⟨nPE⟩, a calibration curve relating charge (in PEs) to ToT can be constructed for each PMT in the HAWC array. An example charge calibration curve for one of HAWC’s PMTs is shown in figure 3.6. The two curves in this plot are constructed separately from the low ToT and high ToT measurements. The charge calculated from high ToT is always used if possible; low ToT is used only for small hits that do not pass the high ToT threshold.
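Equation 3.1 follows from Poisson statistics: if the number of PEs per laser pulse is Poisson distributed with mean ⟨nPE⟩, the probability of seeing no signal is e^(−⟨nPE⟩), so the occupancy is η = 1 − e^(−⟨nPE⟩). Inverting this gives the one-line calculation below.

```python
import math

# Occupancy method (eq. 3.1): invert eta = 1 - exp(-<nPE>) for the mean
# number of photoelectrons per laser pulse.
def mean_npe(occupancy):
    if not 0.0 <= occupancy < 1.0:
        raise ValueError("occupancy must be in [0, 1)")
    return -math.log(1.0 - occupancy)
```

For example, a PMT that records a hit in ∼39% of laser pulses (η = 1 − e^(−0.5)) corresponds to ⟨nPE⟩ = 0.5, comfortably below the ∼2 PE limit of the method.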

Figure 3.6: Example PMT charge calibration curves [69]. Plotted is the average number of photoelectrons (calculated via the occupancy calculation described in the text) as a function of ToT (as obtained by the laser calibration system). An empirically derived six parameter broken power-law function is fit to each curve. As discussed in the text, the high ToT measurement is always used when available. PMT signals that produce a low ToT above the range of the low ToT fit function always rise above the high ToT threshold; the low ToT data past the fit function (starting just before the “shoulder” at ∼3250 ToT units) is therefore not used in assigning hit charges. The ToT units are set by the precision of the TDC modules, with 1 ToT unit equal to 100/1024 ns.

3.2.2 Timing Calibration

Two factors must be taken into account to convert the time stamps provided by the TDC for a hit’s threshold crossing times to the true hit time. First, the differences in cable lengths and the geometry of PMTs in the tanks (and the resulting differences in the time it takes for PMT signals to reach the TDC modules) must be accounted for. Second, a “slewing” correction must be applied. This accounts for the dependence of the precise shape of the waveforms of PMT hits (illustrated in figure 3.5) on the size of the pulse; higher-charge (and higher-ToT) pulses cross the ToT thresholds earlier.

The difference between the true hit time and the time at which the waveform crosses the first threshold (the slewing time, ∆tstart) can be measured for each PMT with the laser calibration system by comparing the known time of laser pulses with the measured threshold crossing times recorded by the TDC modules. The slewing time is well described by

∆tstart = e^((−ToT − p0)/p1) − e^((ToT − p2)/p3) + p4 − p5 · ToT    (3.2)

where p0 – p5 are constants fit separately for each PMT. As in the charge calibration procedure, this timing correction is obtained by using the neutral density filters to vary the intensity of laser light in the WCDs. By measuring the slewing time and ToT of hits observed for each attenuation factor applied by the filters, slewing curves can be constructed; fitting equation 3.2 to these curves provides the slewing correction for any ToT. This procedure is carried out separately for each PMT for both the low ToT and high ToT measurements. Example slewing curves are shown in figure 3.7.
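Numerically, the correction is cheap to evaluate once the constants are known. The sketch below assumes the functional form ∆tstart = e^((−ToT − p0)/p1) − e^((ToT − p2)/p3) + p4 − p5 · ToT of equation 3.2; the constants passed in are hypothetical placeholders, since the real p0 – p5 are fit per PMT (and separately for low and high ToT) from laser calibration data.

```python
import math

# Slewing correction of eq. 3.2 for a single PMT; p = (p0, ..., p5) are
# hypothetical fit constants, not real HAWC calibration values.
def slewing_time(tot, p):
    p0, p1, p2, p3, p4, p5 = p
    return (math.exp((-tot - p0) / p1)
            - math.exp((tot - p2) / p3)
            + p4 - p5 * tot)
```

The corrected hit time is then the measured threshold-crossing time minus the slewing time for the measured ToT.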

Figure 3.7: Example PMT slewing curves (slewing time vs. ToT) for four PMTs (labeled H7A – H7D) [69]. The two slewing curves obtained for each PMT from the low ToT and high ToT measurements are shown.

A final timing correction to remove any remaining offsets between PMTs is then applied. This correction was determined by reconstructing a large number of air-shower events and searching for systematic offsets in individual PMTs between measured hit times and the time the reconstructed shower front was expected to reach that point in the array. The time residuals so obtained were then applied as a correction factor for the corresponding PMTs. This process was repeated iteratively until a timing accuracy of 0.1 ns was reached.

3.3 Air-Shower Reconstruction

This section provides an overview of the algorithms used to reconstruct air showers in HAWC, that is, to estimate the direction of incident primary particles from the charges and times of air-shower hits (as determined by the calibration procedures outlined in the previous section). Here, we present the standard (as of this writing) reconstruction algorithms, first documented in [72] and used in the majority of HAWC publications to date. The enhancements to the standard reconstruction that we developed to lower HAWC’s energy threshold and improve the instrument’s sensitivity to GRBs (and other sub-TeV phenomena) are documented in chapter 5. These improvements are slated for inclusion in the next reconstruction of the full HAWC data set.

3.3.1 Hit Selection and Charge Scaling

Several hit selection criteria are applied prior to the reconstruction to remove, from consideration in the fitting algorithms, hits unlikely to have been produced by actual Cherenkov photons:

1. Hits used in the fits must occur between 150 ns before the air-shower trigger and 400 ns after.

2. Hits with a series of TDC “low ToT” and “high ToT” (see section 3.2.1) threshold crossing times that do not resemble the pattern expected for real Cherenkov photons are removed.

3. A perfect vacuum cannot be created within PMTs; the small number of atoms and molecules present within the instruments can be ionized by photoelectrons as they accelerate towards the dynode chain. These ions can produce additional signals, known as afterpulses, in close temporal proximity to those produced by the primary photoelectrons. Hits occurring immediately after a high-charge hit are assumed to be caused by afterpulsing and are not considered in the fits.

4. Prompt afterpulsing can also extend a hit before the initial signal falls below the TDC thresholds. Hits with ToT measurements above ∼400 ns (above a charge of ∼10⁴ PEs) are assumed to be artificially inflated by prompt afterpulsing and are also removed.

Additionally, a charge-scaling factor must be applied prior to reconstruction to account for the different quantum efficiencies of the 8” and 10” PMTs. Quantum efficiency refers to a PMT’s ratio of photoelectrons to incident photons; the higher quantum efficiency of the 10” PMTs means that they will produce more photoelectrons (hits with a higher charge) than their 8” counterparts if struck with the same Cherenkov photon. To correct for this imbalance, a scaling factor of 0.46 is applied to the calibrated charges of hits from 10” PMTs.

This scaled, effective charge (Qeff) is the quantity used in the reconstruction algorithms.
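The selection criteria and charge scaling above can be sketched as a simple filter over hit records. The field names, the boolean flags standing in for cuts 2 and 3, and the helper name are illustrative assumptions, not HAWC software; only the quoted thresholds (the −150/+400 ns window, the ∼400 ns ToT cut, and the 0.46 scale factor) come from the text.

```python
TRIGGER_WINDOW = (-150.0, 400.0)   # ns relative to the air-shower trigger
MAX_TOT_NS = 400.0                 # prompt-afterpulse cut (~1e4 PEs)
QE_SCALE_10INCH = 0.46             # equalizes 10" and 8" PMT responses

def select_and_scale(hits):
    """Apply cuts 1-4 and attach the scaled effective charge q_eff."""
    out = []
    for h in hits:
        if not (TRIGGER_WINDOW[0] <= h["t"] <= TRIGGER_WINDOW[1]):
            continue                      # cut 1: outside the trigger window
        if h.get("bad_edge_pattern") or h.get("after_big_hit"):
            continue                      # cuts 2-3: bad edges / afterpulses
        if h["tot"] > MAX_TOT_NS:
            continue                      # cut 4: prompt afterpulsing
        q_eff = h["q"] * (QE_SCALE_10INCH if h["pmt_10inch"] else 1.0)
        out.append({**h, "q_eff": q_eff})
    return out
```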

3.3.2 Core Fit

The air-shower core refers to the dense cluster of shower particles along the trajectory of the primary, incident gamma ray or cosmic ray. The air-shower core of the large sample event (a high-probability gamma ray from the Crab Nebula) shown in figure 3.8, a plot of the location and charge of all hits recorded for this shower, is clearly identifiable in the dense cluster of high-charge hits in the upper left section of the array.

The first step in reconstructing a HAWC air-shower event is to determine where on the array the shower core lands. This is done by fitting the lateral distribution function, the effective charge of hits as a function of their distance to the core, with the core location a parameter in the fit. The lateral distribution function for the same sample event used for figure 3.8 is shown in figure 3.9. Note that the handful of “missing hits” caused by the gap in instrumentation needed for the DAQ system (visible as the “hole” in the array in, e.g., figure 3.8) is not an impediment to fitting the lateral distribution function and does not have a significant impact on HAWC’s core resolution. The hits lost from the ∼6 “missing” tanks in this gap would account for a small fraction of the total number of hits even in HAWC’s smallest air-shower events.

Figure 3.8: Charge measurements for hits in a high-probability gamma-ray air-shower event [72]. Open circles represent WCDs, while colored circles show the positions of PMTs in the main HAWC array that recorded a hit for this event. The color scale shows the effective charge of each hit. The air-shower core is clearly identifiable in the dense cluster of high-charge hits.

While observed lateral distribution functions are best described analytically by the Nishimura-Kamata-Greisen (NKG) function [73], fitting the NKG function to data is computationally intensive [72]. To speed up the reconstruction, the HAWC core fitter uses a simplified version of the NKG function called the “Super Fast Core Fit” (SFCF) function:

Si = A [ (1/(2πσ²)) e^(−|xi − x|²/2σ²) + N/(0.5 + |xi − x|/Rm)³ ]    (3.3)

Figure 3.9: The lateral distribution function for the same high-probability gamma-ray air-shower event shown in figure 3.8 [72]. Plotted are the effective charges of hits in the event as a function of their distance (measured along the ground) to the reconstructed air-shower core. The SFCF fit (equation 3.3) to this data is shown, along with the “PINC moving average” (a parameter in the PINCness background rejection variable discussed in section 3.5.2).

where Si is the effective charge of the hit recorded in PMT i, A is the fit amplitude, x is the core position, xi the position of PMT i, σ is the width of the Gaussian part of the fit function, N is a normalization factor for the second term, and Rm is the Molière radius (which indicates the transverse scale of electromagnetic showers). Values of 10 m, 5 · 10⁻⁵, and 120 m are used for σ, N, and Rm, respectively. The core position and overall amplitude, A, are fit to data. The core position used to seed the fit is estimated from the “center of mass” core location: the effective-charge-weighted average position of all hits used in the reconstruction.

The lateral distribution function shown in figure 3.9 for a sample gamma-ray event includes its SFCF fit, which describes the data reasonably well. The small discrepancies with data induce errors in the core location that are of the order of just a few meters for larger air-shower events that land on the main array. This core resolution is sufficient to accurately fit the air-shower plane and determine the direction of the primary particle.
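For concreteness, equation 3.3 is straightforward to evaluate. The sketch below uses the constants quoted above (σ = 10 m, N = 5·10⁻⁵, Rm = 120 m) and a scalar core distance r = |xi − x|; the function name is illustrative, not the HAWC fitter itself.

```python
import math

# SFCF lateral distribution (eq. 3.3): a narrow Gaussian peak at the core
# plus a slowly falling NKG-like halo term.
def sfcf(r, A, sigma=10.0, N=5e-5, Rm=120.0):
    """Expected effective charge at distance r (meters) from the core."""
    gauss = math.exp(-r**2 / (2.0 * sigma**2)) / (2.0 * math.pi * sigma**2)
    halo = N / (0.5 + r / Rm) ** 3
    return A * (gauss + halo)
```

In a fit, A and the core position (which sets r for each PMT) are the free parameters; the expression above is compared against the measured effective charges.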

3.3.3 Angle Fit

Recall that air-shower particles are distributed along a roughly planar shower front with a thickness of ∼1 – 3 meters (see figure 2.3 from section 2.1.1). The inclination of the shower front for the high-probability gamma-ray sample event discussed in the context of the core reconstruction is easily seen in figure 3.10, which shows the hit times (as the color scale in a plot showing hit positions) for this event.

Figure 3.10: Time measurements for hits in the same high-probability gamma-ray air-shower event shown in figures 3.8 and 3.9 [72]. Open circles represent WCDs, while colored circles show the positions of PMTs in the main HAWC array that recorded a hit for this event. The color scale shows the time of each hit. The inclination of the air-shower plane is visible in the transition from early to late time hits from the upper right to lower left sections of the array.

The most straightforward way to determine the direction of the original primary particle that produced an air shower (from HAWC data) is to use these hit times and positions to fit a plane to the shower front. (The primary particle’s incident trajectory is the normal to this plane.) Recall, however, that off-axis particles, having a larger distance to travel before reaching the shower plane, will lag behind the shower core, inducing a roughly spherical curvature to the shower front. Furthermore, timing measurements are based on the arrival of the first detected shower particle, and the shower front has both a higher particle density and a smaller width near the shower core. A hit is therefore less likely to have been produced by a particle at the leading edge of the shower front the further it is from the shower core. This effect, known as sampling, produces an additional time delay based on lateral distance from the shower core.

The HAWC reconstruction algorithms use an empirically derived timing correction to simultaneously account for these curvature and sampling effects. This correction is used to offset the time delays for hits far from the core and coerce the air-shower data into a flat plane. A simple χ2 planar fit is then used to determine the orientation of the shower front and the direction of the incident primary particle.

The magnitude of the curvature and sampling delays (for the same example gamma-ray event from figures 3.8 – 3.10) can be seen in figure 3.11, which shows, as a function of distance to the core, the delay between hit times and the time a planar air-shower front was expected to pass through the array. Hits 100 m from the core are delayed from a flat plane by ∼10 ns.

This sample event has a very high probability of being a gamma ray (according to the criteria developed in section 3.5) and was reconstructed in close proximity to the Crab Nebula. We can assume that such events are indeed gamma rays from the Crab Nebula and use the well known position of the Crab to calculate the “true” orientation of their shower planes. This allows for an accurate calculation of expected hit times that should not be impacted by the finite angular resolution of the detector. This method was used to calculate the expected hit times in figure 3.11.

To obtain the combined curvature and sampling corrections used in the HAWC reconstruction, the data plotted for a single event in figure 3.11 was assembled for a large sample of these high-probability Crab gamma rays. A complex functional form for hit time delays as a function of lateral distance from the shower core and hit charge was derived from simulations, and the Crab data was used to fit the parameters of this function and provide the desired timing corrections.

Figure 3.11: A demonstration of the shower curvature and sampling effects for the same high-probability gamma-ray air-shower event shown in figures 3.8 – 3.10 [72]. Plotted is the recorded time of hits in the event – relative to the time expected from assuming that all air-shower particles arrive along a perfect plane – as a function of their distance (measured along the ground) to the reconstructed air-shower core. The color scale shows the effective charge of each hit. The curvature and sampling effects are evident in the increasing time delay of hits far from the shower core.

3.3.4 The Reconstruction Chain

The core and angle fitting algorithms described in the previous sections are run multiple times to refine the plane fit and reduce the effect of detector noise. Noise, in this context, refers to hits from, e.g., PMT afterpulsing or small fragments of coincident sub-threshold showers. (Sources of detector noise, the unique challenge they present to reconstructing the small air showers of primary interest in this thesis, and the alterations to the standard reconstruction chain presented here that we developed to address these challenges are discussed in chapters 4 and 5.)

The first step in the reconstruction chain is the center of mass core fit, run to obtain a rough guess for the core location. This core guess is then used to seed the SFCF fit and obtain a more accurate core location. The curvature and sampling corrections for hit times are calculated with this SFCF core position, and the χ2 planar fit is then used to obtain a direction reconstruction. These initial steps in the reconstruction chain use all hits that pass the hit selection criteria discussed in section 3.3.1. All hits not within ±50 ns of the air-shower plane found in the first angle fit are subsequently assumed to be caused by noise. To obtain a more accurate core and direction reconstruction, these noise hits are discarded and the SFCF core fit and χ2 angle fit are run a second time.
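The two-pass chain can be outlined as follows. Only `com_core` is concrete (the effective-charge-weighted average position described in section 3.3.2); the SFCF core fit, χ2 plane fit, and hit-time residual calculation are injected as placeholder callables, since the real HAWC fitters are far more involved.

```python
# Hedged outline of the two-pass reconstruction chain; core_fit, plane_fit,
# and residual stand in for the SFCF fit, the chi^2 plane fit, and the
# curvature-corrected hit-time residual, respectively.
def com_core(hits):
    """Effective-charge-weighted 'center of mass' core seed."""
    w = sum(h["q_eff"] for h in hits)
    x = sum(h["x"] * h["q_eff"] for h in hits) / w
    y = sum(h["y"] * h["q_eff"] for h in hits) / w
    return (x, y)

def reconstruct(hits, core_fit, plane_fit, residual, noise_window_ns=50.0):
    core = core_fit(hits, seed=com_core(hits))    # passes 1-2: seed, then SFCF
    direction = plane_fit(hits, core)             # pass 3: first plane fit
    kept = [h for h in hits                       # drop hits >50 ns off-plane
            if abs(residual(h, core, direction)) <= noise_window_ns]
    core = core_fit(kept, seed=core)              # second pass without noise
    return core, plane_fit(kept, core)
```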

3.4 Event Size Bins

In standard HAWC analyses, data is broken up into ten event-size bins, denoted bins 0 – 9, based on the fraction of available PMTs that record a signal during the air-shower event. Unavailable PMTs include those discarded for failing the hit selection criteria discussed in section 3.3.1 as well as those temporarily taken out of service. Typically, ∼1000 of the 1200 PMTs in the main HAWC array are available for air-shower reconstruction.

This event-size parameter, the fraction of Hit PMTs, or fHit, is correlated with angular resolution; the difficulty of reconstructing the shower plane is largely determined by the number of hits available for the fit. The fHit thresholds of the event-size bins, listed in table 3.1, are set so that the event rate in each bin is roughly half that of the previous bin. This provides finer binning at lower values of fHit, where HAWC’s angular resolution changes more rapidly, and ensures that events in each bin can be reconstructed with similar accuracy.

The fHit variable is also correlated with energy. Generally, larger showers are produced by higher-energy primary particles. The fHit bins therefore serve as a rough energy proxy. However, as can be seen in figure 3.12, there is significant overlap in the energy distributions of the ten size bins. This overlap is caused primarily by variations in the amount of atmosphere the air shower must travel through before reaching HAWC (influenced by both the primary particle’s zenith angle and the atmospheric depth of its first interaction) and by where on (or near) the HAWC array the shower core lands. The amount of atmosphere traversed by the air shower affects the number of shower particles present upon reaching the array, and showers landing far from the center of HAWC will deposit much of their energy outside the array, where it of course cannot be measured.

The lowest-energy “Bin 0” data sample contains near-threshold air showers, with the lower fHit bound of 4.4% set just above the noise-dominated regime in which events that pass the air-shower trigger are mostly produced by detector noise rather than reconstructible air showers. Up to the time of this writing, however, the bin 0 data has been ignored due to significant discrepancies between data and HAWC’s bin 0 Monte Carlo. We will discuss the cause of and provide a solution to these modeling discrepancies in chapter 4. The improvements to the simulation discussed there have allowed us to re-introduce the bin 0 data in our analysis and significantly lower HAWC’s energy threshold (compare the bin 0 and 1 energy distributions in figure 3.12, but note that bin 0 actually has twice as many events as bin 1). This is of significant importance in the study of extra-galactic sources like GRBs, for which much of any TeV-scale emission will be absorbed by the EBL on its way to Earth.

fHit Thresholds for HAWC’s 10 Event-Size Bins
Bin 0: 0.044 ≤ fHit < 0.067
Bin 1: 0.067 ≤ fHit < 0.105
Bin 2: 0.105 ≤ fHit < 0.162
Bin 3: 0.162 ≤ fHit < 0.247
Bin 4: 0.247 ≤ fHit < 0.356
Bin 5: 0.356 ≤ fHit < 0.485
Bin 6: 0.485 ≤ fHit < 0.618
Bin 7: 0.618 ≤ fHit < 0.740
Bin 8: 0.740 ≤ fHit < 0.840
Bin 9: 0.840 ≤ fHit

Table 3.1: The definition of HAWC’s standard 10 event-size bins. The fHit thresholds are set to make the event rate in each bin 50% that of the previous bin.
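The binning in table 3.1 amounts to a lookup against a sorted list of fHit edges. The helper below is an illustrative sketch of that lookup, not HAWC software; it returns None below the bin 0 threshold, mirroring the noise-dominated regime excluded from analysis.

```python
import bisect

# Lower fHit edges of bins 0-9 (table 3.1); bin 9 is open-ended above.
FHIT_EDGES = [0.044, 0.067, 0.105, 0.162, 0.247,
              0.356, 0.485, 0.618, 0.740, 0.840]

def fhit_bin(fhit):
    """Map an fHit value to its event-size bin (or None below threshold)."""
    if fhit < FHIT_EDGES[0]:
        return None  # below bin 0: events here are mostly detector noise
    return min(bisect.bisect_right(FHIT_EDGES, fhit) - 1, len(FHIT_EDGES) - 1)
```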

Figure 3.12: Simulated energy distributions in each event-size bin for gamma rays from a Crab-Nebula-like source with an E−2.63 power-law spectrum transiting at a declination of 22◦ north. All distributions are scaled to peak at 1.

It should also be noted that HAWC now has event-by-event energy estimators [74] that provide much better energy resolution than the fHit binning approach discussed here. However, they unfortunately do not work for the ≲1 TeV air showers that are of primary interest for HAWC GRB searches. We therefore retain the fHit binning approach for our analysis.

3.5 Gamma/Hadron Shower Discrimination

Recall from our discussion of air-shower physics in chapter 2 that the hadronic interactions in cosmic-ray air showers tend to transport more energy off-axis than the electromagnetic interactions in gamma-ray showers. The muons, pions, and other hadronic secondaries produced in the final stages of the development of hadronic showers can produce large, isolated hits or small clusters of hits in HAWC far from the shower core. Gamma-ray showers, on the other hand, exhibit a smooth and symmetric decline in energy with distance from the shower axis. Cosmic-ray and gamma-ray air showers therefore have significantly different lateral energy profiles, as can be seen in the comparison of the lateral distribution functions of the high-probability cosmic-ray and high-probability gamma-ray air showers shown in figure 3.13.

Figure 3.13: Lateral distribution functions for high-probability cosmic-ray (left) and gamma-ray (right) air showers [72]. The SFCF fit (see section 3.3.2) and “PINC moving average” (a parameter in the PINCness variable discussed in section 3.5.2) are included. The lack of large hits far from the core and smooth decline of charge with distance from the core seen in the lateral distribution function of the gamma ray is typical of electromagnetic showers and distinguishes them from cosmic-ray events.

Standard HAWC analyses use two parameters, termed compactness and PINCness, to quantify how much charge a shower deposits far from the shower core and how smoothly PMT light levels decline with distance from the core. These variables are calculated for every event and those with compactness and PINCness values characteristic of cosmic rays are cut out (ignored in the analyses) in an attempt to reduce the cosmic-ray background. The compactness and PINCness variables are defined in the following sections. In chapter 5, we will compare simulated gamma-ray and cosmic-ray distributions of these variables in each of the analysis bins developed specifically for the GRB search. There, we will demonstrate their utility in removing background and optimize a set of compactness and PINCness cut values for each of our new bins.

3.5.1 Compactness

The compactness variable is designed to remove events with high-charge hits far from the shower core, as these are typically produced by the hadronic secondaries (muons, in particular) present in cosmic-ray showers. It is defined in terms of CxPE40, the charge of the largest hit recorded more than 40 meters from the reconstructed core. However, more energetic gamma rays will, of course, produce larger (higher-charge) hits at a given distance from the core than less energetic gamma rays. Compactness is therefore scaled by the measured size of the air shower:

\text{compactness} = \frac{\text{nHitSP20}}{\text{CxPE40}} \qquad (3.4)

where nHitSP20 is the number of hits within ±20 ns of the reconstructed air-shower plane. Hits out of time with the plane fit are excluded to reduce the impact of detector noise on the measured size of the air shower. With CxPE40 in the denominator, cosmic-ray showers with muons, pions, and other hadronic secondaries producing large hits far from the core tend to have a lower value of compactness than gamma rays. The compactness cut used in HAWC analyses therefore requires that the compactness of events be above some cut value. Compactness cuts were optimized separately in each of the 9 event-size bins (see section 3.4) used in standard HAWC analyses [72]. We re-optimize both the cut values and the compactness parameters (the 40 m and 20 ns numbers discussed above) for the custom bins used in our analysis in chapter 5.
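The definition in equation 3.4 and the corresponding cut can be sketched as follows. This is a minimal illustration; the function names and the cut value of 10 are hypothetical placeholders, not HAWC's optimized values.

```python
def compactness(n_hit_sp20, cxpe40):
    """Compactness as in equation 3.4: the number of hits within +/-20 ns
    of the shower plane divided by the charge (in PEs) of the largest hit
    more than 40 m from the reconstructed core."""
    if cxpe40 <= 0:
        return float("inf")  # no large hit far from the core: very gamma-like
    return n_hit_sp20 / cxpe40

def passes_compactness_cut(n_hit_sp20, cxpe40, cut=10.0):
    """Keep (gamma-like) events whose compactness is ABOVE the cut value."""
    return compactness(n_hit_sp20, cxpe40) > cut

# A muon far from the core produces a large CxPE40, driving compactness down:
print(compactness(60, 30.0))  # hadron-like
print(compactness(60, 2.0))   # gamma-like
```

As the text notes, the cut direction follows from CxPE40 sitting in the denominator: large off-axis hits push compactness down, so gamma-like events live above the cut.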

3.5.2 PINCness

The Parameter for Identifying Nuclear Cosmic rays (PINCness) is used to quantify the presence of clusters of high-charge hits that are expected in cosmic-ray showers but not in gamma-ray events, in which there is a smooth and symmetric decline of charge with distance from the shower core. A PINCness term is defined for the ith hit in an event in terms of the logarithm of its effective charge, \zeta_i = \log_{10} Q_i, and the average value of that quantity, \langle \zeta_i \rangle, for all other hits in a 5 meter thick ring centered on the shower core and containing the ith hit. The value of \langle \zeta \rangle as a function of distance to the shower core (the “PINC moving average”) is plotted in the lateral distribution functions shown in figure 3.13. Clearly, the charges in the gamma-ray event are much more tightly clustered around the average than those in the cosmic-ray event. PINCness characterizes the overall discrepancy between the measured hit charges and the moving average for all N hits in an event with the \chi^2 formula:

\text{PINCness} = \frac{1}{N} \sum_{i=0}^{N} \frac{(\zeta_i - \langle \zeta_i \rangle)^2}{\sigma_{\zeta_i}^2} . \qquad (3.5)

For small, low-energy air showers, most PMTs in the rings used to calculate \langle \zeta \rangle do not record a signal, and it is difficult to characterize the typical charge (at a given distance from the core) needed to evaluate discrepancies from this average. A compactness cut is more useful for these events. For larger, higher-energy air showers, the majority of the background rejection power is provided by the more sophisticated PINCness cut: events above the cut contain clusters of high-charge hits sufficiently discrepant from the PINC moving average to be considered part of the cosmic-ray background. The PINCness cut values optimized for the 9 event-size bins used in standard HAWC analyses are discussed in [72]. As with compactness, we will revisit the PINCness variable in chapter 5 when we re-optimize its cut values for our custom GRB analysis bins.
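Equation 3.5 can be sketched directly from its definition. This is an illustration only: the per-hit charge uncertainty \sigma_{\zeta_i} is replaced by a fixed placeholder, and the hit representation (distance-to-core, charge) is our assumption, not HAWC's data format.

```python
import math

def pincness(hits, ring_width=5.0, sigma_zeta=0.4):
    """Illustrative PINCness (equation 3.5). `hits` is a list of
    (distance_to_core_m, charge_pe) pairs. Each hit's zeta = log10(Q) is
    compared to the mean zeta of the OTHER hits in its 5 m wide ring;
    sigma_zeta is a placeholder for the per-hit charge uncertainty."""
    zetas = [math.log10(q) for _, q in hits]
    rings = [int(r // ring_width) for r, _ in hits]
    chi2, n = 0.0, len(hits)
    for i in range(n):
        others = [zetas[j] for j in range(n) if j != i and rings[j] == rings[i]]
        if not others:
            continue  # no ring average available for an isolated hit
        mean = sum(others) / len(others)
        chi2 += (zetas[i] - mean) ** 2 / sigma_zeta ** 2
    return chi2 / n
```

A smooth, gamma-like lateral profile (similar charges at a given core distance) yields a small PINCness, while a single clump of high charge in a ring inflates it.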

3.6 Sensitivity to Gamma Rays

Figure 3.14 shows HAWC’s sensitivity with 507 days of data (the amount available when this study was carried out) to a source at a declination of 22◦N (which transits nearly overhead at HAWC’s latitude) with an E^{-2.63} power-law spectrum. Sensitivity is calculated as the minimum flux needed for a 50% probability of obtaining a 5σ detection using the standard event-size-bin analysis discussed in section 3.4 (with bin 0 excluded). See [72] for additional calculation details and a discussion of discrepancies from HAWC’s design sensitivity.

Figure 3.14: HAWC’s quasi-differential gamma-ray sensitivity with 507 days of data compared to the sensitivity of the Fermi space telescope, three IACTs (HESS, VERITAS, and MAGIC), and an upper limit on the Crab Nebula from CASA-MIA [72]. HAWC’s sensitivity is calculated in each of the 9 standard event-size bins from section 3.4 (bin 0 is excluded) as the minimum flux necessary to achieve a 50% probability of detecting a Crab Nebula-like source (a source at a declination of 22◦N with an E^{-2.63} power-law spectrum) at 5σ. The dark red line is a fit to the minimum flux calculated for the 9 size bins shown in light red. Also plotted are 1%, 10%, and 100% of the gamma-ray flux from the Crab Nebula.

Comparing HAWC’s sensitivity curve to those reported by the three currently operational IACTs (HESS, VERITAS, and MAGIC), we see that HAWC is among the most sensitive gamma-ray experiments in the world above ∼10 TeV, with good sensitivity down to a few hundred GeV. While the IACTs perform significantly better at those lower energies with just 50 hours of observation, recall from our discussion on the complementary abilities of IACTs and EAS detectors (section 2.3) that these pointed instruments have small fields of view and are only operational at night under favorable conditions. HAWC observes all gamma-ray sources in the northern sky (from declinations of roughly -26◦ to 64◦) every day and has now accumulated ∼4 years of data. It is therefore appropriate to compare HAWC’s sensitivity with (on the order of) a year of data to that of IACTs (which must typically observe each source one at a time over smaller durations) with on the order of tens of hours.

As discussed in section 2.3.3, HAWC’s ∼95% duty cycle and competitive sensitivity over this wide field of view (instantaneously, ∼2 sr) make it an ideal survey instrument for high-energy gamma-ray sources (see, e.g., HAWC’s gamma-ray source catalog [75]). Additionally, HAWC can study extended gamma-ray sources that are not visible in the small field of view of IACTs, such as Geminga [76] and the Fermi Bubbles [77], as well as the large-scale cosmic-ray anisotropy [78]. Of particular interest in this thesis is the opportunity to detect high-energy transient sources (such as AGN flares and GRBs) afforded by HAWC’s wide field of view and high duty cycle. Due to EBL attenuation (see section 1.1.2.2), HAWC’s sensitivity below a few TeV is of particular importance in the study of these extra-galactic phenomena. Our efforts to recover the sub-TeV “bin 0” data set (defined in section 3.4) and improve the reconstruction of these small air showers (documented in chapters 4 and 5) have substantially improved HAWC’s relatively poor sensitivity in the range of ∼100 GeV – 1 TeV. In chapter 6, we will show that our analysis provides up to an order of magnitude improvement in HAWC’s sensitivity to GRBs.

Chapter 4 | HAWC’s Small Air-Shower Simulation

Up to the time of this writing, published HAWC analyses have not made use of the lowest-energy “bin 0” data sample discussed in section 3.4. The main challenge in incorporating this data and achieving the lowest possible energy threshold has been a poor understanding of how the detector and reconstruction algorithms respond to these small air showers: significant discrepancies between the bin 0 data and HAWC’s Monte Carlo simulations were observed. Because a reasonably accurate detector simulation is necessary to properly interpret measurements, the bin 0 data set has, until now, been ignored. In this chapter, we will begin with an overview of HAWC’s Monte Carlo simulations in section 4.1. The discrepancies between data and Monte Carlo for HAWC’s small air-shower event samples will be discussed in section 4.2, and their resolution, a more accurate detector noise model, will be presented in section 4.3. The improvements to the simulation developed there have now been incorporated into the standard HAWC Monte Carlo.

4.1 Monte Carlo Overview

There are several stages involved in the production of HAWC’s Monte Carlo. First, air showers are simulated with the CORSIKA software [79]. This stage tracks the generation of air-shower particles and their propagation through the atmosphere to the HAWC array. Next, GEANT 4 [80] is used to simulate the behavior of air-shower particles within the detector: their propagation through the water tanks, the production of Cherenkov photons, and the propagation of Cherenkov photons to the faces of the PMTs. Custom HAWC software is used for the remainder of the simulation. The efficiency with which PMTs detect Cherenkov photons is modeled by studying their response to vertical muons in data [72], and detector noise hits are added to these air-shower signals (we will explore this step in much greater detail in the following sections). The same algorithms used to analyze actual air showers from experimental data (see section 3.3) are then used to reconstruct the simulated air showers. To understand the issues involved with simulating detector noise discussed in the remainder of this chapter, it is important to highlight that HAWC’s Monte Carlo simulates individual air-shower events and adds detector noise “on top.” These Monte Carlo air showers are then weighted (according to their energy, zenith angle, and the distance of their core to the center of the array) so that the simulated event distributions match those of the gamma-ray source under study. This allows for the use of the same simulated event samples (which are computationally expensive to produce with good statistics) in the analysis of a variety of gamma-ray sources.
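The spectral part of the reweighting step can be illustrated as follows. This is a sketch under assumed conditions: simulated events thrown on an E^{-2} spectrum are reweighted to a source's E^{-2.63} power law, with normalization constants and the zenith-angle and core-distance factors mentioned above omitted. The function name and indices are ours, not HAWC software.

```python
def spectral_weight(energy_tev, thrown_index=-2.0, target_index=-2.63):
    """Weight converting an event drawn from the thrown power law
    E**thrown_index into one following the target source spectrum
    E**target_index (normalization omitted)."""
    return energy_tev ** (target_index - thrown_index)

# The same (expensive) simulated shower library can be reused for any
# source spectrum by changing only the per-event weights:
weights = [spectral_weight(e) for e in (0.1, 1.0, 10.0)]
```

A softer target spectrum suppresses high-energy events and enhances low-energy ones relative to the thrown sample, which is exactly why one generation of showers can serve many sources.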

4.2 Discrepancies with Data

While HAWC’s air-shower and detector simulations model the data very well (see, e.g., [72]) in event-size bins 1 – 9 (defined in section 3.4), the bin 0 data set, as discussed in the introduction to this chapter, exhibited significant discrepancies between data and Monte Carlo. These discrepancies dramatically manifested themselves in the detector’s simulated response to the Crab Nebula (which, as the brightest gamma-ray source in the northern sky, serves as a reference source for calibrating instruments like HAWC). HAWC’s Monte Carlo predicted that this source would produce a large observable excess in the bin 0 data set, amounting to a 10σ deviation from background (without cosmic-ray rejection cuts) with one year of data. However, no significant excess was observed from the Crab’s location after a full year of observation. The overly optimistic predictions for bin 0 arose from a discrepancy that caused air showers of this size to be reconstructed with much greater accuracy in simulation than in data. This can be seen in figure 4.1, which compares data and Monte Carlo distributions of the fraction of hits within ±10 ns of the reconstructed air-shower plane for a large sample of air-shower events. (No event cuts beyond the selection of bin 0 air showers were applied, so these distributions are produced almost entirely by the observed and simulated all-sky, isotropic cosmic-ray flux.)

Figure 4.1: Data and Monte Carlo (MC) distributions of the fraction of hits within ±10 ns of the reconstructed air-shower plane for an all-sky, cosmic-ray dominated sample of bin 0 air-shower events, scaled to 1 s of observation. The variables nHit and nHitSP10 refer to the number of hits used in the reconstruction and the number of those that are within ±10 ns of the reconstructed air-shower plane, respectively.

In data, this nHitSP10/nHit distribution (where nHit and nHitSP10 refer to the number of hits used in the reconstruction and the number of those that are within ±10 ns of the reconstructed air-shower plane, respectively) is clearly bimodal. Events in the first peak have a low fraction of hits close in time with the reconstructed air-shower plane, indicating a failure in the reconstruction or the lack of a clear air-shower plane to fit (which can be caused by large amounts of detector noise). The second peak contains events with identifiable and reasonably well fit air-shower planes. Not only is this first peak completely absent in the simulation, the Monte Carlo nHitSP10/nHit distribution also peaks at a significantly higher value. As we will confirm in the following sections, this is caused by an underestimation of the detector noise rate, which makes simulated air showers easier to reconstruct and lowers the frequency of out-of-time noise hits in successful fits. A simulation that is accurate for the small bin 0 air showers must capture this bimodal behavior in nHitSP10/nHit. Understanding and properly simulating the poorly reconstructed events in the first peak is necessary to understand why the reconstruction is failing, to evaluate potential solutions to these failures (which will be discussed in the following chapter), and, generally, to properly interpret bin 0 observations. We will therefore return to this nHitSP10/nHit distribution to test the resolutions of the bin 0 modeling discrepancies explored in the following section.
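The quantity nHitSP10/nHit can be computed directly from hit-time residuals with respect to the fitted plane. A minimal sketch, assuming residuals are given as hit time minus the plane's predicted arrival time at that PMT:

```python
def n_hit_sp10_fraction(residuals_ns):
    """Fraction of reconstruction hits within +/-10 ns of the fitted
    air-shower plane, i.e. nHitSP10/nHit."""
    n_hit = len(residuals_ns)
    n_hit_sp10 = sum(1 for dt in residuals_ns if abs(dt) <= 10.0)
    return n_hit_sp10 / n_hit

# A well-fit plane has mostly small residuals (ratio near 1); a failed
# fit in a noise-dominated event scatters residuals widely (low ratio).
```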

4.3 Improved Modeling of Detector Noise in the Monte Carlo

In this section, we will show that the discrepancies between data and Monte Carlo for bin 0 air showers discussed above were caused by inaccuracies in the simulation of detector noise: random hits (PMT signals) caused by, for example, PMT afterpulsing and small fragments of coincident sub-threshold air showers. Before the introduction of the techniques discussed here, the default method for simulating noise in HAWC’s Monte Carlo was to throw hits with a charge of ∼1 PE uniformly in time at a rate of 10 kHz. This was done independently in each PMT, producing uncorrelated noise.

This method is inaccurate for a variety of reasons. To begin with, it significantly underestimates the actual noise rate. A HAWC air-shower event lasts 1500 ns; with ∼1000 available PMTs, a single PMT noise rate of 10 kHz will produce, on average, only ∼15 noise hits per air-shower event. (As discussed in section 3.3.1, only ∼1/3 of these – those occurring between 150 ns before or 400 ns after the air-shower trigger time – will end up being considered by the reconstruction.) As we will demonstrate in section 4.3.1, the detector actually produces, on average, ∼50 noise hits in any 1500 ns window. We will see that this noise model’s failure to account for correlation between noise hits, and its omission of the fairly common higher-charge noise hits (produced by, e.g., muons), are also consequential.

While these issues are not problematic in the larger event-size bins, where noise accounts for a very small portion of the event and does not confuse the plane fitting algorithms even in data, they are important to accurately model in smaller air showers. The smallest event-size bin (bin 0) contains air showers that leave a signal in only 4.4 – 6.7% of the array. With ∼1000 PMTs available for the reconstruction at any given time, this represents a typical event size of just ∼55 hits. Given the noise rates discussed above, this means that a significant fraction of hits in bin 0 events are typically produced by detector noise that is completely unrelated to the actual air shower. As such events are difficult to reconstruct and are not commonly produced by the unrealistic noise model used in the Monte Carlo, this could easily (and, we will see, does) explain both the population of poorly reconstructed events observed in data and the discrepancy between data and Monte Carlo seen in figure 4.1.

In the following sections, we will explore alterations to the Monte Carlo that provide a more accurate treatment of detector noise. Section 4.3.1 documents attempts to tweak the existing noise model to produce a more accurate noise rate and charge distribution for noise hits, while section 4.3.2 presents an algorithm to use noise from actual data in the simulation. We will see that only this latter technique produces an accurate bin 0 simulation.
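The rate arithmetic quoted above can be checked directly from the numbers in the text (10 kHz per PMT, a 1500 ns event window, ∼1000 PMTs, and the ∼1/3 hit-selection survival fraction):

```python
# Expected noise hits from the old model: ~1000 PMTs firing
# independently at 10 kHz over a 1500 ns event window.
n_pmts, rate_hz, window_s = 1000, 10e3, 1500e-9
old_model_hits = n_pmts * rate_hz * window_s   # ~15 hits per event
old_considered = old_model_hits / 3            # ~1/3 survive hit selection

# The rate actually measured in data (section 4.3.1) is ~50 hits per
# window; ~1/3 of those, against a typical ~55-hit bin 0 event, is a
# substantial noise contamination.
data_considered = 50 / 3
print(old_model_hits, old_considered, data_considered)
```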

4.3.1 More Accurate Noise Models

The simplest way to improve the accuracy of the single PE, uncorrelated noise model described above is to increase the noise rate and produce a more realistic value for the total number of noise hits in simulated air-shower events. Figure 4.2 compares the nHitSP10/nHit distribution (defined at the beginning of this chapter) observed in data and produced in Monte Carlo with simulated single PMT noise rates of 10 kHz (the default value), 20 kHz, and 30 kHz, respectively. While the simulations with higher noise rates have more events with a significant fraction of hits out of time with the reconstructed air-shower plane (as they must), the extra noise does not seem to confuse the plane fitter to the point of producing a second population of events in which the majority of hits are out of time with a plane fit that has clearly failed. Even with a more realistic noise rate, the single PE, uncorrelated noise model is not capable of reproducing the bimodal nHitSP10/nHit distribution seen in data.

Figure 4.2: Data and Monte Carlo (MC) distributions of the fraction of hits within ±10 ns of the reconstructed air-shower plane for an all-sky, cosmic-ray dominated sample of bin 0 air-shower events, scaled to 1 s of observation. The MC distributions were produced with single PE, uncorrelated noise occurring at a rate (in each PMT) of 10 kHz, 20 kHz, and 30 kHz. The variables nHit and nHitSP10 refer to the number of hits used in the reconstruction and the number of those that are within ±10 ns of the reconstructed air-shower plane, respectively.

Another layer of complexity can easily be added to the noise simulation by more accurately modeling the charge distribution of noise hits. To implement this change, we collected a large sample of 1500 ns long periods (the length of a HAWC air-shower event) of raw, un-triggered data (the raw data stream observed before the application of an air-shower trigger). While these random chunks of data noise are unlikely to overlap with an actual air shower (with HAWC’s 25 kHz trigger rate, there is only a few percent chance of this occurring), we nonetheless discarded 1500 ns samples with more than 100 hits or any hit with a charge greater than 100 PEs as a quick check that no large air showers were present. Both the total noise rate and the charge of noise hits were modeled from these data samples by counting the total number of hits in each 1500 ns window and recording the charge of each individual hit in all windows. These distributions – the number of noise hits in 1500 ns and the charge of noise hits – are plotted in figure 4.3. The number of hits distribution is modeled with the Gaussian function f_g and the charge distribution (in PEs) with the power-law function f_{pl} (both of which contain irrelevant normalization factors C):

f_g = C \exp\!\left(\frac{-(x - 52.8)^2}{2 \cdot 12.9^2}\right) \qquad (4.1a)

f_{pl} = C\,(x + 1.90)^{-2.13} \qquad (4.1b)

Noise hits are created for a simulated air shower by using the fits f_g and f_{pl} as probability distributions. First, the algorithm samples from f_g to choose the number of noise hits to add to the simulated shower. Each noise hit must then be assigned a charge, a time, and a particular PMT. The PMT is chosen randomly, the time is assigned to a random point in the 1500 ns shower window, and the charge is chosen by sampling f_{pl}. The main physical difference between this new noise model and the actual noise seen in data comes from how the hit times and PMTs are chosen: as the model throws noise hits in random PMTs at random times, it does not account for correlation between hits (produced, for example, by muons that hit all four PMTs in a tank or tiny fragments of small, sub-threshold air showers).

Figure 4.3: Left: the distribution of the number of hits in random 1500 ns chunks of raw, un-triggered data with a Gaussian fit. Right: the charge distribution of raw data hits with a power-law fit. The fit functions are given in equation 4.1. Both distributions are area normalized to one.

Figure 4.4: Data and Monte Carlo (MC) distributions of the fraction of hits within ±10 ns of the reconstructed air-shower plane for an all-sky, cosmic-ray dominated sample of bin 0 air-shower events, scaled to 1 s of observation. The MC distribution was produced with the data noise model constructed from the Gaussian and power-law fits to the number of noise hits and charge distributions plotted in figure 4.3 and explained in the accompanying text. The variables nHit and nHitSP10 refer to the number of hits used in the reconstruction and the number of those that are within ±10 ns of the reconstructed air-shower plane, respectively.

Figure 4.4 again compares observed and simulated distributions of nHitSP10/nHit, this time with the new Monte Carlo noise model based on the Gaussian and power-law fits to the number of hits and hit charges in samples of data noise. While the Monte Carlo distribution matches the data much better than it did with the single PE noise, it still does not fully capture the bimodal behavior seen in data. The first peak with poorly reconstructed air showers remains poorly modeled. Apparently, the noise correlation that this new model fails to account for plays a significant role in the reconstruction of small air showers.
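The sampling procedure used by this model (draw the hit count from f_g, then a charge per hit from f_pl) can be sketched with a clipped Gaussian and inverse-transform sampling of the power law. This is an illustration under stated assumptions: the minimum charge of 0 PE and the uniform PMT/time assignment are our simplifications, not the HAWC implementation.

```python
import random

MU, SIGMA = 52.8, 12.9   # Gaussian fit f_g (equation 4.1a)
A, GAMMA = 1.90, 2.13    # power-law fit f_pl (equation 4.1b)

def sample_n_noise_hits(rng):
    """Number of noise hits in a 1500 ns window, drawn from f_g."""
    return max(0, round(rng.gauss(MU, SIGMA)))

def sample_noise_charge(rng, q_min=0.0):
    """Charge (PEs) drawn from f_pl ~ (q + A)**-GAMMA on [q_min, inf)
    via inverse-transform sampling of the analytic CDF."""
    u = rng.random()
    return (q_min + A) * (1.0 - u) ** (1.0 / (1.0 - GAMMA)) - A

def make_noise_hits(rng, n_pmts=1000, window_ns=1500.0):
    """One event's worth of uncorrelated noise hits: (pmt, time, charge)."""
    return [(rng.randrange(n_pmts), rng.uniform(0.0, window_ns),
             sample_noise_charge(rng))
            for _ in range(sample_n_noise_hits(rng))]
```

As the text emphasizes, every hit here is independent: exactly the correlation (tank-wide muon hits, sub-threshold shower fragments) that this model cannot reproduce.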

4.3.2 A Data Noise Overlay Algorithm

The sources of correlation between detector noise hits – fragments of sub-threshold air showers, muons that leave a signal in multiple PMTs in a tank, etc. – are difficult to accurately model. Rather than attempting to characterize and simulate these correlations, a “data noise overlay” algorithm was developed to directly use noise observed in data in simulated showers. The method is simple: First, we assembled a large sample of raw, un-triggered data (collected before the application of a shower trigger and therefore dominated by detector noise). Then, in each 1500 ns long simulated air-shower event, rather than simulate detector noise, a 1500 ns long chunk of hits is randomly selected from the raw data sample and directly added to the simulated shower as noise. The only additional conceptual detail is that when a 1500 ns noise sample is pulled out of the raw data stream, HAWC’s air-shower trigger condition (28 hits arriving within any 150 ns window) is simulated to check if the sample contains enough hits clustered close enough in time to trigger the detector on its own. “Noise samples” that pass the trigger are not used. This is necessary due to how the simulation is structured. The simulation can only be accurate for an event-size threshold well past the regime of noise dominated triggers. We cannot simulate noise triggered events, because we do not simulate random periods of detector noise with showers thrown in “on top.” We simulate individual showers and add noise to each during the reconstruction phase. Simulated event rates are thus purely determined by expectations for air-shower arrival rates, not noise frequency. As previous noise models assumed no correlation between noise hits, they did not produce enough hits close enough in time to pass the air-shower trigger either on their own or in conjunction with small, sub-threshold simulated showers. This extra step was therefore not previously necessary. As shown in figure 4.5, this data noise overlay does indeed properly model the nHitSP10/nHit distribution. It reproduces the bimodal behavior seen in data and accurately models the population of poorly reconstructed showers. This confirms that inaccuracies in the uncorrelated detector noise models previously used in the simulation were responsible for the poor modeling of reconstruction quality in bin 0. As we will see in section 5.4.2, HAWC air-shower simulations making use of the data noise overlay algorithm also resolve the errors in high level predictions of the bin 0 gamma-ray excess from (and detection significance of) the Crab Nebula discussed at the beginning of this chapter. The data noise overlay algorithm has therefore been adopted in HAWC’s standard Monte Carlo and is used in all work presented in the remainder of this thesis.
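The trigger check applied to candidate noise samples (28 hits within any 150 ns window) can be implemented with a two-pointer sweep over sorted hit times. A sketch:

```python
def passes_shower_trigger(hit_times_ns, n_required=28, window_ns=150.0):
    """Return True if any sliding window of `window_ns` contains at least
    `n_required` hits. A 'noise sample' for which this is True would have
    triggered the detector on its own and must not be used as overlay noise."""
    times = sorted(hit_times_ns)
    lo = 0
    for hi in range(len(times)):
        while times[hi] - times[lo] > window_ns:
            lo += 1  # shrink the window from the left
        if hi - lo + 1 >= n_required:
            return True
    return False
```

In the overlay procedure, a randomly drawn 1500 ns raw-data chunk is kept as noise only when `passes_shower_trigger` returns False.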

Figure 4.5: Data and Monte Carlo (MC) distributions of the fraction of hits within ±10 ns of the reconstructed air-shower plane for an all-sky, cosmic-ray dominated sample of bin 0 air-shower events, scaled to 1 s of observation. The MC distribution was produced with the data noise overlay algorithm discussed in this section. The variables nHit and nHitSP10 refer to the number of hits used in the reconstruction and the number of those that are within ±10 ns of the reconstructed air-shower plane, respectively.

Chapter 5 | Improving HAWC’s Low-Energy Sensitivity

With the smallest air-shower event sample, bin 0 (defined in section 3.4), now well modeled, we turn our attention to improving the reconstruction of small air showers. Here, we will explore mechanisms to boost HAWC’s low-energy sensitivity so that full advantage can be taken of this (now well understood) data.

Section 5.1 provides additional insight into the unique challenges involved with reconstructing small air showers and motivates the development of the methods presented throughout the rest of this chapter. Section 5.2 discusses an additional reconstruction module, the multi-plane fitter, used to address the challenges of section 5.1. Section 5.3 outlines a scheme for re-binning HAWC data and re-optimizing a set of gamma/hadron separation cuts for low-energy analyses with this new reconstruction. The performance of this new low-energy analysis is discussed in sections 5.4 and 5.5. Section 5.4 focuses on an analysis of Crab Nebula data and a comparison of its results to Monte Carlo predictions. Section 5.5 details the improvements to HAWC’s low-energy sensitivity obtained over the standard reconstruction. Sections 5.6 – 5.9 provide supporting material: additional information on the calculation of the Monte Carlo Crab predictions, a review of other low-energy hadron rejection cuts considered, techniques to improve low-energy sensitivity in the existing reconstruction, and a technique for removing detector noise.

5.1 Low-Energy Reconstruction Challenges

To reconstruct air showers from HAWC data, shower planes must be picked out from a background of noise hits (PMT signals) produced, for example, by small fragments of sub-threshold air showers, random atmospheric muons, and PMT afterpulsing. The number of noise hits in a 1500 ns HAWC air-shower event window roughly follows a Gaussian distribution with mean ∼50 and standard deviation ∼13 (see section 4.3.1); after hit selection cuts (see section 3.3.1), roughly 1/3 of these end up being considered in the reconstruction. This means that for low-energy (≲1 TeV) air showers, there can be a comparable number of noise hits and shower hits. (Recall that the bin 0 data sample has events with, after hit selection cuts, only ∼45 to ∼70 hits total.) The plane fitting algorithms used up to the time of this writing are not ideal and perform poorly in this low signal-to-noise regime.

These problems are illustrated in figures 5.1 and 5.2, which show 3-D spatial representations of HAWC air showers from data and simulation, respectively. The data air shower has an apparent shower plane which the plane fitting algorithm appears to have missed on account of the clusters of high-charge noise hits at the top of the plot. A similar situation is seen in the simulated air shower; here, hits from the air shower can be flagged and distinguished from noise hits, allowing for certainty in the identification of the actual air shower and the problematic background noise.

The predicted overall impact of detector noise on reconstruction quality can be seen in figure 5.3, which shows HAWC’s simulated angular resolution as a function of air-shower size. For showers with 4.4% of PMTs recording a hit (the lower threshold of event-size bin 0), detector noise worsens HAWC’s angular resolution by a factor of ∼5, broadening the point-spread function well beyond the range required for any interesting point-source analysis. The impact of noise on angular resolution becomes negligible for air showers large enough to leave a signal in at least ∼20% of the detector.

Figure 5.1: A small air shower from HAWC data. Each point represents a hit. The spatial coordinates are calculated for a particular moment in time by treating each hit as an air-shower particle assumed to have been traveling at the speed of light in the direction of the reconstructed air-shower plane. The color scale represents charge, with darker points for hits with larger PMT signals.

Figure 5.2: A small simulated HAWC air shower. Each point represents a hit. The spatial coordinates are calculated for a particular moment in time by treating each hit as an air-shower particle assumed to have been traveling at the speed of light in the direction of the reconstructed air-shower plane. The color scale represents charge, with darker points for hits with larger PMT signals.

Figure 5.3: Angular resolution vs. fHit (the fraction of available PMTs in an air shower that record a hit) for simulated gamma rays that land on the main HAWC array with and without detector noise. Angular resolution is calculated as the 68% containment angle in the Monte Carlo ∆θ distribution, where ∆θ is the angle between the reconstructed and true (simulated) shower axis.

5.2 The Multi-Plane Fitter

The multi-plane fitter algorithm was developed by the HAWC collaboration in part to address the issues discussed in section 5.1. At the time it was written, however, the impact of correlated noise (from, e.g., fragments of sub-threshold showers, which should arrive roughly along a plane) on the reconstruction algorithms was not fully appreciated, and the algorithm was not fully vetted and incorporated into the standard HAWC analysis. As the need to identify and remove correlated noise from consideration in the reconstruction became more apparent, work on this software resumed. The following subsections provide a general explanation of what the multi-plane fitter does and explore its impact on reconstruction variables and reconstruction time. See section 5.5 for a detailed analysis of the resulting improvement to low-energy sensitivity (over the standard 10 bin HAWC analysis defined in section 3.4) with the binning and cut scheme developed in section 5.3.

5.2.1 How the Algorithm Works

The multi-plane fitter uses a maximum likelihood method to fit multiple (up to three) shower planes to HAWC events, each of which must have at least five hits. However, the point of this algorithm is not to identify and analyze multiple air showers in a HAWC event window. Rather, the intention is to analyze a single air shower under the assumption that a significant fraction of hits in the event window are not associated with that primary shower plane. The additional planes found by the multi-plane fitter could contain hits from small fragments of sub-threshold showers or other types of noise that may or may not be correlated. The multi-plane fitter seeks to identify and discard this noise. All hits associated with the largest shower plane found by the algorithm are treated as the signal of the event window’s air shower, and all other hits are assumed to be noise and thrown out. The plane fit for the largest, primary shower plane found by the multi-plane fitter is not retained. Instead, the hits associated with it are fed to the standard reconstruction chain described in section 3.3 for a core and plane fit result. In practice, the multi-plane fitter is simply a filter to remove noise hits before the standard core and plane fitters are run.
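A simplified version of this procedure can be sketched as follows. This is an illustration only: the actual HAWC module uses a maximum likelihood fit, whereas a RANSAC-style plane search stands in for it here, and the hit representation (x, y, t) with a plane t ≈ a·x + b·y + c is our simplification.

```python
import random

def _fit3(p, q, r):
    """Exact plane t = a*x + b*y + c through three hits (Cramer's rule);
    returns None if the points are collinear."""
    (x1, y1, t1), (x2, y2, t2), (x3, y3, t3) = p, q, r
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    if abs(det) < 1e-9:
        return None
    a = (t1 * (y2 - y3) - y1 * (t2 - t3) + (t2 * y3 - t3 * y2)) / det
    b = (x1 * (t2 - t3) - t1 * (x2 - x3) + (x2 * t3 - x3 * t2)) / det
    c = (x1 * (y2 * t3 - y3 * t2) - y1 * (x2 * t3 - x3 * t2)
         + t1 * (x2 * y3 - x3 * y2)) / det
    return a, b, c

def largest_plane_hits(hits, max_planes=3, min_hits=5, tol_ns=1.0,
                       n_trials=300, seed=1):
    """Greedily find up to `max_planes` planes (each with at least
    `min_hits` hits), then return the hits of the largest one; all other
    hits are treated as noise and discarded."""
    rng = random.Random(seed)
    remaining, planes = list(hits), []
    for _ in range(max_planes):
        best = []
        for _ in range(n_trials):
            if len(remaining) < 3:
                break
            fit = _fit3(*rng.sample(remaining, 3))
            if fit is None:
                continue
            a, b, c = fit
            inliers = [h for h in remaining
                       if abs(a * h[0] + b * h[1] + c - h[2]) < tol_ns]
            if len(inliers) > len(best):
                best = inliers
        if len(best) < min_hits:
            break
        planes.append(best)
        remaining = [h for h in remaining if h not in best]
    return max(planes, key=len) if planes else []
```

As in the text, only the membership of the largest plane matters: its hits are handed to the standard core and plane fitters, and everything else is dropped as noise.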

5.2.2 Impact on Event Size

In removing noise hits, the multi-plane fitter changes the reconstructed size of air-shower events as measured in fHit, the fraction of available PMTs in the array with a recorded signal. As background detector noise should not be included in an estimate for event size, this is appropriate. However, if HAWC data is broken into event-size bins by fHit, as is done in the standard HAWC analysis described in section 3.4, applying the multi-plane fitter will shift events down into lower bins. The size of this effect can be seen in figure 5.4. It is, as expected, largest in the lower bins where the starting signal to noise ratio is smallest and the most noise hits are removed.

The multi-plane fitter removes so much noise from bin 0 air showers that only ∼24% of bin 0 events remain within the bin's fHit boundaries (4.4% – 6.7%). The fraction of events that remain in their original bin rises in higher fHit bins (on account of the rising signal-to-noise ratio of event hits and correspondingly smaller impact of the multi-plane fitter), increasing to between 70% and 80% in bins 5 – 8 and ∼90% in bin 9. With so many events shifting down to lower bins, it is appropriate to define a new set of fHit thresholds for event-size binning of data reconstructed with the multi-plane fitter. A method for achieving this and the resulting new bin definitions are presented in section 5.3.
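The "fraction remaining in the same bin" comparison amounts to digitizing each event's fHit twice, before and after the filter, and counting matches per bin. A minimal sketch follows; note that only the bin 0 boundaries (4.4% – 6.7%) come from the text, and the remaining bin edges here are placeholders, not the actual standard-analysis values.

```python
import numpy as np

# Placeholder stand-ins for the standard fHit bin edges; only the
# bin 0 boundaries (0.044 - 0.067) are taken from the text.
EDGES = np.array([0.044, 0.067, 0.105, 0.162, 0.247, 0.356,
                  0.485, 0.618, 0.740, 0.840, 1.0])

def same_bin_fraction(fhit_default, fhit_mpf, edges=EDGES):
    """For each default bin, the fraction of its events whose fHit,
    recomputed after the multi-plane fitter, stays in the same bin."""
    b_def = np.digitize(fhit_default, edges)
    b_mpf = np.digitize(fhit_mpf, edges)
    fracs = []
    for b in range(1, len(edges)):
        in_bin = b_def == b
        fracs.append(np.mean(b_mpf[in_bin] == b) if in_bin.any() else np.nan)
    return np.array(fracs)
```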

Figure 5.4: Left: Event size measured in fHit (= fraction of available PMTs with a hit) with the multi-plane fitter vs. fHit without the multi-plane fitter for a selection of HAWC air-shower events. A one-to-one line is superimposed along with vertical lines at the boundaries between the standard 10 fHit bins. Right: Events are sorted into the 10 standard fHit bins using, in one case, fHit calculated with the multi-plane fitter applied (MPF bins), and in the other, without the multi-plane fitter applied (default bins). The red curve shows the fraction of events in each default bin that falls into the same MPF bin. The blue curve shows the fraction of events in each MPF bin that falls into the same default bin.

5.2.3 Impact on Reconstruction Time

With HAWC’s large data set, the extra time required to reconstruct events accrued with the addition of new algorithms is a concern. Fortunately, this is not an issue with the multi-plane fitter, which actually speeds up the reconstruction by throwing out noise and negating the need for “downstream” algorithms to consider those discarded hits. The

79 magnitude of this effect can be seen in figure 5.5, which demonstrates a ∼5% decrease in reconstruction time with the application of the multi-plane fitter.

Figure 5.5: Histograms of the times required, in a large number of trials, to reconstruct the same 10,000 HAWC data events without the multi-plane fitter (red) and with the multi-plane fitter (blue).

5.3 A New Low-Energy Multi-Plane Fitter Analysis

This section describes a method for binning data and developing a set of gamma/hadron separation cuts for low-energy air showers reconstructed with the multi-plane fitter. It should therefore be assumed that all reconstruction variables in this section (such as the fHit event-size parameter discussed in section 5.2.2) are calculated with the multi-plane fitter applied. In section 5.3.1, a set of fHit thresholds is developed for the new analysis bins. In section 5.3.2, events are further sorted based on whether or not their reconstructed cores lie on or off the HAWC array. Section 5.3.3 explores the MC energy distributions of these new bins and presents the logic for combining higher-energy data samples, in which fine binning is not necessary, for analyses focused primarily on ≲ 1 TeV data. Finally, in section 5.3.4, a new set of gamma/hadron separation cuts is developed.

5.3.1 Setting the fHit Thresholds for Analysis Bins

As discussed in section 5.2.2, applying the multi-plane fitter changes the estimated size of an air shower as measured in fHit, the fraction of available PMTs with a hit, and shifts events down to lower fHit bins. (Note, however, that the removal of noise is not expected to impact HAWC's event-by-event energy estimators [74] or any previously published spectra within their reported energy range.) To address the change in estimated event size, a new set of fHit thresholds is developed here. The same basic procedure used to create the 10 fHit bins in the standard HAWC analysis (see section 3.4) is used: make 10 bins with fHit thresholds such that the event rate in each bin is 50% that of the previous bin. (This provides finer binning in the low fHit regions where HAWC's angular resolution changes more rapidly as a function of fHit.) However, additional attention is paid to where to set the lowest fHit threshold, as this decision impacts how many of the lowest-energy air showers observable by HAWC will pass the final data selection scheme.

Recall that HAWC's air-shower trigger condition, the Simple Multiplicity Trigger (SMT), requires that a certain number of hits, SMT_nHit (which at the time of this writing is 28), arrive within 150 ns. The value of SMT_nHit is intentionally set low to ensure that virtually all air-shower data is recorded. This places the SMT trigger in the "noise dominated regime," where the size of events that pass the trigger is still small enough that most are not actually reconstructible air showers but periods of high detector noise that happen to provide ∼30 hits within 150 ns (noise events). This creates two issues for the lowest fHit bin. First, as you decrease the lowest fHit threshold you enter a regime with a higher proportion of noise events and therefore lower sensitivity. This is not an impediment to including this data as long as these issues are well modeled. The second (more serious) problem is that this noise dominated regime is in fact not well modeled by our Monte Carlo. As discussed in chapter 4, HAWC does not simulate periods of detector noise with air showers thrown in "on top;" we simulate air showers with given distributions of energy, arrival direction, and arrival location, add noise to each shower, and weight events to match the desired astrophysical scenario. Our Monte Carlo is fundamentally not designed to simulate these noise events that dominate very low fHit data. The lowest fHit threshold must therefore be set above the noise dominated regime to allow for proper modeling of data and calculation of systematics. This point can be determined by examining the relative event rates of data and Monte Carlo (MC) as a function of fHit. Figure 5.6 shows data and MC fHit distributions and their relative rates. There is a clear discrepancy below an fHit of 0.03. The constant offset above 0.03 is a stable feature of a recent HAWC Monte Carlo software release (used throughout the remainder of this thesis) in all higher-energy data and is under investigation by the collaboration. Figures 5.7 – 5.9 examine whether the low fHit discrepancy can be resolved by raising the value of SMT_nHit in HAWC's air-shower trigger. However, as can be seen, the relative data and MC rates remain unstable below fHit = 0.03.

Figure 5.6: Left: fHit (the fraction of available PMTs that record a hit) distributions for data and Monte Carlo. Right: the number of data events in each fHit bin divided by the number of MC events in each bin. The difference in event rates between data and MC stabilizes at ∼50% above an fHit of 0.03.

The fHit < 0.03 data, denoted as bin -1, is therefore discarded. Bin -1 accounts for the first ∼50% of HAWC data; the upper fHit threshold of the next bin (bin 0) required to cover the next 50% of HAWC data is 0.05. As can be seen from figure 5.10, bin -1 does not cover an energy range unseen by bin 0 above ∼100 GeV (where HAWC's sensitivity picks up). This, along with HAWC's poor sensitivity in very low fHit data, means that there is little to be gained by attempting to retrieve air showers from bin -1 in the first place. Bin 0 is therefore taken to be the first fHit bin in our multi-plane fitter analysis. As discussed at the beginning of this section, the fHit thresholds for the remaining 9 bins are set to make the event rate of each 50% that of the previous bin. (Recall that this provides finer binning at low fHit values where HAWC's angular resolution changes more rapidly – see, for example, figure 5.3.) Table 5.1 shows the fHit thresholds needed to meet this rate requirement.
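The rate-halving construction can be sketched directly: above the fixed lowest threshold, the upper edge of each bin sits at the fHit value below which half of the still-unbinned events lie. The sketch below operates on a generic event sample; the function and variable names are illustrative.

```python
import numpy as np

def halving_thresholds(fhit, t0=0.03, n_bins=10):
    """Lower fHit edges for `n_bins` bins above `t0`, chosen so the event
    rate in each bin is 50% that of the previous bin (last bin open-ended)."""
    vals = np.sort(fhit[fhit >= t0])
    # Bin k holds a fraction (1/2)**(k+1) of the events above t0, so the
    # upper edge of bin k lies at cumulative fraction 1 - (1/2)**(k+1).
    fracs = 1.0 - 0.5 ** np.arange(1, n_bins)
    edges = [t0] + [vals[int(f * (len(vals) - 1))] for f in fracs]
    return edges
```

Note that with a finite sample the open-ended last bin necessarily holds about as many events as the bin before it, so the exact halving holds only for the closed bins.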

Figure 5.7: The ratio of event rates in data to Monte Carlo in small fHit bins with SMT_nHit set to 28. SMT_nHit is a parameter in HAWC’s air-shower trigger, the Simple Multiplicity Trigger: HAWC triggers on events that yield at least SMT_nHit hits within 150 ns.

Figure 5.8: The ratio of event rates in data to Monte Carlo in small fHit bins with SMT_nHit set to 29. SMT_nHit is a parameter in HAWC’s air-shower trigger, the Simple Multiplicity Trigger: HAWC triggers on events that yield at least SMT_nHit hits within 150 ns.

Figure 5.9: The ratio of event rates in data to Monte Carlo in small fHit bins with SMT_nHit set to 30. SMT_nHit is a parameter in HAWC's air-shower trigger, the Simple Multiplicity Trigger: HAWC triggers on events that yield at least SMT_nHit hits within 150 ns.

Figure 5.10: The energy distributions of simulated gamma-ray air showers that fall into bin -1 (0 < fHit ≤ 0.03) and bin 0 (0.03 < fHit ≤ 0.05). Bin -1 does not significantly cover any energies not seen by bin 0.

fHit Thresholds for a 10 bin Multi-Plane Fitter Analysis

Bin     Definition
Bin 0   0.03 ≤ fHit < 0.05
Bin 1   0.05 ≤ fHit < 0.08
Bin 2   0.08 ≤ fHit < 0.13
Bin 3   0.13 ≤ fHit < 0.20
Bin 4   0.20 ≤ fHit < 0.30
Bin 5   0.30 ≤ fHit < 0.42
Bin 6   0.42 ≤ fHit < 0.54
Bin 7   0.54 ≤ fHit < 0.67
Bin 8   0.67 ≤ fHit < 0.80
Bin 9   0.80 ≤ fHit

Table 5.1: The fHit thresholds in a 10 bin multi-plane fitter analysis (beginning at fHit = 0.03) required to make the event rate in each bin 50% that of the previous bin.

5.3.2 On-Array vs. Off-Array Events

This section examines whether or not to include off-array events (events with a reconstructed core not on the main HAWC array) in the bins developed in section 5.3.1. As this procedure was developed for use in a point-source analysis, core cuts are optimized for point-source sensitivity. This is relevant because events that land off the array have worse angular resolution than those that land on the array. If including off-array events broadens the detector's point-spread function beyond the angular extent of the source it is probing, its sensitivity will go down. It is likely that the off-array events discarded in the procedure developed here will be beneficial in analyses of extended sources. The point at which including air-shower events that land at a distance from the detector begins to harm HAWC's point-source sensitivity is determined by examining HAWC's predicted Crab Nebula detection significance (see section 5.6 for calculation details) as a function of the reconstructed core fiducial scale variable, rec.coreFiduScale. This parameter provides a measure of the distance of an air shower's reconstructed core from the edge of the HAWC array, with a value of 100 denoting the edge of the array, larger values off the array, and smaller values on the array. See figure 5.11 for a precise definition.

Figure 5.11: A diagram illustrating the rec.coreFiduScale variable, a measure of the distance of the reconstructed core of an air shower to the edge of the HAWC array. Specifically, it is defined as the linear scale factor for the size of the 2-D shape outlining the edge of the HAWC array on which the reconstructed core is located. A value of 100 is chosen for the rec.coreFiduScale value denoting the edge of the array. The red outlines show possible coordinates for positions with three given rec.coreFiduScale values (50, 100, and 150). Small black dots are PMTs in the main HAWC array.
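Under the definition in figure 5.11, coreFiduScale is 100 times the factor by which the array outline must be scaled, about its centroid, to pass through the core position. For a convex outline this can be computed by intersecting the ray from the centroid through the core with the outline. The sketch below is illustrative, not HAWC code; the polygon handling and names are assumptions.

```python
import numpy as np

def core_fidu_scale(core, outline):
    """coreFiduScale for a convex polygon `outline` (N x 2 vertex array):
    100 * the scale factor, about the outline's centroid, that places the
    outline boundary on `core`. 100 = on the edge, >100 = off the array."""
    c = outline.mean(axis=0)          # centroid of the outline vertices
    d = core - c                      # ray direction from centroid to core
    for i in range(len(outline)):
        a, b = outline[i], outline[(i + 1) % len(outline)]
        e = b - a
        m = np.column_stack([d, -e])  # solve c + u*d = a + v*e for (u, v)
        try:
            u, v = np.linalg.solve(m, a - c)
        except np.linalg.LinAlgError:
            continue                  # ray parallel to this edge
        if u > 0 and 0 <= v <= 1:
            return 100.0 / u          # boundary hit at c + u*d -> scale = 1/u
    raise ValueError("core coincides with centroid or outline is degenerate")
```

For example, with a square outline, a core halfway between the centroid and an edge gives a value of 50, and a core twice the edge distance gives 200.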

Figure 5.12 (left) shows HAWC's predicted 1 year Crab significance as a function of a cut on the rec.coreFiduScale variable for bin 0. This "cut" removes off-array events with rec.coreFiduScale above some cut value. As events that land further from the edge of the array are included, the predicted sensitivity rises up to a rec.coreFiduScale cut of ∼150, where it levels off. Figure 5.12 (right) shows a 2-D histogram of a collection of simulated gamma-ray air showers' reconstructed core fiducial scale values (rec.coreFiduScale) vs. their true (simulated) core fiducial scale values (mc.coreFiduScale). There is a group of events with true cores on the array (with mc.coreFiduScale values between ∼50 and ∼100) but whose reconstructed cores lie off the array (with rec.coreFiduScale values between ∼100 and ∼150). The additional sensitivity acquired by letting in events with rec.coreFiduScale values between ∼100 and ∼150 likely comes in part from this group of true on-array, reconstructed off-array events.

Bin 1 (figure 5.13) looks much the same as bin 0. In bin 2 (figure 5.14), the predicted Crab sensitivity peaks at a rec.coreFiduScale value of ∼105; including events that land more than a few meters from the edge of the detector no longer provides a boost to the predicted Crab signal. Here the true on-array, reconstructed off-array events account for only a small fraction of the total 100 < rec.coreFiduScale < 150 sample. In bin 3 (figure 5.15) the peak at rec.coreFiduScale = 105 becomes more pronounced. The predicted Crab significance also peaks at a rec.coreFiduScale cut of 105 in bins 4 – 9.

Figure 5.12: An examination of core positions for bin 0 air showers. Left: predicted 1 year Crab significance as a function of a cut on core position calculated with simulated gamma rays and data background. Right: reconstructed core position vs. true (simulated) core position for simulated gamma-ray showers. See figure 5.11 for a definition of the coreFiduScale variable.

Figure 5.13: An examination of core positions for bin 1 air showers. Left: predicted 1 year Crab significance as a function of a cut on core position calculated with simulated gamma rays and data background. Right: reconstructed core position vs. true (simulated) core position for simulated gamma-ray showers. See figure 5.11 for a definition of the coreFiduScale variable.

Figure 5.14: An examination of core positions for bin 2 air showers. Left: predicted 1 year Crab significance as a function of a cut on core position calculated with simulated gamma rays and data background. Right: reconstructed core position vs. true (simulated) core position for simulated gamma-ray showers. See figure 5.11 for a definition of the coreFiduScale variable.

Figure 5.15: An examination of core positions for bin 3 air showers. Left: predicted 1 year Crab significance as a function of a cut on core position calculated with simulated gamma rays and data background. Right: reconstructed core position vs. true (simulated) core position for simulated gamma-ray showers. See figure 5.11 for a definition of the coreFiduScale variable.

A cut of rec.coreFiduScale ≤ 105 is therefore built into the definitions of bins 2 – 9. In bins 0 and 1, where including off-array events boosts the predicted Crab signal up to rec.coreFiduScale ≈ 150, the on-array (0 ≤ rec.coreFiduScale < 100) and off-array (100 ≤ rec.coreFiduScale < 150) events are separated into different bins. This is done because the off-array events have a different angular resolution and energy distribution. (With the core off the array, the charge gradient of shower hits on the array is less pronounced, making it more difficult to accurately locate the core and properly reconstruct the air shower. And an off-array shower that leaves the same number of hits in the detector – has the same fHit – as a shower that actually lands on the detector must be larger, and must therefore come from a higher-energy primary gamma/cosmic ray.) The resulting modified bin definitions with built-in core cuts are shown in table 5.2.

fHit and coreFiduScale Thresholds for a 12 bin Multi-Plane Fitter Analysis

Bin     fHit thresholds      coreFiduScale thresholds
Bin 0a  0.03 ≤ fHit < 0.05   coreFiduScale < 100
Bin 0b  0.03 ≤ fHit < 0.05   100 ≤ coreFiduScale < 150
Bin 1a  0.05 ≤ fHit < 0.08   coreFiduScale < 100
Bin 1b  0.05 ≤ fHit < 0.08   100 ≤ coreFiduScale < 150
Bin 2   0.08 ≤ fHit < 0.13   coreFiduScale ≤ 105
Bin 3   0.13 ≤ fHit < 0.20   coreFiduScale ≤ 105
Bin 4   0.20 ≤ fHit < 0.30   coreFiduScale ≤ 105
Bin 5   0.30 ≤ fHit < 0.42   coreFiduScale ≤ 105
Bin 6   0.42 ≤ fHit < 0.54   coreFiduScale ≤ 105
Bin 7   0.54 ≤ fHit < 0.67   coreFiduScale ≤ 105
Bin 8   0.67 ≤ fHit < 0.80   coreFiduScale ≤ 105
Bin 9   0.80 ≤ fHit          coreFiduScale ≤ 105

Table 5.2: Bin definitions for a 12 bin multi-plane fitter analysis. The fHit thresholds are set to make the event rate in each bin 50% that of the previous bin. See text for an explanation of the coreFiduScale thresholds.
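The selection logic of table 5.2 amounts to a small lookup: digitize on fHit, then apply the per-bin core condition, splitting bins 0 and 1 into on-array and off-array sub-bins. A sketch, with an illustrative function name:

```python
import numpy as np

# Lower fHit edges of bins 0 - 9 from table 5.2
FHIT_EDGES = [0.03, 0.05, 0.08, 0.13, 0.20, 0.30, 0.42, 0.54, 0.67, 0.80]

def assign_bin(fhit, cfs):
    """Bin label from table 5.2 for an event with the given fHit and
    coreFiduScale (cfs), or None if the event fails all selections."""
    if fhit < FHIT_EDGES[0]:
        return None                        # bin -1: noise dominated, discarded
    k = np.digitize(fhit, FHIT_EDGES) - 1  # 0..9
    if k <= 1:                             # bins 0 and 1: on/off-array split
        if cfs < 100:
            return f"{k}a"
        if cfs < 150:
            return f"{k}b"
        return None
    return str(k) if cfs <= 105 else None
```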

5.3.3 Combining High-Energy Bins

To simplify the low-energy multi-plane fitter analysis developed here, high-energy (> 1 TeV) bins are combined into a single overflow bin. If significant emission is detected in this high-energy bin, that data should be re-analyzed with HAWC's energy estimators [74] (which do not have sufficient resolution for use below ∼1 TeV) rather than a simple fHit binning approach.

The energy distributions of the bins defined in table 5.2 and plotted in figure 5.17 are calculated from Monte Carlo gamma rays simulated with isotropic arrival directions and a flux of 3.6 · 10^−11 TeV^−1 cm^−2 s^−1 sr^−1 (E/TeV)^−2. To determine which of these bins are > 1 TeV, two characteristic bin energies are defined and plotted in figure 5.16. By bin 4, gamma rays (with the above simulated flux) have an average energy of ∼3 TeV, and less than 10% of them have an energy of < 1 TeV. Bins 4 – 9 are therefore considered the > 1 TeV bins, and data from bins 5 – 9 are combined with bin 4. The new and final bin definitions with this modification (bin 4 as an fHit overflow bin) are given in table 5.3, and their energy distributions are plotted in figure 5.18.

Figure 5.16: Left: the Bin 0a MC energy distribution from figures 5.17 and 5.18 with two characteristic bin energies illustrated. The “90% energy” is defined as the energy above which 90% of the distribution lies. The “peak energy” is, of course, the energy at which the distribution peaks. Right: These two characteristic energies as a function of bin number.
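Both characteristic energies are simple functionals of the MC energy distribution: the 90% energy is the 10th percentile of the event energies, and the peak energy is the center of the fullest histogram bin. A sketch, assuming unweighted events and log-spaced bins (the function name and binning are illustrative):

```python
import numpy as np

def characteristic_energies(energies_tev, n_bins=50):
    """Return (e90, e_peak): the energy above which 90% of events lie,
    and the energy at which the log-binned distribution peaks."""
    e90 = np.percentile(energies_tev, 10)      # 90% of events lie above this
    edges = np.logspace(np.log10(energies_tev.min()),
                        np.log10(energies_tev.max()), n_bins + 1)
    counts, _ = np.histogram(energies_tev, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric (log) bin centers
    return e90, centers[np.argmax(counts)]
```

In a weighted analysis the percentile and histogram would use the event weights, but the definitions are otherwise unchanged.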

Figure 5.17: Monte Carlo energy distributions of simulated gamma rays in the 12 bins defined in table 5.2. Left: gamma rays are simulated with isotropic arrival directions and a flux of 3.6 · 10^−11 TeV^−1 cm^−2 s^−1 sr^−1 (E/TeV)^−2. Right: distributions from the left-hand plot scaled so that each peaks at one.

Figure 5.18: Monte Carlo energy distributions of simulated gamma rays in the 7 bins defined in table 5.3. Left: gamma rays are simulated with isotropic arrival directions and a flux of 3.6 · 10^−11 TeV^−1 cm^−2 s^−1 sr^−1 (E/TeV)^−2. Right: distributions from the left-hand plot scaled so that each peaks at one.

fHit and coreFiduScale Thresholds for a 7 bin Multi-Plane Fitter Analysis

Bin     fHit thresholds      coreFiduScale thresholds
Bin 0a  0.03 ≤ fHit < 0.05   coreFiduScale < 100
Bin 0b  0.03 ≤ fHit < 0.05   100 ≤ coreFiduScale < 150
Bin 1a  0.05 ≤ fHit < 0.08   coreFiduScale < 100
Bin 1b  0.05 ≤ fHit < 0.08   100 ≤ coreFiduScale < 150
Bin 2   0.08 ≤ fHit < 0.13   coreFiduScale ≤ 105
Bin 3   0.13 ≤ fHit < 0.20   coreFiduScale ≤ 105
Bin 4   0.20 ≤ fHit          coreFiduScale ≤ 105

Table 5.3: Bin definitions for a low-energy, 7 bin multi-plane fitter analysis arrived at by taking the bins from table 5.2 and collapsing the high-energy (≳ 1 TeV) data into one overflow bin (bin 4).

5.3.4 Gamma/Hadron Separation Cuts for New Bins

A set of gamma/hadron separation cuts (conditions for removing background events) is developed here for the bins defined in table 5.3. The effectiveness of these cuts is evaluated by their impact on Monte Carlo predictions for HAWC's detection significance of the gamma-ray bright Crab Nebula, a common reference source in TeV astronomy. (Calculation details for these predictions are provided in section 5.6.) We will begin by discussing the gamma/hadron separation variables used, then present the procedure used to optimize the cuts.

5.3.4.1 The Compactness and PINCness Variables Revisited

A handful of gamma/hadron separation variables were considered for a low-energy analysis. Ultimately, a better approach than the standard application of a compactness and PINCness cut was not found (however, as discussed below, the definition of compactness was slightly altered). See section 5.7 for a discussion of the other variables considered and justification for their non-inclusion. Recall from our discussion in section 3.5 that PINCness (the Parameter for Identifying Nuclear Cosmic rays) measures how smoothly the charge of air-shower hits varies as you move away from the core location. Cosmic rays tend to be "less smooth" in this regard, with clusters of higher-charge hits present on top of the general pattern of decreasing charge with increasing distance from the core. The compactness variable is designed to directly identify high-charge hits far from the core (produced, for example, by the muons associated with cosmic-ray air showers). It is defined as the charge of the largest hit more than 40 m from the core scaled by air-shower size (to allow for an appropriate definition of what constitutes a "high" charge). See section 3.5 for a more detailed discussion of these variables.

Compactness provides most of the gamma/hadron discrimination power at low energies. The parameters involved in the definition of the compactness variable were therefore fine-tuned for our low-energy analysis. (PINCness, which is more effective at higher energies, is left as defined in section 3.5.) Compactness is defined specifically as nHitSP20/CxPE40, where nHitSP20 is the number of hits within ±20 ns of the reconstructed shower plane and CxPE40 is the charge of the largest hit more than 40 m from the reconstructed core. An attempt was made to re-optimize these numbers (20 ns and 40 m) for low-energy showers. As seen in figure 5.19, HAWC's predicted Crab sensitivity cannot be significantly increased by lowering the 40 m parameter in either the lowest-energy bin used (bin 0a) or the highest (bin 4). However, defining the size of a shower with a stricter in-time cut by lowering the 20 ns parameter to 10 ns provides a very slight gain in both bins. As discussed in section 5.7.1, nHitSP-X is not well modeled for X < 10 ns, so values less than 10 ns were not considered. Compactness was therefore redefined for this low-energy multi-plane fitter analysis as nHitSP10/CxPE40.

Comparisons of the data and Monte Carlo distributions for compactness and PINCness are shown in figures 5.20 – 5.26. Comparing the all-sky, cosmic-ray dominated data and simulated cosmic-ray distributions shows that these variables are reasonably well modeled. (There is an overall underestimation of the MC cosmic-ray rate as discussed in section 5.3.1, but this issue has no bearing on compactness and PINCness specifically.) The MC gamma-ray distributions (simulated with an E^−2 spectrum and isotropic arrival directions) for both variables have significantly different shapes than their cosmic-ray counterparts, demonstrating the utility of compactness and PINCness as gamma/hadron discriminators.
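With the redefined in-time window, the compactness inputs and the event selection reduce to a few lines per event. The sketch below assumes per-hit arrays of time residuals from the fitted plane (ns), charges (PE), and distances from the fitted core (m); the names and the handling of events with no far hit are assumptions, not HAWC code.

```python
import numpy as np

def compactness(dt_ns, charge_pe, r_core_m, t_win=10.0, r_min=40.0):
    """nHitSP10 / CxPE40: in-time hit count divided by the charge of the
    largest hit more than `r_min` meters from the reconstructed core."""
    n_hit_sp = np.sum(np.abs(dt_ns) <= t_win)       # nHitSP10
    far = charge_pe[r_core_m > r_min]
    cxpe = far.max() if far.size else 0.0           # CxPE40
    return n_hit_sp / cxpe if cxpe > 0 else np.inf  # no far hit: gamma-like

def passes_cuts(comp, pinc, comp_cut, pinc_cut):
    """Gamma/hadron selection: keep events with compactness > comp_cut
    and PINCness <= pinc_cut."""
    return (comp > comp_cut) and (pinc <= pinc_cut)
```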

Figure 5.19: Predicted one year Crab significance (calculated with simulated gamma rays and data background) with a compactness cut of nHitSP-X/CxPE-Y > Z for bin 0a (left) and bin 4 (right). nHitSP-X gives the number of hits within ±X nanoseconds of the reconstructed shower plane. CxPE-Y gives the charge of the largest hit more than Y meters from the reconstructed core. Cut values (Z) are chosen by, for each value of X and Y, looping through Z values and taking the one that gives the largest predicted significance.

Figure 5.20: Comparisons of bin 0a data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions.

Figure 5.21: Comparisons of bin 0b data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions.

Figure 5.22: Comparisons of bin 1a data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions.

Figure 5.23: Comparisons of bin 1b data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions.

Figure 5.24: Comparisons of bin 2 data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions.

Figure 5.25: Comparisons of bin 3 data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions.

Figure 5.26: Comparisons of bin 4 data and Monte Carlo distributions for the compactness (left) and PINCness (right) variables. The data and MC cosmic-ray distributions are scaled to 1s of observation. The MC gamma-ray distributions are scaled to have the same area as the MC cosmic-ray distributions.

5.3.4.2 Cut Optimization

Cuts on the compactness and PINCness variables discussed above were optimized, for the 7 bins defined in table 5.3, to maximize predicted Monte Carlo Crab detection significance. The cut values X and Y in the cut compactness > X and PINCness ≤ Y are optimized simultaneously. (Events that fail this "cut" condition are assumed to be background and are ignored.) Plots of predicted Crab significance as a function of X and Y are shown in figures 5.27 – 5.33. Final cut values in bins 2 – 4 were taken from the maxima of these plots. In bins 0a, 0b, 1a, and 1b, a PINCness cut applied alongside a compactness cut cannot significantly boost predicted Crab sensitivity (see figures 5.27 – 5.30, and note that Y → ∞ corresponds to no PINCness cut). Cut values for these bins were therefore obtained by taking the compactness cut that maximizes predicted Crab significance and adding the strictest PINCness cut that retains a 99% cut efficiency for gamma rays surviving the compactness cut. The final data selection scheme for a low-energy multi-plane fitter analysis, with bin definitions and the cuts obtained from the above method, is provided in table 5.4. See section 5.4.1 for an indication of the performance of these cuts in data and section 5.4.2 for a discussion of their gamma-ray efficiencies.
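The simultaneous optimization is a 2-D grid scan: for each (X, Y) pair, apply the cut to simulated gammas and to background, evaluate a figure of merit, and keep the maximum. In the sketch below the full Crab significance calculation of section 5.6 is replaced by the toy quality factor eff_gamma / sqrt(eff_background); the grids and names are likewise illustrative.

```python
import numpy as np

def optimize_cuts(comp_g, pinc_g, comp_b, pinc_b, comp_grid, pinc_grid):
    """Scan (compactness > X, PINCness <= Y) cut pairs; return the pair
    maximizing a toy significance proxy eff_gamma / sqrt(eff_background)."""
    best_q, best_xy = -np.inf, None
    for x in comp_grid:
        for y in pinc_grid:
            eff_g = np.mean((comp_g > x) & (pinc_g <= y))  # gamma efficiency
            eff_b = np.mean((comp_b > x) & (pinc_b <= y))  # background eff.
            if eff_b == 0.0:
                continue  # background-free cells need a better estimator
            q = eff_g / np.sqrt(eff_b)
            if q > best_q:
                best_q, best_xy = q, (x, y)
    return best_xy, best_q
```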

Figure 5.27: Predicted 1 year Crab significance (calculated with simulated gamma rays and data background) in bin 0a as a function of compactness and PINCness cut values. Compactness provides a small boost to predicted sensitivity, but the PINCness cut is ineffective.

Figure 5.28: Predicted 1 year Crab significance (calculated with simulated gamma rays and data background) in bin 0b as a function of compactness and PINCness cut values. Compactness provides a significant boost to predicted sensitivity, but the PINCness cut is ineffective.

Figure 5.29: Predicted 1 year Crab significance (calculated with simulated gamma rays and data background) in bin 1a as a function of compactness and PINCness cut values. Compactness provides a significant boost to predicted sensitivity, but the PINCness cut is ineffective.

Figure 5.30: Predicted 1 year Crab significance (calculated with simulated gamma rays and data background) in bin 1b as a function of compactness and PINCness cut values. Compactness provides a significant boost to predicted sensitivity, but the PINCness cut is ineffective.

Figure 5.31: Predicted 1 year Crab significance (calculated with simulated gamma rays and data background) in bin 2 as a function of compactness and PINCness cut values. Compactness provides a significant boost to predicted sensitivity, but the PINCness cut provides only a very slight gain.

Figure 5.32: Predicted 1 year Crab significance (calculated with simulated gamma rays and data background) in bin 3 as a function of compactness and PINCness cut values. Compactness provides a significant boost to predicted sensitivity, and the PINCness cut provides a small gain.

Figure 5.33: Predicted 1 year Crab significance (calculated with simulated gamma rays and data background) in bin 4 as a function of compactness and PINCness cut values. Both the compactness and PINCness cuts are effective in improving predicted sensitivity.

Bin Definitions and Cuts for a 7 bin Multi-Plane Fitter Analysis

Bin     fHit thresholds      coreFiduScale thresholds     compactness cut (>)   PINCness cut (≤)
Bin 0a  0.03 ≤ fHit < 0.05   coreFiduScale < 100          3.8                   3.5
Bin 0b  0.03 ≤ fHit < 0.05   100 ≤ coreFiduScale < 150    3.8                   2.5
Bin 1a  0.05 ≤ fHit < 0.08   coreFiduScale < 100          5.8                   3.5
Bin 1b  0.05 ≤ fHit < 0.08   100 ≤ coreFiduScale < 150    5.8                   2.5
Bin 2   0.08 ≤ fHit < 0.13   coreFiduScale ≤ 105          8.3                   2.6
Bin 3   0.13 ≤ fHit < 0.20   coreFiduScale ≤ 105          11.5                  2.4
Bin 4   0.20 ≤ fHit          coreFiduScale ≤ 105          13.0                  1.9

Table 5.4: Bin definitions and cuts for a low-energy, 7 bin multi-plane fitter analysis arrived at by taking the bins from table 5.3 and optimizing a set of gamma/hadron separation cuts as described in the text. Note: compactness is not calculated with the standard definition (rec.nHitSP20/rec.CxPE40) but instead uses rec.nHitSP10, as discussed in section 5.3.4.1.

5.4 Performance of the Low-Energy Multi-Plane Fitter Analysis

This section evaluates the performance of the low-energy multi-plane fitter analysis developed in section 5.3 on Crab Nebula data. Section 5.4.1 outlines Crab measurements in each fHit bin used in the analysis, while section 5.4.2 demonstrates a reasonably robust and accurate simulation by comparing these and other measurements to Monte Carlo predictions.

5.4.1 Results with One Year of Crab Data

As the brightest source in HAWC’s field of view and energy range, the Crab Nebula is a great source for testing new reconstruction algorithms with a relatively small data set. This section shows the results of observing the Crab in HAWC’s 2016 data with the low-energy multi-plane fitter analysis. Three detection significance measurements (calculated with the Li & Ma test statistic [81]) are provided for each fHit bin:

1. The significance of the Crab as a function of tophat smoothing angle (signal and background counts for each point on the sky are calculated by including air showers that arrive within this angle of the point in question). This curve peaks at the width of HAWC's PSF, thereby providing a measure of angular resolution. Significance measurements are shown with and without the compactness and PINCness cuts developed in section 5.3.4, providing an indicator of the effectiveness of each.

2. A significance map of the region surrounding the Crab Nebula calculated with the peak smoothing angle from (1).

3. A histogram of the significance values calculated for each map pixel near the declination of the Crab Nebula. This histogram should follow a Gaussian distribution with width 1 (along with excesses in high significance bins corresponding to Crab detections). Plots include this expected distribution along with a Gaussian fit to the significance histogram. Large discrepancies between the fit and the expectation would indicate errors in how significance values are calculated. No such discrepancies were observed.
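The Li & Ma significance used for these measurements (their equation 17) depends only on the on-source counts N_on, the off-source counts N_off, and the on/off exposure ratio α. A direct transcription (the sign convention for deficits is a common extension, not part of the original formula):

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983) eq. 17 detection significance for n_on on-source
    counts, n_off off-source counts, and exposure ratio alpha."""
    if n_on == 0 and n_off == 0:
        return 0.0
    total = n_on + n_off
    t_on = n_on * np.log((1 + alpha) / alpha * n_on / total) if n_on > 0 else 0.0
    t_off = n_off * np.log((1 + alpha) * n_off / total) if n_off > 0 else 0.0
    s = np.sqrt(max(2.0 * (t_on + t_off), 0.0))
    return s if n_on >= alpha * n_off else -s  # sign marks excess vs. deficit
```

With no excess (N_on = αN_off) the significance is zero, and in a background-dominated map the per-pixel values follow the unit-width Gaussian described above.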

Figures 5.34 – 5.40 show these three observations in each fHit bin. The significance measurements indicate, as expected, a better angular resolution at higher fHit bins with larger, more easily reconstructible air showers. The angular resolution of bin 0a (figure 5.34) is shown to be ∼1.4◦, improving to under 0.4◦ by bin 4 (with a rise to ∼1.6◦ for the off-array bin 0b and bin 1b events). Crab detections become correspondingly more significant (from ∼5σ/year in bin 0a to ∼60σ/year in bin 4) with the improved angular resolution and better gamma/hadron separation achievable in larger air showers. The factor by which the gamma/hadron separation cuts improve Crab sensitivity rises from ∼1.5 in bin 1a up to more than 3 in bin 4. Note that while the apparent location of the Crab in the significance maps for event-size bins 0 and 1 is slightly offset from its true position, this offset (the cause of which is not well understood) is small compared to the angular resolution of these bins and is therefore not a cause for significant concern. Lastly, the significance histograms are fit to Gaussians that match the expectation well, demonstrating robust significance calculations.

Figure 5.34: Measurements of the Crab Nebula using all 2016 event data from bin 0a. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 1.4◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges.

Figure 5.35: Measurements of the Crab Nebula using all 2016 event data from bin 0b. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 1.6◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges.

Figure 5.36: Measurements of the Crab Nebula using all 2016 event data from bin 1a. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 0.8◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges.

Figure 5.37: Measurements of the Crab Nebula using all 2016 event data from bin 1b. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 1.6◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges.

Figure 5.38: Measurements of the Crab Nebula using all 2016 event data from bin 2. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 0.6◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges.

Figure 5.39: Measurements of the Crab Nebula using all 2016 event data from bin 3. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 0.4◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges.

Figure 5.40: Measurements of the Crab Nebula using all 2016 event data from bin 4. Top: measured significance at the location of the Crab as a function of tophat smoothing angle with and without the compactness and PINCness gamma/hadron separation cuts. Bottom left: the significance map around the Crab Nebula (calculated in J2000 coordinates with both cuts and a 0.4◦ smoothing angle). The cross denotes the actual location of the Crab. Bottom right: a histogram of significances (calculated with both cuts) measured in every pixel near the declination of the Crab nebula. Excesses above the Gaussian fit at large significance values are Crab detections. The “spike” at 0 is a software artifact stemming from empty pixels at map edges.

5.4.2 Agreement with MC Predictions of Crab Observations

Here, a variety of Crab Nebula measurements (with and without gamma/hadron separation cuts) are compared to Monte Carlo predictions to better understand the low-energy multi-plane fitter analysis and provide a sanity check of high-level modeling. (See section 5.6 for a discussion on how Crab gamma-ray excess count and detection significance predictions were made.) As in section 5.4.1, observations are made using HAWC’s 2016 data set. The following measurements are reported for each bin of the low-energy multi-plane fitter analysis (see figure 5.41 for items 1 – 3 and figure 5.42 for item 4):

1. The total number of gamma rays from the Crab Nebula. Data results are obtained by integrating a Gaussian fit to the Crab excess (the observed number of air showers minus the cosmic-ray background estimate) using air showers reconstructed within 3◦ of the Crab. MC results are taken from a simple count of simulated gamma rays weighted to match the expected Crab spectrum.

2. Angular resolution. Defining ∆θ as the angle between the true (simulated) and reconstructed shower axes for a simulated air-shower event, MC predictions are calculated as the minimum angle θr for which 68% of simulated showers (corresponding to ∼1σ in a Gaussian distribution) have ∆θ ≤ θr. Data results are provided for two methods: taking the standard deviation of the Gaussian fit to the Crab excess described in (1) and finding the peak of the significance vs. smoothing angle curves shown in section 5.4.1.

3. The detection significance of the Crab Nebula after ∼1 year of observation. The significance observed in data was calculated with the Li & Ma test statistic [81] using signal and background counts obtained by summing over all events reconstructed within θr of the Crab, with θr the angular resolution for a given bin (calculated from the peak significance method). Predictions are calculated as signal/√background using gamma-ray MC (signal) and data background (see section 5.6 for details).

4. Gamma-ray efficiency (the fraction of gamma-ray events that survive a cut) for the PINCness and compactness gamma/hadron separation cuts. Results are shown for the application of both cuts together and each individually. Efficiencies for both data and MC are calculated by taking the ratio of the corresponding excess counts from (1) with and without cuts.
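The conventions in items 2 and 3 above can be sketched as follows. This is an illustrative sketch under the stated definitions (68% containment and signal/√background), not HAWC software, and the event lists passed in are hypothetical:

```python
import math

def containment_radius(delta_thetas, fraction=0.68):
    """Angular resolution: the minimum angle containing the given
    fraction of simulated events (68% corresponds to ~1 sigma for a
    Gaussian PSF). delta_thetas holds the true-vs-reconstructed
    angular separations of simulated showers."""
    ordered = sorted(delta_thetas)
    # Index at which the requested fraction of events is contained
    k = max(int(math.ceil(fraction * len(ordered))) - 1, 0)
    return ordered[k]

def predicted_significance(signal, background):
    """signal/sqrt(background), with the signal count from gamma-ray
    MC and the background count measured from data."""
    return signal / math.sqrt(background)
```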

Figures 5.41 and 5.42 show good data/MC agreement for all of the above measurements. The observed excess in bin 0a Crab data without gamma/hadron separation cuts is consistent with 0, indicating a non-detection in this sample. This is not, in fact, in conflict with the predictions: the MC excess is a simple count of simulated gamma rays and therefore does not rely on the significant excess above the cosmic-ray background needed to draw conclusions from data. The slightly larger Crab excess in high fHit bins (and correspondingly higher detection significance) seen in data over MC likely results from using a Crab spectrum that is inaccurate at high energies for MC predictions (see section 5.6) and is therefore not a cause for concern. Also of note is the result that the optimal compactness and PINCness cuts provide gamma-ray efficiencies that are above 50% in all fHit bins and significantly higher in low fHit bins.

Figure 5.41: Comparisons of Crab measurements (using all HAWC data from 2016) to MC predictions for each bin of the low-energy multi-plane fitter analysis. Top: angular resolution without (left) and with (right) gamma/hadron separation cuts. Middle: observed excess (number of air showers minus cosmic-ray background estimate, a measure of the number of detected gamma rays) without (left) and with (right) cuts. Bottom: ∼1 year Crab significance without (left) and with (right) cuts. See text for details on how the reported results were calculated.

Figure 5.42: Gamma-ray efficiencies for the gamma/hadron separation cuts used in the low-energy multi-plane fitter analysis. Data results are obtained by taking the ratio of observed Crab excess measurements (plotted in figure 5.41) with and without cuts. As no excess was detected in bin 0a data without cuts, efficiency observations are not reported for this bin. Efficiencies are shown for the application of both the compactness and PINCness cuts (top) and for the two cuts individually (bottom).

5.5 Improvements to Low-Energy Sensitivity

This section compares the low-energy sensitivity achievable with the multi-plane fitter analysis developed in section 5.3 to that of HAWC’s standard 10 event-size bin analysis (see section 3.4). Recall that the first of those 10 bins (bin 0, containing the smallest, lowest-energy air showers) has, in the past, been ignored due to now resolved modeling issues (see chapter 4). This was done in [72], the paper on HAWC’s measurements of the Crab Nebula that first presented the event-size bin analysis. Here, the new techniques developed in section 5.3 are compared to this Crab paper analysis with two additions: bin 0 is added back in, and core location cuts are applied to match those developed in section 5.3.2. The purpose is not to compare the new analysis to the “best” one can do with the current reconstruction (see section 5.8), but to compare it to one that is mostly identical except for the use of the multi-plane fitter (the analysis in [72] also uses compactness and PINCness as gamma/hadron separation variables). The comparison between the new Low-Energy Multi-Plane Fitter (LEMPF) analysis and this Extended Crab Paper (ECP) analysis (the standard 9-bin analysis from the HAWC Crab paper plus bin 0 and core cuts) thus provides an indication of the improvement to low-energy sensitivity obtained from the addition of the multi-plane fitter. The compactness and PINCness cuts for the re-introduced bin 0 in the ECP analysis are optimized here using the same method employed in section 5.3.4 for the LEMPF analysis. A plot of predicted one year Crab significance as a function of compactness (calculated as nHitSP10/CxPE40 – again, see section 5.3.4) and PINCness cut values for ECP bin 0 is shown in figure 5.43. The compactness cut of 4.0 at the peak is chosen.
A PINCness cut, which again provides no additional predicted gain to Crab sensitivity, is chosen by taking the strictest cut possible while maintaining a gamma-ray efficiency of 99% (as is done for the lowest-energy LEMPF bins in section 5.3.4). The definitions of the bins used in the ECP analysis with these additions are provided in table 5.5. Note that the multi-plane fitter is not applied in the ECP analysis, so fHit (the fraction of available PMTs that record a hit in an air-shower event) is calculated differently (as discussed in section 5.2.2); this is why the fHit bin thresholds for the ECP analysis are different.
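The cut optimization described above can be sketched as a two-dimensional grid scan that maximizes the predicted significance. The event tuples and grids below are hypothetical stand-ins for the MC gamma-ray and data background samples used in the actual procedure:

```python
import math

def optimize_cuts(gammas, background, compactness_grid, pincness_grid):
    """Scan compactness (keep events ABOVE the cut) and PINCness (keep
    events BELOW the cut) values, maximizing signal/sqrt(background).

    gammas, background: lists of (compactness, pincness) tuples, with
    gammas from simulation and background from off-source data.
    Returns (best predicted significance, compactness cut, PINCness cut).
    """
    best = (-1.0, None, None)
    for c_cut in compactness_grid:
        for p_cut in pincness_grid:
            s = sum(1 for c, p in gammas if c > c_cut and p <= p_cut)
            b = sum(1 for c, p in background if c > c_cut and p <= p_cut)
            if s == 0 or b == 0:
                continue
            sig = s / math.sqrt(b)
            if sig > best[0]:
                best = (sig, c_cut, p_cut)
    return best
```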

Figure 5.43: Predicted 1 year bin 0 Crab significance in the Extended Crab Paper analysis. PINCness and compactness (nHitSP10/CxPE40) cuts are applied and significance is plotted as a function of the cut values.

fHit and coreFiduScale Binning Thresholds for the Extended Crab Paper (ECP) Analysis

Bin    | fHit thresholds       | coreFiduScale thresholds | compactness cut (>) | PINCness cut (≤)
Bin 0  | 0.044 ≤ fHit < 0.067  | coreFiduScale < 150      | 4.0                 | 3.5
Bin 1  | 0.067 ≤ fHit < 0.105  | coreFiduScale < 150      | 7.0                 | 2.2
Bin 2  | 0.105 ≤ fHit < 0.162  | coreFiduScale ≤ 105      | 9.0                 | 3.0
Bin 3  | 0.162 ≤ fHit < 0.247  | coreFiduScale ≤ 105      | 11.0                | 2.3
Bin 4  | 0.247 ≤ fHit < 0.356  | coreFiduScale ≤ 105      | 15.0                | 1.9
Bin 5  | 0.356 ≤ fHit < 0.485  | coreFiduScale ≤ 105      | 18.0                | 1.9
Bin 6  | 0.485 ≤ fHit < 0.618  | coreFiduScale ≤ 105      | 17.0                | 1.7
Bin 7  | 0.618 ≤ fHit < 0.740  | coreFiduScale ≤ 105      | 15.0                | 1.8
Bin 8  | 0.740 ≤ fHit < 0.840  | coreFiduScale ≤ 105      | 15.0                | 1.8
Bin 9  | 0.840 ≤ fHit          | coreFiduScale ≤ 105      | 3.0                 | 1.6

Table 5.5: Bin definitions and cuts used in the Extended Crab Paper (ECP) analysis. Note that compactness is defined as nHitSP20/CxPE40 in bins 1 – 9 and nHitSP10/CxPE40 in bin 0 (see section 5.3.4.1 for details).
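Event-to-bin assignment under table 5.5 can be sketched as a simple lookup over the ECP fHit thresholds. The function and constant names below are illustrative, not HAWC software:

```python
# Lower fHit edges of ECP bins 0-9, taken from table 5.5
ECP_FHIT_EDGES = [0.044, 0.067, 0.105, 0.162, 0.247, 0.356,
                  0.485, 0.618, 0.740, 0.840]

def ecp_bin(fhit, core_fidu_scale):
    """Return the ECP bin index for an event, or None if it fails the
    fHit or coreFiduScale (core location) selection of table 5.5."""
    if fhit < ECP_FHIT_EDGES[0]:
        return None
    b = 0
    for i, lo in enumerate(ECP_FHIT_EDGES):
        if fhit >= lo:
            b = i
    # Core cut: coreFiduScale < 150 for bins 0-1, <= 105 for bins 2-9
    if b <= 1:
        return b if core_fidu_scale < 150 else None
    return b if core_fidu_scale <= 105 else None
```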

5.5.1 Predicted Sensitivity Gain to Point Sources with a Low-Energy Cut-off

The introduction of the multi-plane fitter will likely be most consequential in the analysis of sources with a low-energy cut-off that may not otherwise be detectable by HAWC. The improvement to HAWC’s sensitivity to such a source from the addition of the multi-plane fitter is therefore studied here. Figure 5.44 compares the predicted detection significance per transit to a source with a low-energy cut-off as a function of cut-off energy Ec for the LEMPF and ECP analyses. The source is simulated at declination 22◦ with a flux of dN/dE = φ0 (E/TeV)^α e^(−E/Ec), where φ0 = 3.45 · 10^−11 TeV^−1 cm^−2 s^−1 (roughly the flux at 1 TeV from the Crab Nebula), and results are shown for α = −2 and α = −3. For both analyses, significances are calculated in each bin as signal/√background (where the signal count is obtained from simulated gamma rays and the background count from data, see section 5.6 for details) and added in quadrature.
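The simulated flux and the quadrature combination of per-bin significances can be written out explicitly. This is a sketch of the stated formulas; the per-bin counts fed to it are hypothetical:

```python
import math

PHI_0 = 3.45e-11  # TeV^-1 cm^-2 s^-1, roughly the Crab flux at 1 TeV

def flux(energy_tev, alpha, e_cut_tev):
    """dN/dE for a power law with an exponential cut-off at e_cut_tev."""
    return PHI_0 * energy_tev ** alpha * math.exp(-energy_tev / e_cut_tev)

def combined_significance(bin_signals, bin_backgrounds):
    """Add per-bin signal/sqrt(background) significances in quadrature."""
    return math.sqrt(sum((s / math.sqrt(b)) ** 2
                         for s, b in zip(bin_signals, bin_backgrounds)))
```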

Figure 5.44: Left: predicted significance per transit (calculated with MC signal and data background) for a source at declination 22◦ with flux dN/dE = φ0 (E/TeV)^α e^(−E/Ec), where φ0 = 3.45 · 10^−11 TeV^−1 cm^−2 s^−1. Curves are plotted as a function of cut-off energy (Ec) for two values of α and for both the ECP and LEMPF analyses. Right: the ratio of the significance results obtained with the LEMPF analysis to those obtained with the ECP analysis.

As expected, the LEMPF analysis is shown to be more sensitive except for harder spectra (α = −2) with high cut-off energies (Ec ≳ 3 TeV) where the combination of high-energy bins discussed in section 5.3.3 lowers sensitivity to high-energy air showers. For softer spectra with cut-off energies as low as 100 GeV, the predicted sensitivity gain achieved with the LEMPF analysis reaches ∼50%. Given the robust nature of such high level MC predictions demonstrated in section 5.4.2, this constitutes a strong argument for employing the multi-plane fitter in the analysis of sources with a low-energy cut-off.

5.5.2 Sensitivity Gain in Low-fHit Crab Data

To test the predicted gain to low-energy sensitivity found in section 5.5.1, the significance of Crab Nebula detections in low-fHit (small air-shower) 2016 data is examined here under the LEMPF and ECP analyses. To ensure an identical data sample is used in the comparison, ECP bins 0 – 3 are chosen for study. Figures 5.45 – 5.48 show significance maps of the Crab Nebula in each of these data samples under the two analyses. As the data selection is defined by the ECP bins, the ECP significance maps are obtained after simply applying the ECP gamma/hadron separation cuts for the selected bin. In the LEMPF analysis, air showers reconstructed in the selected ECP bin are re-processed with the multi-plane fitter and sorted into their LEMPF bins, the LEMPF cuts are applied, and the surviving events are used to create one combined map. HAWC is clearly much more sensitive to small, low-fHit air showers with the LEMPF analysis. Introducing the multi-plane fitter boosts the significance of Crab detections by ∼40% in ECP bin 0, ∼25% in ECP bins 1 and 2, and ∼3% in ECP bin 3. (See figure 5.49 for MC predictions of the gamma-ray energy distributions of these data sets under the assumed Crab spectrum from section 5.6.) As expected, the impact is strongest in the lower-fHit data where a larger fraction of hits in an air-shower event is produced by detector noise. The difference in sensitivity between the two analyses largely disappears by ECP bin 3.

Figure 5.45: Crab Nebula significance maps for all events from 2016 that fall into bin 0 in the ECP data selection scheme. Events are analyzed with the ECP (left) and LEMPF (right) methods. The LEMPF analysis provides a ∼40% boost to ECP bin 0 Crab sensitivity.

Figure 5.46: Crab Nebula significance maps for all events from 2016 that fall into bin 1 in the ECP data selection scheme. Events are analyzed with the ECP (left) and LEMPF (right) methods. The LEMPF analysis provides a ∼25% boost to ECP bin 1 Crab sensitivity.

Figure 5.47: Crab Nebula significance maps for all events from 2016 that fall into bin 2 in the ECP data selection scheme. Events are analyzed with the ECP (left) and LEMPF (right) methods. The LEMPF analysis provides a ∼25% boost to ECP bin 2 Crab sensitivity.

Figure 5.48: Crab Nebula significance maps for all events from 2016 that fall into bin 3 in the ECP data selection scheme. Events are analyzed with the ECP (left) and LEMPF (right) methods. The LEMPF analysis provides a ∼3% boost to ECP bin 3 Crab sensitivity.

Figure 5.49: Predicted Monte Carlo energy distributions of gamma rays from the Crab Nebula that fall into ECP bins 0 – 3. To allow for easy comparison of bin energy ranges, the distributions are scaled so that each peaks at one.

5.6 MC Crab Predictions

This section motivates the spectral assumption used in all Monte Carlo (MC) Crab Nebula predictions presented in this chapter and provides additional details on the calculation of significance predictions.

5.6.1 Significance and Excess Calculations

The detection significance for an expected excess of S signal counts on top of a background of B background counts is approximated as S/√B. To avoid inaccuracies in the simulation, background counts are obtained directly from data by measuring the cosmic-ray event rate from the Crab’s declination. The signal counts for Crab excess and significance predictions are obtained by simulating the detector’s response to gamma rays originating from the Crab Nebula’s position (at a declination of 22◦) as it transits across the sky; the spectral assumptions used for this calculation are discussed in section 5.6.2. HAWC’s point-spread function is modeled by only counting events that are reconstructed within an angle θr of their true (simulated) direction, where θr is the angular resolution of the data sample (e.g. event-size bin) in question.

5.6.2 Spectral Assumptions

Due to now resolved modeling issues in our small air-shower data (see chapter 4), HAWC’s Crab Nebula spectral measurements [72] were made without the low-energy “bin 0” data (defined in section 3.4) of particular interest in our efforts to improve HAWC’s low-energy sensitivity. This measurement, therefore, does not extend below ∼1 TeV. For the purpose of optimizing cuts and providing sanity checks on high level modeling, we assume a spectrum for the Crab Nebula (rather than fitting one with a large low-energy data set for every such test) using results from other experiments. The default Crab spectrum, hereafter referred to as the reference spectrum, used in HAWC simulations of transiting sources is:

dN/(dE dA dt) = (3.45 · 10^−11 TeV^−1 cm^−2 s^−1) (E/TeV)^−2.63 .    (5.1)

This was not intended to represent an accurate measurement, but a simple model that would provide decent “ball park” estimates at higher energies. It is a reasonable choice above a few TeV, but turns out to be very inaccurate at the lowest energies observable by HAWC (∼100 GeV – 1 TeV). Figure 5.50 compares this reference spectrum to those measured by MAGIC [82] and HAWC [72].

Figure 5.50: A comparison of the reference power-law Crab spectrum (see text) with the log-parabola Crab spectra measured by HAWC [72] and MAGIC [82]. Systematics bands are shown for both fits, along with MAGIC’s measured flux points. As the HAWC analysis employed event-size bins rather than energy bins (see [72] for details), HAWC flux points were not calculated.

The Crab flux measured by MAGIC is significantly lower than the reference spectrum below 1 TeV. As discussed above, the Crab spectrum reported by HAWC in [72] does not extend below this energy. With HAWC’s < 1 TeV data included, figure 5.51 shows HAWC’s low-energy Crab data to be inconsistent with the reference spectrum and consistent with the MAGIC result. This plot compares HAWC’s measured ∼1 year Crab photon count (calculated from a Gaussian fit to excess events within 3◦ of the Crab) to the excess predictions obtained in simulation for the reference and measured MAGIC spectra. Measurements are shown for each bin of the multi-plane fitter analysis developed in section 5.3. The large discrepancy between the HAWC measurements and the reference spectrum predictions in the low-energy bins makes it clear that the reference spectrum given by equation 5.1 cannot be used for even ball park estimates of Crab measurements below 1 TeV (see figure 5.18 in section 5.3.3 for the energy distribution of each bin). As discussed in section 5.4.2, the non-detection of the Crab in bin 0a data without gamma/hadron separation cuts means that the consistency of this data point with MC predictions cannot be evaluated.

Figure 5.51: The observed and predicted Crab excess (number of air showers minus cosmic-ray background estimate, a measure of the number of detected gamma rays) in each bin of the low-energy multi-plane fitter analysis developed in section 5.3. Results are shown without (left) and with (right) gamma/hadron separation cuts. Monte Carlo predictions are calculated for the reference and MAGIC Crab spectra from figure 5.50.

While assuming the measured MAGIC spectrum for MC Crab predictions yields results consistent with data in the lowest-energy bins, it appears to underestimate the Crab excess in bins 2 – 4. Furthermore, the MAGIC measurement only extends up to 30 TeV, so its

log-parabola fit should not be trusted above that energy. It was therefore decided to use the MAGIC spectrum at low energies and the reference spectrum at high energies for the Crab predictions in sections 5.3 – 5.5. These two spectra do not quite intersect, but can be made tangential by lowering the reference spectrum’s flux normalization at 1 TeV by 0.6% (a change small enough that it should not have a significant impact on MC predictions) to 3.43 · 10^−11 TeV^−1 cm^−2 s^−1. With this alteration, the reference and measured MAGIC spectra intersect at 2.15 TeV. The spectrum chosen for MC Crab predictions is therefore:

dN/(dE dA dt) = 3.23 · 10^−11 (E/TeV)^(−2.47 − 0.24 log(E/TeV)) TeV^−1 cm^−2 s^−1 ,  E ≤ 2.15 TeV
dN/(dE dA dt) = 3.43 · 10^−11 (E/TeV)^−2.63 TeV^−1 cm^−2 s^−1 ,  E > 2.15 TeV .    (5.2)

As can be seen in figure 5.51, the reference spectrum also underestimates HAWC’s measured Crab excess in bins 3 and 4, a feature not present in the standard event-size bin analysis when using HAWC’s measured Crab spectrum [72]. This underestimation may therefore be explained by the discrepancy between the reference spectrum and HAWC’s measured log-parabola fit seen in figure 5.50; between ∼2 and ∼20 TeV, the fit provides a larger flux than the reference spectrum. While these small high-energy inconsistencies are retained by the spectral choice of equation 5.2, they do not present a significant problem. The high-energy predictions given by this spectrum should still be fairly reasonable. Furthermore, all techniques and analyses presented in this chapter that make use of Crab predictions produced with the spectral assumption of equation 5.2 are primarily intended for low energies (less than a few TeV) where said Crab predictions are consistent with data.
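The near-tangency of the two branches of equation 5.2 at 2.15 TeV can be checked numerically. This is a sketch assuming the MAGIC log-parabola index uses a base-10 logarithm, as in [82]:

```python
import math

def crab_flux(e_tev):
    """Piecewise Crab spectrum of equation 5.2, in TeV^-1 cm^-2 s^-1."""
    if e_tev <= 2.15:
        # MAGIC log-parabola branch (log base 10 assumed)
        index = -2.47 - 0.24 * math.log10(e_tev)
        return 3.23e-11 * e_tev ** index
    # Reference power law with the normalization lowered to 3.43e-11
    return 3.43e-11 * e_tev ** -2.63

# Relative mismatch between the two branches at the matching energy
low = crab_flux(2.15)
high = 3.43e-11 * 2.15 ** -2.63
mismatch = abs(low - high) / high
```

Evaluating both branches at 2.15 TeV shows they agree to well under 1%, consistent with the spectra being made tangential by the 0.6% normalization shift.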

5.7 Other Gamma/Hadron Separation Variables Considered in the Multi-Plane Fitter Analysis

This section documents additional cut variables that were considered for the low-energy multi-plane fitter analysis developed in section 5.3. Section 5.7.1 discusses modeling issues with a parameter used in several variables to scale cut values by the estimated size of an air shower. Section 5.7.2 considers the use of a quality cut to remove particularly noisy events, and section 5.7.3 explores an alternative to compactness for low-energy events.

5.7.1 Modeling of nHitSP-X Variables

Two of the event-cut variables considered for multi-plane fitter analyses include in their definition the parameter nHitSP-X, the number of hits within X nanoseconds of the reconstructed shower plane, for some X. This useful parameter provides an estimate of the size of an air shower that, for small X values, is not significantly impacted by detector noise. As seen in sections 5.3.4 and 5.7.2, the ability of these cuts to improve predicted low-energy sensitivity increases for smaller values of X. However, at smaller time scales, the precision with which one needs to model air showers to produce an accurate simulation increases. The purpose of this section is to determine the lowest value of X one can use before the variable nHitSP-X is no longer properly modeled.

Figure 5.52 compares data and Monte Carlo distributions of nHitSP-X (as a fraction of the total number of hits in the air-shower event) for three values of X. For X = 4ns, it is clear that the data and MC distributions do not match, but the modeling qualitatively appears to improve as X increases. These distributions are sharply peaked, so one parameter that can be used to quantify how well the data and MC distributions match is the difference in their peak positions. Figure 5.53 shows these peak positions and the percent error in the MC peak position as a function of X. The location of the MC peak is clearly poorly modeled for X less than 10ns. Values of X below 10ns were therefore not considered in any cut variable defined in terms of the parameter nHitSP-X.
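The peak-position comparison can be sketched as follows; the histogram inputs are hypothetical stand-ins for the data and MC nHitSP-X/nHit distributions:

```python
def peak_position(bin_centers, counts):
    """Bin center at the maximum of a histogram (ties broken toward
    the larger bin center)."""
    return max(zip(counts, bin_centers))[1]

def peak_percent_error(mc_peak, data_peak):
    """Percent error of the MC peak position relative to data, as
    plotted in figure 5.53: (MC - data) / data * 100."""
    return 100.0 * (mc_peak - data_peak) / data_peak
```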

Figure 5.52: Data and Monte Carlo distributions of the fraction of hits close in time with the reconstructed shower plane: nHitSP-X/nHit, where nHitSP-X is the number of hits within X nanoseconds of the plane and nHit is the total number of hits in the air-shower event. Distributions are shown for bin 4 of the low-energy multi-plane fitter analysis developed in section 5.3 with X = 4ns (top), 12ns (bottom left), and 20ns (bottom right).

Figure 5.53: Left: the position of peaks in the nHitSP-X/nHit distributions plotted in figure 5.52 as a function of X. Right: the percent error in the MC peak positions, (MC_peak_position - data_peak_position)/data_peak_position, as a function of X.

5.7.2 A Quality Cut on the Fraction of In-Time Hits

This section explores whether the quality cut on fShowerPlane used for the standard reconstruction in section 5.8 can be of use in a multi-plane fitter analysis. This cut requires that a large fraction of hits be close in time with the reconstructed shower plane; fShowerPlane = nHitSP-X/nHit, where nHit is the total number of hits in an air-shower event and nHitSP-X the number of hits within X nanoseconds of the shower plane. Results are shown for X values of 10ns and 20ns. In the standard reconstruction, an fShowerPlane cut improves angular resolution in low-energy samples by removing poorly reconstructed events with high levels of detector noise. The multi-plane fitter throws out hits not associated with the largest shower plane it identifies (see section 5.2), forcing all events to have hits fairly close in time with said plane. It was therefore not expected that an fShowerPlane cut would be as effective in a multi-plane fitter analysis. However, as cosmic-ray air-shower planes tend to be thicker than gamma-ray shower planes, fShowerPlane retains potential utility as a gamma/hadron discriminator.

To test this potential utility, an fShowerPlane cut was optimized along with compactness and PINCness for the low-energy multi-plane fitter bins of section 5.3. As in section 5.3.4, the cuts were optimized on the Monte Carlo Crab Nebula detection significance prediction detailed in section 5.6. Surprisingly, this method predicted that an fShowerPlane cut with X = 10ns (for the nHitSP-X parameter) could almost double point-source sensitivity to Crab-like sources in higher-energy bins. However, as shown in figure 5.54, including this cut in data has little effect on low-energy Crab observations and lowers detected Crab significance in the highest-energy bin 4 sample by a factor of ∼6. The cut optimization was repeated with X = 20ns in the hopes of improving the modeling of the fShowerPlane variable, but (see figure 5.54) it remained the case that while the MC optimization procedure predicted an improvement to Crab sensitivity, in data the cut lowered observed Crab significance (though this time, in bin 4, by a smaller factor of ∼2).
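fShowerPlane itself is simple to compute. The sketch below assumes a list of hit-time residuals relative to the fitted shower plane and a symmetric ±X ns window; it is illustrative, not HAWC software:

```python
def f_shower_plane(time_residuals_ns, window_ns=10.0):
    """fShowerPlane = nHitSP-X / nHit: the fraction of hits within
    +/- window_ns of the reconstructed shower plane (X = window_ns)."""
    n_hit = len(time_residuals_ns)
    if n_hit == 0:
        return 0.0
    n_in_time = sum(1 for t in time_residuals_ns if abs(t) <= window_ns)
    return n_in_time / n_hit
```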

Figure 5.54: Source significance at the location of the Crab Nebula, calculated with the Li & Ma test statistic [81] using all events reconstructed within an angle θs of the Crab Nebula; θs is referred to as a “smoothing angle.” Significance is plotted as a function of this smoothing angle for bins 0a (left) and 4 (right) of the low-energy multi-plane fitter analysis developed in section 5.3. Compactness, PINCness, and fShowerPlane cuts are optimized (using the method of section 5.3.4) and applied. Results are shown with and without the fShowerPlane cut and for the use of nHitSP10 and nHitSP20 in the definitions of compactness and fShowerPlane.

The cause of the discrepancy between the cut optimization results and Crab measurements can be seen in figure 5.55, which shows data/MC comparisons of fShowerPlane (defined with X = 10ns) in the lowest and highest-energy bins (0a and 4, respectively). To a large extent in bin 4, and to a lesser extent in bin 0a, there is substantial disagreement between data and MC. Figure 5.55 makes it clear that the optimization procedure, which uses Monte Carlo gamma rays and data background, was picking up on differences between data and MC, not between gamma rays and cosmic rays. Furthermore, the similarity of the MC gamma-ray and MC cosmic-ray distributions of fShowerPlane demonstrates that there is very little to be gained from fShowerPlane as a gamma/hadron discriminator, even without the modeling problem. An fShowerPlane cut was therefore not included in the low-energy multi-plane fitter analysis of section 5.3.

Figure 5.55: Comparisons of the data and Monte Carlo distributions of the fShowerPlane variable (calculated as nHitSP10/nHit) for bins 0a (left) and 4 (right) of the low-energy multi-plane fitter analysis developed in section 5.3. The data and MC cosmic-ray distributions are scaled to one second of observation. The MC gamma-ray distribution is scaled to have the same area as the MC cosmic-ray distribution.

5.7.3 A Muon Identification Variable

Compactness, defined by the largest charge observed more than 40m from the reconstructed core, is a remarkably successful gamma/hadron separation variable (at lower energies) given its simplicity. The high-charge hits used in the calculation of compactness are often produced by muons; this section explores an attempt to create a more sophisticated version of the compactness cut by directly identifying muon signatures. The muon identification variable employed for this purpose, fMuonLike, is defined as the fraction of tanks participating in an air-shower event deemed to have signals produced by muons, i.e., deemed to be “muon-like.” Two requirements are used to determine what constitutes a muon-like tank signal: the average charge of hits in all available (up to three) 8 inch PMTs must be at least α, and the charge observed in the tank’s central 10 inch PMT, if it is active, must be at least β. The values of α and β were to be optimized separately for each analysis bin. Several modifications to the muon identification variable so defined were considered, but upon qualitative examination of their impact on simulated gamma-ray and cosmic-ray fMuonLike distributions (with various reasonable values of any additional parameters), they were found to either not significantly impact or to worsen the gamma/hadron discrimination power of the variable, and were therefore not adopted. These modifications include:

• Modifying the definition of a muon-like tank signal by:

– Also requiring that a certain number of PMTs participate by excluding tanks with more than γ available PMTs that do not have a hit (where γ is to be optimized for each analysis bin and can take on the values of 0, 1, 2, or 3).

– Also requiring that muon-like tank signals be isolated by excluding those with another participating tank nearby in space.

– Also requiring that muon-like tank signals be at least a certain distance from the reconstructed core.

– Ignoring whether or not all four PMTs in a tank are active and simply requiring that a muon-like tank signal have a certain amount of charge and a certain number of participating PMTs.

• Instead of defining the muon identification variable as the fraction of tanks with muon-like signals, defining it as:

– The number of tanks with a muon-like signal

– The total charge of all muon-like tank signals

– The total charge of all muon-like tank signals normalized by air-shower size (by dividing by nHitSP10, the number of hits within 10ns of the reconstructed shower plane).

– The total charge of the brightest (highest-charge) muon-like tank signal

– The total charge of the brightest muon-like tank signal normalized by air-shower size (by dividing by nHitSP10, the number of hits within 10ns of the reconstructed shower plane).
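As a minimal sketch, the baseline fMuonLike definition above (before any of the listed modifications) might be computed as follows. The per-tank data structure is hypothetical, and the α = β = 4 PE defaults anticipate the values found optimal for the original bin 0 sample later in the text:

```python
def f_muon_like(tanks, alpha=4.0, beta=4.0):
    """Fraction of participating tanks with muon-like signals.

    `tanks` is a hypothetical per-event list of dicts:
      {'q8':  [charges (PEs) of the hit 8-inch PMTs, up to three],
       'q10': charge (PEs) of the central 10-inch PMT, or None if inactive}
    alpha: minimum average 8-inch PMT charge for a muon-like tank
    beta:  minimum central 10-inch PMT charge, applied only if that PMT is active
    """
    if not tanks:
        return 0.0
    n_muon_like = 0
    for tank in tanks:
        q8 = tank['q8']
        if not q8:
            continue
        avg8 = sum(q8) / len(q8)  # mean charge of the available 8-inch PMTs
        if avg8 < alpha:
            continue
        q10 = tank['q10']
        if q10 is not None and q10 < beta:  # central-PMT cut only if it is active
            continue
        n_muon_like += 1
    return n_muon_like / len(tanks)
```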

To evaluate whether fMuonLike is more effective than compactness in improving low-energy sensitivity, the performances of the two cuts were compared in the original bin 0 data sample: events that, before application of the multi-plane fitter (which is employed in this analysis), had fHit and coreFiduScale values within the bin 0 thresholds defined in table 5.5 (from section 5.5). With this low-energy data sample, the fMuonLike parameters α and β were optimized on predicted Crab significance; their optimal values were both found to be 4 PEs. Using these parameters for fMuonLike, the fMuonLike and compactness Monte Carlo distributions of original bin 0 cosmic and gamma rays (simulated with isotropic arrival directions and a flux of 3.6 × 10⁻¹¹ TeV⁻¹ cm⁻² s⁻¹ sr⁻¹ (E/TeV)⁻²) are plotted in figure 5.56. The gamma rays and cosmic rays seem to separate out quite well in both variables; to quantify which allows for better gamma/hadron discrimination, the gamma-ray efficiency (fraction of gamma rays retained) and hadron rejection (fraction of cosmic rays removed) values of fMuonLike and compactness cuts were compared. The results are provided in figure 5.57, which shows that for the compactness cut that gives any particular hadron rejection rate, the fMuonLike cut that gives the same hadron rejection rate will provide a slightly higher gamma-ray efficiency. According to the Monte Carlo, then, the fMuonLike variable provides better gamma/hadron discrimination than compactness in original bin 0 events.

Based on this promising result, the optimization procedure for gamma/hadron cuts described in section 5.3.4 for the low-energy multi-plane fitter analysis was repeated with an fMuonLike cut included. (The α and β parameters were optimized separately for each bin along with the cut value.) It was found that including an fMuonLike cut could not provide any significant gain to predicted Crab sensitivity on top of a compactness cut, and substituting compactness for fMuonLike lowered predicted Crab sensitivity – substantially in the higher-energy bins (as expected) and very slightly in the lower-energy bins. Note that this result is not in conflict with figure 5.57, as here we are discussing a point source (rather than isotropic) simulation with data (rather than MC) background. To compare the performance of compactness and fMuonLike cuts without such Monte Carlo influences, these cuts (both the variable parameters and cut values) were optimized on data, on observed Crab significance, for bin 0a of the multi-plane fitter analysis of section 5.3. (As the data optimization procedure is computationally expensive, it was only done for the lowest-energy bin – bin 0a – where fMuonLike has the most potential.) Crab Nebula significance maps made from 2016 bin 0a events with the data-optimized compactness and fMuonLike cuts are shown in figure 5.58. The two cuts yield similar Crab detection significances; there is no strong evidence that fMuonLike provides an advantage over compactness. An fMuonLike cut was therefore not included in the low-energy multi-plane fitter analysis of section 5.3.

Figure 5.56: Distributions of the fMuonLike (left) and compactness (right) variables for simulated gamma and cosmic rays reconstructed in original bin 0 (see text). All distributions are area normalized to one to allow for easy qualitative comparison of their shapes. Primary particles are simulated with isotropic arrival directions and a flux of 3.6 × 10⁻¹¹ TeV⁻¹ cm⁻² s⁻¹ sr⁻¹ (E/TeV)⁻².

Figure 5.57: Simulated gamma-ray efficiency vs. hadron (cosmic-ray) rejection for compactness and fMuonLike cuts in original bin 0 events (see text). The curves are built up by varying cut values in small increments and, for each cut value, calculating the resulting gamma-ray efficiency and hadron rejection values. Primary particles are simulated with isotropic arrival directions and a flux of 3.6 × 10⁻¹¹ TeV⁻¹ cm⁻² s⁻¹ sr⁻¹ (E/TeV)⁻².
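The efficiency/rejection curves of figure 5.57 can be sketched generically. The input arrays here are placeholders for simulated fMuonLike or compactness values, and a "keep the event if the variable is at least the cut value" convention is assumed:

```python
import numpy as np

def efficiency_rejection_curve(gamma_vals, cr_vals, n_steps=100):
    """Gamma-ray efficiency vs. hadron rejection, swept over cut values.

    gamma_vals, cr_vals: arrays of a discrimination variable (e.g. fMuonLike
    or compactness) for simulated gamma rays and cosmic rays.
    Returns (efficiency, rejection) arrays, one entry per cut value.
    """
    gamma_vals = np.asarray(gamma_vals, dtype=float)
    cr_vals = np.asarray(cr_vals, dtype=float)
    # Sweep cuts across the full range spanned by both samples.
    cuts = np.linspace(min(gamma_vals.min(), cr_vals.min()),
                       max(gamma_vals.max(), cr_vals.max()), n_steps)
    efficiency = np.array([(gamma_vals >= c).mean() for c in cuts])  # gammas kept
    rejection = np.array([(cr_vals < c).mean() for c in cuts])       # hadrons removed
    return efficiency, rejection
```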

Figure 5.58: Crab Nebula detection significance maps – of all 2016 events reconstructed in bin 0a of the multi-plane fitter analysis developed in section 5.3 – with a data optimized compactness cut (left) and data optimized fMuonLike cut (right). Significance values are calculated in each map pixel with the Li & Ma test statistic [81] using all events within 1.3◦ of the pixel for signal and background counts.

5.8 Improving Low-Energy Sensitivity with the Existing Reconstruction

Before exploring techniques to improve air-shower reconstruction for low-energy events, an attempt was made to recover and boost the sensitivity of the original bin 0 sample (events that provide a signal in 4.4% - 6.7% of the array) using the existing reconstruction. These events have previously been ignored due to the modeling issues resolved in chapter 4 and have not yet been incorporated into any published analysis with a robust set of gamma/hadron separation cuts. A set of such cuts is developed here. While they do not yield as great an increase in low-energy sensitivity as the multi-plane fitter method of section 5.3, they may be utilized by analyses using data that has already been reconstructed. (Re-processing the entire HAWC data set with the multi-plane fitter is a long, formidable computational task that, at the time of this writing, has not yet been carried out.)

The original bin 0 event sample is largely populated by poorly reconstructed events, many of which (when examined by eye) do not even appear to have an obvious shower plane. A set of quality cuts was therefore developed to remove such difficult-to-reconstruct, noise-dominated events. (Note that here we are simply discarding these events entirely, rather than attempting to improve their reconstruction.) As fits to air-shower planes tend to become less accurate for events that land far away from the array, a cut on the coreFiduScale variable (defined in figure 5.11) was introduced. Some poorly reconstructed events can be directly identified and removed with a cut on fShowerPlane, defined as the fraction of hits in an air-shower event within ±10ns of the reconstructed shower plane. As discussed in chapter 4 and plotted again below in figure 5.59, the original bin 0 fShowerPlane distribution is bimodal, with a population (the first peak) of events that (when examined by eye) appear to have no obvious shower plane or an apparent shower plane that was not found by the reconstruction.

Figure 5.59: Distribution in data of fShowerPlane for events reconstructed in original bin 0. The fShowerPlane variable is defined as nHitSP10/nHit, where nHit and nHitSP10 are the total number of hits used in the reconstruction of an air-shower event and the number of those that arrive within ±10ns of the reconstructed shower plane, respectively.

In addition to coreFiduScale and fShowerPlane, the compactness and PINCness cuts used in event-size bins 1 – 9 (see sections 3.4 and 3.5) were introduced. As in the low-energy bins of the multi-plane fitter analysis, the PINCness cut provides very little gain in sensitivity; most of the gamma/hadron separation power comes from compactness. A Monte Carlo simulation was used to optimize cut values on predicted Crab significance through the same procedure discussed in section 5.3.4.2. The resulting cuts proposed for the original bin 0 event sample are:

• core cut: coreFiduScale < 150

• fShowerPlane cut: nHitSP10/nHit ≥ 0.61

• compactness cut: nHitSP10/CxPE40 ≥ 3.8

• PINCness cut: PINCness ≤ 3.5

where nHit, nHitSP10, and CxPE40 are defined, respectively, as the number of hits in the air-shower event, the number within ±10ns of the reconstructed shower plane, and the largest charge of any hit more than 40m from the reconstructed shower core. The use of nHitSP20 (the number of hits within ±20ns of the reconstructed plane) in the definitions of fShowerPlane and compactness was tested as well (the existing reconstructions provided no other values for X in nHitSP-X) but was found to yield slightly lower predicted sensitivity.

Figure 5.60 shows the improvement to original bin 0 sensitivity afforded by these cuts as indicated by the detection significance of the Crab Nebula in 2016 data. The non-detection of HAWC’s brightest gamma-ray source with one year of bin 0 data is improved to a ∼7σ/year signal.
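For illustration, the proposed original bin 0 cut set can be expressed as a simple event filter. The event dictionary is a hypothetical stand-in for HAWC's reconstructed quantities, using the variable names from the text:

```python
def passes_bin0_cuts(evt):
    """Apply the original bin 0 quality and gamma/hadron cuts from the text.

    `evt` is a hypothetical dict of reconstructed event quantities:
    coreFiduScale, nHit, nHitSP10, CxPE40, PINCness.
    """
    if evt['coreFiduScale'] >= 150:              # core cut
        return False
    if evt['nHitSP10'] / evt['nHit'] < 0.61:     # fShowerPlane cut
        return False
    # Compactness cut; a zero CxPE40 (no charge beyond 40m) trivially passes.
    if evt['CxPE40'] > 0 and evt['nHitSP10'] / evt['CxPE40'] < 3.8:
        return False
    if evt['PINCness'] > 3.5:                    # PINCness cut
        return False
    return True
```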

Figure 5.60: Detection significance maps of the region around the Crab Nebula (located at the cross) for all 2016 data reconstructed in original bin 0. Maps are shown without cuts (left) and with cuts (right). Both maps use the same reconstruction. Significance values are obtained in each pixel by summing signal and background counts for all air showers reconstructed within 1.2◦.

A substantial part of this observed improvement in bin 0 sensitivity comes from the fShowerPlane cut, which provides a ∼15% higher observed Crab detection significance than the core, compactness, and PINCness cuts alone. An attempt was therefore made to optimize an fShowerPlane cut for the other small air-shower event-size bins defined in section 3.4. However, including an fShowerPlane cut in bin 1 does not have a significant impact on measured Crab significance, and by bin 2, the modeling issues discussed in section 5.7.2 become important and the Monte Carlo optimized fShowerPlane cut significantly lowers sensitivity in data.

5.9 An Alternate Noise Discrimination Technique

This section presents an alternative technique to address the challenge of reconstructing small air showers in events where a significant fraction of hits are produced by detector noise. The original bin 0 sample (events that provide a signal in 4.4% - 6.7% of the array) is used here to develop and test this new method. For reasons discussed at the end of the section, it was ultimately not adopted, but it may be of interest in future attempts to further boost HAWC’s low-energy sensitivity.

Unlike the multi-plane fitter, the technique discussed here works by attempting to directly identify noise hits and exclude them from the shower plane fit. While hits produced by air-shower particles tend to be close in time and space to other air-shower signals, detector noise can appear anywhere in the detector and at any time. Noise hits therefore tend to be less clustered in time and space. This difference was exploited to create a parameter, nearbyFTank, to distinguish between noise hits and those produced by the air shower. The noise discrimination variable nearbyFTank is defined for each tank in the HAWC array as the fraction of all other tanks participating in an event that are within 90m and 90ns. (As many shower particles and sources of detector noise produce multiple hits in a given tank, it is easier to distinguish between noise and shower tanks than between noise and shower hits.) The location and time of each tank are calculated by averaging those properties over the PMT hits it contains. Tanks with higher values of this parameter are clustered with other tanks in the event and are therefore more likely to contain signals from the actual air shower. This can be seen in figure 5.61, which shows distributions of nearbyFTank for shower and noise hits in simulated original bin 0 gamma-ray events (produced with the accurate treatment of detector noise developed in chapter 4).
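A minimal sketch of the nearbyFTank calculation, using hypothetical (x, y, t) tank summaries. The quadratic loop mirrors the computational cost discussed at the end of the section, and the retention × removal product used in the text to tune the 90m/90ns window is included as a figure of merit:

```python
def nearby_f_tank(tanks, r_max=90.0, t_max=90.0):
    """nearbyFTank for each tank: the fraction of the *other* participating
    tanks within r_max meters and t_max nanoseconds (90m and 90ns in the text).

    `tanks` is a hypothetical list of (x, y, t) tuples: tank position in
    meters and mean hit time in ns, averaged over the tank's PMT hits.
    """
    n = len(tanks)
    if n < 2:
        return [0.0] * n
    values = []
    for i, (xi, yi, ti) in enumerate(tanks):
        n_near = 0
        for j, (xj, yj, tj) in enumerate(tanks):  # O(N^2) pairwise check
            if i == j:
                continue
            if ((xi - xj) ** 2 + (yi - yj) ** 2 <= r_max ** 2
                    and abs(ti - tj) <= t_max):
                n_near += 1
        values.append(n_near / (n - 1))
    return values

def noise_cut_merit(shower_vals, noise_vals, cut):
    """(fraction of shower hits retained) x (fraction of noise hits removed)
    for a given nearbyFTank cut value: the optimization product described
    in the text for choosing the 90m/90ns window."""
    retained = sum(v >= cut for v in shower_vals) / len(shower_vals)
    removed = sum(v < cut for v in noise_vals) / len(noise_vals)
    return retained * removed
```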

Figure 5.61: Distributions of the nearbyFTank variable for all hits in a sample of simulated original bin 0 gamma rays. Hits produced by simulated air-shower particles originating from the primary gamma are shown in blue. Those added as detector noise (see chapter 4 for details) are shown in red. Both the shower hit and noise hit distributions are area normalized to one to allow for easy comparison of their shapes.

The 90m and 90ns parameters used to determine if another tank is “nearby” were obtained by optimizing on the separation of the two distributions plotted in figure 5.61. Removing hits with nearbyFTank below some cut value removes a certain fraction of noise hits and retains a certain fraction of shower hits. As both the removal of noise and the retention of signal are desired, the effectiveness of such a cut can be parameterized by the product of these two fractions. According to Monte Carlo simulations, the values of 90m and 90ns allow for the greatest value of this optimization parameter for any nearbyFTank cut value. This is the sense in which the distributions of figure 5.61 were maximally separated. It was later verified that upon implementation of nearbyFTank in the reconstruction in the manner discussed below, the values of 90m and 90ns yield the greatest improvement to low-energy sensitivity as measured by the predicted bin 0 detection significance of the Crab Nebula.

To effectively introduce such a noise discrimination variable into HAWC’s air-shower reconstruction, it is necessary to understand how the default algorithms fail. In the HAWC reconstruction software, the shower plane fitting algorithm is run five times in every instance that a plane is fit to an air-shower front. In the standard reconstruction, almost all hits are used in the first fit. The resulting plane is refined over subsequent iterations, which only make use of hits that are close in time with the plane found in the previous fit. This is a fast and effective technique if the initial fit aligns reasonably well with the true shower plane – that is, if the assumption that most of the energy seen in the event was deposited by the air shower is accurate. This is not the case in a large fraction of low-energy showers, where the first plane guess is often thrown off by detector noise. When this occurs, the subsequent refinement fits completely ignore the true shower signals. A noise discrimination variable can improve the initial guess for the shower plane by excluding from that first fit hits that are likely to represent detector noise. The nearbyFTank variable was therefore implemented in the reconstruction as a threshold that must be passed by hits used in the initial plane fit; only hits from tanks with a nearbyFTank value above some threshold are considered. The value for this threshold, 0.65, was optimized on the predicted bin 0 detection significance of the Crab Nebula achieved in Monte Carlo simulations with different threshold values.

The technique to employ noise discrimination in the plane fitter as described above, hereafter referred to as the nearbyFTank algorithm, can be seen in action in figure 5.62, a 3D plot of an original bin 0 air-shower event. There is a clear apparent shower plane, but the default reconstruction appears to completely miss it on account of just a few high-charge noise hits near the top of the plot. After the initial plane fit is thrown off by these high-charge hits, further refinements assume that the hits out of time with this first guess are noise, and the majority of the apparent shower hits are not used in the final plane fit result.
However, when the initial plane fit is seeded with hits that have high values of nearbyFTank (highlighted in figure 5.62, right), the reconstruction ignores the noise hits that previously confused the plane fitter, and the result appears to be much more accurate. The gain to low-energy sensitivity provided by the nearbyFTank algorithm was evaluated with a set of gamma/hadron separation and event quality cuts (compactness, coreFiduScale, and fShowerPlane – see sections 3.5.1 and 5.8). The cuts were optimized on predicted (Monte Carlo) Crab Nebula detection significance for original bin 0 events reconstructed with the nearbyFTank algorithm. For comparison, the same was done for original bin 0 events reconstructed with the multi-plane fitter (minus the fShowerPlane cut – see section 5.7.2). A PINCness cut (defined in section 3.5.2) was evaluated but not found to be helpful in either cut set. See table 5.6 for the cut values obtained with each reconstruction.

Figure 5.62: A small air shower from HAWC data (the same shown in figure 5.1) reconstructed with the standard HAWC plane fitter (left) and with the nearbyFTank noise discrimination algorithm (right). The reconstructed shower core and plane are depicted by the black star and gray plane, respectively. Each point represents a hit. The spatial coordinates are calculated for a particular moment in time by treating each hit as an air-shower particle assumed to have been traveling at the speed of light in the direction of the reconstructed air-shower plane. (Switching reconstruction algorithms changes the plane fit, which is why the event appears to have a different orientation in the two plots.) Left: the color scale represents charge, with darker points for hits with larger PMT signals. Right: the color scale represents shower particle likelihood, with darker points for hits with a larger nearbyFTank value.

The performance of the nearbyFTank algorithm and the multi-plane fitter in original bin 0 is compared in figure 5.63, which shows the observed detection significance of the Crab Nebula in HAWC’s 2015 and 2016 data sets with and without the above gamma/hadron separation cuts. The nearbyFTank algorithm appears to perform very well in the 2015 data set; however, for reasons unknown, it yields a much weaker Crab signal in 2016 data. This discrepancy is too large to be explained by the ∼1σ fluctuations expected in these significance measurements, and is particularly odd given that HAWC was operating with just 250 out of 300 tanks in the first few months of 2015, before construction of the array was completed. The event cuts and changes to the charge calibration and air-shower trigger threshold were explored as causes for this discrepancy, but the source was not found. Fortunately, the discrepancy does not appear in the multi-plane fitter reconstruction, which provides similar sensitivity to the Crab Nebula in original bin 0 as the nearbyFTank algorithm does in 2015 data.

Original Bin 0 Event Cuts for New Reconstruction Algorithms

                      nearbyFTank Algorithm      Multi-Plane Fitter
  core cut            coreFiduScale < 150        coreFiduScale < 150
  compactness cut     nHitSP6/CxPE19 ≥ 2.4       nHitSP4/CxPE15 ≥ 2.3
  fShowerPlane cut    nHitSP15/nHit ≥ 0.57       none

Table 5.6: Cut values obtained for the original bin 0 event sample by optimizing on predicted Crab Nebula detection significance. The values of X and Y in the parameters nHitSP-X (the number of hits within ±X ns of the reconstructed shower plane) and CxPE-Y (the largest charge more than Y meters from the reconstructed core) used in the definitions of compactness and fShowerPlane were allowed to vary in the optimization. Note: the compactness cut listed for the multi-plane fitter reconstruction with this X,Y optimization is used only for comparison to the nearbyFTank algorithm and is not employed in the main multi-plane fitter analysis of section 5.3 due to the since-discovered modeling issues discussed in section 5.7.1.

Figure 5.63: Gamma-ray detection significance (the event counts’ fluctuation above background in number of standard deviations) at the location of the Crab Nebula, calculated with the Li & Ma test statistic [81] using all events reconstructed within an angle θs (the “smoothing angle”) of the Crab. Significance is plotted as a function of this smoothing angle for the original bin 0 event sample reconstructed with the multi-plane fitter (left) and nearbyFTank noise discrimination algorithm (right). Results are shown with and without Monte Carlo optimized gamma/hadron separation cuts (see table 5.6) and plotted separately for HAWC’s 2015 and 2016 data sets.

The multi-plane fitter has one other important advantage: as discussed in section 5.2.3, it speeds up the reconstruction. The calculation of the nearbyFTank parameter, on the other hand, requires looping, for each hit, over all other hits in the event (to check how many are “nearby”). This introduces N² computations for events with N hits and thereby slows down the reconstruction. Thus, because it outperforms the nearbyFTank algorithm in reconstruction time and maintains stable performance across time, the multi-plane fitter was chosen as the recommended algorithm for improving air-shower reconstruction in small, noisy showers. Several variations of the noise discrimination technique outlined in this section were explored, but none outperformed the nearbyFTank algorithm as described above. These variations include:

• Base the noise discrimination variable on the fraction of nearby hits rather than nearby tanks.

• Instead of considering the fraction of participating tanks that are nearby, calculate the fraction of charge recorded in the event that was deposited nearby.

• Consider the total number of hits or tanks or total amount of charge deposited nearby rather than the fraction of the event that this number/amount constitutes.

• Instead of seeding the plane fitter with hits that have a nearbyFTank (or one of the above variations of nearbyFTank) value above some optimized hard-coded threshold, seed it with the N hits that have the largest value of the noise discrimination variable; optimize N.

Chapter 6 | HAWC’s GRB Search Algorithms

Three search algorithms were considered for use in the HAWC 41 month GRB search of chapter 7, a triggered analysis of all known GRBs within HAWC’s field of view. A single-bin excess analysis excluding the bin 0 data was employed for HAWC’s published 18 month GRB search of 2017 [83]. This method is briefly outlined in section 6.1 and was included primarily for comparison with the new analyses, which use the reconstruction and binning techniques described in chapter 5. A multi-bin excess analysis is described in section 6.2, and ZEBRA, the maximum likelihood ratio technique ultimately employed for the search, is described in section 6.3. Section 6.4 compares the sensitivity of these algorithms, motivates the choice of ZEBRA for use in the 41 month search, and demonstrates the ability of the methods developed here to detect the brightest known GRBs.

6.1 The Published Single-Bin Excess Analysis

The technique employed in [83] looks for an excess of events in a search circle of one degree radius centered on the GRB coordinates. All events from air-shower size bins 1 - 9 (see section 3.4) that pass the gamma/hadron separation cuts of size bin 1 (which makes up ∼50% of the sample and contains the lowest-energy events) are included. The background is estimated by counting events from the search circle at different times and applying a small (<1%) scaling factor to account for the sinusoidal temporal variation of HAWC’s all-sky rate. Poisson statistics are then used to evaluate the significance of any excess in event counts observed during the search. For the case of poorly localized GRBs (those detected by Fermi-GBM but not Fermi-LAT or Swift), the search region (defined by the positional uncertainty reported by Fermi-GBM) is covered by multiple 1 degree radius search circles offset from one another by 0.3 degrees in right ascension and declination. The most significant excess from any of these search circles is then reported after applying a trials correction. The post-trial significance is obtained by repeating the analysis at earlier and later times to produce a pre-trial significance distribution for off-observations (with no signal contamination), from which the probability of observing the reported excess by chance can be calculated.
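The tiling of a poorly localized burst's error region can be sketched as follows. The function name and the flat-sky approximation (offsets applied directly in RA and Dec, reasonable for the small regions involved) are assumptions for illustration:

```python
def tile_search_region(ra0, dec0, err_radius, step=0.3):
    """Grid of search-circle centers covering a burst's positional
    uncertainty region (e.g. the Fermi-GBM error circle), offset by
    `step` degrees in RA and Dec as described in the text.

    All angles are in degrees; a flat-sky approximation is assumed.
    Returns a list of (ra, dec) centers inside the error radius.
    """
    centers = []
    n = int(err_radius / step)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            d_ra, d_dec = i * step, j * step
            # Keep grid points that fall within the error circle.
            if d_ra ** 2 + d_dec ** 2 <= err_radius ** 2:
                centers.append((ra0 + d_ra, dec0 + d_dec))
    return centers
```

Each returned center would then be searched with a 1 degree radius circle, with the trials correction accounting for the number of circles.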

6.2 A Multi-Bin Excess Analysis

This search algorithm expands the simple approach outlined in the previous section with, conceptually, three enhancements: the low-energy bin 0 data is included in the analysis, the more sensitive binning and reconstruction outlined in chapter 5 is employed, and each bin is evaluated independently for an observed excess. This last change allows for each bin’s optimal gamma/hadron separation cuts (from section 5.3.4.2) to be used in rejecting the cosmic-ray background and for each bin’s angular resolution to be used in defining the size of its search circle. The angular resolution of each bin is taken as the tophat smoothing angle from section 5.4.1 that maximizes the significance of the Crab Nebula. These search radii are documented in table 6.1 below. Here, as the GRB transits across the sky, an excess analysis similar to that described in section 6.1 is performed separately in each bin. The total number of events within each bin’s search circle (the region of the sky within an angle given by table 6.1 of the GRB location) is counted, and the significance of any upwards fluctuation is evaluated with Poisson statistics.

Search Radii for Multi-Bin Excess Analysis

  Bin       Search Radius (degrees)
  0a        1.4
  0b        1.6
  1a        0.8
  1b        1.6
  2         0.6
  3         0.45
  4         0.35

Table 6.1: The search radii used in the multi-bin excess analysis. Recall that the “b” bins contain events that land off the array. These have worse angular resolution, which is why the search radius jumps to 1.6◦ in bins 0b and 1b.
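The per-bin signal count described above can be sketched as follows. The event list and function names are hypothetical; the great-circle separation uses the standard spherical law of cosines, and the radii are those of table 6.1:

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions
    (RA/Dec in degrees), via the spherical law of cosines."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(d1) * math.sin(d2)
               + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    # Clamp against floating-point drift before taking the arccosine.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

# Bin-specific search radii from table 6.1 (degrees).
SEARCH_RADII = {'0a': 1.4, '0b': 1.6, '1a': 0.8, '1b': 1.6,
                '2': 0.6, '3': 0.45, '4': 0.35}

def count_in_search_circle(events, grb_ra, grb_dec, analysis_bin):
    """Signal count for one bin: events within that bin's search radius
    of the GRB position. `events` is a hypothetical list of (ra, dec) pairs
    for the reconstructed events in that bin."""
    radius = SEARCH_RADII[analysis_bin]
    return sum(angular_separation(ra, dec, grb_ra, grb_dec) <= radius
               for ra, dec in events)
```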

However, greater care must be taken in calculating the background. The higher-energy bins have lower cosmic-ray fluxes and better angular resolution - hence smaller search circles - and thus have very low event rates. The background expectations for these bins can be of the order of one event per hour. There are therefore not enough statistics to accurately measure the background by shifting the search circle forwards or backwards in time, as was done in the single-bin excess analysis. Instead (working in local coordinates), all events within ±4 hours of the burst (excluding ±30 minutes around the search time to avoid signal contamination) from any azimuth angle in the zenith band defined in figure 6.1 are included in the background calculation. With this approach to measuring a background count, the variation of HAWC’s event rate in azimuth must be taken into account in addition to the variation in time. (The cause of this azimuthal variation is still under investigation, but is believed to be related to the local magnetic field.) As in [83], the variation of HAWC’s all-sky rate in time was found to be well described by a sine function. The variation in azimuth is well described by a sum of two sine functions. The fit to the temporal variation is identical in all analysis bins, but varies from day to day and thus must be re-calculated for each burst. The fit to the azimuthal variation changes from bin to bin as well as from burst to burst. Figure 6.2 shows fits to the temporal and azimuthal variations in randomly chosen bins for data taken around a randomly chosen known GRB within HAWC’s field of view. The less than 1% variation in time seen for this burst is typical, as is the ∼2.5% variation in azimuth. The amplitudes of the azimuthal fits show greater variation from burst to burst, however, and can reach as high as 5%. The maximum amplitude in the temporal fits is ∼1%.

Figure 6.1: A diagram illustrating the background region for the multi-bin excess analysis. For any temporal search window centered at time ts, the local zenith, θ(ts), and azimuth, φ(ts), coordinates of the burst are calculated. The zenith band for the background region is defined by θ(ts) ± α, where α is the bin-specific search radius from table 6.1. All events reconstructed in this zenith band with event times within four hours of the burst time tburst (excluding those within 30 minutes of ts) are included in the background calculation.
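A fixed-period sinusoid is linear in its sine/cosine amplitudes, so such a rate-variation fit can be sketched with ordinary linear least squares. This is an assumed stand-in for the actual fitting procedure (the period and the single-harmonic form are illustrative; the text's azimuthal model, a sum of two sines, would simply add a second harmonic's sin/cos columns):

```python
import numpy as np

def fit_rate_variation(x, rate, period):
    """Fit rate(x) ~ c0 + a*sin(2*pi*x/period) + b*cos(2*pi*x/period)
    by linear least squares and return the fitted model as a callable.

    x:      array of times (or azimuth angles)
    rate:   array of measured (e.g. normalized) event rates
    period: assumed period of the variation, in the units of x
    """
    x = np.asarray(x, dtype=float)
    rate = np.asarray(rate, dtype=float)
    w = 2.0 * np.pi / period
    # Design matrix: constant offset plus one harmonic's sin/cos columns.
    design = np.column_stack([np.ones_like(x), np.sin(w * x), np.cos(w * x)])
    coeffs, *_ = np.linalg.lstsq(design, rate, rcond=None)
    return lambda xx: (coeffs[0] + coeffs[1] * np.sin(w * xx)
                       + coeffs[2] * np.cos(w * xx))
```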

These fits to event rate variations in time (ft, calculated separately for each burst) and azimuth (fa, calculated separately for each burst and each bin) can then be used to weight background events from the shaded background region in figure 6.1. When evaluating a background event from a particular bin with time tb and local azimuth angle φb, the total background count for the search window at time ts and azimuth φs is incremented by:

background weight = [ft(ts) fa(φs)] / [ft(tb) fa(φb)] .    (6.1)
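Equation 6.1 can be sketched directly; the function names are hypothetical, and the example rate models in the usage below are illustrative stand-ins for the fitted curves:

```python
import math

def background_weight(t_s, phi_s, t_b, phi_b, ft, fa):
    """Equation 6.1: the weight applied to a background event at time t_b
    and local azimuth phi_b when estimating the background for a search
    window at time t_s and azimuth phi_s.

    ft, fa: callables giving the fitted rate variations in time (a sine)
    and azimuth (a sum of two sines), respectively.
    """
    return (ft(t_s) * fa(phi_s)) / (ft(t_b) * fa(phi_b))
```

For example, with illustrative fits ft(t) = 1 + 0.01 sin(2πt/24) and fa(φ) = 1 + 0.025 sin(φ), a background event taken at the same time and azimuth as the search window receives a weight of exactly one.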

Figure 6.2: Example background variation fits for the randomly chosen burst GRB 161113A. Plotted is the event rate (normalized by the average) of all bin 0a events in HAWC’s field of view within four hours of the burst. Left: each blue data point is the measured all-sky rate obtained with one minute of data; the red line shows the sinusoidal fit. Right: each blue data point is the measured rate in an azimuth band of 1 degree width for all events in the 8 hour observation period; the red line shows the fit to the sum of two sine functions.

This background count will, of course, have a different solid angle and duration than the signal observations. To account for this difference and obtain a final background expectation, the raw background count is weighted by a factor α:

α = (ton Ωon) / (toff Ωoff) ,    (6.2)

where ton and Ωon are the search length and solid angle of the search circle used to obtain a signal count, and toff and Ωoff are the same parameters for the background observation.

With the background expectation µ_i and signal count n_i for a particular bin, we can then evaluate the p-value p_i, the probability of observing n_i or more counts by chance, for the bin in question using Poisson statistics:

p_i = p(n_i, µ_i) = Σ_{k=n_i}^{∞} (µ_i^k / k!) e^{−µ_i} = 1 − Σ_{k=0}^{n_i−1} (µ_i^k / k!) e^{−µ_i} .    (6.3)
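The finite sum on the right-hand side of equation 6.3 gives a direct implementation:

```python
import math

def poisson_pvalue(n, mu):
    # Equation 6.3: probability of observing n or more counts given a
    # Poisson expectation mu. Observing >= 0 counts is certain.
    if n == 0:
        return 1.0
    return 1.0 - math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(n))
```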

The seven single-bin p-values p_i must then be combined into a single test statistic. A variety of methods exist for combining independent p-values (see, e.g., [84]). Here, we are analyzing extra-galactic sources and expect that EBL attenuation might cause any significant excess to be present in the lower-energy (∼100 GeV – 1 TeV) bins and not at all in the higher-energy (>1 TeV) bins. We therefore require a test statistic sensitive to the smallest (most significant) single-bin p-values. Fisher's method provides a test statistic that meets this requirement and was therefore adopted. The test statistic from Fisher's method is defined as:

TS = −2 Σ_i ln(p_i) .    (6.4)

For uniform p-value distributions, TS follows a χ² distribution with 2N degrees of freedom, where N is the number of independent p-values combined in the sum from equation 6.4.
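Fisher's method, together with the χ² survival function for 2N degrees of freedom (which has a closed form for even degrees of freedom), can be sketched as:

```python
import math

def fisher_ts(pvalues):
    # Equation 6.4: Fisher's method for combining independent p-values.
    return -2.0 * sum(math.log(p) for p in pvalues)

def chi2_sf_2n(ts, n):
    # Survival function of a chi-squared distribution with 2n degrees of
    # freedom (closed form for even dof); valid only when all combined
    # p-values are uniformly distributed under the null hypothesis.
    x = ts / 2.0
    return math.exp(-x) * sum(x**k / math.factorial(k) for k in range(n))
```

For a single uniform p-value (N = 1), chi2_sf_2n(fisher_ts([p]), 1) recovers p exactly.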

However, for short timescale searches, we often have µ_i ≪ 1 for the higher-energy bins. In these cases, absent a source, the event counts will almost always be zero. As the probability of observing zero or more counts is 100%, this yields a highly non-uniform p-value distribution peaked at 1. Even with a background expectation of a few events, the p-value distribution is sufficiently non-uniform to break the TS ∼ χ²_2N approximation. However, under the null hypothesis (Poisson distributed pure background) H₀, we can precisely calculate the probability of measuring TS from the search count measurements n_i and background expectations µ_i:

P(TS | H₀(µ⃗)) = ∏_i P(n_i | H₀(µ_i)) = ∏_i (µ_i^{n_i} / n_i!) e^{−µ_i} .    (6.5)

This allows for a brute-force calculation of the null distribution of TS for a given set of single-bin background expectations µ⃗ by evaluating equation 6.5 for all possible integer combinations of n_i. Of course, as the Poisson distribution never goes precisely to zero, there are technically infinitely many combinations of n_i. However, to calculate the probability of observing TS_measured precisely, P(TS_measured | H₀(µ⃗)), it is sufficient to evaluate all integer combinations of n_i that yield a TS_possible with:

P(TS_possible | H₀(µ⃗)) < P(TS_measured | H₀(µ⃗)) .    (6.6)

The total p-value for the multi-bin measurement, the probability of observing a TS greater than or equal to TS_measured by chance, is then:

p_tot = 1 − Σ_j P(TS_j | H₀(µ⃗))    (6.7)

where the index j runs over all possible TS values that satisfy equation 6.6. The probability p_tot can then be used as a measure of the consistency of the seven search counts (from the seven bins) with the pure background null hypothesis H₀(µ⃗). For comparison with the conventional 5σ detection threshold, p_tot can be converted into a standard normal significance with the inverse complementary error function:

S = √2 erfc⁻¹(2 p_tot) .    (6.8)
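A brute-force sketch of equations 6.5 – 6.8 for a small number of bins. The enumeration is truncated at n_max counts per bin, which is adequate for the small background expectations where this method applies; the bin count and expectations in the test are illustrative:

```python
import math
from itertools import product
from statistics import NormalDist

def count_prob(counts, mus):
    # Equation 6.5: joint Poisson probability of an integer count vector.
    return math.prod((mu**n / math.factorial(n)) * math.exp(-mu)
                     for n, mu in zip(counts, mus))

def p_tot(measured, mus, n_max=25):
    # Equations 6.6 and 6.7: sum the probabilities of all enumerated count
    # combinations strictly less probable than the measured one, and
    # subtract from 1. Cost grows as n_max**len(mus), which is why the
    # text notes this becomes impractical for large backgrounds.
    p_meas = count_prob(measured, mus)
    less_probable = 0.0
    for counts in product(range(n_max), repeat=len(mus)):
        p = count_prob(counts, mus)
        if p < p_meas:
            less_probable += p
    return 1.0 - less_probable

def significance(p):
    # Equation 6.8: sqrt(2) * erfcinv(2p), i.e. the (1 - p) quantile of a
    # standard normal distribution.
    return NormalDist().inv_cdf(1.0 - p)
```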

To validate the background measurements and statistical assumptions made above, we examined a random patch of empty sky with a two-second search duration. Roughly 10⁶ measurements of TS and p_tot were made at different positions and times. By definition, the fraction of measurements with p_tot ≤ p (the observation frequency of some p-value p) should be equal to p for any choice of p. Figure 6.3 demonstrates that the observation frequency of p-values in the empty sky test meets this expectation: the assigned values of p_tot accurately quantify the consistency of the seven single-bin measurements with a Poisson null hypothesis constructed from the measured background expectations.

The approach outlined above does, however, have limitations. While the brute-force

calculation described by equations 6.5 through 6.7 allows for precise calculation of p_tot, the process of evaluating the relevant combinations of integer signal counts n_i becomes computationally impractical for large background expectations. The background is determined by the cosmic-ray flux, the size of the search circle, and the search duration ∆t. For the search circle radii listed in table 6.1, the time required to compute p_tot becomes prohibitive for ∆t ≳ 1 min. This method is therefore ill-suited for the study of long GRBs. The computational requirements are also prohibitive in calculating the pre-trial distribution of p_tot needed to determine the trials factor associated with spatially overlapping search circles, which are required for the analysis of bursts with large positional uncertainties. It should be possible to overcome these computational hurdles by analyzing the null distribution of TS with the kinds of simulations discussed in section 6.3 that are implemented for the ZEBRA analysis. However, as discussed in section 6.4.1, this multi-bin excess analysis does not provide any advantage in sensitivity over the ZEBRA method. Potential solutions to these computational barriers were therefore not pursued.

Figure 6.3: Frequency of measuring p-values calculated with equation 6.7 for a random patch of sky with no known gamma-ray sources. The observation frequency for a given p-value p is calculated as the fraction of all measurements with an assigned p-value less than or equal to p. The good agreement with the black one-to-one line validates the background calculations and statistical assumptions.

6.3 ZEBRA

As we implemented the multi-bin excess analysis of the previous section, the ZEnith Band Response Analysis (ZEBRA) framework was developed in parallel by the HAWC collaboration to overcome the shortcomings of the single-bin analysis of section 6.1 with a different approach. Here, we will briefly summarize the ZEBRA algorithms and detail the procedures used to tune its parameters to the specific needs of our GRB search. In section

6.4, we will compare the sensitivity of the various ZEBRA search modes outlined here and discuss the circumstances under which they are used in the HAWC 41 month GRB search.

The ZEBRA analysis employs a maximum likelihood ratio calculation to compare the pure background, Poisson null hypothesis to the expected signal from a source with an input spectral hypothesis. The expected detector response to the assumed spectrum is simulated separately in zenith bands of ∼5° width. With a specified source location and time, a HEALPix sky map [85] containing the source exposure information is constructed and convolved with the detector response to determine the expected number of excess events from the simulated source in each map pixel. The background in each pixel is calculated by convolving the all-sky event rate with a local acceptance map detailing the probability of an event arriving from a given pixel. In this analysis, both the all-sky rate and local acceptance are calculated separately for each GRB using data observed in the one hour period around the burst. Lastly, a data map containing the number of events observed in each pixel during the search is constructed. The resolution chosen for the excess, background, and data maps provides pixels of solid angle ∼4·10⁻⁶ steradians with a mean spacing of 0.1145 degrees.

With the data, excess, and background maps in hand, a flux-dependent likelihood ratio can be defined as:

L(f) = 2 Σ_i ln[ P(b_i + e_i f, d_i) / P(b_i, d_i) ]    (6.9)

where the sum runs over every pixel from every bin out to the 95% containment radius of the point-spread function calculated in the detector response simulation; d_i, b_i, and e_i are, respectively, the data, background, and expected excess per unit flux values in pixel i of the corresponding maps; f is the flux normalization for the spectral hypothesis; and

P(x, d_i) = (x^{d_i} / d_i!) e^{−x}    (6.10)

is the Poisson probability of observing d_i events with Poisson mean x.

A test statistic TS is then calculated from this likelihood ratio function using one of two methods. In the "standard" mode, the reported TS is the maximum value of L(f) over all f. In the case of a detection, the value of f that maximizes L(f) can be taken as the measured flux. There is a potential flaw in this approach, however. In the GRB search, we are studying extra-galactic sources with, in most cases, unknown redshifts. If we underestimate the redshift, we may have a significant expected excess in the higher-energy (>1 TeV) bins, but EBL attenuation will eliminate any detectable emission at those energies. So even if there is significant emission in the lower-energy bins, the data will be inconsistent with our spectral hypothesis and we will obtain a low TS.

We therefore also test a "spectrum-independent" mode. Here, equation 6.9 is separated into seven sums running over pixels in one of the seven air-shower size bins used in our GRB search, and the likelihood ratio is maximized separately for each bin. The final TS is then calculated as the sum of the seven independently maximized likelihood ratios. Denoting bins with the index j:

L(f⃗) = 2 Σ_j Σ_i ln[ P(b_i + e_i f_j, d_i) / P(b_i, d_i) ] .    (6.11)
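A minimal sketch of the per-pixel likelihood machinery of equations 6.9 – 6.11. The flux grid stands in for a proper numerical maximization, and the map values in the test are illustrative, not HAWC data:

```python
import math

def log_poisson(x, d):
    # Log of equation 6.10: log probability of d counts with Poisson mean x.
    return d * math.log(x) - x - math.lgamma(d + 1.0)

def likelihood_ratio(f, data, bkg, excess):
    # Equation 6.9: twice the summed per-pixel log-likelihood ratio of the
    # signal-plus-background hypothesis (flux normalization f) to background.
    return 2.0 * sum(log_poisson(b + e * f, d) - log_poisson(b, d)
                     for d, b, e in zip(data, bkg, excess))

def standard_ts(data, bkg, excess, flux_grid):
    # "Standard" mode: maximize over flux normalizations (grid scan here;
    # including f = 0 guarantees a non-negative TS).
    return max(likelihood_ratio(f, data, bkg, excess) for f in flux_grid)

def spectrum_independent_ts(per_bin_maps, flux_grid):
    # Equation 6.11: maximize separately in each air-shower size bin
    # and sum the independently maximized ratios.
    return sum(standard_ts(d, b, e, flux_grid) for d, b, e in per_bin_maps)
```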

In this method, a different flux normalization f_j is found in each bin. If we treated the search region in each bin as one big "pixel", as is done in the multi-bin excess analysis of the previous section, the sum over i in equation 6.11 would disappear. We would have a single value for the data, excess, and background in each bin (d_j, e_j, and b_j, respectively), and the f_j values that maximize the likelihood ratio would clearly be given by e_j f_j = d_j − b_j (excess = data − background), regardless of spectral assumptions. This would be a truly spectrum-independent test statistic. The TS given by maximizing the likelihood ratios in equation 6.11 is not; a spectral hypothesis is still required to determine the point-spread function in the detector response simulation and adjust the expected excess in each pixel for angular distance from the source location. To calculate the point-spread function, we use a simple power law with spectral index −2. However, HAWC's point-spread function is not strongly dependent on the source spectrum, and the primary danger in assuming a spectrum, the issue of unknown source redshifts discussed above, is avoided. We therefore refer to this method as ZEBRA's "spectrum-independent" mode.

Before discussing the interpretation of the TS obtained in the standard and spectrum-independent search modes and comparing the sensitivity of these analyses, one additional complication must be introduced. Many of the bursts studied in HAWC's 41 month GRB search were detected only by Fermi-GBM, which cannot always precisely localize GRBs and sometimes reports uncertainties in the measured position as large as 15 degrees. For bursts with a positional uncertainty larger than HAWC's angular resolution, we run a "grid search", analyzing the entire uncertainty region defined by the patch of sky with an angle to the measured source location less than or equal to Fermi-GBM's reported 1σ position error. This is done by breaking the uncertainty region into a grid of search points separated by

∼0.23 degrees (see table 6.1 from section 6.2 to compare this number to the angular resolution of the seven analysis bins used in the 41 month GRB search). The ZEBRA analysis is then run on every one of these search points, and the reported TS for the search is given by the maximum TS of any of these spatial trials.

We now turn our attention to the interpretation of ZEBRA's output TS when running in any of the search modes discussed above. When evaluating a maximum likelihood ratio, one would typically apply Wilks' theorem and assume that the TS has a χ² distribution under the null hypothesis. However, due to the issues associated with low statistics in short duration searches (discussed in detail in the context of the multi-bin excess analysis of the previous section), Wilks' theorem does not apply even for the standard search mode with a single spatial trial. Even if Wilks' theorem could be employed for this simple case, the grid search discussed above incurs a spatial trials factor that must be taken into account, and the spectrum-independent search introduces additional degrees of freedom to the TS by maximizing the likelihood ratio separately in each bin.

To address such complications, ZEBRA provides a method to simulate its test statistic for pure background trials and determine the minimum TS corresponding to an upwards fluctuation at or above the standard 5σ detection threshold. As discussed above, the ZEBRA TS is calculated from a set of data, background, and expected excess maps. The simulated excess map is constructed with an input spectral hypothesis for a source at zenith bursting for a specified duration. The background map is produced by convolving HAWC's measured all-sky rate with a simulated local acceptance map. Lastly, the data map is produced by Poisson fluctuating the background. Many such data maps are created, and TS is calculated for each trial. The False Alarm Rate (FAR) at a given TS, the rate at which the analysis will obtain an equal or larger TS from pure background, is then calculated as the number of trials that yielded an equal or larger TS divided by the total simulation duration (the number of trials multiplied by the search duration simulated in each trial). In the case of a grid search, the FAR is reported per square degree of the simulated grid. An exponential function A e^{−TS/2} is then fit to the tail of the simulated FAR distribution and used to extrapolate the results down to higher, more unlikely TS values.

The results of the false alarm rate simulations we carried out for the GRB analysis are shown below. We use a simple E−2 power law for the spectral hypothesis, a 60s search duration, and a 100 square degree patch of sky for the grid search. Figure 6.4 shows the results for ZEBRA's standard search mode without spatial trials, figure 6.5 for ZEBRA's spectrum-independent search mode without spatial trials, and figure 6.6 for ZEBRA's spectrum-independent grid search.
(As we will see in section 6.4.1, the spectrum-independent search is more sensitive for bursts at unknown redshifts, and since all GRBs in our sample with known redshifts have precise position measurements, we did not evaluate the TS for a standard grid search.)
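The FAR simulation loop can be sketched as follows. Here ts_of_map is a caller-supplied function standing in for the full ZEBRA map analysis, and the Poisson sampler uses Knuth's algorithm since the Python standard library lacks one:

```python
import math
import random

def poisson_sample(mu, rng):
    # Knuth's algorithm: draw a Poisson-distributed integer with mean mu.
    limit, k, prod = math.exp(-mu), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def simulate_far(bkg_map, ts_of_map, n_trials, trial_duration, seed=1):
    # Poisson-fluctuate the background map many times, evaluate the TS of
    # each fake data map, and return a function giving the false alarm
    # rate: trials with an equal or larger TS per unit simulated time.
    rng = random.Random(seed)
    ts_values = [ts_of_map([poisson_sample(b, rng) for b in bkg_map])
                 for _ in range(n_trials)]
    total_time = n_trials * trial_duration
    return lambda ts: sum(t >= ts for t in ts_values) / total_time
```

In the real analysis, an exponential A e^{−TS/2} is then fit to the tail of this simulated distribution to extrapolate to rarer TS values.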

Figure 6.4: Simulated false alarm rate as a function of TS for a targeted search in ZEBRA's standard analysis mode. Data maps are simulated by Poisson fluctuating the measured background (from an empty patch of sky at zenith) over many 60s trials. The false alarm rate at a given TS is calculated as the number of trials for which the ZEBRA search yielded an equal or greater output TS divided by the total simulation duration. The blue line shows the simulation results, the light blue band the 1σ statistical error, and the dashed orange line an exponential fit to the tail.

Figure 6.5: Simulated false alarm rate as a function of TS for a targeted search in ZEBRA’s spectrum-independent analysis mode. Data maps are simulated by Poisson fluctuating the measured background (from an empty patch of sky at zenith) over many 60s trials. The false alarm rate at a given TS is calculated as the number of trials for which the ZEBRA search yielded an equal or greater output TS divided by the total simulation duration. The blue line shows the simulation results, the light blue band the 1σ statistical error, and the dashed orange line an exponential fit to the tail.

Figure 6.6: Simulated false alarm rate as a function of TS for a grid search in ZEBRA's spectrum-independent analysis mode. Search points are spread out by ∼0.23 degrees over a 100 square degree grid. Data maps are simulated by Poisson fluctuating the measured background (from an empty patch of sky at zenith) over many 60s trials. The false alarm rate at a given TS is calculated as the number of trials for which the ZEBRA search yielded an equal or greater output TS divided by the total simulation duration. The blue line shows the simulation results, the light blue band the 1σ statistical error, and the dashed orange line an exponential fit to the tail.

Defining the search duration as ∆t (60s for the simulations documented here) and the

solid angle of the position uncertainty region for grid searches as Ω_search, the p-value of a given TS (the probability of observing an equal or greater TS from a background fluctuation) is then given by

p(TS) = ∆t · FAR(TS) (6.12)

for a targeted search with no spatial trials, and

p(TS) = ∆t · Ω_search · FAR(TS)    (6.13)

for a grid search. Using the exponential fits to the FAR tails shown in figures 6.4 through 6.6, we can calculate the minimum TS with a p-value corresponding to a 5σ fluctuation

(∼3·10⁻⁷). Note that at the 5σ level, a factor of two error in the p-value only corresponds to a ∼3% error in the corresponding standard normal significance, and an error in the p-value of a full order of magnitude only causes a ∼10% error in the significance. We therefore do not need to determine the p-value precisely to know whether a fluctuation is near or beyond the 5σ detection threshold, and can proceed with confidence that the exponential FAR fits are sufficiently accurate to determine the 5σ TS.

The detection threshold results for the three ZEBRA search modes simulated in figures 6.4 - 6.6 are provided in table 6.2, with the minimum 5σ spectrum-independent grid search TS calculated for position uncertainties of 1, 5, and 10 degrees. (For bursts with uncertain

positions in our 41 month GRB search, the 5σ TS is recalculated for the Ω_search corresponding to the position uncertainty reported by Fermi-GBM.) This analysis was repeated for search durations ranging from a second to an hour and for various zenith angles; the resultant minimum 5σ TS values were consistent to within a few percent. The detection threshold is therefore taken to be independent of ∆t and the source zenith angle. In the event of a detection near the 5σ level, it would be prudent to revisit this assumption and re-evaluate the false alarm rate for the search duration and source zenith angle used in the successful search; however, this is not a scenario that we encountered. The FAR simulations conducted here, and the resultant detection thresholds listed in table 6.2, are for a source at zenith, where the background is highest, upwards background fluctuations are the most likely, and hence the detection thresholds the most conservative.
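The insensitivity of the significance to modest p-value errors at the 5σ level is easy to check numerically, using the identity √2 erfc⁻¹(2p) = Φ⁻¹(1 − p):

```python
from statistics import NormalDist

def significance(p):
    # Standard normal significance for a one-sided p-value.
    return NormalDist().inv_cdf(1.0 - p)

p_5sigma = 2.87e-7  # p-value at the 5 sigma level

# A factor-of-two error in the p-value shifts the significance by ~3%:
shift_x2 = 1.0 - significance(2.0 * p_5sigma) / significance(p_5sigma)

# An order-of-magnitude error shifts it by only ~10%:
shift_x10 = 1.0 - significance(10.0 * p_5sigma) / significance(p_5sigma)
```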

Minimum 5σ TS Values for ZEBRA Analyses

    Search Mode             Position Uncertainty (degrees)    5σ TS
    Standard                0                                 25
    Spectrum Independent    0                                 35
    Spectrum Independent    1                                 44
    Spectrum Independent    5                                 50
    Spectrum Independent    10                                53

Table 6.2: The minimum TS corresponding to a 5σ fluctuation for various ZEBRA analyses. A grid search with search points separated by ∼0.23 degrees is employed for non-zero position uncertainties.

It should also be noted that these detection thresholds are calculated for an analysis with a single temporal search window, i.e., an analysis without multiple time trials. However, as discussed in section 7.2, our 41 month GRB search does indeed employ multiple, overlapping time windows. So to calculate the appropriate minimum 5σ TS values for the actual search, one would have to apply a temporal trials factor. As conducting multiple trials makes it more likely to see an upwards fluctuation in the background, the post-trials 5σ TS values are necessarily larger than those reported in table 6.2. However, we did not observe any fluctuations that came close to even the pre-trials 5σ TS values calculated here. It was therefore not necessary to compute a temporal trials factor to conclude that the results of the search were consistent with background. In the event of a detection, however, this issue would need to be addressed. As the ZEBRA FAR simulation algorithms allow for multiple, overlapping time trials, this would entail a simple extension of the false alarm rate analysis presented here.

6.4 HAWC’s Sensitivity to GRBs

We now turn our attention to analyzing HAWC’s sensitivity to gamma-ray bursts with the search methods discussed in the previous sections. We will motivate the use of the ZEBRA algorithms for the 41 month GRB search and discuss the prospects of observing a GRB with HAWC. Sensitivity is calculated as the minimum flux normalization at 1 TeV that yields a 50% detection probability. An E−2 spectrum is assumed for a burst at redshift z, and the resultant flux at HAWC is calculated with the Dominguez 2011 EBL attenuation model [49]. This flux is then passed through HAWC’s detector simulations to obtain an expected number of observed excess events from the source. For a given flux normalization, many trials are conducted where the background (measured from data at the hypothetical burst’s assumed zenith angle) is Poisson fluctuated. The detection significance of the excess event count is

then calculated using one of the search algorithms from sections 6.1 - 6.3, and the fraction of trials that yield a significance above the 5σ detection threshold is obtained. The flux normalization (calculated at 1 TeV) is then varied, and the minimum normalization that yields a 50% detection probability is taken as the reported sensitivity.
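This sensitivity definition amounts to a root-find over the flux normalization. A simplified sketch with a caller-supplied detection test (which encapsulates the 5σ TS threshold); all inputs in the test are hypothetical:

```python
import math
import random

def poisson_sample(mu, rng):
    # Knuth's algorithm: draw a Poisson-distributed integer with mean mu.
    limit, k, prod = math.exp(-mu), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def detection_probability(norm, excess_per_norm, bkg, detect, n_trials, rng):
    # Fraction of fluctuated source-plus-background trials that pass the
    # detection test.
    hits = sum(detect([poisson_sample(b + norm * e, rng)
                       for b, e in zip(bkg, excess_per_norm)])
               for _ in range(n_trials))
    return hits / n_trials

def sensitivity(excess_per_norm, bkg, detect, lo, hi,
                n_trials=400, seed=1, n_iter=20):
    # Bisect for the minimum flux normalization with a 50% detection
    # probability (the sensitivity definition used in the text).
    rng = random.Random(seed)
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if detection_probability(mid, excess_per_norm, bkg,
                                 detect, n_trials, rng) < 0.5:
            lo = mid
        else:
            hi = mid
    return hi
```

The Monte Carlo noise in each probability estimate makes the bisection approximate; the real analysis simply uses enough trials per flux point for the 50% crossing to be well determined.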

6.4.1 Search Algorithm Comparisons

As discussed in detail in section 6.3, the ZEBRA algorithms can be run in various search modes. We begin by determining ZEBRA’s most sensitive search mode before comparing it to the other algorithms documented in sections 6.1 - 6.3. For this purpose, we consider short GRBs of 0.3s duration with a precisely known position (no spatial trials) at a 5◦ zenith angle. Figure 6.7 shows HAWC’s sensitivity to such a burst as a function of redshift for three ZEBRA search modes. In red are the results obtained with ZEBRA running in its standard search mode with a spectral hypothesis given by an intrinsic E−2 power law adjusted for EBL attenuation (calculated with the Dominguez 2011 model [49]). This provides the best sensitivity of any ZEBRA search mode tested. However, in the 41 month search, we typically do not know the burst redshift and cannot properly account for EBL attenuation. To address these cases, we also test ZEBRA in its standard search mode with no EBL attenuation built into its E−2 spectral hypothesis (in green), and ZEBRA in its spectrum-independent search mode (in blue). The right panel of figure 6.7 compares these two search modes to the best case scenario of the standard, known redshift search; the dashed blue line shows the ratio of the blue to red curves from the left panel, and the dashed green line the ratio of the green to red curves from the left panel. As predicted (for the reasons discussed in section 6.3), the standard search with no EBL attenuation considered performs comparatively worse than the standard, known redshift search, with the loss in sensitivity increasing with z and exceeding 40% at z ≈ 0.8. The spectrum-independent search, however, performs as expected, with its sensitivity remaining

fairly stable at ∼10% worse than the standard, known z search regardless of the burst

redshift. We therefore determined that ZEBRA's optimal search mode is the standard search in the known redshift case and the spectrum-independent search in the case of an unknown burst redshift. These conclusions remained valid when we repeated this analysis for several combinations of the source spectral index and the index used in ZEBRA's spectral hypothesis. It seems that comparing the null hypothesis to something physical (and eliminating the extra degrees of freedom incurred by considering each analysis bin separately) is always preferable unless the spectral assumptions fail to take into account cut-offs (intrinsic or EBL induced) that eliminate emission from the higher-energy bins.

Figure 6.7: A comparison of HAWC's sensitivity with various ZEBRA search modes, as a function of redshift, to short (0.3s) GRBs with a simple E−2 power-law spectrum. Sensitivity is calculated as the minimum flux normalization at 1 TeV that yields a 50% detection probability. Left: sensitivity with ZEBRA's standard search mode with EBL attenuation (calculated with the actual burst redshift) built into the E−2 spectral hypothesis (in red), ZEBRA's standard search mode with no EBL attenuation built into the E−2 spectral hypothesis (in green), and ZEBRA's spectrum-independent search mode (in blue). Right: the ratio of the blue and green sensitivity curves to the best case scenario (red curve) of the standard search mode with a known burst redshift.

Now that the search modes that optimize sensitivity in the ZEBRA analysis have been determined, we compare the sensitivity of these ZEBRA searches to the other algorithms considered: the single-bin excess analysis from section 6.1 and HAWC's published GRB paper [83], and the multi-bin excess analysis of section 6.2. Figure 6.8 compares the sensitivity of these two excess algorithms to ZEBRA in its standard and spectrum-independent search modes. As before, sensitivity is plotted as a function of redshift for a 0.3s burst with

an E−2 spectrum. The ZEBRA searches are clearly significantly more sensitive. The right panel of figure 6.8 plots the ratio of the sensitivity curves of the left panel to the best case standard, known z ZEBRA search.

Figure 6.8: A comparison of HAWC's sensitivity with the search algorithms of sections 6.1 - 6.3, as a function of redshift, to short (0.3s) GRBs with a simple E−2 power-law spectrum. Sensitivity is calculated as the minimum flux normalization at 1 TeV that yields a 50% detection probability. Left: sensitivity with ZEBRA's standard search mode with EBL attenuation (calculated with the actual burst redshift) built into the E−2 spectral hypothesis (in red), ZEBRA's spectrum-independent search mode (in blue), the multi-bin excess analysis of section 6.2 (in cyan), and the single-bin excess analysis of section 6.1 (in black). Right: the ratio of the blue, cyan, and black sensitivity curves to the best case scenario (red curve) of the standard search mode with a known burst redshift.

The published single-bin excess analysis [83] does not have the low-energy (<1 TeV) improvements discussed in chapter 5. As a result, its sensitivity declines with increasing redshift at a more rapid rate than the other analyses. (At large redshifts, EBL attenuation removes almost all of the higher-energy events, emphasizing the importance of the low-energy improvements.) At a redshift of z = 1, this single-bin analysis is a full order of magnitude less sensitive than the standard, known z ZEBRA search. While the multi-bin excess analysis does have the low-energy enhancements, the sensitivity of its simple excess search, where neither a physical alternative hypothesis nor the details of HAWC's point-spread function are taken into account, is worse than the standard, known z ZEBRA search by a factor of ∼2.4. This represents a significantly greater loss in sensitivity than the ∼10% incurred when switching from the standard ZEBRA search to the spectrum-independent approach needed for bursts with unknown redshift.

6.4.2 The Most Sensitive Search Method

In summary, we chose the ZEBRA algorithm, the most sensitive GRB search method of those considered in sections 6.1 - 6.3, for use in HAWC's 41 month GRB search. For bursts with known redshift, we run ZEBRA in its standard search mode with a spectral hypothesis defined by an E−2 power law adjusted for EBL attenuation (calculated with the Dominguez 2011 model [49]). For the more typical case of an unknown burst redshift, we use ZEBRA's "spectrum-independent" mode, which only uses the E−2 hypothesis to calculate HAWC's point-spread function.

To put the sensitivities quoted for this search in the previous section into context, we consider its predicted performance for GRB 130427A, one of the brightest bursts observed by Fermi (and the GRB with the highest-energy photon ever observed by Fermi-LAT). HAWC was still under construction at the time of this GRB, with only 30 of the 300 tanks in the main array operational, and the burst was outside HAWC's field of view. We therefore, in figure 6.9 below, compare the spectrum measured by Fermi [23] to HAWC's predicted sensitivity to this spectrum (with the full detector operational) for an identical burst within HAWC's field of view (at zenith angles of 0 and 30 degrees). GRB 130427A occurred at a redshift of 0.34; we use this value in plotting HAWC's sensitivity under ZEBRA's standard search mode.

It is clear that HAWC should easily be able to detect such a GRB, even if it were observed close to the edge of our field of view at a zenith angle of 30 degrees. This would be so even if we did not know the burst redshift and ran the ∼10% less sensitive spectrum-independent search. While we did not detect any emission from any of the GRBs analyzed in the 41 month GRB search, this result indicates that, using the methods developed here, the prospects of HAWC observing a bright GRB in the future are indeed promising.

Figure 6.9: GRB 130427A (z = 0.34) is, to date, the GRB with the highest-energy photon observed by Fermi-LAT. The blue dashed line shows Fermi's spectral fit to the prompt emission observed by Fermi-GBM. A fit to the combined GBM and LAT data taken between 11.5 and 33 seconds after the burst is shown in dashed red with a 1σ error contour. The dotted red line is the extrapolation of this fit adjusted for EBL attenuation [49]. HAWC's sensitivity to this EBL-adjusted spectrum is shown in black for zenith angles of 0 and 30 degrees.

Chapter 7 | The 41 Month GRB Search

Using the reconstruction and binning techniques of chapter 5 and the ZEBRA search algorithm outlined at the beginning of section 6.4.2, we analyzed 41 months of HAWC data

for ≳100 GeV emission from known gamma-ray bursts. Section 7.1 provides details on the burst sample used in the search, and section 7.2 describes the time windows analyzed for emission. The search results are presented in section

7.3. No fluctuations inconsistent with background were observed, so upper limits on the ≳100 GeV flux are discussed in section 7.3.2. These limits are compared to Fermi measurements in section 7.3.3, and their GRB model implications are discussed in section 7.4.

7.1 Burst Sample

The online GRB catalogs for Fermi-GBM [86], Fermi-LAT [87], and Swift [88] were scanned for bursts occurring between December 2014 and April 2018 that fall within HAWC’s

field of view (zenith angle < 45 degrees) and occurred when HAWC was taking data. The Caltech gamma-ray burst online index [89] was checked for additional bursts detected by other instruments, but none were found that fall within HAWC’s field of view. This process provided 130 GRBs for the HAWC search. Details for each burst relevant to understanding our analysis are provided in table A.1 in appendix A.

164 7.2 Time Windows

Section 6.3 explains how we search for emission in a given period of time. Here, we describe which time periods are analyzed in our search. For each GRB, we search for short timescale emission and long timescale emission. The windows used in the short timescale search are set by the burst T 90 (as measured by Fermi-

GBM, or Swift-BAT if the GRB wasn’t seen by Fermi). Here, we look from tb − T 90 to

tb + 3 · T 90, where tb is the start time for the observed T90 emission period, using search windows of duration T 90 and 2 · T 90 with 50% overlap (see figure 7.1). In the long timescale search, we analyze data from 10 minutes before the burst to an hour after using a 70 minute search window for the full duration, seven non-overlapping 10 minute search windows, and 70 non-overlapping one minute search windows. While we did not expect to see significant emission in a single one or ten minute window long after the burst, these searches were performed to qualitatively check the data for an unusual clustering of upwards fluctuations that could indicate a more appropriate time window for analysis.
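The short timescale windows can be generated programmatically; a sketch consistent with figure 7.1:

```python
def short_timescale_windows(t_b, t90):
    # Windows of duration T90 and 2*T90, advancing by half a window
    # (50% overlap), covering t_b - T90 to t_b + 3*T90. The small
    # tolerance guards against floating-point round-off at the endpoint.
    windows = []
    for duration in (t90, 2.0 * t90):
        start = t_b - t90
        while start + duration <= t_b + 3.0 * t90 + 1e-9 * t90:
            windows.append((start, start + duration))
            start += 0.5 * duration
    return windows
```

For a burst with T90 = 1 s starting at t_b = 0, this yields seven windows of duration T90 and three of duration 2·T90.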

Figure 7.1: A diagram illustrating the time windows used to search for emission in the short timescale analysis. Time 0 is the starting time for the T90 emission period, and T90 denotes the length of time for 90% of the emission to be observed in the instrument that detected the burst.

In the case of a detection, it would become necessary to calculate the trials factor incurred by searching multiple time periods in this manner. However, as discussed in section 7.3, no significant (5σ level) excesses were observed. Thus, analyzing the temporal trials factor was not necessary to conclude that all results were consistent with pure background.

7.3 Results

While no significant emission was detected from any burst in our sample (as discussed in more detail in section 7.3.1 below), substantial improvements were made in HAWC's ≳100 GeV gamma-ray burst flux limits. The limit calculation is described in section 7.3.2, and the HAWC limits are compared to lower-energy Fermi detections in section 7.3.3.

7.3.1 Background Consistency

The time windows described in section 7.2 for all 130 bursts in the HAWC 41 month GRB sample were analyzed for background consistency using the ZEBRA search algorithm of section 6.3. Recall that the ZEBRA test statistic (TS) corresponding to a 5σ background fluctuation depends on whether or not the burst redshift is known and on the burst's positional uncertainty (as reported by the instrument – Fermi-GBM, Fermi-LAT, or Swift – with the most precise localization). The 5σ TS for a single temporal trial was calculated for each burst as described in section 6.3 and compared to the output TS values observed in the search. The measured output TS values for GRB 171120A (a burst with interesting flux limits that we discuss in more detail in section 7.3.3) are shown in figure 7.2 below as an example. The measured TS values are plotted for all time windows discussed in section 7.2 with the TS axis scaled from 0 to 35: the 5σ fluctuation level for this GRB. As can be seen, none of the temporal trials exhibit any fluctuations that are inconsistent with background. We examined versions of this plot for all 130 bursts in the 41 month HAWC sample; none show any interesting fluctuations. The 1 and 10 minute trials in the long timescale search (e.g. figure 7.2 left) were qualitatively examined for an unusual clustering of upwards fluctuations that might indicate emission in a time window that wasn't analyzed directly, but no such behavior was seen.

Figure 7.2: Example search output for GRB 171120A (T90 = 64s). This burst is precisely localized but does not have a redshift measurement, so we ran the targeted spectrum-independent search (which has a minimum 5σ TS of 35 – see section 6.3). The search test statistic (TS) is plotted as a function of time for all search windows in the long timescale search (left) and short timescale search (right). The y-axis denotes the duration of the search window (∆t). Within each horizontal band associated with a given search duration, adjacent vertical bands represent the output TS for the individual search windows of that duration. These vertical bands are centered on the time at the middle of the individual search windows that they represent. Their widths are equal to the search duration in the long timescale search; in the short timescale search, we have overlapping windows and the plotted widths are shortened to allow an entry for every search window analyzed. The color axis is scaled from 0 to the minimum 5σ TS obtained in section 6.3.

It is worth highlighting again that while this analysis corrects for multiple spatial trials in the case of poorly localized GRBs, no temporal trials correction is implemented. As this correction would, necessarily, increase the TS corresponding to a 5σ fluctuation, the consistency of the data with pure background is even stronger than this analysis suggests.

7.3.2 Flux Limits

As no significant emission was detected for any burst in our 41 month GRB sample, upper limits on the HAWC flux were calculated for the prompt T90 emission period, the time window during which Fermi-GBM (or Swift-BAT if there was no GBM detection) observed 90% of its detected flux, and for the 70 minute search period. The T90 time window is, of course, of particular interest as that is when the bulk of the known lower-energy emission was actually observed. As the higher-energy GeV emission detected in many GRBs by Fermi-LAT is often seen arriving well after the initial T90 period, limits were calculated for the longer 70 minute time window as well. Since there is only one example of observed TeV emission from a GRB (MAGIC's detection of GRB 190114C, which was outside of HAWC's field of view), a certain amount of arbitrariness in the choice of the specific duration used in this longer timescale limit (70 minutes) was unavoidable. The limits obtained for these two time periods for all 130 bursts are provided in table B.1 in appendix B; limits for bursts detected by Fermi-LAT, calculated for the same time window used in the LAT measurement, are discussed in greater detail in section 7.3.3. We calculate flux limits as the upper end of 90% frequentist confidence intervals constructed with the Feldman & Cousins ordering principle [97] assuming an E−2 power-law spectrum. This spectrum is adjusted for EBL attenuation using the Dominguez 2011 EBL model from Ref. [49]. If the burst redshift is unknown, limits are calculated for assumed redshifts of 0.3 and 1.0. As the specific time windows for the flux limits were chosen without regard to the search output, a temporal trials correction is, again, unnecessary. In the case of poorly localized GRBs with multiple spatial trials, we calculate the flux limits using the spatial trial with the largest upwards fluctuation. This choice will always yield the largest limit and therefore provides the most conservative result. The energy range over which these limits are valid depends on the burst redshift and zenith angle as well as its assumed spectrum. As redshift increases, EBL attenuation removes a greater fraction of the higher-energy photons, skewing HAWC's observed energy distribution down to lower energies.
As zenith angle increases, a greater fraction of the lower-energy photons is absorbed by Earth's atmosphere before reaching the HAWC detector, skewing the observed energy distribution towards higher energies. We determine the energy range of each of the seven air-shower size bins used in this analysis for a given burst as the central 90% of the energy distribution modeled in ZEBRA's detector response simulation for a burst with an E−2 spectrum (see section 6.3). The overall energy range for the flux limits is taken from the lower bound for the lowest-energy bin (bin 0a) and the upper bound for the highest-energy bin (bin 4). For a redshift of 1.0 and zenith angle of 0 degrees, parameters that yield particularly low energy bounds, this process yields an energy range of 32 – 750 GeV. For a redshift of 0.3 and zenith angle of 40 degrees, parameters that yield particularly high energy bounds, we obtain an energy range for our flux limits of 68 – 4900 GeV. The bin 0a and bin 4 energy distributions used to obtain the energy bounds for these example parameters are shown in figure 7.3 below.
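The central-90% prescription above amounts to taking weighted percentiles of the simulated energy distributions. A minimal sketch, assuming hypothetical arrays of simulated photon energies and their spectral weights taken from the detector response simulation (the function names are our own):

```python
def weighted_percentile(energies, weights, q):
    """Energy below which a fraction q of the weighted distribution lies."""
    pairs = sorted(zip(energies, weights))
    total = sum(w for _, w in pairs)
    running = 0.0
    for e, w in pairs:
        running += w
        if running >= q * total:
            return e
    return pairs[-1][0]


def limit_energy_range(e_bin0a, w_bin0a, e_bin4, w_bin4):
    """Validity range for the flux limits: the 5% point of the bin 0a
    energy distribution (lower bound) and the 95% point of the bin 4
    distribution (upper bound), following the central-90% prescription."""
    return (weighted_percentile(e_bin0a, w_bin0a, 0.05),
            weighted_percentile(e_bin4, w_bin4, 0.95))
```

Only the tails matter here: the 5% point of the lowest bin and the 95% point of the highest bin set the quoted bounds (e.g. 32 and 750 GeV for z = 1.0 at zenith 0 degrees).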

Figure 7.3: Energy distributions in Bin 0a (left) and Bin 4 (right) for bursts with an E−2 power-law spectrum. The lower (upper) 5% of the bin 0a (bin 4) distributions are shaded; the energy of this 5% threshold is taken as the lower (upper) bound for the energy range over which our flux limits are valid. Distributions and bounds for a GRB at a redshift of 1.0 and zenith angle of 0 degrees are shown in blue; those for a GRB at a redshift of 0.3 and zenith angle of 40 degrees are shown in red.

7.3.3 Comparisons with Fermi Measurements

The Fermi space telescope provides the highest-energy detections for gamma-ray bursts in the HAWC 41 month search sample. Comparing Fermi’s spectral and fluence measurements with HAWC upper limits is therefore useful in considering potential model constraints implied by the HAWC limits. Here, we present the results of these comparisons; the model implications are discussed in section 7.4.

We begin by comparing Fermi's measured 10 - 1000 keV fluence for the prompt T90 emission of all Fermi-GBM detected bursts in our sample with the 80 - 800 GeV fluence implied by integrating HAWC's prompt T90 flux upper limits (discussed in the previous section). The 80 - 800 GeV energy range was chosen to maintain consistency with HAWC's previously published results [83] and because the HAWC energy range study presented in section 7.3.2 indicates that this is a reasonable range for most bursts in our analysis. The results of this comparison are shown below for both the z = 0.3 HAWC limits (figure 7.4 left) and the z = 1.0 limits (figure 7.4 right). As discussed in section 7.4, short GRBs (plotted as red circles) are more likely to have lower redshifts and the z = 0.3 assumption is more likely, whereas long GRBs (plotted as blue crosses) are more likely to have larger redshifts and the z = 1.0 assumption is more likely. The Fermi-LAT detected bursts (discussed in more detail below) are plotted as green squares. The HAWC limits on many of these GRBs (including the LAT bursts) are well below the one-to-one equal fluence line. These are the bursts for which the HAWC limits may imply interesting model constraints. However, without knowing anything about these bursts' spectral behavior across the five decades in energy between the GBM measurements and the HAWC limits, we cannot draw any conclusions from the information presented in figure 7.4 alone. We therefore turn our attention to the four bursts in this sample with higher-energy Fermi-LAT spectral measurements: GRBs 160821A, 170206A, 170214A, and 171120A. All of these bursts have LAT-detected photons at or above 1 GeV. Figures 7.5 – 7.8 below show the Fermi power-law spectral fits for these GRBs plotted from 100 MeV out to the highest energy reported in their respective GCN circulars ([93], [94], [95], and [96]).
These power laws are then extrapolated out to higher energies using the Dominguez 2011 EBL attenuation model [49] assuming redshifts of 0.3 and 1.0. HAWC limits in the energy range obtained using the procedure detailed in section 7.3.2 (with Fermi's measured spectrum) are shown as well. These are obtained by integrating HAWC flux upper limits calculated for the same redshifts, assuming the same spectrum reported by Fermi-LAT, and for the same time window used in the LAT fits. (See section 7.3.2 for additional calculation details on HAWC flux limits.)

Figure 7.4: HAWC fluence upper limits between 80 and 800 GeV vs. measured Fermi-GBM fluences between 10 and 1000 keV for all GRBs in the HAWC 41 month search sample that were detected by Fermi-GBM. Bursts whose position uncertainty regions extend beyond HAWC's field of view (zenith < 45 degrees) were excluded. Fluence values are shown for the prompt emission in the initial Fermi-GBM T90 period. HAWC fluence upper limits are calculated assuming an E−2 power law for z = 0.3 (left) and z = 1.0 (right) using the flux limits presented in table B.1 (see appendix B). Short GRBs are plotted as red circles, long GRBs as blue crosses, and bursts with a LAT detection (160821A, 170206A, 170214A, and 171120A) as green squares. (Burst 170206A is a short GRB; the other three LAT bursts are long GRBs.) A black one-to-one line is included in the figures.

For GRBs 160821A and 170214A the HAWC limits are unconstraining under both redshift assumptions; they do not rule out the possibility that the spectra measured by Fermi-LAT extend beyond 1 TeV. Bursts 170206A and 171120A are more interesting. Here, the HAWC z = 0.3 limits are at or below the extrapolation of Fermi's measured power-law spectrum, indicating that if a low-redshift assumption is valid, these bursts must have a cut-off or spectral steepening below the lower bound of HAWC's energy range at ∼100 GeV. Note, however, that these conclusions rely on the veracity of Fermi's reported best fit power-law spectra, and the uncertainties in their fit parameters do allow for power-law extrapolations that fall below HAWC's z = 0.3 limits. The potential model implications of these measurements are discussed further in the following section.
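Converting a 1 TeV flux normalization upper limit A into an 80 - 800 GeV fluence of the kind compared above only requires the analytic integral of E · dN/dE for an E−2 power law, which is A · (1 TeV)² · ln(E2/E1), multiplied by the emission time. A minimal sketch (the function name is our own; any EBL attenuation is assumed to be folded into A):

```python
import math


def fluence_80_800_gev(a_norm, duration):
    """Energy fluence (erg cm^-2) between 80 and 800 GeV implied by a
    flux upper limit dN/dE = a_norm * (E / 1 TeV)^-2, with a_norm in
    TeV^-1 cm^-2 s^-1 and duration in seconds. The energy-flux
    integral of an E^-2 power law is a_norm * (1 TeV)^2 * ln(E2/E1)."""
    e1, e2 = 0.08, 0.8        # band edges in TeV
    erg_per_tev = 1.602       # 1 TeV = 1.602 erg
    energy_flux = a_norm * math.log(e2 / e1)  # TeV cm^-2 s^-1
    return energy_flux * erg_per_tev * duration
```

For the decade 80 - 800 GeV the logarithm is ln(10) ≈ 2.30, so the fluence is simply 2.30 · A · (1.602 erg) · T for an emission period of duration T.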

Figure 7.5: A comparison of Fermi-LAT's power-law fit to the spectrum of GRB 160821A, using data from 92.08 - 1459.15s after the burst, to HAWC upper limits. The Fermi fit with a light blue error band (taking into account uncertainty in fit parameters) is plotted out to the highest-energy photon seen by Fermi-LAT (as reported in [93]). Extrapolations of the Fermi fit assuming z = 0.3 and z = 1.0 are shown in dashed blue. HAWC upper limits are calculated for Fermi's measured power-law spectrum, also for z = 0.3 and z = 1.0 and for the same 92.08 - 1459.15s time window. The Dominguez 2011 EBL model [49] is used in the LAT extrapolations and HAWC upper limits.

Figure 7.6: A comparison of Fermi-LAT's power-law fit to the spectrum of GRB 170206A, using data from 0.208 - 20.208s after the burst, to HAWC upper limits. The Fermi fit with a light blue error band (taking into account uncertainty in fit parameters) is plotted out to the highest-energy photon seen by Fermi-LAT (as reported in [94]). Extrapolations of the Fermi fit assuming z = 0.3 and z = 1.0 are shown in dashed blue. HAWC upper limits are calculated for Fermi's measured power-law spectrum, also for z = 0.3 and z = 1.0 and for the same 0.208 - 20.208s time window. The Dominguez 2011 EBL model [49] is used in the LAT extrapolations and HAWC upper limits.

Figure 7.7: A comparison of Fermi-LAT's power-law fit to the spectrum of GRB 170214A, using data from 39.49 - 751.99s after the burst, to HAWC upper limits. The Fermi fit with a light blue error band (taking into account uncertainty in fit parameters) is plotted out to the highest-energy photon seen by Fermi-LAT (as reported in [95]). Extrapolations of the Fermi fit assuming z = 0.3 and z = 1.0 are shown in dashed blue. HAWC upper limits are calculated for Fermi's measured power-law spectrum, also for z = 0.3 and z = 1.0 and for the same 39.49 - 751.99s time window. The Dominguez 2011 EBL model [49] is used in the LAT extrapolations and HAWC upper limits.

Figure 7.8: A comparison of Fermi-LAT’s power-law fit to the spectrum of GRB 171120A, using data from 0.31 - 5275.99s after the burst, to HAWC upper limits. The Fermi fit with a light blue error band (taking into account uncertainty in fit parameters) is plotted out to the highest-energy photon seen by Fermi-LAT (as reported in [96]). Extrapolations of the Fermi fit assuming z = 0.3 and z = 1.0 are shown in dashed blue. HAWC upper limits are calculated for Fermi’s measured power-law spectrum, also for z = 0.3 and z = 1.0 and for the same 0.31 - 5275.99s time window. The Dominguez 2011 EBL model [49] is used in the LAT extrapolations and HAWC upper limits.

7.4 Conclusions

The HAWC spectral limits for GRBs 170206A and 171120A, shown in figures 7.6 and 7.8, are potentially interesting for GRB modeling. The limits for both of these bursts suggest a spectral cut-off or steepening below ∼100 GeV (between the Fermi and HAWC measurements) under the assumption of z ≲ 0.3. As short GRBs detected at Earth tend to have lower redshifts than long GRBs, the low-redshift assumption is more likely for GRB 170206A (T90 = 1.168s) than for GRB 171120A (T90 = 64.0s). This can be seen in figure 7.9 below, which compares the redshift distributions of short and long GRBs for all Swift detected bursts with known redshifts (not just those in the HAWC 41 month search sample) using data from [88]. The redshift distribution of short bursts is peaked at low redshifts, with ∼83% having z < 1. Long GRBs are observable out to larger redshifts; only ∼24% of those seen by Swift have z < 1.

Figure 7.9: Redshift distributions for Swift GRBs detected up to March 2019 [88]. This sample contains 23 short GRBs with known redshifts (plotted in blue) and 331 long GRBs with known redshifts (plotted in red). Both histograms are area normalized to one to allow for easy comparison of their shapes.

The limits for GRB 170206A (previously published in [83]) therefore remain the most interesting for GRB modeling. While the new limits from this work are still not strong enough to directly rule out any model, they are consistent with those that predict a cut-off or spectral steepening at high energies. Synchrotron self-Compton emission, for example, is commonly used to explain high-energy emission from GRBs and can predict such a cut-off. As we discussed in section 1.1.2.1, high-energy emission in SSC prompt emission models is produced by electrons that are accelerated in internal shocks and emit synchrotron photons that are inverse-Compton scattered to higher energies by the same population of electrons (see, e.g., [98]). If this occurs co-spatially with the prompt keV-MeV emission, these lower-energy photons would produce a high density of target photons for pair production with the SSC emission, yielding a sharp high-energy cut-off consistent with that implied by the z = 0.3 HAWC upper limits for GRB 170206A. Note, though, that the observations for this burst presented in figure 7.6 extend into the early afterglow. Emission constrained by these limits could not, therefore, consist entirely of SSC photons produced by internal shock accelerated electrons. However, modifications to this prompt SSC model placing the high-energy emission zone at later times and larger radii than the prompt emission can still yield a spectral steepening at GeV energies [45] that could explain HAWC's non-detection of GRB 170206A.

As discussed above, this work produced limits for a second burst, GRB 171120A, that also imply a cut-off or spectral steepening below 100 GeV for z ≲ 0.3. While the emission mechanisms discussed in the context of GRB 170206A could explain such a cut-off or steepening here as well, the low-redshift assumption is less likely in this case. The results presented here are thus consistent with SSC emission. However, we cannot specifically rule out any leptonic or hadronic emission model. Further improvements to HAWC's sensitivity below 1 TeV or future observations of bright HAWC GRB candidates at a favorable location are necessary to put more stringent constraints on GRB models.

Appendix A| List of Bursts in the 41 Month GRB Search

Table A.1 below lists the HAWC zenith angle, position uncertainty, redshift (if known), and T90 of each burst in the HAWC 41 month GRB search. The last three columns indicate (with a "yes" or "no") whether each burst was detected by Fermi-GBM, Fermi-LAT, and Swift. The coordinates and uncertainty in the GRB position were taken from the most precise measurement from any instrument on board the Fermi or Swift telescopes (Fermi-GBM, Fermi-LAT, Swift-BAT, Swift-XRT, or Swift-UVOT). The HAWC zenith angle and position uncertainty are sufficient to understand the HAWC results; for the actual coordinates of each burst, see [86], [87], and [88]. Also note that no T90 value was reported for the Swift bursts 150530B, 161202A, and 170714A. Based on information in the GCN circulars for GRBs 150530B [90] and 161202A [91], T90 values of 2s and 70s were chosen for these bursts, respectively. For GRB 170714A, Swift saw "continuous weak emission" for the duration of their observation [92]. We therefore skip the short timescale search for this burst (see section 7.2) and do not report a T90 measurement here.

Table A.1: Bursts in the HAWC 41 Month GRB Search

Burst   HAWC Zenith (deg)   Position Error (deg)   Redshift   T90 (s)   GBM   LAT   Swift

141205A 19.36 3.33e-02 ? 1.10 yes no yes

150203A 12.07 4.17e-04 ? 25.80 yes no yes

150211A 44.05 4.44e-04 ? 13.60 no no yes

150302A 30.04 9.17e-04 ? 23.74 no no yes

150317A 36.50 5.00e-04 ? 23.29 no no yes

150323A 26.54 4.44e-04 0.59 149.60 no no yes

150323B 36.07 1.19e-03 ? 56.32 yes no yes

150416A 42.69 1.93e+00 ? 33.28 yes yes no

150423A 12.63 4.44e-04 1.39 0.22 no no yes

150527A 41.47 4.17e-04 ? 112.00 yes no yes

150530A 38.61 4.44e-04 ? 6.62 yes no yes

150530B 28.34 5.00e-02 ? 2.00 no no yes

150710A 5.40 7.78e-04 ? 0.15 no no yes

150716A 40.03 1.14e-03 ? 44.00 no no yes

150811A 35.28 1.67e-04 ? 34.00 no no yes

150817A 32.40 3.89e-04 ? 38.80 yes no yes

151205A 21.93 4.17e-04 ? 62.80 yes no yes

151228B 11.22 1.61e-04 ? 48.00 yes no yes

160310A 34.33 1.39e-04 ? 18.17 yes yes no

160410A 31.59 1.36e-04 1.72 8.20 no no yes

160705B 34.80 4.17e-04 ? 54.40 no no yes

160714A 44.88 4.50e-02 ? 0.35 yes no yes

160804A 18.86 1.22e-04 0.74 144.20 yes no yes

160821A 24.89 1.67e-02 ? 43.01 yes yes yes

161202A 32.02 3.89e-04 ? 70.00 no yes yes

170115A 5.41 1.00e-03 ? 48.00 no no yes

170205A 38.47 1.19e-04 ? 26.10 no no yes

170206A 11.10 8.50e-01 ? 1.17 yes yes no

170214A 31.95 1.94e-04 ? 122.88 yes yes no

170307A 9.40 4.17e-02 ? 56.92 yes no yes

170318A 41.07 4.72e-04 ? 133.70 no no yes

170318B 28.75 5.00e-04 ? 160.00 yes no yes

170419B 36.62 2.33e-02 ? 77.20 yes no yes

170526A 22.42 5.00e-04 ? 7.90 no no yes

170531A 42.66 9.72e-04 ? 26.74 no no yes

170705A 33.61 1.25e-04 2.01 217.30 yes no yes

170714A 21.14 3.89e-04 0.79 ? no no yes

170810A 16.05 1.17e-04 ? 152.40 yes yes yes

170823A 24.37 1.67e-02 ? 69.40 no no yes

171007A 39.46 5.28e-04 ? 105.00 yes no yes

171027A 21.70 3.89e-04 ? 96.60 no no yes

171115A 38.08 4.44e-04 ? 38.20 no no yes

171120A 3.68 4.44e-04 ? 64.00 yes yes yes

171123A 14.93 4.17e-04 ? 58.50 no no yes

180205A 23.16 1.17e-04 1.41 15.50 yes no yes

180325A 41.30 1.22e-04 2.25 94.10 no no yes

180402A 36.18 5.83e-04 ? 0.18 yes no yes
bn141202470 40.83 3.32e+00 ? 1.34 yes no no
bn141230142 18.01 3.86e+00 ? 9.86 yes no no
bn150105257 41.71 1.00e+00 ? 80.64 yes no no
bn150126868 32.66 5.10e-01 ? 96.51 yes no no
bn150128791 40.05 3.32e+00 ? 85.25 yes no no
bn150131951 43.89 5.31e+00 ? 8.19 yes no no
bn150201040 39.53 1.39e+01 ? 0.51 yes no no
bn150208929 27.19 4.17e+00 ? 0.13 yes no no
bn150329288 42.92 1.17e+01 ? 28.93 yes no no
bn150502435 39.24 1.00e+00 ? 109.31 yes no no
bn150522944 40.02 1.05e+01 ? 1.02 yes no no
bn150622393 44.44 1.00e+00 ? 60.67 yes no no
bn150705588 38.34 1.26e+01 ? 0.70 yes no no
bn150811849 37.77 9.90e-01 ? 0.64 yes no no

bn150904479 40.25 1.09e+01 ? 23.30 yes no no
bn150906944 23.76 5.19e+00 ? 0.32 yes no no
bn150928359 42.79 4.57e+00 ? 53.50 yes no no
bn151022577 33.64 2.14e+01 ? 0.32 yes no no
bn151030999 12.31 1.00e+00 ? 116.48 yes no no
bn151129333 42.17 5.67e+00 ? 52.22 yes no no
bn160102500 40.78 5.81e+00 ? 25.34 yes no no
bn160113398 29.25 1.20e+00 ? 24.58 yes no no
bn160118060 44.09 1.49e+00 ? 46.85 yes no no
bn160131174 17.78 4.71e+00 ? 205.31 yes no no
bn160206430 34.06 4.17e+00 ? 21.50 yes no no
bn160211119 44.87 4.97e+00 ? 0.96 yes no no
bn160215773 34.63 3.44e+00 ? 141.31 yes no no
bn160220868 39.50 1.58e+01 ? 22.53 yes no no
bn160228034 39.81 1.24e+01 ? 16.13 yes no no
bn160301215 29.73 3.48e+00 ? 29.70 yes no no
bn160303201 42.94 9.05e+00 ? 48.13 yes no no
bn160406503 20.28 1.18e+01 ? 0.43 yes no no
bn160515819 36.49 3.43e+00 ? 84.48 yes no no
bn160521839 34.70 4.44e+00 ? 15.87 yes no no
bn160527080 42.64 6.55e+00 ? 25.34 yes no no

bn160605847 39.83 4.95e+00 ? 5.50 yes no no
bn160728337 14.73 1.34e+01 ? 18.69 yes no no
bn160813297 29.77 1.08e+01 ? 7.94 yes no no
bn160816414 25.84 1.91e+01 ? 11.78 yes no no
bn160820496 40.67 4.52e+00 ? 0.38 yes no no
bn160920249 30.52 4.44e+00 ? 15.62 yes no no
bn160925221 39.36 5.45e+00 ? 50.94 yes no no
bn161005977 29.88 4.34e+00 ? 19.46 yes no no
bn161009651 42.58 4.35e+00 ? 92.16 yes no no
bn161012214 28.80 1.27e+01 ? 11.01 yes no no
bn161026373 23.16 1.17e+01 ? 0.11 yes no no
bn170101116 32.32 1.20e+00 ? 12.80 yes no no
bn170101374 20.06 9.07e+00 ? 2.30 yes no no
bn170114917 39.81 1.00e+00 ? 12.03 yes no no
bn170119228 32.10 4.41e+00 ? 28.93 yes no no
bn170203486 38.39 1.41e+01 ? 0.34 yes no no
bn170219002 31.20 1.41e+00 ? 0.10 yes no no
bn170228773 16.00 3.18e+00 ? 46.08 yes no no
bn170302166 30.90 1.22e+01 ? 3.84 yes no no
bn170302876 14.08 5.24e+00 ? 89.86 yes no no
bn170306130 41.60 2.26e+00 ? 32.26 yes no no

bn170310417 34.96 1.37e+01 ? 13.57 yes no no
bn170403583 35.96 7.16e+00 ? 0.48 yes no no
bn170423719 36.65 1.17e+00 ? 46.59 yes no no
bn170520202 39.22 1.65e+01 ? 6.14 yes no no
bn170604603 35.12 4.10e+00 ? 0.32 yes no no
bn170606968 31.65 3.12e+00 ? 11.26 yes no no
bn170610689 31.78 2.57e+00 ? 19.20 yes no no
bn170611937 42.13 5.61e+00 ? 27.90 yes no no
bn170705244 44.23 8.58e+00 ? 5.89 yes no no
bn170709334 16.73 7.49e+00 ? 1.86 yes no no
bn170722525 17.92 4.49e+00 ? 37.38 yes no no
bn170818137 22.16 1.15e+01 ? 0.58 yes no no
bn170830135 26.59 2.66e+00 ? 114.95 yes no no
bn171009138 28.00 5.11e+00 ? 47.10 yes no no
bn171017823 39.65 5.54e+00 ? 86.27 yes no no
bn171025416 23.80 1.64e+01 ? 19.71 yes no no
bn171209671 43.38 3.64e+00 ? 13.82 yes no no
bn171212222 5.62 3.26e+00 ? 19.97 yes no no
bn180103090 14.26 6.97e+00 ? 0.02 yes no no
bn180113011 24.48 1.21e+00 ? 15.87 yes no no
bn180119837 42.43 2.52e+00 ? 3.01 yes no no

bn180127049 24.98 4.96e+00 ? 34.82 yes no no

bn180128881 40.56 9.22e+00 ? 1.79 yes no no

bn180225417 39.01 7.47e+00 ? 0.90 yes no no

bn180307073 31.90 3.00e+00 ? 136.71 yes no no

bn180309322 15.40 2.08e+00 ? 24.58 yes no no

bn180409346 44.02 1.00e+00 ? 13.06 yes no no

Table A.1: The name, HAWC zenith, position uncertainty, redshift, T90, and instruments that observed the bursts in the HAWC 41 month GRB search (e.g. a “yes” in the LAT column means the burst was detected by Fermi-LAT). In the case of detection by multiple experiments, the coordinates and position uncertainty measured by the most precise instrument are used. Note that the first three numbers of each burst name indicate the date of observation (e.g. 170714A was observed on July 14, 2017 and bn180113011 on January 13, 2018). The numbers following ‘bn’ in burst names denote the Fermi-GBM trigger number. Question marks indicate unknown quantities.

Appendix B| Flux Limits for Bursts in the 41 Month GRB Search

Table B.1 below provides flux limits for all bursts in the HAWC 41 month GRB search (listed in appendix A). Limits are provided for the prompt T90 search window and the long timescale 70 minute search window (see section 7.2 for details). Limits are provided for the actual burst redshift if known (in these cases, the redshift is specified underneath the burst name in the leftmost column) and for redshifts of 0.3 and 1.0 otherwise. Flux upper limits are provided at 1 TeV; a power-law spectrum of A (E/TeV)−2 is assumed and upper limits are calculated for the normalization factor A, in units of TeV−1 cm−2 s−1, as the upper bound of 90% frequentist confidence intervals constructed with the Feldman & Cousins ordering principle [97]. See section 7.3.2 for additional details.

Table B.1: HAWC E−2 Power-Law Flux Limits at 1 TeV (TeV−1 cm−2 s−1)

Burst   T90 limits for burst z   T90 limits for z = 0.3   T90 limits for z = 1.0   70 minute limits for burst z   70 minute limits for z = 0.3   70 minute limits for z = 1.0

141205A NA 1.53e-07 9.83e-07 NA 1.19e-09 1.13e-08

150203A NA 7.77e-09 6.09e-08 NA 7.05e-10 3.68e-09

150211A NA 2.48e-07 4.80e-06 NA 1.93e-09 1.97e-08

150302A NA 5.35e-08 3.56e-07 NA 6.75e-10 6.63e-09

150317A NA 6.54e-08 5.22e-07 NA 5.94e-10 4.53e-09

150323A (z = 0.593) 1.25e-08 NA NA 6.02e-09 NA NA

150323B NA 3.53e-08 2.62e-07 NA 7.87e-10 5.62e-09

150416A NA 1.63e-07 1.11e-06 NA 8.48e-09 5.37e-08

150423A (z = 1.394) 1.59e-06 NA NA 2.78e-10 NA NA

150527A NA 8.33e-08 7.79e-07 NA 2.28e-09 2.17e-08

150530A NA 9.33e-08 1.35e-06 NA 4.56e-09 6.22e-08

150530B NA 3.16e-08 2.84e-07 NA 7.44e-10 3.73e-09

150710A NA 4.22e-07 2.12e-06 NA 6.83e-10 5.82e-09

150716A NA 7.72e-08 1.06e-06 NA NA NA

150811A NA 5.10e-08 5.79e-07 NA 4.43e-09 3.86e-08

150817A NA 4.33e-08 3.40e-07 NA 3.59e-09 3.70e-08

151205A NA 9.47e-09 7.25e-08 NA 6.21e-10 4.66e-09

151228B NA 4.44e-09 2.05e-08 NA 4.80e-10 4.82e-09

160310A NA 1.10e-07 9.18e-07 NA 1.64e-09 4.27e-09

160410A (z = 1.717) 6.74e-07 NA NA 1.58e-07 NA NA

160705B NA 4.79e-08 5.89e-07 NA 8.20e-10 4.37e-09

160714A NA 2.02e-06 2.01e-05 NA 1.37e-08 2.73e-07

160804A (z = 0.736) 2.64e-09 NA NA 4.83e-09 NA NA

160821A NA 3.16e-08 2.08e-07 NA 6.98e-09 6.86e-08

161202A NA 1.02e-08 8.09e-08 NA 2.05e-09 1.04e-08

170115A NA 2.48e-09 1.44e-08 NA 4.27e-10 7.18e-10

170205A NA 2.73e-07 2.36e-06 NA 4.52e-09 2.12e-08

170206A NA 5.67e-08 3.78e-07 NA 8.62e-10 3.94e-09

170214A NA 5.96e-08 5.04e-07 NA 5.45e-09 5.71e-08

170307A NA 6.38e-09 3.32e-08 NA 3.33e-10 2.62e-09

170318A NA 3.89e-08 4.29e-07 NA 2.29e-09 1.45e-08

170318B NA 6.96e-09 5.61e-08 NA 7.19e-10 2.13e-08

170419B NA 2.40e-08 3.38e-07 NA 1.08e-08 8.64e-08

170526A NA 3.60e-08 2.34e-07 NA 6.68e-10 6.19e-09

170531A NA 5.71e-07 4.74e-06 NA 1.16e-08 8.72e-08

170705A (z = 2.01) 1.51e-06 NA NA 3.17e-07 NA NA

170714A (z = 0.793) NA NA NA 2.67e-09 NA NA

170810A NA 1.15e-09 1.36e-08 NA 1.33e-10 1.15e-09

170823A NA 9.37e-09 3.56e-08 NA 2.31e-09 2.58e-08

171007A NA 7.86e-08 8.19e-07 NA 4.65e-09 4.07e-08

171027A NA 5.84e-09 6.51e-08 NA 2.91e-10 4.17e-09

171115A NA 5.57e-08 4.09e-07 NA 8.18e-10 5.18e-09

171120A NA 2.20e-09 3.26e-08 NA 1.05e-10 1.38e-09

171123A NA 1.10e-08 8.28e-08 NA 4.15e-10 2.18e-09

180205A (z = 1.409) 7.36e-08 NA NA 1.69e-08 NA NA

180325A (z = 2.250) 1.42e-06 NA NA 1.78e-07 NA NA

180402A NA 1.80e-06 1.58e-05 NA 5.45e-09 3.76e-08
bn141202470 NA 6.80e-07 7.40e-06 NA 1.77e-08 1.39e-07
bn141230142 NA 2.36e-08 1.82e-07 NA 1.05e-09 8.04e-09
bn150105257 NA 1.26e-07 9.22e-07 NA 7.84e-09 7.24e-08
bn150126868 NA 1.75e-08 1.33e-07 NA 6.00e-09 4.87e-08
bn150128791 NA 1.61e-07 1.67e-06 NA NA NA
bn150131951 NA 6.69e-07 6.25e-06 NA 1.51e-08 1.12e-07
bn150201040 NA 1.43e-06 1.58e-05 NA 1.35e-08 7.40e-08
bn150208929 NA 1.10e-06 7.69e-06 NA 5.94e-09 5.34e-08
bn150329288 NA 8.08e-08 3.23e-07 NA 9.75e-09 5.17e-08
bn150502435 NA 6.11e-08 4.93e-07 NA 1.72e-08 1.24e-07
bn150522944 NA 2.36e-07 2.25e-06 NA 9.94e-09 8.91e-08
bn150622393 NA 3.33e-07 3.29e-06 NA NA NA
bn150705588 NA 8.67e-06 8.47e-05 NA 4.60e-09 3.42e-08
bn150811849 NA 3.56e-07 3.48e-06 NA 7.92e-09 6.07e-08
bn150904479 NA 1.90e-07 2.06e-06 NA 6.70e-08 6.59e-07
bn150906944 NA 3.63e-07 3.11e-06 NA 8.69e-10 6.41e-09
bn150928359 NA 1.69e-07 2.13e-06 NA 1.50e-08 1.69e-07
bn151022577 NA 2.59e-05 2.62e-04 NA 8.95e-09 7.20e-08

bn151030999 NA 5.43e-09 3.45e-08 NA 1.35e-09 7.54e-09
bn151129333 NA 6.52e-08 2.55e-07 NA 1.59e-08 9.15e-08
bn160102500 NA 8.44e-08 1.58e-06 NA 9.86e-09 6.30e-08
bn160113398 NA 4.16e-08 2.59e-07 NA 1.65e-09 1.20e-08
bn160118060 NA 3.32e-07 2.75e-06 NA 1.91e-08 2.99e-07
bn160131174 NA 1.01e-09 1.27e-08 NA 2.28e-09 1.48e-08
bn160206430 NA 9.59e-08 8.63e-07 NA 1.03e-08 1.19e-07
bn160211119 NA 1.86e-06 1.59e-05 NA 2.03e-08 1.80e-07
bn160215773 NA 1.27e-08 1.04e-07 NA 3.07e-09 2.67e-08
bn160220868 NA 2.42e-07 1.91e-06 NA 5.05e-09 2.32e-08
bn160228034 NA 2.85e-07 2.50e-06 NA 6.23e-09 2.95e-08
bn160301215 NA 2.75e-08 2.43e-07 NA 8.25e-09 5.72e-08
bn160303201 NA 2.43e-07 1.69e-06 NA 4.07e-08 4.64e-07
bn160406503 NA 8.53e-08 5.02e-07 NA 1.87e-09 9.08e-09
bn160515819 NA 6.95e-08 5.39e-07 NA 1.46e-08 1.71e-07
bn160521839 NA 5.42e-08 8.30e-07 NA 1.63e-08 1.52e-07
bn160527080 NA 6.64e-07 8.14e-06 NA 4.51e-09 3.06e-08
bn160605847 NA 2.38e-06 2.39e-05 NA 1.55e-08 1.20e-07
bn160728337 NA 1.02e-08 1.23e-07 NA 2.20e-09 1.39e-08
bn160813297 NA 4.45e-07 3.98e-06 NA 1.10e-08 8.78e-08
bn160816414 NA 2.84e-08 2.21e-07 NA 2.10e-09 1.27e-08

bn160820496 NA 9.29e-07 8.74e-06 NA NA NA
bn160920249 NA 7.08e-08 5.04e-07 NA 5.22e-09 1.92e-08
bn160925221 NA 1.20e-07 4.04e-07 NA 1.79e-08 1.15e-07
bn161005977 NA 2.72e-08 1.01e-07 NA 1.55e-08 1.30e-07
bn161009651 NA 1.01e-07 1.36e-06 NA NA NA
bn161012214 NA 7.22e-08 6.67e-07 NA 3.50e-09 7.47e-09
bn161026373 NA 1.21e-06 9.05e-06 NA 8.76e-10 3.51e-09
bn170101116 NA 9.71e-08 7.33e-07 NA 3.60e-09 2.52e-08
bn170101374 NA 4.54e-08 2.69e-07 NA 2.63e-09 1.74e-08
bn170114917 NA 1.43e-07 1.40e-06 NA 9.52e-09 5.96e-08
bn170119228 NA 1.46e-08 1.55e-07 NA 2.89e-09 2.09e-08
bn170203486 NA 1.85e-06 9.62e-06 NA 9.35e-09 7.46e-08
bn170219002 NA 5.96e-07 4.90e-06 NA 4.64e-09 3.16e-08
bn170228773 NA 1.52e-08 9.63e-08 NA 1.94e-09 1.42e-08
bn170302166 NA 3.54e-07 3.53e-06 NA 8.07e-09 8.00e-08
bn170302876 NA 8.90e-09 8.43e-08 NA 2.33e-09 1.50e-08
bn170306130 NA 1.05e-07 6.12e-07 NA 8.98e-09 5.30e-08
bn170310417 NA 7.46e-07 7.73e-06 NA 7.80e-09 6.00e-08
bn170403583 NA 4.84e-07 4.72e-06 NA 2.06e-08 1.93e-07
bn170423719 NA 6.29e-08 6.68e-07 NA 1.80e-08 1.68e-07
bn170520202 NA 1.17e-07 5.22e-07 NA 8.05e-09 8.30e-08

bn170604603 NA 1.16e-06 1.14e-05 NA 1.12e-08 8.96e-08
bn170606968 NA 4.49e-08 5.85e-07 NA 1.51e-08 8.74e-08
bn170610689 NA 8.25e-08 7.78e-07 NA 4.60e-09 2.09e-08
bn170611937 NA 1.28e-07 8.98e-07 NA 9.54e-09 6.64e-08
bn170705244 NA 2.68e-07 3.11e-06 NA 1.61e-08 1.50e-07
bn170709334 NA 5.07e-08 3.91e-07 NA 2.09e-09 1.54e-08
bn170722525 NA 3.99e-08 2.88e-07 NA 1.15e-09 5.84e-09
bn170818137 NA 3.35e-08 2.40e-07 NA 3.15e-09 1.69e-08
bn170830135 NA 1.69e-08 1.40e-07 NA 5.41e-09 4.79e-08
bn171009138 NA 1.65e-08 2.09e-07 NA 1.99e-09 1.96e-08
bn171017823 NA 1.04e-07 1.36e-06 NA 2.59e-08 2.49e-07
bn171025416 NA 3.15e-09 2.52e-08 NA 1.74e-09 1.23e-08
bn171209671 NA 2.65e-07 3.17e-06 NA 1.68e-08 1.82e-07
bn171212222 NA 4.03e-09 3.78e-08 NA 1.73e-09 1.53e-08
bn180103090 NA 3.34e-06 1.97e-05 NA 2.24e-09 1.37e-08
bn180113011 NA 4.90e-08 2.01e-07 NA 4.31e-09 2.51e-08
bn180119837 NA 8.05e-07 3.71e-06 NA 1.83e-08 1.70e-07
bn180127049 NA 6.06e-09 5.03e-08 NA 5.62e-09 3.10e-08
bn180128881 NA 6.11e-06 5.76e-05 NA 2.09e-08 1.35e-07
bn180225417 NA 7.10e-07 3.66e-06 NA 1.59e-08 8.49e-08
bn180307073 NA 3.39e-08 1.97e-07 NA 3.51e-09 2.51e-08

bn180309322 NA 1.52e-08 9.44e-08 NA 9.25e-10 6.93e-09

bn180409346 NA 2.30e-07 2.25e-06 NA NA NA

Table B.1: Flux upper limits at 1 TeV for bursts in the HAWC 41-month GRB search. A power-law spectrum of A (E/TeV)^-2 is assumed, and upper limits are calculated for the normalization factor A, in units of TeV^-1 cm^-2 s^-1, as the upper bound of 90% frequentist confidence intervals constructed with the Feldman & Cousins ordering principle [97]. Flux limits are provided for the burst redshift if known, and for redshifts of 0.3 and 1.0 otherwise. NA (not applicable) appears in the z = 0.3 and z = 1.0 entries for bursts of known redshift, and in the "burst z" column for bursts with unknown redshift. If known, the burst redshift is given beneath the burst name in the leftmost column. Bursts whose names begin with 'bn' were detected only by Fermi-GBM; the numbers following 'bn' are the Fermi-GBM trigger number. See Appendix A for additional information on the GRBs in this table.
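The counting-statistics step behind these limits can be illustrated with a stand-alone sketch of the Feldman & Cousins construction [97] for a Poisson process with known background. This is illustrative only, not the HAWC analysis code: the function name, grid ranges, and defaults are assumptions, and the full analysis additionally folds in the detector response and the assumed A (E/TeV)^-2 spectrum to convert a count limit into a flux normalization.

```python
import math

def log_pmf(n, mu):
    """Log of the Poisson probability P(n | mu)."""
    if mu == 0.0:
        return 0.0 if n == 0 else -math.inf
    return n * math.log(mu) - mu - math.lgamma(n + 1)

def fc_upper_limit(n_obs, b, cl=0.90, mu_max=15.0, step=0.01, n_max=100):
    """Feldman-Cousins upper limit on a Poisson signal mean mu,
    given n_obs observed counts and a known background mean b."""
    upper = 0.0
    for i in range(1, int(round(mu_max / step)) + 1):
        mu = i * step
        # Rank each possible count n by the likelihood ratio
        # R(n) = P(n | mu + b) / P(n | mu_best + b), mu_best = max(0, n - b).
        ranked = sorted(
            ((log_pmf(n, mu + b) - log_pmf(n, max(0.0, n - b) + b), n)
             for n in range(n_max)),
            reverse=True,
        )
        # Add counts in decreasing rank until the acceptance region
        # holds at least `cl` probability under the hypothesis mu.
        accepted, prob = set(), 0.0
        for _, n in ranked:
            accepted.add(n)
            prob += math.exp(log_pmf(n, mu + b))
            if prob >= cl:
                break
        # The upper limit is the largest scanned mu whose acceptance
        # region still contains the observed count.
        if n_obs in accepted:
            upper = mu
    return upper
```

For zero observed counts and zero background, `fc_upper_limit(0, 0.0)` recovers the familiar 90% C.L. upper limit of roughly 2.44 events, and the limit tightens as the expected background grows.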

Bibliography

[1] R. W. Klebesadel, I. B. Strong, and R. A. Olson, 1973, ApJ, 182, L85

[2] N. Gehrels, E. Chipman, and D. Kniffen, 1994, ApJS, 92, 351

[3] C. G. Meegan et al., 1992, Nature, 355, 143

[4] G. Fishman and C. Meegan, 1995, ARAA, 33, 415

[5] P. Mészáros, 2006, RPPh, 69, 2259

[6] C. Kouveliotou et al., 1993, ApJ, 413, L101

[7] R. Mallozzi, “Gamma-Ray Astrophysics NSSTC BATSE GRB Durations,” NASA, https://f64.nsstc.nasa.gov/batse/grb/duration/

[8] Y.-P. Qin et al., 2000, PASJ, 52, 759

[9] R. D. Preece et al., 2000, ApJS, 126, 19

[10] D. Band et al., 1993, ApJ, 412, 281

[11] G. Boella et al., 1997, A&AS, 122, 299

[12] J. S. Bloom, S. G. Djorgovski, and S. R. Kulkarni, 2001, ApJ, 554, 678

[13] M. Vietri, 1997, ApJ, 478, L9

[14] D. Q. Lamb et al., 2004, NewAR, 48, 423

[15] K. Z. Stanek et al., 2003, ApJL, 591, L17

[16] N. Gehrels et al., 2004, ApJ, 611, 1005

[17] B. Zhang et al., 2006, ApJ, 642, 354

[18] J. X. Prochaska et al., 2006, ApJ, 642, 989

[19] N. Kawai et al., 2006, Nature, 440, 184

[20] C. Meegan et al., 2009, ApJ, 702, 791

[21] W. B. Atwood et al., 2009, ApJ, 697, 1071

[22] M. Ajello et al., 2019, ApJ, 878, 52

[23] M. Ackermann et al., 2014, Science, 343, 42

[24] N. Gehrels and P. Mészáros, 2012, Science, 337, 932

[25] P. Hofverberg, 2011, NIMPA, 639, 23

[26] J. Aleksić et al., 2016, APh, 72, 76

[27] E. L. Ruiz-Velasco, 2019, CTA Symposium, https://indico.cta-observatory.org/event/1946/contributions/19893/

[28] R. Mirzoyan et al., 2019, GCN Circular, 23701, https://gcn.gsfc.nasa.gov/gcn3/23701.gcn3

[29] X. Wang et al., 2019, arXiv:1905.11312 [astro-ph.HE]

[30] J. Aasi et al., 2015, CQGra, 32, 74001

[31] F. Acernese et al., 2017, IJMPA, 32, 1

[32] B. P. Abbott et al., 2017, ApJL, 848, L13

[33] D. A. Coulter et al., 2017, Science, 358, 1556

[34] M. G. Aartsen et al., 2015, ApJL, 805, L5

[35] M. G. Aartsen et al., 2017, ApJ, 843, 112

[36] H. A. Ayala Solares et al., 2020, APh, 114, 68

[37] T. Piran, 1999, PhR, 314, 575

[38] P. Mészáros et al., 2015, arXiv:1506.02707 [astro-ph.HE]

[39] S. R. Kulkarni et al., 1999, Nature, 398, 389

[40] D. N. Burrows et al., 2006, ApJ, 653, 468

[41] S. Razzaque, P. Mészáros, and B. Zhang, 2004, ApJ, 613, 1072

[42] N. Gehrels, E. Ramirez-Ruiz, and D. B. Fox, 2009, ARA&A, 47, 567

[43] F. Piron, 2016, CRPhy, 17, 617

[44] A. M. Beloborodov, R. Hascoët, and I. Vurm, 2014, ApJ, 788, 36

[45] X. Zhao, Z. Li, and J. Bai, 2011, ApJ, 726, 89

[46] S. Razzaque, C. D. Dermer, and J. D. Finke, 2010, OAJ, 3, 150

[47] K. Asano, S. Guiriec, and P. Mészáros, 2009, ApJL, 705, L191

[48] A. Cooray, 2016, RSOS, 3, 150555

[49] A. Dominguez et al., 2011, MNRAS, 410, 2556

[50] J. Matthews, 2005, APh, 22, 387

[51] R. Engel, D. Heck, and T. Pierog, 2011, ARNPS, 61, 467

[52] B. Rossi and K. Greisen, 1941, RvMP, 13, 240

[53] G. Sinnis, 2009, NJPh, 11, 055007

[54] S. Mollerach and E. Roulet, 2018, PrPNP, 98, 85

[55] V. Schoenfelder et al., 1993, ApJS, 86, 657

[56] M. Basset, 2007, NIMPA, 572, 474

[57] E. Caroli et al., 1987, SSRv, 45, 349

[58] C. Winkler, 1994, ApJS, 92, 327

[59] J. Rajotte, 2014, NIMPA, 766, 61

[60] H. Anderhub et al., 2013, JInst, 8, P06008

[61] M. Actis et al., 2011, ExA, 32, 193

[62] T. K. Sako et al., 2009, APh, 32, 177

[63] R. Atkins et al., 2003, ApJ, 595, 803

[64] A. Smith, 2015, arXiv:1508.05826 [astro-ph.IM]

[65] A. U. Abeysekara et al., 2018, NIMPA, 888, 138

[66] A. U. Abeysekara et al., 2014, arXiv:1410.6681 [astro-ph.IM]

[67] S. D. Barthelmy et al., 1998, AIP Conference Proceedings, 428, 99

[68] ZeroMQ documentation, https://zeromq.org

[69] H. A. Ayala Solares et al., 2016, PoS (ICRC2015), 236, 997, arXiv:1508.04312

[70] E. Bonamente, R. Lauer, and F. Salesa, 2012, Internal HAWC Technical Note: TN015

[71] R. Becker-Szendy et al., 1995, NIMPA, 352, 629

[72] A. U. Abeysekara et al., 2017, ApJ, 843, 39

[73] K. Greisen, 1960, ARNPS, 10, 63

[74] A. U. Abeysekara et al., 2019, ApJ, 881, 134

[75] A. U. Abeysekara et al., 2017, ApJ, 843, 40

[76] A. U. Abeysekara et al., 2017, Science, 358, 911

[77] A. U. Abeysekara et al., 2017, ApJ, 842, 85

[78] A. U. Abeysekara et al., 2018, ApJ, 865, 57

[79] D. Heck et al., 1998, CORSIKA documentation, https://web.ikp.kit.edu/corsika/physics_description/corsika_ph

[80] S. Agostinelli et al., 2003, NIMPA, 506, 250

[81] T. P. Li and Y. Q. Ma, 1983, ApJ, 272, 317

[82] J. Aleksić et al., 2015, JHEAp, 5-6, 30

[83] R. Alfaro et al., 2017, ApJ, 843, 88

[84] N. Heard and P. Rubin-Delanchy, 2017, arXiv:1707.06897 [stat.ME]

[85] K. M. Górski et al., 2005, ApJ, 622, 759

[86] Online Fermi-GBM Burst Catalog, NASA, https://heasarc.gsfc.nasa.gov/W3Browse/all/fermigbrst.html

[87] Online Fermi-LAT GRB Catalog, NASA, https://fermi.gsfc.nasa.gov/ssc/observations/types/grbs/lat_grbs/

[88] Online Swift GRB Catalog, NASA, https://swift.gsfc.nasa.gov/archive/grb_table/

[89] GRBOX: Gamma-Ray Burst Online Index, CALTECH, http://www.astro.caltech.edu/grbox/grbox.php

[90] J. R. Cummings et al., 2015, GCN Circular, 17895, https://gcn.gsfc.nasa.gov/gcn3/17895.gcn3

[91] B. Sbarufatti et al., 2016, GCN Circular, 20227, https://gcn.gsfc.nasa.gov/gcn3/20227.gcn3

[92] D. M. Palmer et al., 2017, GCN Circular, 21347, https://gcn.gsfc.nasa.gov/gcn3/21347.gcn3

[93] M. Arimoto et al., 2016, GCN Circular, 19836, https://gcn.gsfc.nasa.gov/gcn3/19836.gcn3

[94] F. Fana Dirirsa et al., 2017, GCN Circular, 20617, https://gcn.gsfc.nasa.gov/gcn3/20617.gcn3

[95] J. L. Racusin et al., 2017, GCN Circular, 20676, https://gcn.gsfc.nasa.gov/gcn3/20676.gcn3

[96] F. Longo et al., 2017, GCN Circular, 22136, https://gcn.gsfc.nasa.gov/gcn3/22136.gcn3

[97] G. J. Feldman and R. D. Cousins, 1998, PhRvD, 57, 3873

[98] A. Galli and D. Guetta, 2008, A&A, 480, 5

Vita

Matthew M. Rosenberg

Education

Ph.D. in Physics (expected 2019)
The Pennsylvania State University

Bachelor of Science with Honors, Physics (2014)
University of Pittsburgh

Research Experience

HAWC Gamma-Ray Observatory (2015 – 2019)
Detector calibration, data analysis & processing with C++, Python, and bash scripting
The Pennsylvania State University

MINERνA Neutrino Experiment (2011 – 2014)
Data analysis & processing with C++ and bash scripting
University of Pittsburgh

Teaching Experience

Teaching assistant, introductory physics (2014 – 2016)
Led students in recitation and laboratory sections
The Pennsylvania State University

Selected Publications

Constraints on >100 GeV Emission from Gamma-Ray Bursts with 41 Months of HAWC Data
HAWC Collaboration: A. U. Abeysekara et al.
In preparation

Awards

Halliday Award for Excellence in Undergraduate Research (2014)
Department of Physics and Astronomy, University of Pittsburgh

Peter F. M. Koehler Sophomore/Junior Academic Achievement Award (2013)
Department of Physics and Astronomy, University of Pittsburgh

Julia Thompson Award for Excellence in Undergraduate Writing (2012)
Department of Physics and Astronomy, University of Pittsburgh